Leaderboard

Popular Content

Showing content with the highest reputation on 06/07/2022 in all areas

  1. Wouldn't that mean that you need to investigate why it isn't working, in order to find and fix what's causing the incorrect operation?
    1 point
  2. if you are interested in displaying a range of links around the current page number (there's no guarantee the OP even saw the post suggesting that), here's some example code -

    <?php
    // build pagination links: prev, 1, range around current page, total_pages, next

    // number of +/- links around the current page number
    $range = 3;

    // number of items per page
    $limit = 12;

    // get total number of rows using a SELECT COUNT(*) ... query
    $total_rows = 3121;

    // calculate total number of pages
    $total_pages = ceil($total_rows / $limit);

    // get current page, default to page 1
    $page = $_GET['page'] ?? 1;

    // limit page to be between 1 and $total_pages
    $page = max(1, min($total_pages, $page));

    // produce array of pagination numbers: 1, range around current page, total_pages,
    // without duplicates, between 1 and total_pages
    $links = array_filter(
        array_unique(array_merge([1], range($page - $range, $page + $range), [$total_pages])),
        function ($val) use ($total_pages) {
            return $val >= 1 && $val <= $total_pages;
        }
    );

    // build pagination links
    $pagination_links = '';

    // get a copy of any existing get parameters
    $get = $_GET;

    // produce previous
    $get['page'] = $page - 1;
    $qs = http_build_query($get, '', '&amp;');
    $pagination_links .= $page == 1 ? 'prev ' : "<a href='?$qs'>prev</a> ";

    // produce numerical links
    foreach ($links as $link) {
        $get['page'] = $link;
        $qs = http_build_query($get, '', '&amp;');
        $pagination_links .= $link == $page ? "$link " : "<a href='?$qs'>$link</a> ";
    }

    // produce next
    $get['page'] = $page + 1;
    $qs = http_build_query($get, '', '&amp;');
    $pagination_links .= $page == $total_pages ? 'next' : "<a href='?$qs'>next</a>";

    // output pagination
    echo $pagination_links;

  note: your query to get the total number of rows should NOT select all the columns and all the rows of data. you should use a SELECT COUNT(*) ... query, then fetch the counted value.
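  To make the $links computation above concrete, here is a quick sketch of the page numbers it produces, with the logic pulled into a standalone function (the helper name pagination_numbers is mine, not part of the code above; the values match the example: 261 total pages, a range of 3):

    <?php
    // hypothetical helper distilled from the example: the list of page
    // numbers to display - 1, the range around the current page, and the
    // last page, de-duplicated and clamped to the valid page range
    function pagination_numbers(int $page, int $total_pages, int $range): array
    {
        return array_values(array_filter(
            array_unique(array_merge([1], range($page - $range, $page + $range), [$total_pages])),
            function ($val) use ($total_pages) {
                return $val >= 1 && $val <= $total_pages;
            }
        ));
    }

    // on page 5 of 261 this displays: 1 2 3 4 5 6 7 8 261
    print_r(pagination_numbers(5, 261, 3));

  Near the ends the out-of-range numbers are simply filtered away, so page 1 displays 1 2 3 4 261 and page 261 displays 1 258 259 260 261.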
    1 point
  3. There is no "Group #" in your data - does that come from a member table along with the name? Are those the combinations of date/serid that occur in the data? (So if there are 7 services D1/D2/D3/PM/WS/PBB/LS, there could potentially be up to 49 columns for each week - or can a serid occur a maximum of once per week?) Dynamic headings created from your supplied test data would be [example output missing from the page]. What is the purpose of the calid column? The date gives you the week number. (Same goes for your month and year columns)
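  To illustrate that last point, the week, month and year can all be derived from the date itself, which is why the calid/month/year columns look redundant (the sample date value here is an assumption, not taken from the OP's data):

    <?php
    // sketch: deriving week/month/year directly from a date value
    $date = new DateTime('2022-06-07');
    echo $date->format('W'), "\n"; // ISO-8601 week number: 23
    echo $date->format('n'), "\n"; // month without leading zero: 6
    echo $date->format('Y'), "\n"; // year: 2022

  The same values can be computed in the query itself, e.g. with MySQL's WEEK()/MONTH()/YEAR() functions, instead of being stored as separate columns.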
    1 point
  4. Actually that is rarely the case. If I understand you, part of the data you are looking up is "user" data from a user table. That data certainly is not always changing, and in most system designs you absolutely know/control when it is changing. Let's say, for example, there is user data + profile data, perhaps in 2 related tables.

  What a memcache based cache would provide is this: your code checks for the existence of the data in the cache. You have to figure out what the cache key is going to be in order to resolve it, and usually that is some sort of string like '/user/{userid}'. If the key exists, you just read the data from memcache, which eliminates the need to query the database. If the key doesn't exist, you perform the query and save the result in a new memcache key; you have the option of specifying a ttl value for the key. This query could also be a query that joins to the user profile table. In your routines that change/update the user or profile data, you invalidate the memcache data for that user when you save it to the database. The next read/select of the data will create a new cache entry.

  The main trick that memcache does is to allow for it to be clustered, which is why Facebook originally used it, given their enormous graph. Very few systems have that level of scalability issue, but most applications get a significant performance and scalability boost out of using some sort of in-memory cache. A lot of projects I've worked on have used Redis.

  You do need to do some thinking about the type of data in your system, and whether or not portions of it are relatively static. It also tends to show you if you've made significant design mistakes. An example might be putting some sort of de-normalized counter in a table, such that you made something that is relatively static, like a user table, non-static. Putting a "last login" column, or summary columns like "friend_count" or "topics", into the main user table reduces database concurrency, and these types of frequently updated fields are what cause people to move away from caching.

  So to conclude, let's say you have something that does get updated with some frequency but then requires a join to another table for context/data availability. An example might be a message system, where you need to join to the user table to get the to/from usernames associated with messages. Even though messages could be added with some rapidity, that doesn't mean there aren't users in the system who could have a cached version of the message data, and it doesn't mean you can't use the cached user data to decorate the messages. You can also cache the result of a query that includes a join, as in a query meant to return "all private messages sent to user 5". That query can certainly be cached, and in doing so you will reduce load on your database. You just need to understand which queries like this need to be invalidated from the cache. As long as you have cache name schemes that make sense, it's not that hard to understand which caches you need to remove when data is added or changed; most of those schemes are similar to the way you might design a rest api.
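  A minimal sketch of that read-through/invalidate flow, using a plain array in place of memcache (the function names and callable parameters are mine; with the real ext-memcached you would use Memcached::get()/set()/delete(), passing the ttl to set()):

    <?php
    // cache-aside sketch: check cache, query on miss, invalidate on write
    function fetch_user(int $id, array &$cache, callable $query): array
    {
        $key = "/user/$id";          // key scheme as described above
        if (isset($cache[$key])) {
            return $cache[$key];     // cache hit: no database query
        }
        $row = $query($id);          // cache miss: run the (possibly joined) query
        $cache[$key] = $row;         // store for subsequent reads
        return $row;
    }

    function update_user(int $id, array $data, array &$cache, callable $save): void
    {
        $save($id, $data);           // write to the database first
        unset($cache["/user/$id"]);  // invalidate; the next read re-caches
    }

  The $query and $save callables stand in for whatever database layer is in use; the point is only the shape of the flow - every read goes through the cache, and every write removes the corresponding key.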
    1 point
This leaderboard is set to New York/GMT-05:00
