Everything posted by kicken

  1. PHP saying there is a syntax error on line X doesn't necessarily mean that the error is actually on line X. It could in fact be anywhere from line 1 through X. The line PHP reports is just where PHP finally realized there is a problem. With issues such as missing quotation marks or missing closing braces, PHP will be able to continue parsing the source for quite a while before realizing something is amiss. Most of the time the line number reported gets you close, but if nothing appears wrong there you need to start backtracking until you figure it out.
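     As a quick illustration (a minimal, hypothetical file): the closing brace for the if block below is missing, yet PHP reports the parse error at the end of the file rather than on the line where the brace was forgotten.

         <?php
         function greet($name) {
             if ($name) {
                 echo "Hello, $name\n";
             // <-- the closing brace for the if block should be here
         }

         greet('World');
         // PHP only notices the problem when it reaches the end of the file,
         // so the reported line number is far below the actual mistake.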
  2. The order you want depends mostly on how you'll be using the index in your queries. I'd venture that in an order system you're most likely going to be joining orders to the details and then to the products to generate invoices and such, which would have queries like:

         select * from orders o inner join order_details od on od.order_id=o.id inner join products p on p.id=od.product_id

     A query such as that would want your UK defined as (order_id, product_id). With that order the UK can be used to enforce your foreign key relationship on the order_id column and to speed up the join between the orders and order_details tables. If on the other hand you were doing a query based on products, such as to find popular products, you might do queries like:

         select * from products p inner join order_details od on od.product_id=p.id

     For that kind of query you might consider the (product_id, order_id) order so you could use the UK to handle your product_id foreign key and the join to the products table. In either case, assuming both order_id and product_id are foreign keys, your UK can satisfy the index requirement for one of them and the other would need its own index. Neither order is particularly bad or good in this situation as you need at least two indexes anyway.

     In such a table design you don't necessarily even need your ID column, so you could just make your PRIMARY KEY be (order_id, product_id). Some people prefer always having an auto_increment primary key, but in the scenario proposed it's not necessary and could be removed.

     More indexes mean more index management and disk space usage. That can potentially lead to slower performance when inserting/updating/deleting data, but wouldn't have any significant impact on selects. As such, you should try to limit your indexes to only those that you absolutely need to make your system function well. Until your system grows to a very large scale (millions of rows), it's unlikely you'd notice any problems from extra indexes though.

     There's no efficiency to be gained from having the separate PK and UK there as far as I am aware. A single multi-column primary key would be just as effective and save an index.
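     As a rough sketch of that last option (table and column names assumed from the discussion; the quantity column is purely illustrative), the composite primary key could look like:

         -- Minimal sketch: order_details keyed by (order_id, product_id).
         -- The key's left-most column (order_id) also satisfies the index
         -- requirement for the order_id foreign key; product_id still needs
         -- its own index for its foreign key.
         CREATE TABLE order_details (
             order_id   INT NOT NULL,
             product_id INT NOT NULL,
             quantity   INT NOT NULL DEFAULT 1,
             PRIMARY KEY (order_id, product_id),
             KEY idx_product (product_id),
             FOREIGN KEY (order_id) REFERENCES orders (id),
             FOREIGN KEY (product_id) REFERENCES products (id)
         ) ENGINE=InnoDB;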
  3. It is in the case you set up in your original post. I know you had some other thread that ended up talking about this whole object-oriented sub/super type thing, but I didn't really pay attention to it so am unfamiliar with the details. Looking at just this thread in its own context, the answer that your unique constraint is unnecessary is factual. In your original post you show that ID is a primary key (signaled by the (pk) bit). By definition, a primary key is unique across the table, which means it's the only thing needed to find any particular row in the table when doing a join or a search. As a result, including it in another unique index is mostly pointless, as discussed before.

     Your link describes a somewhat different situation from what you originally posted. The unique constraint is there because it's necessary for a foreign key in another table, and that foreign key has more to do with enforcing row types than with supporting joins between tables (as implied in your question "..using that as a join to another table"). In all the tables in that stack exchange example, the veh_id column is the only thing strictly necessary for the DB to handle all the joins and link the tables together. The rest of the columns have to do with your type enforcement. I'm not sure why they used separate primary/unique keys; I think the whole thing could have been done by making a single multi-column primary key, which might make the whole thing easier to understand.

     Indexes work essentially left-to-right, so if you want to take advantage of a column being part of an index, it either has to be the left-most column of the index, or you also have to specify all the columns prior to it. If you define the order as (id, product_type_code) then you can't use that index to search for a product_type_code unless you also specify an ID (and since ID is unique, that makes the product_type_code mostly useless). However, if you define the order as (product_type_code, id) then you can search for a product_type_code using that index and get all the matching IDs as a result.

     Applying this to your stack exchange example, it also means that you in fact cannot double-dip on your indexes and will need to create extra ones to support your foreign key definitions. Because veh_id is the value that ties all the tables together, you want it to be the first value in your unique / primary key constraint. As a result the veh_type_code will need its own index to support that foreign key definition.
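     A small sketch of that left-to-right behaviour (the product table and values here are hypothetical):

         -- With an index defined as (product_type_code, id), the left-most
         -- column is product_type_code, so both of these can use the index:
         SELECT id FROM product WHERE product_type_code = 'CAR';
         SELECT id FROM product WHERE product_type_code = 'CAR' AND id = 42;

         -- With the order (id, product_type_code) instead, a search on
         -- product_type_code alone cannot use the index, because the
         -- left-most column (id) is not constrained:
         SELECT id FROM product WHERE product_type_code = 'CAR';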
  4. The foreign key constraint needs a suitable index on the column, but it doesn't need to be an index dedicated to that column. So yes, you can double up your unique index if you create it properly. In order to re-use it, the foreign key column must be the first column in the index, so when you create it you want to do CONSTRAINT UQ_blah UNIQUE (product_type_code, id) and not CONSTRAINT UQ_blah UNIQUE (id, product_type_code). As mentioned though, in the scenario you laid out the unique constraint is entirely unnecessary, as your ID column is already unique by virtue of being the primary key, so you'd just make a simple index on the product_type_code column. In a scenario where ID wasn't a primary key and a unique constraint was necessary, then you could double-dip like that.
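     A sketch of that double-dip for the case where ID is not the primary key (table and column names are assumed for illustration):

         -- UQ_product leads with product_type_code, so the same index both
         -- enforces uniqueness of (product_type_code, id) and satisfies the
         -- index requirement for the foreign key on product_type_code.
         CREATE TABLE product (
             id                INT NOT NULL,
             product_type_code VARCHAR(10) NOT NULL,
             name              VARCHAR(100) NOT NULL,
             CONSTRAINT UQ_product UNIQUE (product_type_code, id),
             CONSTRAINT FK_product_type FOREIGN KEY (product_type_code)
                 REFERENCES product_type (code)
         ) ENGINE=InnoDB;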
  5. The MySQL manual indicates the primary key's name is always PRIMARY and can't be given a custom name, so it would seem you are stuck in that case. I generally use SQL Server these days, which does allow the name to be specified, and I use PK_<tablename>. I use a similar format for other constraints, just with different prefixes/suffixes. How you name them doesn't really matter though; many people just use auto-generated names and it works just fine.
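     For instance, a minimal sketch of that convention in SQL Server syntax (the table and columns are hypothetical):

         -- Explicitly naming the primary key and a foreign key using the
         -- PK_/FK_ prefix convention mentioned above.
         CREATE TABLE orders (
             id          INT NOT NULL,
             customer_id INT NOT NULL,
             CONSTRAINT PK_orders PRIMARY KEY (id),
             CONSTRAINT FK_orders_customers FOREIGN KEY (customer_id)
                 REFERENCES customers (id)
         );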
  6. If column 3 is unique across the entire table, then MySQL only has to search column 3 using its unique index to find the one matching row on your join. Column 1 and column 2 become essentially redundant and you could just exclude them entirely.
  7. If that column by itself is unique, then any set of columns that includes it will automatically be unique; there's no real need for an extra key.
  8. Don't be afraid of having to re-factor things in the future. You don't have to get the perfect layout from the get-go. I find that even when I tried to plan for the future and design accordingly, I'd usually miss something and end up doing some refactoring anyway. It's best to just wait until you know what you need rather than guess at what you think you'll need. The book/component table structure sounds fine. If in the future you decide to split it up further, you can. If in the future you decide to do pages/sections, you could probably do so with relatively few changes to the tables.
  9. Aka, poor implementation. No JS is necessary to avoid loading the full-sized version; you just link it separately:

         <a href="full-size.png"><img src="thumb.png"></a>

     I personally wouldn't, no. I have a dual-screen desktop, so I'd open the book on one screen and Photoshop on the other. In the particular case of Photoshop it seems unlikely to me in general anyway. The person would need a desktop to run Photoshop on anyway, so if they were trying to follow along and learn, why not just open it in a browser on the desktop?

     Right, but if you're loading appropriately sized images then 30 images loading shouldn't really be a big deal. A full-size 1920x1080 image scaled down to roughly phone-screen size should only be around 20-100k. 100k * 30 = ~3MB, and 3MB on an average mobile connection would take all of 1 second to download.

     It's basically illegible, sure, but that's where the previous point of having mobile-friendly versions comes in. Alternatively, link the illegible version to a full-sized version they can load and zoom/pan around on demand.

     A lot of people do things on mobile now, so it's certainly worth considering that market, but in my experience people still acknowledge when something isn't really a good fit for mobile and will move to a desktop/tablet in those scenarios (if possible). I have a friend that does practically everything from her phone, mainly because for a long time she had nothing else. From time to time however she will come by to use my computer for things because they are just not mobile-friendly tasks and she recognizes that. Most people I know are still rational about what is and isn't a good fit for mobile, so I think your fear of everyone demanding a refund because your site isn't 100% mobile friendly is irrational. Sure, there may be some because people can be dicks, but that's part of business.

     If your books deal with teaching software and that software is primarily a desktop thing, I'd wager most people will interact with the site from a desktop. I'd make some considerations for mobile (responsive layout, smaller images) for those who might want to read some on their phone while away from a desktop (ie, commuting) but wouldn't spend a ton of time up front trying to make that experience perfect. I'd push that off until later when everything else is up and running and more time is available to focus on it and/or real customers start requesting it.
  10. If you think the entire screen is necessary to understand the content then so be it, and the user's choice of device isn't really your concern. The user can either choose to struggle on their device or move to a more capable device. Your focus needs to be on determining whether the entire screen truly is necessary, or if it's just a "nice to have". For a desktop user it might be nice to see the whole thing, but maybe they only really need to see the top left corner and bottom right corner and you can cut out all the middle stuff. Something I do frequently when creating screenshots is to resize my windows so they are as small as possible to remove wasted empty space, then take the screenshot. If you can't pre-emptively make the window smaller, cut the middle part out later and leave some jagged edges indicating it was cropped.
  11. That's just due to a poor implementation or a slow connection, it sounds like. I've had larger galleries load in much less time. If you properly thumbnail your images (ie, not just set the width/height in the html/css) then things should load fairly easily and quickly on any sort of modern connection. Then you link those thumbnails to the full-size version so that it will only load if the user clicks on it.

     In any event, this thread sounds to me like a case of premature optimization. You're worried about performance issues without having any real data showing there is a problem to begin with. 10-12 images and whatever text is pretty much nothing for the modern web for most people on the desktop. On mobile, size can still be an issue, so it can help to reduce the image load. Start by just employing one of the lazy-loading libraries and then see how things work before investing more time into something more complicated. You're already worried about images being illegible on mobile, so test it and see. If they are, then maybe you need to create new images or re-write the text to accommodate mobile, which might make the loading problem moot.

     That depends a lot on how the screenshot is composed. If you're taking a screenshot of a full 1920x1080 monitor to show the pivot table sidebar then yea, it'll be hard to read. If you crop that to just the sidebar and eliminate wasted space to show the specific options, however, then there shouldn't be an issue. This is something that the <picture> element mentioned by maxxed a few times could help with. Aside from just different resolution images for different devices, you could have entirely different images. Maybe the desktop users get the full 1920x1080 shot because it's similar to what they might see, but mobile users get the cropped version so the pivot table options are legible.
  12. Exceptions are different from errors, and there's a separate function to install a handler for them: set_exception_handler. That kind of handler is only for doing something like logging the error, however; you can't use it to ignore the exception and continue. If you want your script to continue executing despite the exception, then you need to catch it with a try/catch block and handle it appropriately there.
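     A minimal sketch of the difference (the failing JSON string is just an arbitrary example):

         <?php
         // A global handler can log uncaught exceptions, but the script
         // still terminates after it runs.
         set_exception_handler(function (Throwable $e) {
             error_log('Uncaught exception: ' . $e->getMessage());
         });

         // To keep executing despite the exception, catch it where it happens.
         try {
             $data = json_decode('{bad json', true, 512, JSON_THROW_ON_ERROR);
         } catch (JsonException $e) {
             $data = null; // handle the failure and carry on
         }
         echo "Still running\n";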
  13. You want the Modulo operator.
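     For example (a small, self-contained sketch):

         <?php
         // The modulo operator (%) gives the remainder after division.
         echo 10 % 3, "\n";               // prints 1
         for ($i = 1; $i <= 10; $i++) {
             if ($i % 5 === 0) {          // true for every 5th value
                 echo "$i is a multiple of five\n";
             }
         }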
  14. I had to change that to posix_kill(posix_getpid(), 11); as the SIGSEGV constant was not recognized. After doing that, and the rest of the stuff mentioned here (ulimit -c unlimited, systemd LimitCORE=infinity, core_pattern change) I was able to successfully get a core dump on ubuntu.

         root@web1:/tmp# ls -al core*
         -rw------- 1 root www-data 278056960 May 11 11:24 coredump-php-fpm7.3.19660
         root@web1:/tmp# gdb -c ./coredump-php-fpm7.3.3256 /usr/sbin/php-fpm7.3
         [...]
         Core was generated by `php-fpm: pool kicken '.
         Program terminated with signal SIGSEGV, Segmentation fault.
         #0  0x00007f0b38a02187 in kill () at ../sysdeps/unix/syscall-template.S:78
         78      ../sysdeps/unix/syscall-template.S: No such file or directory.
         (gdb) bt
         #0  0x00007f0b38a02187 in kill () at ../sysdeps/unix/syscall-template.S:78
         #1  0x00007f0b2a4ddde3 in ?? () from /usr/lib/php/20180731/posix.so
         #2  0x000055d5894844eb in ZEND_DO_ICALL_SPEC_RETVAL_UNUSED_HANDLER () at ./Zend/zend_vm_execute.h:649
         #3  execute_ex (ex=0xcb8) at ./Zend/zend_vm_execute.h:55503
         #4  0x000055d5894886f3 in zend_execute (op_array=op_array@entry=0x7f0b356800e0, return_value=0x0, return_value@entry=0x7f0b19e77b60) at ./Zend/zend_vm_execute.h:60939
         #5  0x000055d5893f93a2 in zend_execute_scripts (type=type@entry=8, retval=0x7f0b19e77b60, retval@entry=0x0, file_count=895606832, file_count@entry=3) at ./Zend/zend.c:1568
         #6  0x000055d589399380 in php_execute_script (primary_file=0x7ffd8f44a570) at ./main/main.c:2639
         #7  0x000055d58925299b in main (argc=<optimized out>, argv=<optimized out>) at ./sapi/fpm/fpm/fpm_main.c:1951
         (gdb)
  15. If you're using systemd, you may need to configure the ulimit equivalent there: LimitCORE=infinity. Create /etc/systemd/system/php7.3-fpm.service.d/coredump.conf (you may need to adjust the directory name if your service name is not php7.3-fpm) with the content:

         [Service]
         LimitCORE = infinity

     Reload the configuration with /bin/systemctl daemon-reload then restart the FPM service.
  16. I usually use it when dealing with mysql databases, but that's not very often. I'm not sure what you're expecting to do with it. It's fairly easy to set up and get connected to a DB so you can run queries and look through your tables and data. It has some design tools to help plan and diagram your database and the relationships between tables, which take a little more effort to learn, but that's only if you want to use them. I haven't used phpMyAdmin in a long time so I'm not sure I could really compare the two, but I personally prefer having a separate desktop application like Workbench over a web app like phpMyAdmin. To use Workbench you need to be able to connect to the database server directly or via an SSH tunnel. On shared hosting that may or may not be possible, which is why many of them provide something like phpMyAdmin. For your own private server it shouldn't be a problem.
  17. Did you check the ulimit setting? If your script is something you just run from the CLI, you don't really need a core dump though; just run it using gdb.

         kicken@web1:~$ gdb /usr/bin/php7.3
         GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
         [...]
         Reading symbols from /usr/bin/php7.3...(no debugging symbols found)...done.
         (gdb) run yourScript.php
         Starting program: /usr/bin/php7.3 yourScript.php
  18. You're seeing those characters because the data is UTF-8 encoded (which is generally a good thing) but whatever tool you're using is not interpreting it as UTF-8. If you really want to convert it, then maybe try mb_convert_encoding; however, the better thing to do is fix (or replace) your tool with one that understands UTF-8 and shows the data properly.
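     If you do go the conversion route, a minimal sketch (the target encoding here is Windows-1252 purely for illustration):

         <?php
         // Convert a UTF-8 string to another encoding. Generally it's better
         // to fix the tool so it reads UTF-8 natively instead.
         $utf8  = "Crème brûlée";
         $other = mb_convert_encoding($utf8, 'Windows-1252', 'UTF-8');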
  19. Ok.

         foreach ( $dept_codes as $d ) {
             $production_history[$d] = [];
         }

     This can be replaced with array_fill_keys:

         $production_history = array_fill_keys($dept_codes, []);

     Next up,

         foreach ( $dept_codes as $d ) {
             //loop through returned db rows
             foreach ( get_production_history( $order_id, $pdo ) as $row ) {
                 if( $row['dept_code'] == $d ) {

     This ends up querying your database count($dept_codes) times when you only really need to do so once as far as I can tell. For each $dept_codes loop you're querying the database for all your results, then just filtering them out to only the ones matching the current department. Instead of doing that, just query your database once and process the results according to the department in the current row.

         foreach (get_production_history($order_id, $pdo) as $row){
             $d = $row['dept_code'];
             //set start time
             if ( !in_array ( $row[ 'id' ], $production_history[ $d ]) && $row[ 'status_id' ] === 1 ) {
                 $production_history[$d][ $row['id'] ] = array(
                     'submit_time' => $row[ 'submit_time' ],
                     'finish_time' => '',
                 );
                 //record id
                 $last_recorded = $row['id'];
             //set finished time
             } elseif ( $row[ 'status_id' ] === 3 ) {
                 $production_history[$d][ $last_recorded ]['finish_time'] = $row[ 'submit_time' ];
             }
         }

     Since your thread was prompted by anonymous functions, why not use them?

         //find records without a finish time and unset them
         foreach ( $production_history as $dept => $value )
             foreach ($value as $k => $v)
                 if (empty($v['finish_time']))
                     unset($production_history[$dept][$k]);

         //find departments without records and unset them
         foreach ( $production_history as $dept => $value )
             if (empty($value))
                 unset($production_history[$dept]);

     These could be replaced with a mix of array_filter and array_walk:

         array_walk($production_history, function(&$entryList){
             $entryList = array_filter($entryList, function($value){
                 return !empty($value['finish_time']);
             });
         });
         $production_history = array_filter($production_history);

     First, array_walk loops over the $production_history array like your foreach loop; each value is passed to the function as $entryList (using a reference so we can modify it). Then the function uses array_filter to keep only entries which have a finish_time defined.

         //if on first entry for dept print parent
         if ( $dept_arr_count == 0 ) {
             //generate parent JSON entry
             $json[] = array(
                 'pID' => $dept,
                 'pName' => get_dept_name( $dept, $pdo ),
                 'pStart' => '',
                 'pEnd' => '',
                 'pClass' => 'ggroupblack',
                 'pLink' => '',
                 'pMile' => 0,
                 'pGroup' => 2,
                 'pParent' => 0, //need to set this for those that are children
                 'pOpen' => 1,
                 'pDepend' => '',
                 'pCaption' => '',
                 'pNotes' => '',
             );
         }

     The if statement check here is unnecessary the way your data is set up. $production_history is a unique array of departments. As such, this part of the loop will only run once per department and there's no need to try and track whether you're on a new department.

         $submit_time = (isset($production_history[$dept][$k]['submit_time'])) ? $production_history[$dept][$k]['submit_time'] : '';
         $finish_time = (isset($production_history[$dept][$k]['finish_time'])) ? $production_history[$dept][$k]['finish_time'] : '';

     $k is meaningless here. Its value is going to be whatever the last entry of the "find records without a finish time and unset them" loop was (or undefined in my replacement version). You're not using these variables anywhere anyway, so the entire lines could be removed.

     I'm not sure what you were attempting with these lines:

         $dept .''. $dept_arr_count+1

     You might have an operator precedence issue here; it's unclear what your goal is. Concatenation and addition have the same precedence and are processed left to right, so that expression is ($dept.$dept_arr_count)+1. You combine $dept and $dept_arr_count into a single number, then add one to it. Your spacing however, to me, suggests you intend to add one to $dept_arr_count first, then combine the two numbers. Achieving that requires a set of parentheses:

         $dept . ($dept_arr_count+1)

     In either case, the empty string is unnecessary and can be removed.

         $production_history[$dept][$k]['submit_time']
         $production_history[$dept][$k]['finish_time']

     $production_history[$dept][$k] is the same as $v, so you can just use $v['submit_time'] and $v['finish_time'] instead.

     So, with all those suggestions applied, the new code would look something like this, assuming I didn't goof up somewhere:

         //fetch production history data
         if( isset ( $action ) && $action == 'get_production_history' ) {
             $order_id = $_GET['order_id'];
             $production_history = [];
             $last_recorded = 0;
             $dept_codes = [5,6,7,8,10,11,12];
             $production_history = array_fill_keys($dept_codes, []);

             //loop through returned db rows
             foreach (get_production_history($order_id, $pdo) as $row){
                 $d = $row['dept_code'];
                 //set start time
                 if ( !in_array ( $row[ 'id' ], $production_history[ $d ]) && $row[ 'status_id' ] === 1 ) {
                     $production_history[$d][ $row['id'] ] = array(
                         'submit_time' => $row[ 'submit_time' ],
                         'finish_time' => '',
                     );
                     //record id
                     $last_recorded = $row['id'];
                 //set finished time
                 } elseif ( $row[ 'status_id' ] === 3 ) {
                     $production_history[$d][ $last_recorded ]['finish_time'] = $row[ 'submit_time' ];
                 }
             }

             array_walk($production_history, function(&$entryList){
                 $entryList = array_filter($entryList, function($value){
                     return !empty($value['finish_time']);
                 });
             });
             $production_history = array_filter($production_history);

             $json = [];
             foreach ( $production_history as $dept => $value ) {
                 //generate parent JSON entry
                 $json[] = array(
                     'pID' => $dept,
                     'pName' => get_dept_name( $dept, $pdo ),
                     'pStart' => '',
                     'pEnd' => '',
                     'pClass' => 'ggroupblack',
                     'pLink' => '',
                     'pMile' => 0,
                     'pGroup' => 2,
                     'pParent' => 0, //need to set this for those that are children
                     'pOpen' => 1,
                     'pDepend' => '',
                     'pCaption' => '',
                     'pNotes' => '',
                 );

                 //print children
                 $dept_arr_count = 0;
                 foreach ($value as $k => $v) {
                     $json[] = array(
                         'pID' => $dept .''. $dept_arr_count+1,
                         'pName' => '',
                         'pStart' => $v['submit_time'],
                         'pEnd' => $v['finish_time'],
                         'pClass' => 'ggroupblack',
                         'pLink' => '',
                         'pMile' => 0,
                         'pGroup' => 0,
                         'pParent' => $dept, //need to set this for those that are children
                         'pOpen' => 1,
                         'pDepend' => '',
                         'pCaption' => '',
                         'pNotes' => '',
                     );
                     $dept_arr_count++;
                 }
             }

             header('Content-type: text/javascript');
             print(json_encode($json, JSON_PRETTY_PRINT));
         }
  20. You won't be able to determine their shift at any given moment by checking the current time. There's not enough information to do that. If it's a system they log into and stay logged into throughout the day, then you could determine it at login and store it in a session variable. So long as they stay logged in that'd be fine. Even that however could fail if they end up logging out but then have to log in again after their shift technically ended for some last-minute stuff before going home. So the only ideal solutions are to either:

     - Let your users select the shift when making their entries. Perhaps have a supervisor review the entries for accuracy.
     - Have a defined schedule you can use to look up their shift.

     This is less of a technical problem and more of a business process problem. First, determine how their shift is determined in the business process. Many places just use a clock-in/clock-out process where at the start of their shift the employee completes a clock-in process and at the end they complete a clock-out process.
  21. You misunderstand what I mean. Yes, it sends the modified request, but that won't cause the currently displayed page to change (if you edit and re-send the document request) or trigger any javascript handlers related to the original request (if you edit and re-send an XHR request). Using that check-for-notifications request as an example again, if I edit and resend that request, it won't re-show the little popup saying there is a new notification. Likewise with my school example, edit and re-send wouldn't trigger the code that handles a successful login. That's why I had to use breakpoints to modify the original request rather than just edit and resend it.
  22. Because those are the name/value pairs for the form. Every form is going to have something different there. The one in my image is from an XHR request that this site uses to check for new replies to a thread.

     Yes. That example doesn't need breakpoints; it's a simple form where you'd just modify the DOM with the inspector tool like you mentioned above. Find the <input> tag you want to change and modify its value attribute. The school's website used a JS library to scan a QR code using a webcam and then made an XHR request with the data to perform the login. That type of situation is where you need to use breakpoints, and it's done via the Debugger tab in the XHR Breakpoints panel. Click the + to add one and enter some URL text to stop on.

     You probably saw it in the Cookies panel. Like everything else, there's nothing to stop someone from modifying that value to whatever they want. Like the product ID though, it doesn't matter much if they do. Most likely whatever they change it to would be invalid and just result in them starting a new session. If they did happen to change it to another valid session ID then they'd inherit that session. This is why session IDs need to be long, random, and should not be shared.
  23. The connection will get closed eventually, sure, but why leave it laying around unused until then? Close it when you're done with it.
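     For example, a minimal sketch assuming a PDO database connection (connection details and the query are placeholders):

         <?php
         // Release the connection as soon as you're finished with it instead
         // of leaving it open until the script ends.
         $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
         $count = $pdo->query('SELECT COUNT(*) FROM orders')->fetchColumn();
         $pdo = null; // drop the last reference so the connection closes now
         echo $count, "\n";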
  24. Click on the request and it will open all the details in a side panel. One of the tabs of that panel is Params, which shows the data that was submitted. There's lots of other info in the other panels that may be useful too.

     That depends a bit on how things are set up and what you want to do. Firefox has an Edit and Resend button you can use to craft a new request. This just sends the request and shows the response in the dev tools; it won't cause the page to change or trigger any result processing in javascript. If the form is a standard HTML form, just inspect it in the DOM and modify the values, then submit it. In the case of the school's site, the request was done via XHR, so I set a breakpoint on XHR requests (Debugger -> XHR Breakpoints) to find where the request originated from, then set another breakpoint before the XHR request so I could modify the variables used to generate the request.

     Nowhere in particular. It's just something you learn to do after being a web developer for years.
  25. If you go to the network tab of the dev tools and look at the requests, it will show you exactly what was submitted by the form. Nothing on the client side of things is safe from tampering.

     I used all these tools/techniques a couple weeks ago to "hack" my way into my niece's school platform as their javascript QR code reader wasn't working and that was the only way she had to log in. I submitted a few bad login attempts with the dev tools open to see how they were submitting the data. After that I scanned her QR code with my phone to get the data, then used the dev tools to change the data prior to submission so it was correct and get her logged in.