Everything posted by NotionCommotion

  1. I would go so far as to say "always use PDO's prepared statements until you know why you should do differently".
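     For example, a minimal sketch of what that looks like (the connection details and the users table are just placeholders):

     <?php
     // Placeholder DSN and credentials; adjust for your environment.
     $pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8mb4', 'user', 'password', [
         PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
         PDO::ATTR_EMULATE_PREPARES => false,
     ]);

     // The user-supplied value never touches the SQL string itself.
     $stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = ?');
     $stmt->execute([$_POST['email'] ?? '']);
     $user = $stmt->fetch(PDO::FETCH_ASSOC);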
  2. Hi ginerjm, I expect you are right about regretting doing so, but don't know why. Mind sharing your reasons? Thanks
  3. Thanks Kicken, Ginerjm, and Jacques1. For the temperature/humidity application, the data is used two ways: (1) quickly return the last monitored values in a given building, and (2) return aggregate values and trend values. My plan was to use a time-series database such as InfluxDB for the aggregate values and trends. I've also considered changing to PostgreSQL for this, as I understand it is more applicable than MySQL, but I "think" InfluxDB (or similar) is more applicable. Getting the most current values will occur more often, and while I could just use the one database and return the last value, I thought it might be much quicker to also store these values in a single column as JSON. The application would never try to add/delete/modify individual data in the JSON; it would just re-write the entire string every time a group of data is received for a given building, and a race condition will not occur. Admittedly, a given request would never need all the most current values in a given building, but it might need 40 out of the total 1,000 values. Don't know if this makes any more sense...
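     A rough sketch of the "re-write the whole JSON snapshot per building" idea with PDO (the building_current table and its columns are made up for illustration):

     <?php
     $pdo = new PDO('mysql:host=localhost;dbname=monitoring;charset=utf8mb4', 'user', 'password');

     // Hypothetical table: building_current (building_id PK, data JSON/TEXT).
     // Overwrite the whole snapshot whenever a new batch arrives for a building.
     $snapshot = ['Room 111 humidity' => 0.43, 'Room 111 pressure' => 0.01];
     $stmt = $pdo->prepare(
         'INSERT INTO building_current (building_id, data) VALUES (?, ?)
          ON DUPLICATE KEY UPDATE data = VALUES(data)'
     );
     $stmt->execute([2, json_encode($snapshot)]);

     // Reading the latest values back is a single-row fetch plus json_decode().
     $stmt = $pdo->prepare('SELECT data FROM building_current WHERE building_id = ?');
     $stmt->execute([2]);
     $current = json_decode($stmt->fetchColumn(), true);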
  4. Thanks Adam and Kicken. If you ask a SQL-only person, they will say to always normalize to at least the 3rd normal form and, if you have performance issues, consider doing less; however, I tend to agree with both of you. Guess I really have to understand what future use might be. If I later wanted to find all emails in which "bob" was in the CC list, storing JSON wouldn't be good; however, I don't expect I will ever need to do so, and if it later becomes a need, I could always restructure the data. Are there rules of thumb for how many pieces of data could/should be stored in JSON? For instance, for another application, I wish to store several currently monitored environmental values for a bunch of buildings. The number and type of environmental data for each building will change and is not the same for every building, and the data will be saved and retrieved on a per-building basis. Doing it this way makes it very nice, and I would expect efficient, if I never wanted to get a list of all room temperatures but only wanted to get all current data from Building #2. But what if Building #2 had 1,000 data points? Probably not a problem, right?

     building_id | data
     ------------|-----------------------------------------------------------------------------------
     1           | {"Room 321 temperature":72, "Room 222 pressure": 0.02}
     2           | {"Room 111 humidity": 0.43, "Room 111 pressure": 0.01, "Room 333 temperature": 68}
     ...
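     For comparison, a fully normalized layout would still allow per-building retrieval with a single indexed query (table and column names are hypothetical):

     <?php
     $pdo = new PDO('mysql:host=localhost;dbname=monitoring;charset=utf8mb4', 'user', 'password');

     // One row per reading instead of one JSON blob per building.
     $pdo->exec(
         'CREATE TABLE IF NOT EXISTS current_readings (
              building_id INT NOT NULL,
              point_name  VARCHAR(100) NOT NULL,
              point_value DOUBLE NOT NULL,
              PRIMARY KEY (building_id, point_name)
          )'
     );

     // All current data for Building #2, returned as ["Room 111 humidity" => 0.43, ...].
     $stmt = $pdo->prepare('SELECT point_name, point_value FROM current_readings WHERE building_id = ?');
     $stmt->execute([2]);
     $readings = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);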
  5. I have multiple databases distributed over individual servers, and each server is maintained by a different party. There is a single "master" or "parent" database which includes a database_child_unique_identifier column, which is some UUID that uniquely identifies each child database (i.e. database_child1, database_child2, etc.). When a record is added to a child table, a corresponding record must be added to the master table. An application exists on all the child servers which will allow a record to be inserted, and a REST API will be used to add a corresponding record to the master database. The IDs will be up to 32 characters long, obviously must be unique if they are primary keys, and will be foreign keys of other tables in their corresponding databases. The ID of this record is not immutable; the application will also allow it to be changed and will use the REST API to update the master table (along with cascading updates for the foreign keys). Lastly, the ID has "meaning". When adding a record to a child database, the user must know the ID is valid, and the master server will use this ID to obtain data from yet other distributed servers via a SOAP interface. It is these other servers that ultimately define the value of the ID; the IDs are unique within each, and are under the control of the user who maintains each child database. Using a natural key meets all my needs; however, it breaks many of the best practices of a primary key (not being immutable, having meaning, and some would argue being too long). Alternatively, I can add a surrogate PK to each table, make the previous ID which has meaning UNIQUE, and pass both of them to the master server. Other than being more "proper", however, I don't know what value this will bring. Please provide recommendations.

     database_master.table_parent
     -id (PK)
     -database_child_unique_identifier (PK)
     -data

     database_child1.table_child
     -id (PK)
     -data

     database_child2.table_child
     -id (PK)
     -data

     database_child3.table_child
     -id (PK)
     -data

     ...

     database_childN.table_child
     -id (PK)
     -data
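     A sketch of the surrogate-plus-unique-natural-key alternative for a child table (the DDL is illustrative only; natural_id is my name for the meaningful, user-controlled ID):

     <?php
     $pdo = new PDO('mysql:host=localhost;dbname=database_child1;charset=utf8mb4', 'user', 'password');

     // Surrogate PK used for joins and foreign keys; the meaningful ID is kept UNIQUE
     // so it can change without cascading through every referencing table.
     $pdo->exec(
         'CREATE TABLE IF NOT EXISTS table_child (
              id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
              natural_id VARCHAR(32)  NOT NULL UNIQUE,
              data       TEXT
          )'
     );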
  6. Yes, I know that being tricky and storing multiple pieces of data in a single column is almost always a bad idea. But is there ever a time for it? I have an email application where the user enters their default CC list under their user profile, and then, on a per-email-sent basis, their actual CC list. For instance, their default email CC list might be: mary@example.com; bob@example.com; linda@example.com; But when they compose their email, they change the email CC list to: beth@example.com; bob@example.com; Would it ever be a good idea to store the lists as semicolon-separated values, or should this data always be moved to another table?
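     If the CC list did get moved out to its own table, I picture something like the following (table and column names are just for illustration):

     <?php
     $pdo = new PDO('mysql:host=localhost;dbname=mail;charset=utf8mb4', 'user', 'password');

     // One row per recipient instead of a delimited string on the email row.
     $pdo->exec(
         'CREATE TABLE IF NOT EXISTS email_cc (
              email_id   INT NOT NULL,
              cc_address VARCHAR(254) NOT NULL,
              PRIMARY KEY (email_id, cc_address)
          )'
     );

     $stmt = $pdo->prepare('INSERT INTO email_cc (email_id, cc_address) VALUES (?, ?)');
     foreach (['beth@example.com', 'bob@example.com'] as $address) {
         $stmt->execute([123, $address]); // 123 = hypothetical email id
     }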
  7. But the webserver already has that mapping built in, so it's not really necessary. What's not necessary?

     header("X-Sendfile: {$file_random_name}");
     header("Content-Type: {$mime}");
     header('Content-Disposition: attachment; filename="'.$file_name_with_extention.'"');
  8. I typically validate the file extension, and then store the validated filename with extension in the database and use it in the header when someone downloads the file. Why append the extension to the filename?
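     For what it's worth, the extension check I'm describing is roughly the following (the whitelist is just an example):

     <?php
     // Rough sketch of validating an uploaded file's extension against a whitelist.
     $allowed = ['pdf', 'png', 'jpg', 'jpeg', 'docx'];

     $originalName = $_FILES['upload']['name'] ?? '';
     $extension    = strtolower(pathinfo($originalName, PATHINFO_EXTENSION));

     if (!in_array($extension, $allowed, true)) {
         exit('File type not allowed');
     }
     // Store the validated name (or a random name plus this extension) in the database,
     // and echo it later in the Content-Disposition header on download.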
  9. Maybe none. I just look at my Apache log and see suspect IPs attempting access, and it stresses me out.
  10. First of all, I acknowledge that my solution below is just a weak crutch for not having better security, but it might be all I am capable of... Stupid idea? If even worth pursuing, any recommendations for the 3rd party service to query bad IPs?

      <?php
      //index.php
      function verifyWhiteListedIP($ip) {
          // Query the local database to ensure that this IP has recently been verified as not being bad, and return true or false.
          return true; //or false
      }
      function verifyIP($ip) {
          // Query some 3rd party DB to see if it has been blacklisted, and return true if okay, else delete from the whitelist DB and return false.
          return true; //or false
      }
      function confirmWhiteListIPs() {
          // Will be called on a 24 hour cron job, and will verify whitelisted IPs are still nice using the above mentioned 3rd party DB, and delete them if not.
      }
      if (verifyWhiteListedIP($_SERVER['REMOTE_ADDR']) || verifyIP($_SERVER['REMOTE_ADDR'])) {
          //Return HTML, JSON, etc.
      }
      else {
          sayGoodbye();
      }
  11. Enough said. Thank you.
  12. My actual code is shown below. display() is a controller method. myObj() and getProp() in my fictional example are myosticket() and getTopics() (I should have called it "myClass" and not "myObj" in my fictional example). The myosticket class has multiple methods; however, for my needs in this single display() method, only one is required. Is it better implementing it the way I show below, or is it okay to chain the method call directly onto the creation of the myosticket object?

      public function display() {
          $osticket = new myosticket();
          $data = array(
              'data' => empty($_POST)
                  ? array('name'=>null, 'email'=>null, 'topicId'=>0, 'subject'=>null, 'message'=>null, 'i_am_human'=>null)
                  : $this->stripArray($_POST, array('name','email','topicId','subject','message','i_am_human')),
              'topics' => $osticket->getTopics(['common','corporate']),
              'title' => 'Contact Us',
              'recaptcha' => $this->getRecaptcha()
          );
          $this->displayPage($data, dirname(__DIR__).'/templates', 'default.html');
      }
  13. If I only need to perform a single method on an object, is there anything wrong with chaining a new object?

      $obj = new myObj();
      $obj_prop = $obj->getProp();

      $obj_prop = myStaticObj::something()->getProp();

      $obj_prop = new myObj()->getProp();   //Error?
      $obj_prop = (new myObj())->getProp();
  14. Are cookies disabled on your browser? If so, enable them. If not, try the following:

      <?php
      session_start();
      echo('session_name(): <pre>'.print_r(session_name(),1).'</pre>');
      echo('session_id(): <pre>'.print_r(session_id(),1).'</pre>');
      echo('session_get_cookie_params(): <pre>'.print_r(session_get_cookie_params(),1).'</pre>');
      echo('$_COOKIE: <pre>'.print_r($_COOKIE,1).'</pre>');
      echo('$_SESSION: <pre>'.print_r($_SESSION,1).'</pre>');
      $_SESSION['test']=time();
  15. First, you need to calculate how many employees you have per hour. You can do this either by creating a fancy query or by retrieving your data as you show it and iterating over it to get the results. Then, I would recommend a JavaScript plugin such as http://www.jqplot.com/examples/barTest.php to create your graph.
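      A rough sketch of the iteration approach (I'm assuming each row has clock_in and clock_out timestamps; adjust to your actual columns):

      <?php
      // Count how many employees were on site during each hour of the day.
      $rows = [
          ['clock_in' => '2016-05-01 08:15:00', 'clock_out' => '2016-05-01 12:45:00'],
          ['clock_in' => '2016-05-01 09:00:00', 'clock_out' => '2016-05-01 17:00:00'],
      ];

      $perHour = array_fill(0, 24, 0);
      foreach ($rows as $row) {
          $start = (int) date('G', strtotime($row['clock_in']));
          $end   = (int) date('G', strtotime($row['clock_out']));
          for ($hour = $start; $hour <= $end; $hour++) {
              $perHour[$hour]++;
          }
      }
      // $perHour can then be sent to the client as JSON and fed to jqPlot's bar renderer.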
  16. Yea, I guess I should just get in the habit of doing so.
  17. Had some JS that wasn't working, and it turned out I had whitespace at the end of a PHP file. To fix it and other bad files, I came up with the following. See any problems with it? Yea, I know that it surely could be done at the command line, but it made me feel safer doing it this way. Also, maybe I should just get rid of the closing PHP tags altogether, but that still leaves the issue with opening tags. Thanks

      <?php
      $dir = dirname(dirname(dirname(__FILE__))).'/classes';
      echo($dir."\n\n");
      $dir = new DirectoryIterator($dir);
      foreach ($dir as $fileinfo) {
          if (!$fileinfo->isDot() && $fileinfo->getExtension()=='php') {
              $file = $fileinfo->getPathname();
              $content = file_get_contents($file);
              if (ctype_space(substr($content, 0, 1)) || ctype_space(substr($content, -1))) {
                  echo("TRIM: $file\n");
                  file_put_contents($file, trim($content));
              }
          }
      }
      ?>
  18. Thanks Maxxd. Yea, I agree those are the biggies which a bad design could bring about. But beyond "just doing things right", are there things a really good database designer could do which would make the programmer's life better? I only bring up the super/sub table design pattern because that is where I have witnessed it, but I expect there are more opportunities. Yes, I know this is a database/SQL question, but it has PHP implications, so please stay with me. Say you have three tables: teachers, students, and parents, and each has a first name, last name, username, email, bla, bla, bla, and also a couple of fields unique to each. Would you create a separate table for each? I think most PHP developers would. The design is normalized so we did our duty, right? No duplicated data (feel free to chime in, benanamen), right? So, all is good... But now we want to link all these types of people (i.e. teachers, students, parents) to their favorite movie. Well, that is three separate queries, one for each. Do we embed different queries in the application for each, and be forced to maintain them? Do we get fancy (i.e. complicated) and have the application create the query? Or should the database developer have the foresight to create a super table for us so we could just JOIN to that? Have these questions not been asked before, or am I just asking things which aren't important?
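      A sketch of the super/sub-table idea with this example (table names, and the favorite_movies table in particular, are made up):

      <?php
      $pdo = new PDO('mysql:host=localhost;dbname=school;charset=utf8mb4', 'user', 'password');

      // Shared columns live in the super table; each sub table keeps only its unique
      // fields plus a PK that is also a FK back to people.id.
      $pdo->exec('CREATE TABLE IF NOT EXISTS people (
          id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          first_name VARCHAR(50),
          last_name  VARCHAR(50),
          email      VARCHAR(254)
      )');
      $pdo->exec('CREATE TABLE IF NOT EXISTS teachers (
          id        INT UNSIGNED NOT NULL PRIMARY KEY,
          classroom VARCHAR(20),
          FOREIGN KEY (id) REFERENCES people(id)
      )');
      // ...students and parents follow the same pattern...
      $pdo->exec('CREATE TABLE IF NOT EXISTS favorite_movies (
          person_id INT UNSIGNED NOT NULL,
          title     VARCHAR(200) NOT NULL,
          FOREIGN KEY (person_id) REFERENCES people(id)
      )');

      // Linking any type of person to a favorite movie is then one join, not three queries.
      $stmt = $pdo->prepare(
          'SELECT p.first_name, m.title
             FROM people p
             JOIN favorite_movies m ON m.person_id = p.id
            WHERE p.id = ?'
      );
      $stmt->execute([42]);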
  19. My bad, I didn't give you all the info. These methods are used with other child classes as well. Yea, I've been messing around with doing so. Is this considered a "right" way of doing this? Or is this considered the correct way, or should I be doing things totally differently?
  20. I recognize that you have much more expertise than me regarding programming, but respectfully disagree. I have dealt with some database designs which complicate the PHP code and some which make it very simple. I don't know for sure, but highly suspect that the database design often has the same impact regardless of the programming language used.
  21. I have an application which takes input from GET or POST, creates a controller, and invokes the appropriate task. While not exactly how it is done, it is pretty close to:

      $controller = new childController();
      $task = array_merge(['task'=>'default'], $_GET, $_POST)['task'];
      $controller->$task();

      Now, I have two tasks which are almost identical, and the only difference is one passes "foo" to documents::removeDocument(), and the other passes "bar". To implement, I did the following:

      class childController extends parentController {
          public function deleteFooDocument(){$this->deleteDocumentHelper('foo');}
          public function deleteBarDocument(){$this->deleteDocumentHelper('bar');}
      }

      <?php
      class parentController {
          protected function deleteDocumentHelper($type){
              if(isset($_POST['id'],$_POST['doc_id'])){
                  if(documents::removeDocument($type,$_POST['doc_id'],$_POST['id'])) {
                      $success=1;
                      //Ability to replace the following line with one or more lines
                      $this->getModel()->updateParentAudit($this->audit_table,$_POST['id']);
                  }
                  else {$success=0;}
                  header('Content-Type: application/json;');
                  $this->dontCache();
                  echo(json_encode(array('success'=>$success)));
              }
              else {exit($this->missingPage());}
          }
      }
      ?>

      Now, I realize one of the methods should do $this->getModel()->updateParentAudit($this->audit_table,$_POST['id']); upon successfully deleting a document, but the other should do something else, and I am struggling with how to deal with it. I think I am going down a slippery slope. Did a little research on "helper methods", and some say they are evil. Should I be doing this totally differently?
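      One possible refactor I've been toying with (not necessarily the "right" way) is to let each task pass its own follow-up as a callable, so the helper stays generic; the names mirror my code above, and the closures are made up:

      <?php
      class parentController {
          protected function deleteDocumentHelper($type, callable $onSuccess){
              if(isset($_POST['id'],$_POST['doc_id'])){
                  if(documents::removeDocument($type,$_POST['doc_id'],$_POST['id'])) {
                      $success=1;
                      $onSuccess($_POST['id']);   // per-task follow-up instead of a fixed call
                  }
                  else {$success=0;}
                  header('Content-Type: application/json;');
                  $this->dontCache();
                  echo(json_encode(array('success'=>$success)));
              }
              else {exit($this->missingPage());}
          }
      }

      class childController extends parentController {
          public function deleteFooDocument(){
              $this->deleteDocumentHelper('foo', function($id){
                  $this->getModel()->updateParentAudit($this->audit_table, $id);
              });
          }
          public function deleteBarDocument(){
              $this->deleteDocumentHelper('bar', function($id){
                  // ...do something else here...
              });
          }
      }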
  22. Thanks ignace. The same document can be attached to different entities, so added_by, deleted_by, added_at, and deleted_at need to remain in the m2m table. I am also using a surrogate key for people, companies, and projects, and have a little extra data located in superTable, such as when the entity was added, etc. My hope was not so much to debate the database design as to discuss how the schema has implications for the associated PHP code. As JonnoTheDev pointed out, "You do not want to be creating separate tables for each type as there is no flexibility in the design and as you have already found out it is hard to create the business logic." By using a super table, my PHP code to add/view/delete documents becomes almost the same, and the application becomes much simpler. I did not expect that the database schema would have such a great impact on the PHP code.
  23. While I agree that normalization is very important, I believe both of these designs are normalized up to the 3rd normal form. Where do you feel they aren't? And why do you believe the second design is no good?
  24. Each document needs to be associated with a specific person, company, or project, and not just with one of the people, companies, or projects types in general. Given this requirement, do you still feel it could be accomplished with two tables? You are absolutely correct! It is amazing how the database schema can complicate other issues.
  25. table1, table2, and table3 are People, Companies, and Projects. There are actually more, as you suggested. Currently, I've been using the first schema, but it is making my PHP complicated. It seems like the second schema I showed might result in a much simpler PHP script. You mean as I show in the second schema?