Everything posted by kicken
-
Why are the () and print operators missing from precedence manual?
kicken replied to johnmerlino1's topic in PHP Coding Help
print is not an operator; it is a language construct (like if, while, foreach, etc.). That is why it is not on the list. I'm not sure whether () would be considered an operator, but probably not, given that it is not on the list. If it were, though, it would be a unary operator and would fit near the top, similar to where clone/new sit. -
How long do I have to update my code to mysqli
kicken replied to pioneerx01's topic in PHP Coding Help
Realistically it will probably be a number of years before you have a hard time finding a host that still supports the old mysql_* functions. That doesn't mean you should take your sweet time updating the code, though. There are several reasons to upgrade as soon as possible besides the threat of the functions being removed.
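As a rough sketch of what the upgraded code can look like, here is a minimal mysqli prepared-statement query. The host, credentials, table, and column names are all made up for illustration:

```php
<?php
// Hypothetical example: replacing mysql_query() + string interpolation
// with a mysqli prepared statement. Connection details are placeholders.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$email = 'someone@example.com';

// The ? placeholder keeps user data out of the SQL string entirely.
$stmt = $db->prepare('SELECT id, name FROM users WHERE email = ?');
$stmt->bind_param('s', $email);   // 's' = bind as a string
$stmt->execute();

$result = $stmt->get_result();    // requires the mysqlnd driver
while ($row = $result->fetch_assoc()) {
    echo $row['name'], "\n";
}
```

Prepared statements alone remove the most common SQL injection mistakes, which is a bigger win than just avoiding the deprecation warnings.
-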
It is a possibility, yes. The insert could fail for various reasons. You do not need a separate try/catch for that, however, unless you want to provide detailed error messages back to the user, which is unnecessary. All the user needs is a generic 'Something went wrong' error; you would just log the detailed exception data somewhere for you to look at later. All you need is the outermost try/catch in your code; the inner one can be removed.

No, you do not want to do any escaping when using prepared statements with bound parameters. Escaping would only result in mangled data.

Wouldn't hurt, but unless you might call require 'connect.php' again somewhere else in the code, it doesn't make any real difference which you use.
-
All you have to do is make sure the include statement is within the try block.

<?php
try {
    include 'connect.php';
    //rest of your query code.
} catch (PDOException $e){
    echo 'Database operation failed';
}

If the code in your connect.php file, or any of the other code in the block, throws a PDOException then it will be handled by the catch block. Throwing and catching errors is not file specific; you can throw an error in one file and catch it in another. All that matters is that the throw is contained (directly, or indirectly via includes/function calls) within a try block at some level of execution.

If you route all your requests through a single front-end file, it is generally a good idea to include a try/catch for generic exceptions at that level and output some simple error page.

<?php
try {
    //start request processing

    //Include whatever file will handle this request
    include 'whatever.php';

    //cleanup after the request
} catch (Exception $e){
    //output a generic error page.
    //and log the error somewhere
}

Then you can just let your database code throw an exception and it will be caught at the top level, causing a generic error page to be displayed. If in some instance you want to know when the database throws an exception and handle it yourself, you can use another try/catch block within the request file's code to catch it rather than letting it bubble up to the top.
-
Does this regex need the backslash before the dot? ^\S+@\S+\.\S+$
kicken replied to appobs's topic in Regex Help
If you want to implement a DNS check you need to make sure you check for A/AAAA records as well as MX records. If a domain is missing an MX record, mail servers will fall back to the A record, so some domains might only have their A record set up and no MX records if they would point to the same IPs.

I would also suggest that if the only failure is the DNS check, you perhaps provide a way for the user to ignore that error and submit anyway. A temporary failure of DNS on your system could cause the check to fail for legitimate domains, possibly for an extended period if the failure response is cached by a DNS server somewhere in the lookup chain. I've had experience on my mail server with a particular domain whose NS records were improperly configured, so about 50% of the time a DNS lookup would fail and email to that domain would not be sent right away. Sometimes it would end up hanging around in the mail queue for several days before the DNS lookup succeeded and the mail could be sent.
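A minimal sketch of the MX check with A/AAAA fallback described above, using PHP's checkdnsrr(). The domain is just an example, and as noted, a failed lookup here may only mean a temporary DNS problem, so treat the result as advisory:

```php
<?php
// Returns true if the domain has any record a mail server could use.
// MX is checked first; A/AAAA cover domains relying on the fallback.
function domainAcceptsMail($domain)
{
    return checkdnsrr($domain, 'MX')
        || checkdnsrr($domain, 'A')
        || checkdnsrr($domain, 'AAAA');
}

if (!domainAcceptsMail('example.com')) {
    // Warn rather than hard-fail, so a transient DNS outage
    // doesn't block legitimate addresses.
    echo 'Warning: domain does not appear to accept mail.';
}
```
-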
Regarding the caching, what I typically do is cache them in a variable. They are still loaded on each request, so a change in permissions is reflected immediately, but several permission inquiries within the same request will not result in several separate queries. eg:

function checkUserPermission($permission)
{
    global $db;
    static $cache;

    $hasPermission = false;
    if (isset($_SESSION['userid'])) {
        $uid = $_SESSION['userid'];
        if (!isset($cache[$uid])){
            $permissionCheckStmt = $db->prepare('
                SELECT staff_permissions.permission_name
                FROM staff
                JOIN staff_roles_permissions ON staff.staff_roles_id = staff_roles_permissions.staff_roles_id
                JOIN staff_permissions ON staff_roles_permissions.staff_permissions_id = staff_permissions.id
                WHERE staff_id = ?
            ');
            $permissionCheckStmt->bind_param('i', $uid);
            $permissionCheckStmt->execute();
            $permissionCheckStmt->bind_result($permissionName);

            $cache[$uid] = array();
            while ($permissionCheckStmt->fetch()){
                $cache[$uid][] = $permissionName;
            }
        }

        $hasPermission = in_array($permission, $cache[$uid]);
    }

    return $hasPermission;
}
-
I'm thinking maybe something like this:

SELECT `products`.`id` AS `pid`
     , `products`.`prod_name`
     , SUM(CASE WHEN quote_responses.purchased = 0 THEN 1 ELSE 0 END) as committed
     , SUM(CASE WHEN quote_responses.purchased = 1 THEN 1 ELSE 0 END) as total_per_item
     , SUM(CASE WHEN quote_responses.purchased = 0 AND schedule.cancelled != '' THEN 1 ELSE 0 END) as total_cancelled
FROM `products`
INNER JOIN `quote_deposits` ON `quote_deposits`.`product_id` = `products`.`id`
LEFT JOIN `quote_responses` ON `quote_responses`.`id` = `quote_deposits`.`q_id`
LEFT JOIN `schedule` ON `schedule`.`deposit_id` = `quote_deposits`.`id`
WHERE `quote_deposits`.`dep_date` >= $committed_start
GROUP BY `products`.`id`
       , `products`.`prod_name`
ORDER BY `products`.`prod_name` ASC

As you said though, it's hard to know for sure without being more familiar with the tables and the data within them.
-
The security stuff is fine. The one suggestion I would make is to change your function so that rather than querying for individual permissions, it queries for all the permissions a user has and stores them in a variable cache somewhere; then, when you need to know if a user has a permission, you check that cache. That will prevent you from needing to run a bunch of queries when you do a number of different permission checks on the same request. Other than that, you just need some general code clean-up and re-factoring to get rid of things like global $db and using echo and exit for error handling. I am going to assume this will be handled when you convert it to a class later, as mentioned.
-
You might be able to use a simple INSERT INTO ... ON DUPLICATE KEY UPDATE query to handle the process rather than needing three separate queries. You can use that with the multi-insert syntax as well, so you could do just one insert per page, or fewer. Try to request as many items per page as the API will allow to minimize the number of requests you need to make.

To make it more client friendly you can combine the process with some JS and Ajax requests so you can provide status updates as the items update. If this is something you just need to do periodically rather than in response to some user action, you'd probably be better off making it a CLI script and running it via a cron task (or manually).
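A rough sketch of the multi-row INSERT ... ON DUPLICATE KEY UPDATE approach using PDO. The table and column names here are hypothetical, and it assumes `sku` has a unique index so the ON DUPLICATE KEY clause fires:

```php
<?php
// $pdo is assumed to be an existing PDO connection.
// Rows gathered from one page of API results (made-up data).
$rows = array(
    array('sku' => 'A1', 'qty' => 5),
    array('sku' => 'B2', 'qty' => 3),
);

// Build one (?, ?) group per row for a single multi-row insert.
$placeholders = implode(', ', array_fill(0, count($rows), '(?, ?)'));
$sql = "INSERT INTO inventory (sku, qty) VALUES $placeholders
        ON DUPLICATE KEY UPDATE qty = VALUES(qty)";

$stmt = $pdo->prepare($sql);

// Flatten the rows into a single ordered parameter list.
$params = array();
foreach ($rows as $row) {
    $params[] = $row['sku'];
    $params[] = $row['qty'];
}
$stmt->execute($params);
```

One statement per page of results keeps round trips to the database at a minimum, which matters more than micro-optimizing the PHP side.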
-
There are various template engines for JS, such as mustache.js, that you could use to keep the HTML neatly separated, either in the HTML document or as a single variable declaration in the script. I tend to prefer to drop the HTML templates into the HTML page and reference them by ID, something like:

<script type="text/html" id="whateverTemplate">
    <div>
        <p>Put whatever HTML you need here</p>
        <p>With template {{placeholders}} that will be replaced</p>
    </div>
</script>

var template = document.getElementById('whateverTemplate').textContent;
var templateVars = { placeholders: 'variables' };
var html = Mustache.render(template, templateVars);
-
Does this regex need the backslash before the dot? ^\S+@\S+\.\S+$
kicken replied to appobs's topic in Regex Help
That is what PHP uses behind the scenes to power the email validation filter. It may not be the same regex you saw, but it's a pretty big one. There are no flaws in it that you need to be concerned about.
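Since the filter is built in, you rarely need to write the regex yourself; a minimal sketch of using it directly:

```php
<?php
// filter_var() applies PHP's built-in email validation filter.
$email = 'user@example.com';   // hypothetical input

if (filter_var($email, FILTER_VALIDATE_EMAIL) !== false) {
    echo 'Looks like a valid address.';
} else {
    echo 'Invalid address.';
}
```

On success, filter_var() returns the filtered value rather than true, hence the explicit !== false comparison.
-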
If your sub-class's __get (etc.) method does not have anything useful to do, then just call the parent's __get method.

public function __get($property)
{
    $prop = '_' . $property;
    if (property_exists($this, $prop) && $prop !== '_request' && $prop !== '_response') {
        return $this->$prop;
    }
    return parent::__get($property);
}
-
SQL Server to MySQL (in-house server to remote Linode VPS)
kicken replied to Zane's topic in Application Design
If you want to connect from the Linux VPS directly to SQL Server you'll need to set up ODBC. If you plan to do some kind of SQL Server -> MySQL setup and then replicate to a remote MySQL on the VPS, you'll want to use the SQLSRV driver for PHP. There's no official build for 5.5 yet, but there are some third-party ones you can find. You might also be able to do an export from SQL Server to MySQL using the SQL Server management tools. A quick google search led me to SymmetricDS, which might be useful for you as well if you want to do SQL Server -> MySQL. -
Assuming the functions are returning those strings, then you first need to decode them with json_decode. After that you can access the individual properties.

$r = $test->getSummonerByName($summoner_name);
$r = json_decode($r, true);
foreach ($r as $theirname => $details){
    echo $theirname."\r\n";
    echo "ID: ".$details['id']."\r\n";
    echo "Level: ".$details['summonerLevel']."\r\n";
}
-
SELECT notification_date
     , notification_date + INTERVAL notification_duration DAY AS notification_expires
FROM teacher_notifications

No need to do any kind of date math in PHP; just let MySQL handle it.
-
Your post makes it sound like your idea of above vs. below the web root differs from what most people mean, which may lead to some confusion. Think of the folder structure as an upside-down tree with the root folder at the top and everything else moving down below it. So if your web root is defined as /var/www/example.com/public_html, then the folder /var/www/example.com/includes is considered to be above (or at the same level as) the web root. The folder /var/www/example.com/public_html/includes/ would be considered below the web root.

Placing your configuration files somewhere above the web root is ideal if possible. That way, if the server ever happened to be mis-configured and did not parse the PHP files properly, no one would be able to read them, as the web server will not serve them. If you must put them below the web root, then you need to ensure they are named with a .php extension so they are parsed by PHP and produce a blank page as output rather than dumping their contents as text.

Regardless of where you put the files, you also need to ensure that any download scripts you might have cannot be coerced into reading an arbitrary file. Even files above the web root could be read and downloaded this way if you had an exploitable download script; all someone would need to know is the path to the file.
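A sketch of one way to harden a download script against the arbitrary-file reads mentioned above. The download directory is hypothetical; adjust it to your own layout:

```php
<?php
// Only files inside this directory may ever be served.
$downloadDir = realpath('/var/www/example.com/files');

// basename() strips any directory components the user supplied,
// defeating inputs like ../../config.php.
$requested = basename(isset($_GET['file']) ? $_GET['file'] : '');
$path = realpath($downloadDir . '/' . $requested);

// realpath() resolves symlinks and '..'; verify the result is still
// inside the allowed directory before serving it.
if ($path !== false && strpos($path, $downloadDir . '/') === 0 && is_file($path)) {
    header('Content-Type: application/octet-stream');
    readfile($path);
} else {
    http_response_code(404);
}
```

The belt-and-suspenders combination of basename() plus a realpath() prefix check means even an unexpected bypass of one guard is caught by the other.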
-
SQL Server to MySQL (in-house server to remote Linode VPS)
kicken replied to Zane's topic in Application Design
You can either open the necessary ports to allow the replication, or use an SSH tunnel; you don't necessarily need to do both (unless you'd need to open ports to allow SSH). Using an SSH tunnel will give you encrypted communications and only requires an outgoing SSH connection from the in-house server to the Linode. As mentioned, you'd want to set up something to monitor the tunnel and re-establish it if necessary, however. -
You're doing it wrong. Without knowing what data exactly you're trying to store and relate, we can't say for sure what kind of table structure you should be using, but you should not be storing a serialized array. Instead you should be storing each of those numbers in the array as separate rows in the table. For example, your table would look like this:

create table mobile_hits_received (
    hitDate date
  , label char(2)
  , hitCount int
)

plus whatever other ID columns you need to relate the data, if any. Then you'd have separate rows in the table, eg:

+-----------+-------+----------+
| hitDate   | label | hitCount |
+-----------+-------+----------+
| 2014-7-16 | VU    |       13 |
| 2014-7-16 | NO    |       49 |
| 2014-7-16 | AL    |       45 |
| 2014-7-16 | LG    |       69 |
| 2014-7-17 | VU    |       12 |
| 2014-7-17 | NO    |        3 |
| 2014-7-17 | AL    |        5 |
| 2014-7-17 | LG    |        1 |
+-----------+-------+----------+

Once you re-structure your application to store data properly, you can get your sums with a simple select query:

SELECT label, SUM(hitCount)
FROM mobile_hits_received
GROUP BY label

See it in action
-
SQL Server to MySQL (in-house server to remote Linode VPS)
kicken replied to Zane's topic in Application Design
Unless you already have FTP set up, you can just use SFTP/SCP to transfer your dump file using your existing SSH setup. That saves having to set up another service on the Linode, and it adds encryption to the data, which is nice even if not strictly necessary.

As far as your replication setup, there are two options:

1. Set up proper replication with a master and a slave server. This would still require either opening the firewall up on the necessary IPs and ports, or opening up an SSH tunnel so the servers can keep in contact.
2. Set up "fake replication" by periodically mysqldump'ing your master DB, transferring the dump to the Linode and re-loading the slave DB.

The first option will keep your data the most up to date, and the updates will be more transparent and prevent possible downtime. I would recommend you try to get this option working to have the smoothest syncing and closer to real-time data on the Linode. I've never had the need to set up replication so I have not tried it, but it doesn't sound too hard to get a basic replication setup working. The official docs on the subject are here. Have a look through those as well as whatever tutorials you find.

The second option is somewhat easy, but it will lag the DB by however long you wait between updates, and unless your DB is small and loads quickly it will cause downtime whenever you are syncing. The second option is essentially a "backup and restore" rather than actual replication, so each sync will dump and reload the entire database rather than just what's changed since the last sync.

Just note also that both of these methods are one-way, read-only. You won't be able to update your inventory status from the Linode web app, such as if someone buys something from your web store. To enable that, you'd either have to go back to connecting to the master DB directly or re-visit the idea of a web-service middle-man. It sounds like you don't expect to be doing any updates, so it may not be an issue, but I wanted to point it out anyway. -
PHP's timezone setting has no effect on MySQL's date/time functions. You'd need to set the OS's timezone, or set it on your connection by issuing a SET time_zone = timezone query. I prefer to make sure I store all times in UTC; then you can use the UTC_TIMESTAMP() function in MySQL without having to worry about how its timezone is configured.
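A minimal sketch of setting the connection timezone with PDO (the DSN and credentials are placeholders). Using a numeric offset rather than a named zone avoids depending on MySQL's timezone tables being loaded:

```php
<?php
// Hypothetical connection details; adjust for your environment.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');

// Pin this connection's timezone so NOW()/CURDATE() behave predictably.
$pdo->exec("SET time_zone = '+00:00'");

// Or sidestep the setting entirely by working in UTC everywhere:
$row = $pdo->query('SELECT UTC_TIMESTAMP() AS now_utc')->fetch(PDO::FETCH_ASSOC);
echo $row['now_utc'];
```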
-
SQL Server to MySQL (in-house server to remote Linode VPS)
kicken replied to Zane's topic in Application Design
I'd probably just set it up to be accessible from the internet only by your Linode IP, via your firewall. Whether this requires placement in the DMZ or not, I dunno. Building a web service just to pass queries and results seems like unnecessary work. Another alternative would be to open an SSH tunnel between the Linode and DB servers and route the connection through there. You'd need something to monitor and re-establish the tunnel should you ever have a failure such as a temporary network outage, though. -
What you should probably do is re-design your system so that you don't have dynamic table names to start with. Instead, $survey_id would be a column in the table and your select would look something like:

$sql = "
    SELECT
        SUM(IF(`sent` != 'N', 1, 0)) as 'Emails Sent',
        SUM(IF(`completed` NOT IN ('N','paper'), 1, 0)) as 'Completed Electronically',
        SUM(IF(`completed` = 'paper', 1, 0)) as 'Completed Manually',
        SUM(IF(`completed` != 'N', 1, 0)) as 'Total Number Completed',
        SUM(IF(`remindercount` = '1', 1, 0)) as 'Reminder Sent Once',
        SUM(IF(`remindercount` = '2', 1, 0)) as 'Reminder Sent Twice',
        SUM(IF(`remindercount` = '3', 1, 0)) as 'Reminder Sent Thrice'
    FROM `tokens`
    WHERE survey_id = ?
";
$statement = $dbh->prepare($sql);
$statement->execute(array($survey_id));
$result = $statement->fetch(PDO::FETCH_OBJ);
-
An API basically provides a computer-friendly way for people to interact with your systems in a controlled manner. Say you had a database with a list of people's names, but in addition to names you had things like their phone numbers, addresses, etc. Maybe you want to allow other developers to query for people's names, but not any of the rest of the data. If you just provided them with a few PHP scripts that interacted with your database, what's to stop them from changing the code to grab the rest of the details as well? Nothing.

Not everyone uses PHP, either. What if someone developing a desktop application in C# wanted to access your list of names? Should they have to embed PHP into their app just to run your PHP script that returns the list? No, they would rather just use .NET's existing HTTP library to make a simple HTTP request and get a simple JSON response. If you want to write your end of the API in PHP, then that is up to you, but your choice of language should not affect the choices of your potential end users.

The API sits as a gateway between your data and other people. It allows users to get data safely, in a format they can easily use, regardless of what type of environment they are working in. The reason JSON is commonly used with APIs is that it is an efficient and machine-friendly way of transferring data from one system to another. Most languages at this point have some means of parsing a JSON string into a native representation (or vice versa) for further processing. JSON also contains much less bloat than something like XML, so it saves resources such as bandwidth and processing time.
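A minimal sketch of what such a gateway might look like in PHP: the endpoint exposes only the fields you choose, as JSON. All names and data here are hypothetical:

```php
<?php
// In a real app this would come from the database; hard-coded for the sketch.
$people = array(
    array('id' => 1, 'name' => 'Alice', 'phone' => '555-0100'),
    array('id' => 2, 'name' => 'Bob',   'phone' => '555-0101'),
);

// Expose only id and name; the phone numbers never leave the server.
$public = array_map(function ($p) {
    return array('id' => $p['id'], 'name' => $p['name']);
}, $people);

header('Content-Type: application/json');
echo json_encode($public);
```

The filtering step is the whole point of the gateway: consumers get a stable, language-neutral format and can never reach past it into the raw data.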
-
Whether or not someone can do something malicious with an input depends entirely on what you decide to do with that input and how you treat it. If all you're doing is an isset() check, then it doesn't matter what they input, because you are not actually using it anywhere; all you're doing is testing whether some input was sent or not. For all input, you need to consider the context in which you will be using it and apply the appropriate safeguards, such as escaping (HTML), prepared statements (SQL), new-line elimination (SMTP/HTTP header fields), etc.
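To illustrate, here is a sketch of the same input being guarded differently in each context. $pdo is assumed to be an existing PDO connection, and the header and table names are hypothetical:

```php
<?php
$input = isset($_GET['q']) ? $_GET['q'] : '';

// Header context: strip newlines to prevent header injection.
// (Headers must be sent before any output.)
header('X-Search-Term: ' . str_replace(array("\r", "\n"), '', $input));

// SQL context: never concatenate; use a bound parameter instead.
$stmt = $pdo->prepare('SELECT id FROM articles WHERE title = ?');
$stmt->execute(array($input));

// HTML context: escape special characters when echoing back.
echo htmlspecialchars($input, ENT_QUOTES, 'UTF-8');
```

Note that no single "sanitize everything" function exists; the same string needs a different treatment at each boundary it crosses.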