Everything posted by jstrike

  1. One thing you could do would be to host the PHP platform yourself and let each user host their own MySQL server. That way they can keep the data and you can keep the code... just keep a master list of which user/group connects to which database after they log in, and connect remotely (with SSL), along the lines of the sketch below.
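A minimal sketch of that lookup-then-connect flow using mysqli over SSL. The user_databases table and its columns are invented for illustration; $masterDb is your own mysqli handle to wherever the master list lives.

    // Look up which remote database belongs to this user, then connect to it.
    function connect_for_user($userId, $masterDb) {
        $userId = (int)$userId;
        $row = $masterDb->query(
            "SELECT db_host, db_name, db_user, db_pass
             FROM user_databases WHERE user_id = $userId"
        )->fetch_assoc();

        // Open an SSL connection to the user's own MySQL server.
        $link = mysqli_init();
        mysqli_ssl_set($link, NULL, NULL, "/etc/ssl/certs/ca.pem", NULL, NULL);
        mysqli_real_connect($link, $row['db_host'], $row['db_user'],
                            $row['db_pass'], $row['db_name'], 3306,
                            NULL, MYSQLI_CLIENT_SSL);
        return $link;
    }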
  2. Looking at it more closely: if the output of $WorkingDays = $row['WorkingDays']; is 232423, then that's the value coming back in one row of MySQL results, and it has nothing to do with PHP. I don't know what so_month looks like, but if it's just three columns like that, why are you using SUM() and GROUP BY on it?
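For reference, SUM() with GROUP BY collapses the matching rows on the MySQL side, so each row you fetch is already a per-group total. A sketch, guessing at so_month's column names ($sqlConn being the connection):

    // Each fetched row is already a per-month total; GROUP BY did the summing.
    $res = mysql_query("SELECT month, SUM(WorkingDays) AS WorkingDays
                        FROM so_month GROUP BY month", $sqlConn);
    while ($row = mysql_fetch_assoc($res)) {
        echo $row['month'] . ": " . $row['WorkingDays'] . "\n";
    }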
  3. Under most circumstances, if you set $WorkingDays = 0; before the loop and then do $WorkingDays += $row['WorkingDays'];, it should work. By setting it to zero first you're initializing it as a number, so there's something to add to (zero at the beginning, going up). If for some weird reason that's not working, you can explicitly tell PHP that you're adding a number, not a string, by casting the value to an integer: $WorkingDays += (int)$row['WorkingDays'];
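Spelled out as a full fetch loop (assuming $result holds your query result), a minimal version looks like:

    // Initialize as a number first, then accumulate inside the fetch loop.
    $WorkingDays = 0;
    while ($row = mysql_fetch_assoc($result)) {
        // The (int) cast guards against MySQL handing the value back as a string.
        $WorkingDays += (int)$row['WorkingDays'];
    }
    echo $WorkingDays;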
  4. If you're distributing it, and it's worth money to you to prevent it being ripped off, then obfuscate it. Professionally. Use Zend Guard or something like ionCube to lock it down. Then have it make a call to your home server to check whether its license is up to date; file_get_contents() is as good as anything else for that purpose. But without obfuscating and encrypting your code, it's worthless. Also, sending eval() code over the wire isn't just pointless (anyone can intercept it); it introduces a giant security hole that makes your code, frankly, extraordinarily dangerous for an end user to run unwittingly, because anyone who can alter what goes into that stream can take control of the user's system. And if you encrypt it somehow, someone will find the key and crack it in a day. Don't sacrifice user security for profit; you'll be counting the seconds to a lawsuit. Anyway, a little legwork will find you a better way to license, package and distribute your code without putting anyone at risk.
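For the phone-home piece, something this simple does the job once the surrounding code is obfuscated. The endpoint, $licenseKey and response format here are invented for illustration; the point is that nothing coming back ever gets eval()'d:

    // Bare-bones license ping over HTTPS; bail out if the server says no.
    $response = @file_get_contents(
        "https://licenses.example.com/check?key=" . urlencode($licenseKey)
    );
    if (trim($response) !== "VALID") {
        die("License check failed.");
    }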
  5. Pardon, but that's not really an acceptable answer; I need to know whether it can be controlled at a granular level. Can I make a master socket blocking, one child blocking, and another non-blocking? More importantly, what is the rule that governs their behavior? That's what I want to know. I know what it does in practice: it seems to inherit the parent's behavior. What I want is a logical explanation for why it behaves the way it does, and a strict rule, which isn't laid out in the PHP manual or anywhere else I could find online. That's why I'm posting here. If you don't know, don't feel obligated to respond.
  6. I started in AS2/3, so I tend to use those conventions: camelCase for most variables, an underscore prefix ($_variable) for private or local vars, STATIC_VARS_UPPERCASE, and class names beginning with an uppercase letter. Just watch out that you don't use camelCase when naming your MySQL tables, or you might be in for a surprise (table names are case-sensitive on most Linux filesystems).
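A quick illustration of those conventions (everything here is made up):

    // Class names start with an uppercase letter.
    class UserSession {
        // Constants/statics in uppercase.
        const MAX_RETRIES = 3;
        // Underscore prefix for private members.
        private $_lastError = "";

        // camelCase for methods and ordinary variables.
        public function retryDelay($attemptCount) {
            $delaySeconds = $attemptCount * 2;
            return min($delaySeconds, self::MAX_RETRIES * 2);
        }
    }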
  7. Not sure if you got your answer elsewhere yet (I'm new too and didn't realize this was the wrong place for my post, lol): Probably the easiest way to do this would be to not use a sequential auto-increment key in your MySQL table. If you do need some kind of unique handle to sort them by or make it easier to delete them, generate a long random string (20 bytes ought to do the trick) and use that as the index field.

MySQL:

    CREATE TABLE randomized_responses (
        randomID VARCHAR(20),
        response TEXT,
        PRIMARY KEY (randomID)
    );

PHP:

    // 52 upper- and lowercase letters to draw from.
    $letters = array_merge(range('A', 'Z'), range('a', 'z'));
    $randomstring = "";
    for ($i = 0; $i < 20; $i++) {
        $randomstring .= $letters[rand(0, 51)];
    }
    // Remember to run $response through mysql_real_escape_string()
    // first to clean out malicious SQL injection.
    $q = mysql_query("INSERT INTO randomized_responses (randomID, response)
                      VALUES ('$randomstring', '$response')", $sqlConn);
    if (mysql_affected_rows($sqlConn) < 1) {
        // That random key was already used; get another key and try again.
        // Putting this whole thing in a function and returning affected
        // rows makes sense if you're worried about collisions.
    }

Hope this helps...
  8. Hi, sorry for the crosspost; I'm new here and obviously asked my question in the wrong part of the forum. Can anyone help out with this knotty little problem? http://www.phpfreaks.com/forums/index.php?topic=358418.0 TL;DR: Do socket resources created by socket_accept() take on the blocking properties of the master they were spawned from, or can you get granular control over them by calling socket_set_nonblock() on them individually?
  9. Well, not that the query would take seconds (probably, barring a deadlock against an insert)... but microseconds add up. I speak from a humbling experience: I did exactly what you're talking about for a corporate intranet app in 2006. It works, and it's still running, but only for a handful of users per location, never more than 20 on at a time. It ran fast when there were a few hundred contacts in the list it searches; now, with about 10,000, it runs slowly and you can feel the drag. It's the most-complained-about product bug they're not willing to have rebuilt. And that's a tiny dictionary. I didn't know as much then... I wouldn't do it that way now, especially for an app with 1000+ users (live & learn, but don't apologize, hehe). A few points:

  * It may not help much to put an index on a VARCHAR or TEXT field in a very large table, especially if the table is user-updated and you're using InnoDB as the engine. I don't know whether we're talking about a dictionary of the English language or a few thousand words; obviously that makes a huge difference. If it's a very large dictionary, one thing you can do is partition the table by the first letter. That makes your index or table scans a lot smaller.

  * Presumably your database is going to be doing a lot of other things. If this dictionary takes inserts, you'll want it to be InnoDB for row-level instead of table-level locking, but that means much slower index scans. Also, it's a lot easier to scale in the future by putting more webservers in front of a database with plenty of extra cycles to spare than it is to replicate the database. Is the dictionary small enough to store in shared memory with APC or memcached? (There's a sketch of that after this list.) If not, what about caching it occasionally to a SQLite table? That way you're not taxing your primary database with these select requests, and it's always free to take inserts.

  * If you have to open a lot of PHP connections to MySQL in quick succession, set wait_timeout low (I like 6 seconds) and max_user_connections high (500) in your my.cnf file. PHP has a nasty habit of sometimes not cleaning up its sockets right when a script terminates. Always use mysql_close(). Don't use persistent connections - ever. The only time I use them is when I have a long-running PHP daemon that has to make calls to a remote MySQL server on the other side of the planet, and then they have to be watched really closely to make sure they're still alive.

  * If the database is under 500k or so, you could even reasonably cache it to SQLite in JavaScript and have no wait time and no extra connection overhead once the client is loaded. Done wisely, you could even cache just part of it. For example, there are 17,576 sets of results for all 3-letter combinations; maybe you don't need 10 suggestions and 4 would be enough. In that case, store the suggestions for all 3-letter combinations locally and only hit the DB on the 4th letter. If the average suggestion is 10 characters, you could store 342k of the most common results on the client side, then hope that 50% of people stop typing and take a suggestion after 3 letters. I bet Google has figures on that; heck, your whole dictionary might fit into 342k anyway!

  * Apache will be the biggest problem. You'll need plenty of spare workers running for all these microscopic little hits; when you don't have enough, Apache has to spawn a new thread and bootstrap PHP again. As a rough thumbnail, figure that usually sucks up about 2% of the whole CPU for half a second on a quad-core Xeon. That means you're exposing the entire system to being crashed by ten users who can type at 40 WPM, or one user maliciously holding down "N". Check out how Google cuts you off if you do that. Lots of extra RAM is your friend here.

This is one of those optimization problems where it's a game of inches and there's no silver bullet. The trouble is that it's the kind of little thing that can bring a perfectly healthy LAMP stack to its knees very quickly. The goal is just to shave down the response time as much as you can, using every nasty trick in the book; every processor cycle and connection saved counts. Always assume the client believes this thing can scale to at least 100x what it was built for, and will blame you if it doesn't.
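To make the APC idea above concrete, here's a minimal sketch of a prefix cache: repeat lookups are served from shared memory and only cache misses touch MySQL. The dictionary table, key name and TTL are my own placeholders.

    // Prefix-cache sketch using APC: fall through to MySQL only on a miss.
    function suggestions($prefix, $sqlConn) {
        $key = "suggest_" . strtolower($prefix);
        $words = apc_fetch($key, $hit);
        if ($hit) {
            return $words; // served from shared memory, no DB round trip
        }
        $safe = mysql_real_escape_string($prefix, $sqlConn);
        $res = mysql_query(
            "SELECT word FROM dictionary WHERE word LIKE '$safe%' LIMIT 10",
            $sqlConn);
        $words = array();
        while ($row = mysql_fetch_assoc($res)) {
            $words[] = $row['word'];
        }
        apc_store($key, $words, 300); // keep for 5 minutes
        return $words;
    }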
  10. If each request is going over HTTP to a script that runs once, there's no way to make the MySQL connection persistent. It has to be reopened every time, and PHP will close it when the script exits. IMHO, the large number of MySQL connections is just the tip of the iceberg for the problems you'll have with this. Making an ajax/HTTP call on every keystroke, opening a connection, running a full table scan on SELECT ... LIKE 'S%', then returning the result set is going to be very slow. First you're going to hit the wall when Apache starts taking 10 requests per user per second. Each of those Apache workers is going to have to run a PHP process that takes at least 15M of memory, and each PHP process is going to have to open a MySQL connection. This is a very inefficient way to do it. At the very least, you can write a daemon that stays alive, keeps a single connection open to your database, and listens on a local socket (sketched below). When requests come in, the web-facing scripts connect via socket to the daemon, which runs the DB request for them. This way you have an orderly single thread handling your DB work. Secondly, don't do the call every time the user presses a key. Do it in a batch when it's been half a second or so since the last keypress, and don't do it at all unless there are a few characters in the field. You can also improve things by having PHP cache result sets, depending on the size of the table you're searching. And if you wanted to go further, you could use sockets instead of HTTP requests to connect to the clients, and run the whole thing on the command line as a stand-alone PHP process. Not having to run Apache or do a new HTTP call with every keystroke will make it anywhere from 10x to 30x less resource-intensive.
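A stripped-down sketch of that daemon: one long-lived process, one MySQL connection, serving lookups over a Unix domain socket. The socket path, credentials and dictionary table are placeholders.

    // Long-running daemon: a single DB connection shared by all requests.
    $db = mysql_connect("localhost", "user", "pass");
    mysql_select_db("autocomplete", $db);

    $server = socket_create(AF_UNIX, SOCK_STREAM, 0);
    socket_bind($server, "/tmp/suggestd.sock");
    socket_listen($server);

    while (true) {
        $client = socket_accept($server);          // blocks until a web script connects
        $prefix = trim(socket_read($client, 128)); // the script sends a search prefix
        $safe = mysql_real_escape_string($prefix, $db);
        $res = mysql_query(
            "SELECT word FROM dictionary WHERE word LIKE '$safe%' LIMIT 10", $db);
        $out = "";
        while ($row = mysql_fetch_assoc($res)) {
            $out .= $row['word'] . "\n";
        }
        socket_write($client, $out);
        socket_close($client);
    }

The web-facing script's side is just fsockopen("unix:///tmp/suggestd.sock", -1), write the prefix, read the reply, close.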
  11. Hi, I'm new here... hoping someone can help. I've been searching for an answer for days. Main question: Do the resources returned by socket_accept() automatically take on the blocking behavior of the master socket that spawned them, or do they default to blocking mode? I'm running a socket server I wrote that uses a non-blocking master socket. Let's call it:

    $this->sock = socket_create(AF_INET, SOCK_STREAM, 0);
    socket_set_nonblock($this->sock);
    $a = @socket_bind($this->sock, "localhost", 9934);
    socket_listen($this->sock);

Now I know the socket won't block on socket_accept(). When I want to add a new client, I check if there's something to read:

    $r = array($this->sock);
    socket_select($r, $w = NULL, $e = NULL, 0, 15);
    if (count($r) == 1) {
        $newclient = @socket_accept($this->sock);
    }
    @socket_read($newclient, 1024); // this appears not to block... usually. Why not?

My question is: do I have to socket_set_nonblock($newclient) now? It seems that 99% of the time, that client won't block on read. However, the (very slim) PHP manual documentation on socket_set_nonblock() says: "Parameters: socket A valid socket resource created with socket_create() or socket_accept()". And although I then go on to socket_select() for read every time I want to use that client resource, once in a blue moon it says it can read but then blocks anyway, which kills my application. Can anyone clarify the granularity of blocking behavior on socket resources created from socket_accept()?
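For what it's worth, the belt-and-braces option the manual quote above at least permits is to set the mode explicitly on every accepted socket rather than relying on inheritance from the master:

    // Don't depend on the child inheriting non-blocking mode;
    // mark each accepted socket non-blocking yourself.
    $newclient = @socket_accept($this->sock);
    if ($newclient !== false) {
        socket_set_nonblock($newclient);
        $this->clients[] = $newclient; // $this->clients is hypothetical bookkeeping
    }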