Everything posted by gizmola

  1. Correct -- you can't divide "N/A" by a number. What do you want to happen in these cases? You could first check whether the value is numeric with a PHP function like http://php.net/manual/en/function.is-numeric.php You could also force PHP to cast "N/A" to an integer or a float, which would be 0:

      $ratio = (float)$a / $b;

      ...or you could have some code like this:

      $a = ($a == 'N/A' ? 1 : $a);
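A minimal sketch that combines those checks into one helper (the name safeRatio is mine, and returning null for non-numeric input or a zero divisor is just one possible policy):

```php
<?php
// Return a float ratio, or null when either value is non-numeric
// (e.g. "N/A") or the divisor is zero.
function safeRatio($a, $b): ?float
{
    if (!is_numeric($a) || !is_numeric($b) || (float)$b == 0.0) {
        return null;
    }
    return (float)$a / (float)$b;
}
```

Checking with is_numeric() up front is usually safer than relying on the cast-to-zero behavior, because a silent 0 can hide bad data.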
  2. That was not my point. Even if you started with constraint definitions in your CREATE TABLE DDL, MyISAM will accept those statements and blissfully ignore them. So simply altering a table to use InnoDB doesn't magically cause RI to start working. You have to go back and run those statements again after all the tables are converted, or craft ALTER TABLE statements. This is important for someone who is still learning about MySQL to understand.
  3. Let me get this straight -- you provided an example that doesn't actually match what you have, although you didn't indicate that to anyone. I then used your example to provide you a rewrite condition and rule, which of course doesn't work. Am I to understand that you did not try to translate what I gave you to match your actual configuration?
  4. The MyISAM engine is the default storage engine. It's lightweight and lacks many features that other relational databases have. In particular, it lacks referential integrity constraints and support for ACID transactions (commit or rollback), and it does not offer row-level locking. It does not provide a data cache (its query cache simply caches statements), nor a transaction log. Most databases offer a transaction log so that if the database server goes down, you can recover quickly and cleanly. MyISAM instead provides tools to check tables and indexes for corruption and repair them. This works ok so long as your database doesn't get really large, but you can certainly find horror stories out there from people with really large MyISAM databases and how it took days or weeks to recover from a server outage.

      InnoDB offers all of those features, and also implements "clustered indexes", which is a fancy term for storing all the table data in primary key order, so that any read using the primary key has also read the entire row's data, as opposed to having a separate index file which has to be referenced, followed by a separate read/seek of the datafile.

      In short, I would recommend using InnoDB for all your tables. However, many of the features I've discussed require an understanding of them, and in some cases configuration or code changes to take advantage of them. For example, if you have no constraints defined, having foreign key support doesn't do you any good; you would have to add them. You won't utilize transactions if you haven't coded your application to start a transaction and commit on success or roll back on failure. You do get row-level locking for free just by using InnoDB, which can be important if you have tables with mixed use (lots of selects, with frequent updates or deletes).

      If your server has adequate resources and you can allocate memory to the InnoDB data cache known as the "buffer pool" (see http://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html), you can have most of your queries serviced out of the memory cache, which of course is a huge performance improvement over reading from disk. I hope this helps you in the process of learning more about InnoDB.
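A minimal sketch of the commit-on-success / rollback-on-failure pattern described above. The accounts table and the in-memory SQLite connection are hypothetical, used only to keep the example self-contained; against MySQL you would point PDO at an InnoDB database instead:

```php
<?php
// Hypothetical setup: an in-memory database with two account rows.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)');
$pdo->exec('INSERT INTO accounts (id, balance) VALUES (1, 100), (2, 0)');

try {
    $pdo->beginTransaction();
    // Two statements that must succeed or fail together.
    $pdo->exec('UPDATE accounts SET balance = balance - 40 WHERE id = 1');
    $pdo->exec('UPDATE accounts SET balance = balance + 40 WHERE id = 2');
    $pdo->commit();        // both updates persist
} catch (Exception $e) {
    $pdo->rollBack();      // or neither does
    throw $e;
}
```

The point is that the application has to be written this way to benefit from InnoDB's transaction support; switching storage engines alone changes nothing.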
  5. It's going through an array named $property, looking for entries that match '/MEDIA_IMAGE_XX' and if found in that array element, creating a string which is the configured path to a directory where I assume images are to be found. If files actually exist in that directory, it then adds each entry to an array ($entry['images']) which will contain one or more arrays with the path to the file and a url. I'm assuming this is for some sort of gallery system or media management system. What specifically don't you understand about the code?
  6. Well, ok, so theoretically, what you could do is set up a redirect rule and include the "NS" (no sub-request) flag. I haven't tested this, but it's worth a try:

      RewriteCond %{QUERY_STRING} ^model=(\w+)$
      RewriteRule ^model\.php$ /product/%1.php [NS,R=301,L]
  7. If you are always providing the static URLs in your navigation and links, there is no reason to worry about the PHP pages. Neither users nor search engines will see them. The fact of the matter is that you have PHP pages for your site, and if you redirected from them, you would have an endless loop of redirects.
  8. Need a condition on that elseif () there, Psycho, and the first condition needs a () around it.
  9. Web servers have keep-alives. By setting the keep-alive to a conservative value -- perhaps 15-30 seconds or more -- these types of interactions would essentially occur within the same TCP connection so long as a player was continuing to make plays. However, if a turn is occurring in 200ms, then you're trying to micro-optimize and adding complexity. This article does a really good job of covering the options, with jQuery client-side script examples: http://techoctave.com/c7/posts/60-simple-long-polling-example-with-javascript-and-jquery
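To make the long-polling idea concrete, here is a minimal, untested sketch of the server-side wait loop; the pollForMoves name and the $fetchMoves callback are mine, standing in for however you actually query for new plays:

```php
<?php
// Hypothetical long-polling endpoint helper: hold the request open for up
// to $timeout seconds, checking periodically for new moves, and respond
// as soon as there is something to send.
function pollForMoves(callable $fetchMoves, int $timeout = 25, int $intervalMs = 500): array
{
    $deadline = microtime(true) + $timeout;
    do {
        $moves = $fetchMoves();     // e.g. query for moves newer than the client's last seen id
        if (!empty($moves)) {
            return $moves;          // respond immediately with the new data
        }
        usleep($intervalMs * 1000); // sleep briefly instead of hammering the database
    } while (microtime(true) < $deadline);
    return [];                      // timed out with nothing new; the client just reconnects
}
```

Keep $timeout below the web server's own request timeout, or the connection will be cut off before the poll completes.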
  10. Great demos. Needless to say, the next step, aside from enhancing your engine, would be to get documentation for your API. There are lots of options for generating it from your source annotations/docblocks. This recent article has some really good options you could investigate: http://www.lsauer.com/2013/05/javascript-documentation-generator.html#.UjAlLhZnTzL One thing that I don't understand is why you aren't using git version tags. Putting all the versions of your library in a /ver/... directory structure seems to miss the whole point of version control.
  11. I don't see anything egregiously wrong; however, you should put your write inside the flock() block, and also fflush() before unlocking.

      // open file for appending
      $fp = fopen("/home/orders/orders.txt", 'ab');
      // LOCK THE FILE FOR WRITING
      if (flock($fp, LOCK_EX)) {
          fwrite($fp, $outputstring, strlen($outputstring));
          fflush($fp);
          flock($fp, LOCK_UN); // RELEASE WRITE LOCK
          echo "<p>Order written.</p>";
      } else {
          echo "<p><strong>Your order could not be processed at this time. Please try again later.</strong></p></body></html>";
      }
      fclose($fp);
  12. When you import the data, you will want to save the original email in one field, and split the name and domain into separate fields (name, domain), for example. Then you should be able to exact-match using an index on any combination of fields. If you maintain a separate collection of the bad addresses, this will allow you to loop through the list and query each time for an exact match. Obviously the larger the list, the longer this will take, but each individual query will be very fast. This is an exact match so long as you have separated the host into its own field. Here is where I don't follow you: are you searching for words in an email address, or searching for bad words in the text of an email? The mongo text index may be able to help you.
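A minimal sketch of the name/domain split (the splitEmail helper is mine, and lower-casing the domain is an assumption about how you want to match, since domains are case-insensitive):

```php
<?php
// Split an address into the fields described above: the original email,
// the local part (name), and the normalized domain. Returns null when
// there is no '@' to split on.
function splitEmail(string $email): ?array
{
    $at = strrpos($email, '@');
    if ($at === false) {
        return null;
    }
    return [
        'email'  => $email,
        'name'   => substr($email, 0, $at),
        'domain' => strtolower(substr($email, $at + 1)),
    ];
}
```

With the parts stored separately, an index on (name) or (domain) lets each lookup in the loop be a fast exact match.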
  13. Mongo's basic index is the good old B-tree. It also has a hash index, a text index, and geospatial and "geohaystack" indexes for dimensional data. The text index is similar to MySQL's FULLTEXT index. I think the "document" idea can be confusing in the case of mongo. A mongo document is essentially a JSON structure, and it's quite fine for strictly hierarchical data or something like this where there's no real need for a relational model. So long as you have the memory to support it, as it's memory-mapped, the performance should be very good.
  14. You are introducing a parsing issue with multiple sets of single quotes inside an interpolated string. When you have an array element with a quoted key name and you want to interpolate it inside a string, put a block (curly brackets) around the entire variable. (Note also that mysql_query() takes the query string first and the link resource second.)

      mysql_query("INSERT INTO bx_sites_main (URL) VALUES ('{$link['link']}')", $linksav);
  15. Already you have repeated the same thing twice. What have you tried? Querying by a single piece of criteria is one of the most basic problems going. You literally could start with code from here: http://www.php.net/manual/en/pdo.prepare.php
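For what it's worth, a minimal self-contained sketch of a single-criteria prepared query; the users table and the in-memory SQLite connection are hypothetical, there only so the example runs standalone:

```php
<?php
// Hypothetical setup so the example is runnable end to end.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)');
$pdo->exec("INSERT INTO users (email) VALUES ('a@example.com')");

// The actual pattern: prepare once, bind the criteria value, fetch.
// The value is passed as a parameter, never concatenated into the SQL.
$stmt = $pdo->prepare('SELECT id, email FROM users WHERE email = ?');
$stmt->execute(['a@example.com']);
$row = $stmt->fetch(PDO::FETCH_ASSOC);
```

Swap the DSN for your MySQL connection string and the same three lines of prepare/execute/fetch carry over unchanged.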
  16. Hey, here's a "bad" word: "Dick". Well, it's not really a bad word, because in English "Dick" is a short form used for someone with the first name Richard. Dick Nixon, Dick Cavett and Dick Butkus are just a few of the myriad famous people who are popularly known by that first name. There is no way you have a valid 20k list of bad words. That is absolutely ridiculous. And even with a small list, there are variations of "bad" words that are part of other valid words. These are email addresses... so using phrases, nicknames and the like is common practice. Practically speaking, micro-optimization of something you are going to run once or twice is a complete waste of time. If it takes 3 hours to run, who cares, if you're going to run it once? However, again, there is no way there are 20k bad words. "Spam" is not a bad word -- so there's something you're not explaining about this list, and without that information, we can't really help.
  17. This is how things work. You show code, we help. You describe your requirements repeatedly and we lock the thread.
  18. What he was saying is: if you have a meal table with a column for fruit, and the possible fruits are (apple, orange, pear), and you want to get all rows regardless of the specific fruit, then there is no reason to generate this SQL:

      SELECT * FROM meal WHERE fruit = 'apple' OR fruit = 'orange' OR fruit = 'pear';

      You'll get the same result using SELECT * FROM meal. Your "any" selection should simply omit that criteria from the query entirely, and the query will work. GROUP BY and its related functions are something different entirely -- GROUP BY is for finding groups of rows so you can apply aggregate operators like COUNT, SUM, AVG etc. to those groups. When you're just looking for rows that match a criteria, GROUP BY is not going to make anything faster or better, and given the overhead, it will probably make things slower. Since it does in fact create groups of rows, it could also be delivering the wrong answer, depending on your desired outcome. Don't use it unless you really understand what it does and why -- it's not an alternative.
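For contrast, this is the kind of question GROUP BY actually answers -- summarizing rows per group rather than filtering them (a hypothetical query against the same meal table):

```sql
-- How many meals feature each fruit?
SELECT fruit, COUNT(*) AS meal_count
FROM meal
GROUP BY fruit;
```

One output row per distinct fruit, with an aggregate computed over each group -- nothing like a row filter.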
  19. With a standard Apache mod_php worker setup, Apache forks child processes to handle requests. Of course you can see this by doing a ps aux | grep apache or similar command, or you can run top and take a look at what is happening. Depending on the size of the memory allocation of your PHP scripts, you may find that as time goes on, you get a set of Apache processes with a significant memory footprint. It really depends on your code. In order to make this efficient, Apache will continue to reuse these child processes for a certain number of requests before the process is killed and a new one is created (this is configurable, of course). This is a reliable and efficient configuration, with the downside that requests for static resources like images, css and .js files also have to be served by Apache processes that have memory footprints reflecting their creation to serve a PHP request, even though (for that request) all that extra memory is not used. The net effect is that total memory becomes a fixed resource that limits the number of requests the server can handle.

      In comparison, nginx with php-fpm will utilize php-fpm to serve the PHP requests, and of course there is still the same PHP memory issue, but as nginx is not allocating the PHP memory, it is much more efficient from a memory utilization standpoint when serving the static elements of your site. As Will pointed out, you still will want to use APC in either scenario, so that your PHP scripts are coming out of a shared memory cache rather than being compiled on each request. Alternatively, you could do what he suggested and run nginx in front of Apache (it's similar to the nginx + php-fpm setup), but I lean towards using the php-fpm application server setup, since that's essentially what you are really doing in that sort of configuration. If you're more comfortable with Apache, though, as he stated, it's a proven setup to have nginx proxy the requests for PHP scripts through to Apache.

      If you do stay with Apache, it is important to go through the list of default modules and disable every single one that isn't absolutely essential, in order to minimize the amount of memory Apache requires. This is an important first step, even if you just want to stay with Apache for whatever reasons, in order to get the maximum capacity possible.
  20. Something seems to be wrong with this code:

      if (!$result)
          $AcctName=$test['AcctName'];
          $BSB1=$test['BSB1'];
          $BSB2=$test['BSB2'];
          $AccNumber=$test['AccNumber'];
          $Adviser=$test['Adviser'];
          $Bank=$test['Bank'];
      {
          die("Error: Data not found..");
      }

      That isn't even syntactically correct, but assuming we're missing something, the condition is inverted. It would be this instead:

      if ($result) {
          $AcctName = $test['AcctName'];
          $BSB1 = $test['BSB1'];
          $BSB2 = $test['BSB2'];
          $AccNumber = $test['AccNumber'];
          $Adviser = $test['Adviser'];
          $Bank = $test['Bank'];
      } else {
          die("Error: Data not found..");
      }
  21. In my experience, that doesn't have any effect on the spidering, once a site is already in their index, but it doesn't hurt to try.
  22. You'll have to wait until Google spiders your site again and rebuilds its index based on the updated information. I'd highly recommend setting up Google Webmaster Tools for your site, so you can investigate how Google sees it, and learn ways to address issues with the way your site appears in Google: https://www.google.com/webmasters/tools/home?hl=en You should read/watch this material first: https://support.google.com/webmasters/answer/35769?hl=en&ref_topic=2370419
  23. uniflare: To add to Josh's point, when you don't use code tags, one of the staff usually has to go in and edit your post and add them in order to make your post readable for the masses. If you're interested in helping others by posting answers, then I would think you would want your answers to be clear and readable, right?