Everything posted by vinny42

  1. There is no magical set of numbers to put in your config, because the requirements depend entirely on what kind of workload you have, what the database does, etc. Having 1k people online doesn't say anything about how many pageviews you are getting, what kind of work you do per pageview, and so on. How big is your database? Are you using InnoDB or MyISAM? What kind of disks does the server have? Did you run a load test to see where your server is currently struggling? Anyway, you should read some articles about tuning MySQL; there is a lot of good information at http://www.mysqlperformanceblog.com/
  2. Huh? Are you seriously saying "don't warn him that he's about to make a mistake, because he didn't ask"?
  3. Then perhaps you should look up how you can do an OR in fulltext, or do as I suggested and build a shadow index of synonyms that you can query.
  4. I mean replacing MATCH(..) + MATCH(..) with MATCH(..) AND MATCH(..). That way, MySQL may be able to deduce that records that have not been "found" by the first MATCH() don't have to be examined by the second MATCH(). If you use +, MySQL is forced to run both MATCH() expressions and add them together to see if the result is nonzero. Of course you could also just rewrite the second MATCH() so it also searches for the text from the first MATCH()...
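As a sketch of the rewrite described above (table and column names are made up for illustration):

```sql
-- Instead of adding the scores and testing the sum:
--   WHERE MATCH(title) AGAINST ('foo') + MATCH(body) AGAINST ('bar') > 0
-- combine the two conditions with AND, so the second MATCH()
-- only has to be evaluated for rows that pass the first one:
SELECT id
FROM articles
WHERE MATCH(title) AGAINST ('foo')
  AND MATCH(body)  AGAINST ('bar');
```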
  5. Have you tried not adding the MATCH() values up in the WHERE clause, and just using AND?
  6. One thing that is never good is this: AND Cat1 NOT LIKE '%prodimg%'. A LIKE whose pattern starts with a % will always trigger a sequential scan (all records must be visited and parsed; no indexes can be used to solve this). Then perhaps it's a good idea to prepare for this by creating a table of synonyms. When a product "mtb helmet" is added, you look it up in the synonyms table and see that it's actually a "mountain bike helmet", so you link the product to "mountain bike helmet" instead of "mtb helmet". Then when the user searches for "mbt helmet" you do the same thing and search instead for "mountain bike helmet". I'm a big fan of tagging products with known keywords, rather than doing fulltext searches, because someone looking for "mbt helmet" will not find anything, since "mbt" should be spelled "mtb". If you have a known list of keywords you can tell the user that the term does not exist, rather than pretend you have nothing to show, and suggest alternatives. Like a spellchecker.
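A minimal sketch of the synonym lookup, using a plain array in place of the synonyms table (the terms and the `canonical()` helper are made up for illustration):

```php
<?php
// Hypothetical synonym map; in a real setup this would be a
// synonyms table that you query when a product is added or searched.
$synonyms = [
    'mbt helmet' => 'mountain bike helmet',
    'mtb helmet' => 'mountain bike helmet',
];

// Normalize a search term to its canonical keyword, or return it
// unchanged if it is not a known synonym.
function canonical($term, array $synonyms) {
    $key = strtolower(trim($term));
    return $synonyms[$key] ?? $term;
}

echo canonical('MBT helmet', $synonyms); // prints "mountain bike helmet"
```

The same lookup runs in both directions: once when the product is tagged, once when the user searches, so both sides land on the same canonical keyword.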
  7. How about json_decode()? :-) Or use a database that can deal with JSON, like PostgreSQL.
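For the record, json_decode() turns a JSON document straight into a PHP structure (the sample data here is made up):

```php
<?php
$json = '{"name":"helmet","tags":["mtb","safety"]}';

// Second argument true => associative arrays instead of stdClass objects.
$data = json_decode($json, true);

echo $data['name'];    // prints "helmet"
echo $data['tags'][0]; // prints "mtb"
```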
  8. Or a node has been moved using the interface :-) True, or you could skip all the PHP business and use a database that can do recursion and generate JSON that PHP only has to json_decode() to get the array :-)
  9. Both these solutions are of exponential time complexity, O(2^n), and therefore very expensive in computational time: for every element you add to the dataset, the computational time theoretically doubles. Except that it doesn't. Are you sure you look for children, and not for parents? How would you do that, given that you don't know which records are children of which others before you fetch them?
  10. Post your solution, because what you were trying with group by is wrong.
  11. You can't do recursion in a join. You could argue that the recursion could be done in PHP, if the entire tree has to be loaded (and can be loaded) into memory. But my first thought would be why the query does a SELECT *, and whether there is an index on the columns used in the WHERE clause. 50 seconds is bizarrely slow for a maximum of 2800 queries, no matter what you do.
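The "recursion in PHP" approach above can be sketched like this: fetch the whole flat parent/child list in one query, then nest it in memory (the field names and sample rows are assumptions):

```php
<?php
// Build a nested tree from a flat list of rows, each carrying
// an 'id' and a 'parent_id' (null for root nodes).
function buildTree(array $rows, $parentId = null) {
    $branch = [];
    foreach ($rows as $row) {
        if ($row['parent_id'] === $parentId) {
            $children = buildTree($rows, $row['id']);
            if ($children) {
                $row['children'] = $children;
            }
            $branch[] = $row;
        }
    }
    return $branch;
}

// Stand-in for the result of a single SELECT over the whole table.
$rows = [
    ['id' => 1, 'parent_id' => null, 'name' => 'root'],
    ['id' => 2, 'parent_id' => 1,    'name' => 'bikes'],
    ['id' => 3, 'parent_id' => 2,    'name' => 'helmets'],
];

$tree = buildTree($rows);
```

One query, then pure in-memory work, instead of one query per node.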
  12. You are not going to crash the server by sending 15k emails, because you can only queue them, and when the queue gets too full it will simply tell you to calm down and wait for it to catch up. There are, however, much more important things to worry about, like how the mailserver sees this sudden avalanche of email. Most likely it will decide that you have been hacked and block you, or worse, decide that you are spamming and block you worldwide. Talk to your hosting provider, or google for a mass-mailing service to do the work for you (they know how to send emails properly).
  13. Perhaps it's easiest if I sum up how it's usually done :-) First the user enters a search parameter. Then you run a query with LIKE (or whatever) to find the records that match the search. You print a list of records, and if the list gets too long you use LIMIT and OFFSET to paginate the results. Each link you print has the ID of the record in it, so when the user clicks one, you can use a WHERE to find exactly that one record and display it for editing. When the edit is finished the form is submitted with the ID so you can save the new data. So, no seeking is done at all.
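The LIMIT/OFFSET part of the flow above boils down to one line of arithmetic (the query text, table, and $perPage value are hypothetical):

```php
<?php
$page    = 3;   // taken from the pagination link, e.g. ?page=3
$perPage = 20;

// Page 1 starts at offset 0, page 2 at 20, page 3 at 40, ...
$offset  = ($page - 1) * $perPage;

// Hypothetical paginated search query (placeholder for the LIKE term).
$sql = sprintf(
    "SELECT id, name FROM products WHERE name LIKE ? ORDER BY id LIMIT %d OFFSET %d",
    $perPage,
    $offset
);

echo $offset; // prints 40
```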
  14. Even better, give the files a proper index, like the id that you use in the HTML. That way you don't have to guess which file is which: <input type="file" name="file[acceptletter]" id="acceptletter" onchange="checkFile(this)" required value=""> <input type="file" name="file[turnitin]" id="turnitin" onchange="checkFile(this)" required value=""> Looping will still work, but now you also know which file is which.
  15. And besides that: PHP simply doesn't have any way to control access rights on a per-script basis. You can configure PHP per webserver instance, but then your code would have to obey the same restrictions. And even then there's a very good chance that you miss something as simple as a memory leak, or an fsock that acts like a fileserver to warez sites. Nah, keep the customer's code on the customer's site. Give them an HTTP-based API so they can request information and send new data to your application, but do *NOT* let them run their own code on your server.
  16. You use IF statements to check each line, but when the line doesn't match, you "break" out of the while loop. Don't do that :-)
  17. It's true that if you are going to use the user's data anyway then you might as well select it; counting to see if you can later select them would be extra work. But that said, I don't want people to get the idea that it's good practice to select records when they just want to count. Even selecting a single record can be significantly slower than doing a count, because a COUNT() can be done using only the indexes, whereas fetching data from the record itself requires accessing the datafiles. On a different note, you should *never* put the password in the query, in any way. Select it and verify it in the application. Reason: queries are often logged by the database, and unless you have complete control over those logs you have no idea who can read them. I've seen hosts who put the logs in the webroot, effectively making them visible to the world.
  18. Yes, but if you only want to count then this is a very wasteful way to do it. Do this: SELECT COUNT(*) AS number_of_rows FROM .... Then fetch one result and look at its "number_of_rows" column. That is the most efficient way to count.
  19. Because the function depends on something that must exist in the environment, the caller cannot tell the function what to use; it must know what the function needs and prepare the environment in such a way that hopefully the function will do what is expected. It is much better to pass the value of $dbc along in the parameters, so that the calling script has 100% control over what data the function uses, without having to know anything about the internals of the function.
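The difference can be sketched in a few lines; the function and the stub object here are made up, with a plain object standing in for the real connection:

```php
<?php
// Instead of:  function fetchUserCount() { global $dbc; ... }
// pass the dependency in, so the caller decides what the function uses:
function fetchUserCount($dbc) {
    // In real code this would run a query on $dbc;
    // a stub property stands in here for illustration.
    return $dbc->userCount;
}

$dbc = new stdClass();
$dbc->userCount = 42;

echo fetchUserCount($dbc); // prints 42
```

Because the connection is a parameter, you can also hand the function a test double, a read-only replica, or a second database without touching its internals.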
  20. Very good point, phpmyadmin may be warming the cache up. The filesystem will also have cached the data regardless of what MySQL caches (which is why other databases don't have a recordcache at all)
  21. If you are using prepared statements, yes. But he's not, so no. :-) http://www.php.net/manual/en/class.mysqli-result.php PDO pretends that all databases have the same API. They don't, which means that PDO's behaviour is still different from database to database, which is worse than having separate APIs. The API for PostgreSQL simply doesn't have lastinsertid(), so you cannot mistakenly use it and get false results, like you can with PDO. You are much better off understanding how the actual API works than relying on how the creator of the PDO stuff thinks he should translate the actual API to PDO's API.
  22. Why do you want to wait between records, and why do you have the need to go back and process the same record again? Explain what you are working on, not what you see as a solution to the problem (because you don't know how to solve the problem, that's why you are posting).
  23. Can you post the EXPLAIN outputs of the fast and the slow queries? (Please really post both, don't assume that they are identical.)
  24. I did not get that from your post :-) But it is absolutely true, MyISAM will silently fail lots of things that other engines do support. One of the many reasons why the whole multi-engine thing is a very bad idea.