jaymc Posted September 23, 2007

I currently have a rather nice messaging system. Usually there are 40,000-60,000 messages in the table at any one time:

ID | TO | FROM | DATE | IP | READ

Messages stay in the table for 31 days and are then automatically deleted.

What I was thinking is splitting that table into two: one table for UNREAD messages and one for READ messages. So basically, when you read a message, it gets deleted from the UNREAD table and inserted into the READ table.

The reason behind this is that I have a 3-minute refresher that checks for new messages, so that's however many members are online times a query every 3 minutes.

Just wondering if that would be a better approach?
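If it helps the discussion, here is a minimal sketch of that "move on read" idea in PHP/mysqli. The table names (messages_unread, messages_read) and column names are placeholders, not the real schema, and the rollback only actually works if the tables are InnoDB:

```php
<?php
// Hedged sketch of moving a message from UNREAD to READ on first view.
// Table and column names are placeholders; rollback requires InnoDB tables.
function mark_message_read(mysqli $db, $messageId)
{
    $messageId = (int) $messageId;      // crude sanitisation, enough for the sketch
    $db->query('START TRANSACTION');

    // Copy the row into the READ table...
    $copied = $db->query(
        "INSERT INTO messages_read (id, to_user, from_user, date, ip)
         SELECT id, to_user, from_user, date, ip
         FROM messages_unread
         WHERE id = $messageId"
    );

    // ...then remove it from the UNREAD table.
    $deleted = $copied && $db->query("DELETE FROM messages_unread WHERE id = $messageId");

    if ($copied && $deleted) {
        $db->query('COMMIT');
        return true;
    }
    $db->query('ROLLBACK');
    return false;
}
```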
fenway Posted September 24, 2007

Assuming you had an index on 'date', how many new messages would there be? It's going to add a lot of overhead to do read and write operations versus just reads.
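For reference, a hedged sketch of the single-table alternative: keep one messages table, put a composite index on the columns the 3-minute poll filters on, and let MySQL handle it. The column names (to_user, read_flag) are assumptions about your schema:

```php
<?php
// Sketch only: index the poll's WHERE columns and count unread messages.
// Column names (to_user, read_flag) are assumptions about the real schema.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'site');

// One-off schema change; don't run this on every request.
$db->query('ALTER TABLE messages ADD INDEX idx_unread (to_user, read_flag, date)');

// The query the 3-minute refresher would run per online member.
$userId = 12345; // hypothetical member id
$stmt = $db->prepare('SELECT COUNT(*) FROM messages WHERE to_user = ? AND read_flag = 0');
$stmt->bind_param('i', $userId);
$stmt->execute();
$stmt->bind_result($newMessages);
$stmt->fetch();

echo "You have $newMessages new message(s)";
```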
jaymc Posted September 24, 2007

A few thousand new messages a day?
fenway Posted September 24, 2007

Sure, but you're checking every 3 minutes, so why are you worried about speed?
jaymc Posted September 25, 2007

Because there's usually around 600 members online at any one time. 600 members each polling every 3 minutes works out to roughly 600 / 180, so 3-4 queries per second just for messages.

I dunno, maybe it's fine. I mean, 40,000 rows of messages seems like a lot to me, but MySQL probably deals with that no problem providing there are indexes... right?

There is no lag at all, but the more queries there are, the more strain on my server, which may end up affecting other things. I'm just thinking of better ways to optimize, even though this may be going over the top..?
fenway Posted September 25, 2007

If there is an appropriate index in place, you'd be surprised how quickly it will run.
jaymc Posted September 25, 2007

Well, I have a table with 3 million rows which gets a read and an update/insert for every profile view, and to be fair that is running fine, which I'm quite shocked at.

However, if I take the index off, the server will crash within 1 minute.

I wonder if it could sustain 10 million rows with heavy I/O and still perform with unnoticeable lag.
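For what it's worth, EXPLAIN will show whether a query is actually hitting the index or scanning all 3 million rows. A sketch against a hypothetical profile_views table (names are placeholders):

```php
<?php
// Sketch: use EXPLAIN to confirm the profile-view lookup uses an index.
// Table/column names (profile_views, profile_id) are placeholders.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'site');

$result = $db->query('EXPLAIN SELECT views FROM profile_views WHERE profile_id = 12345');
$row = $result->fetch_assoc();

// "key" is the index MySQL chose; "rows" is how many rows it expects to examine.
// type = ALL with key = NULL means a full table scan -- the case that crashes the box.
printf("type=%s key=%s rows=%s\n", $row['type'], $row['key'], $row['rows']);
```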
roopurt18 Posted September 25, 2007

You're taking shots in the dark at optimizing your database; which is to say there are areas you suspect are problems and want to fix them without any actual proof. You need to gather actual data about performance before you run off and willy-nilly start changing database design and structure.

Assuming you are calling a wrapper function for your database calls, this shouldn't be difficult to implement. Before and after every single DB call, check the current time and compare the two results. If any query runs longer than X seconds, where X is an arbitrary number decided by you, log the following into a separate DB table: the query, how long it took to run, the return value of debug_backtrace(), and the time that this occurred. Set this up so you can easily disable / enable logging; you don't want it to always be on as it will affect performance.

After a week or so of periodic logging you should be able to determine which queries are bringing your site down and from where they're being called.
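Something like this hedged sketch, assuming mysqli; the slow_query_log table, the 1-second threshold, and the LOG_SLOW_QUERIES toggle are all placeholders you'd pick yourself:

```php
<?php
// Sketch of the timing wrapper described above. The slow_query_log table,
// the threshold and the on/off constant are placeholders, not a real API.
define('LOG_SLOW_QUERIES', true);     // flip to false to disable logging
define('SLOW_QUERY_THRESHOLD', 1.0);  // "X seconds", chosen arbitrarily

function db_query(mysqli $db, $sql)
{
    $start  = microtime(true);
    $result = $db->query($sql);
    $took   = microtime(true) - $start;

    if (LOG_SLOW_QUERIES && $took > SLOW_QUERY_THRESHOLD) {
        // Record the query, its duration, where it was called from, and when.
        $trace = print_r(debug_backtrace(), true);
        $stmt  = $db->prepare(
            'INSERT INTO slow_query_log (query, seconds, backtrace, logged_at)
             VALUES (?, ?, ?, NOW())'
        );
        $stmt->bind_param('sds', $sql, $took, $trace);
        $stmt->execute();
    }

    return $result;
}
```

Then route your existing calls through db_query() and review the log table after a week or so.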
jaymc Posted September 25, 2007

Thank you! However, if the site is under heavy load, queries that usually take 0.005 seconds can take 10 seconds. That is no use, as it doesn't tell me which queries are poorly written or structured, only that they fail when the server is failing, which you would expect.

To be honest, at the moment there are no queries that are causing severe problems, but it's not as powerful as I'm going to make it, so...
fenway Posted September 25, 2007

There are many blogs on optimization; they deal with nitty-gritty things like in-memory tables, row caches, etc. It gets really ugly. Trust me, you don't want to go there.