
messaging system


jaymc


I currently have a rather nice messaging system

 

Usually there are 40,000-60,000 messages in the table at any one time

 

ID | TO | FROM | DATE | IP | READ

 

Messages stay in the table for 31 days and then they are automatically deleted
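
For what it's worth, that 31-day purge can be a single statement fired from a daily cron job. A minimal sketch, assuming the table is called messages, the DATE column is a DATETIME, and PDO is available (all of which are my guesses):

<?php
// cleanup_messages.php -- run once a day from cron.
// Table name `messages` and the DATETIME `date` column are assumptions.
$db = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Remove everything older than 31 days in one pass.
$purged = $db->exec("DELETE FROM messages WHERE `date` < NOW() - INTERVAL 31 DAY");
echo "Purged $purged expired messages\n";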

 

What I was thinking is splitting that table into two tables.

 

How about a table for NEW messages and a table for READ messages?

 

So basically, when you read a message, it is deleted from the UNREAD table and inserted into the READ table
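
If you go that route, the move wants to happen inside a transaction, so a failure between the INSERT and the DELETE can't lose or duplicate a message. A rough sketch, assuming hypothetical InnoDB tables messages_unread and messages_read with identical columns:

<?php
// Sketch: mark a message as read by moving it between two tables.
// Table names messages_unread / messages_read are hypothetical.
function markAsRead(PDO $db, $messageId)
{
    $db->beginTransaction();
    try {
        // Copy the row across, then remove the original.
        $copy = $db->prepare("INSERT INTO messages_read
                              SELECT * FROM messages_unread WHERE id = ?");
        $copy->execute(array($messageId));

        $drop = $db->prepare("DELETE FROM messages_unread WHERE id = ?");
        $drop->execute(array($messageId));

        $db->commit();
    } catch (Exception $e) {
        $db->rollBack();
        throw $e;
    }
}

Note the atomicity only holds on InnoDB; with MyISAM the two statements can still be torn apart by a crash.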

 

The reason behind this is that I have a 3-minute refresher that checks for new messages, so that's one query per online member every 3 minutes. Just wondering if that would be a better approach?


Because there's usually around 600 members online at any one time

 

600 checks / 180 seconds ≈ 3.3 queries per second just for messages

 

I dunno, maybe it's fine. I mean, 40,000 rows of messages seems a lot to me, but MySQL probably deals with that no problem providing there are indexes, right? There is no lag at all, but the more queries, the more strain on my server, which may end up affecting other things.
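
On the index point: the 3-minute poll stays cheap only if MySQL can answer it from an index rather than scanning the table. A sketch of what I mean, assuming the single-table design and the columns above (the index name and connection details are mine):

<?php
// One-off setup (index name idx_to_read is my invention):
//   ALTER TABLE messages ADD INDEX idx_to_read (`to`, `read`);
$db = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

$memberId = 123; // the logged-in member's ID (example value)

// The poll: count unread messages for this member.
$stmt = $db->prepare("SELECT COUNT(*) FROM messages WHERE `to` = ? AND `read` = 0");
$stmt->execute(array($memberId));
$unread = (int) $stmt->fetchColumn();

// With the composite index this is an index lookup, so 600 of these
// every 3 minutes (~3.3/sec) is trivial whether the table holds
// 40,000 rows or 400,000.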

 

I'm just thinking of better ways to optimize, even though this may be going over the top?


Well, I have a table with 3 million rows which is read and either updated or has a new row inserted on every profile view, and to be fair that is running fine, which I'm quite shocked at

 

However, if I take the index off, the server will crash within 1 minute

 

I wonder if it could sustain 10 million rows under heavy I/O and still perform with unnoticeable lag
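
That crash-without-the-index behaviour is expected: every profile view becomes a full scan of 3 million rows. You can check whether a given query actually uses an index before you rely on it by running it through EXPLAIN; a quick sketch (the profile_views table and its columns are placeholder names):

<?php
// Sketch: confirm a query hits an index with EXPLAIN.
// `profile_views` and `profile_id` are placeholder names.
$db = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

$plan = $db->query("EXPLAIN SELECT * FROM profile_views WHERE profile_id = 42")
           ->fetchAll(PDO::FETCH_ASSOC);

foreach ($plan as $row) {
    // 'key' is the index MySQL chose; 'rows' estimates rows examined.
    // type=ALL with key=NULL means a full table scan -- the crash case.
    printf("type=%s key=%s rows=%s\n", $row['type'], $row['key'], $row['rows']);
}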


You're taking shots in the dark at optimizing your database; that is, there are areas you suspect are problems and you want to fix them without any actual proof.

 

You need to gather actual data about performance before you run off and willy-nilly start changing database design and structure. Assuming you are calling a wrapper function for your database calls, this shouldn't be difficult to implement.

 

Before and after every single DB call, check the current time and compare the two results.  If any query runs longer than X seconds, where X is an arbitrary number decided by you, log the following into a separate DB table: the query, how long it took to run, the return value of debug_backtrace(), and the time that this occurred.  Set this up so you can easily disable / enable logging; you don't want it to always be on as it will affect performance.  After a week or so of periodic logging you should be able to determine which queries are bringing your site down and from where they're being called.
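
A minimal sketch of that wrapper, assuming PDO; the one-second threshold, the slow_query_log table, and the function name are all placeholders you'd pick yourself:

<?php
// Sketch of the timed wrapper described above.
// Threshold, table, and function names are placeholders.
define('SLOW_THRESHOLD', 1.0);  // seconds -- the arbitrary X
define('QUERY_LOGGING', true);  // flip to false to switch logging off

function timedQuery(PDO $db, $sql, array $params = array())
{
    $start = microtime(true);
    $stmt = $db->prepare($sql);
    $stmt->execute($params);
    $elapsed = microtime(true) - $start;

    if (QUERY_LOGGING && $elapsed > SLOW_THRESHOLD) {
        // Log the query, its duration, where it came from, and when.
        $log = $db->prepare("INSERT INTO slow_query_log
                             (query, seconds, backtrace, logged_at)
                             VALUES (?, ?, ?, NOW())");
        $log->execute(array(
            $sql,
            $elapsed,
            print_r(debug_backtrace(), true), // trim this if it gets huge
        ));
    }
    return $stmt;
}

After a week of sampling, a GROUP BY over slow_query_log will show which statements, and which call sites, are the real offenders.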


Thank you!

 

However, if the site is under heavy load, queries that usually take 0.005 seconds can take 10 seconds

 

That is no use, as it doesn't tell me which queries are poorly written or structured, only that they fail when the server is failing, which you would expect

 

To be honest, at the moment there are no queries that are causing severe problems, but it's not as powerful as I'm going to make it, so...

