Hey

 

This site has been invaluable for the last few days with my questions, so thanks to the great people on here!

 

OK, I have a member table users_1 with a maximum of 30,000 records. When it reaches 30,000 records, it creates users_2 and starts populating that... then when that's full it creates users_3, and so on.

 

This is helping greatly with DB load by spreading the queries over several tables. Anyway... moving on.

 

When a user logs in, I need to find which users_ table they're in... how do I do this efficiently? Right now I do this.

 

Here's a cut-down version:

// Escape the user input and quote the string value in the SQL,
// otherwise this query is both broken and open to SQL injection.
$safe_username = mysql_real_escape_string($username);
$found_member = false;

$result = mysql_query("SELECT * FROM users_1 WHERE username='$safe_username'");
if (mysql_num_rows($result) > 0) { $found_member = true; }

if (!$found_member) {
    $result = mysql_query("SELECT * FROM users_2 WHERE username='$safe_username'");
    if (mysql_num_rows($result) > 0) { $found_member = true; }
}

if (!$found_member) {
    $result = mysql_query("SELECT * FROM users_3 WHERE username='$safe_username'");
    if (mysql_num_rows($result) > 0) { $found_member = true; }
}


if ($found_member) {
    // loop over the result set and read the member's details, etc.
} else {
    // not found
}

 

 

I'm guessing there's a better way of doing this? One stumbling block I hit is that whenever a new users_ table is created, I need to update the code. So I first need to see how many tables there are and loop through each one until I find a match; if there's no match, error.
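One way to avoid hard-coding each table is to ask MySQL which users_ tables exist and loop over them until a match turns up. This is only a hedged sketch in the same mysql_* style as the code above; it assumes the connection and $username are already set up:

```php
// List every table named users_1, users_2, ... (the backslash escapes
// the LIKE wildcard so "users\_" matches a literal underscore).
$tables = mysql_query("SHOW TABLES LIKE 'users\_%'");

$member = false;
$safe_username = mysql_real_escape_string($username);

while ($table = mysql_fetch_row($tables)) {
    $result = mysql_query(
        "SELECT * FROM `{$table[0]}` WHERE username='$safe_username' LIMIT 1"
    );
    if (mysql_num_rows($result) > 0) {
        $member = mysql_fetch_assoc($result); // the member's row
        break;                                // stop at the first match
    }
}

if ($member === false) {
    // not found in any users_ table
}
```

With this, a newly created users_4 is picked up automatically on the next login, with no code change.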

 

I appreciate your help :-)

 

 

 

https://forums.phpfreaks.com/topic/138811-finding-info-out-of-several-tables/

This is helping greatly on DB load to spread the queries over several tables etc etc. anyways..moving on.

 

Wait wait wait!  No moving on just yet :)

 

Have you got an index on the username column on your users table?  You shouldn't need to break the table up like that to get adequate performance.

 

You may also need indexes on other values if you do frequent lookups by those values.


Just to clarify, he is referring to MySQL indexing. Here is an in-depth article on it: http://www.informit.com/articles/article.aspx?p=377652
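For what it's worth, adding such an index is a one-line statement per table. A hedged sketch, assuming username is a string column; the UNIQUE keyword is an assumption (it also stops two accounts sharing a name) and can be dropped if duplicates are allowed:

```php
// Add an index on username; run once per users_ table.
// UNIQUE is an assumption - use plain INDEX if duplicates are allowed.
mysql_query("ALTER TABLE users_1 ADD UNIQUE INDEX idx_username (username)");

// To see which indexes a table already has:
// SHOW INDEX FROM users_1;
```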

I'm really sorry, I went away for a few days and left this unanswered.

 

Yes, I do have an index on the tables, several in fact. The database was split up like this when the Flash game developers (our website is an online multiplayer Flash game) first set up the game. They said it would increase performance; we had over 300,000 users (11 users_ tables) before we reset the DB and cleaned up.

 

We really do do a lot of queries on the DB, and have a separate Linux box for MySQL. The game is constantly querying and updating the DB.

 

But you're saying I shouldn't really need to split up the tables like this? At what sort of record numbers would we start to see a slowdown in performance?

 

Hmm... if you are doing a lot of updates to those user tables, then splitting them might actually help. I usually use Postgres, which behaves differently in these situations. I'm going to suggest this thread be moved to the MySQL forum, where there are some MySQL experts lurking :)

 

If you do need multiple tables and you also need to find which table a user is in, then you probably need some kind of global index.  This could be a mysql table as well, or it could be stored externally.  It's difficult to make a suggestion without knowing the access patterns though.

 

Eg, if 99% of your queries look up a user by user_id, then you can speed up 99% of queries by having another table that tells you which user table holds which user_id.  You'll need to be careful that that global table doesn't have the same scaling problems as the original user tables though.  Which depends on why the original table wasn't scaling.
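To make the global-index idea concrete, here's a hedged sketch, assuming lookups are by user_id; the table and column names are illustrative, not anything from this thread:

```php
// One-time setup: a narrow lookup table mapping each user_id to the
// users_N table that holds its row. Two columns and a primary key, so
// it stays small even when the wide user tables don't.
mysql_query("CREATE TABLE IF NOT EXISTS user_location (
    user_id INT UNSIGNED NOT NULL PRIMARY KEY,
    table_name VARCHAR(32) NOT NULL
)");

// Lookup: one indexed read tells us which table to query next.
$id = (int) $user_id;
$loc = mysql_query("SELECT table_name FROM user_location WHERE user_id=$id");

if ($row = mysql_fetch_assoc($loc)) {
    $result = mysql_query(
        "SELECT * FROM `{$row['table_name']}` WHERE user_id=$id"
    );
}
```

The catch is that every code path that inserts a new user row must also insert into user_location, or the global index drifts out of sync with the data.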

 

If you tend to get repeated requests for the same user_ids, then you should see a large benefit by using a caching layer like memcache.  Eg if you look up a user on every hit, a memcache can satisfy all those requests except the first.  If the user's data is updated, then you trigger an update of the memcache entry so everything is consistent.
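A hedged sketch of that caching pattern using PHP's Memcache extension; the key scheme, TTL, and the fetch_user_from_db() helper are all assumptions for illustration:

```php
// Connect to a memcached server (host/port are assumptions).
$cache = new Memcache();
$cache->connect('127.0.0.1', 11211);

$key  = "user:$user_id";
$user = $cache->get($key);

if ($user === false) {
    // Cache miss: fall back to the database lookup, then cache the
    // row for an hour so repeat requests skip MySQL entirely.
    $user = fetch_user_from_db($user_id); // hypothetical helper
    $cache->set($key, $user, 0, 3600);
}

// On update: write the DB first, then set() the cache entry again so
// the two stay consistent.
```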
