help with huge table


eric1235711


hello,

I've got a table with more than 500,000 records. It's a list of addresses and postal codes (I don't know if that's the right word in English; here it's called a CEP).

I found that this table was overloading the server: a simple query against it took a long time to process. It seemed to happen when a lot of people accessed it at the same time.

Do you think this table is too large?

In the whole system there is only one simple query that hits this table, looking up the address for a given CEP number. The CEP field is a key.

Now the server seems to be running normally, but I'm thinking about hashing this table into ten or a hundred smaller tables...

What do you think?

500,000 records shouldn't be too much of a load for a simple query if both the table and the query are properly optimized (on a decent server, that is). If your table can be normalized further into wider categorization, that would be good. If you need some input on that aspect, just post your table schema and we may be able to help with that, too.
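
A quick sanity check (just a sketch; I'm assuming a table called addresses with a lookup column cep) is to run the query through EXPLAIN:

  EXPLAIN SELECT * FROM addresses WHERE cep = '01001000';
  -- "key" should name the index on cep and "rows" should be close to 1;
  -- if "key" is NULL, MySQL is scanning all 500K rows on every lookup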

the query is very simple, e.g.:

SELECT * FROM addresses WHERE cep = '01001000'

It always returns a single row.

The schema is this:

Field     Type         Null  Default
ID        int(11)      Yes   NULL
TYPE      varchar(20)  Yes   NULL
ADDRESS   varchar(60)  Yes   NULL
NUMBSTRT  varchar(8)   Yes   NULL
NUMBEND   varchar(8)   Yes   NULL
SIDE      varchar(1)   Yes   NULL
SUBURB    varchar(50)  Yes   NULL
CEP       int(8)       Yes   NULL
CITY      varchar(38)  Yes   NULL
CITYID    varchar(5)   Yes   NULL
STATE     varchar(2)   Yes   NULL
ESPECIAL  varchar(70)  Yes   NULL

Indexes:
Keyname  Type     Cardinality  Field
PRIMARY  PRIMARY  559819       ID
CEP      INDEX    559819       CEP

Space usage:
Type Usage
Data 39,707 KB
Index 11,797 KB
Total 51,504 KB

Row statistics:
Statements      Value
Format          dynamic
Rows            559,819
Row length ø    72
Row size ø      94 Bytes
Next Autoindex  559,820
Creation        Oct 02, 2006 at 06:30 PM
Last update     Oct 02, 2006 at 06:30 PM
Last check      Oct 02, 2006 at 06:30 PM



500K is not a big deal... there are a few things you can do to speed things up, though. 

First, fixed-width tables are faster to query... consider changing your VARCHARs to CHARs.
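
For example (just a sketch against the schema you posted; note that the row format only becomes fixed once every variable-length column is converted, and that CHAR pads values with trailing spaces):

  ALTER TABLE addresses
    MODIFY TYPE char(20),
    MODIFY ADDRESS char(60),
    MODIFY NUMBSTRT char(8),
    MODIFY NUMBEND char(8),
    MODIFY SIDE char(1),
    MODIFY SUBURB char(50),
    MODIFY CITY char(38),
    MODIFY CITYID char(5),
    MODIFY STATE char(2),
    MODIFY ESPECIAL char(70);
  -- Row statistics should then show Format "fixed" instead of "dynamic"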

Second, the index is very large -- you can make it smaller by (A) having the CEP serve as the PK (especially if you ZEROFILL it, it can stay an INT) and (B) using a short index prefix. If you can load the index into memory, it will be much faster to query.
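
Something along these lines for (A) (a sketch; it assumes every CEP is unique and that nothing else depends on the ID column):

  ALTER TABLE addresses
    MODIFY ID int(11) NOT NULL,            -- drop AUTO_INCREMENT so the PK can move
    DROP PRIMARY KEY,
    DROP INDEX CEP,
    MODIFY CEP int(8) ZEROFILL NOT NULL,   -- ZEROFILL only affects display; storage stays 4 bytes
    ADD PRIMARY KEY (CEP);

(For (B), note that index prefixes like KEY (CEP(5)) only apply to string columns, so that option only exists if CEP stays a VARCHAR or CHAR.)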

Third, if you don't always need SELECT * back, you can make a multi-column (covering) index and never have to read the table at all for the columns you need back, which will be really fast.
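
For instance, if the lookup only ever needs the street name back (the column choice here is my guess):

  ALTER TABLE addresses ADD INDEX cep_addr (CEP, ADDRESS);

  SELECT ADDRESS FROM addresses WHERE CEP = '01001000';
  -- both columns live in the index, so EXPLAIN shows "Using index"
  -- and MySQL never touches the data file for this query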

Of course, splitting the table horizontally may also be required, but I wouldn't suggest it just yet.

If the number of sleeping processes is large, you may have to check the wait_timeout variable in MySQL and reduce it if it's set too high. Really, if there are too many waiting processes, the problem shouldn't be this query itself but establishing the MySQL connection. Are you getting a 'Too many connections' error?
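
To check for that (the 60 below is just an example value; the default wait_timeout is 28800 seconds):

  SHOW PROCESSLIST;                     -- lots of connections stuck in "Sleep" state is the symptom
  SHOW VARIABLES LIKE 'wait_timeout';
  SET GLOBAL wait_timeout = 60;         -- in seconds; only affects connections opened after this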
