Archived

This topic is now archived and is closed to further replies.

eric1235711

help with huge table


Hello,

I have a table with more than 500,000 records. It's a list of addresses and postal codes (I don't know if that's the right word; here we call it CEP).

I found that this table was overloading the server: a simple query against it was taking a long time to process. It seemed to happen when a lot of people accessed it at the same time.


Do you think this table is too large?

In the whole system there is only one simple query that hits this table, looking up the address for a CEP number. The CEP field is a key.

Right now the server seems to be running normally, but I'm thinking about hashing this table... into ten or a hundred tables...

What do you think?

500,000 records shouldn't be too much of a load for a simple query if both the table and the query are properly optimized (on a decent server, that is). If your table can be normalized further into wider categorization, that would be good. If you need some input on that aspect, just post your table schema and we may be able to help with that, too.

The query is very simple, e.g.: SELECT * FROM addresses WHERE cep = '01001000'

It always returns a single row.

Here is the schema:

Field     Type         Null  Default
ID        int(11)      Yes   NULL
TYPE      varchar(20)  Yes   NULL
ADDRESS   varchar(60)  Yes   NULL
NUMBSTRT  varchar(8)   Yes   NULL
NUMBEND   varchar(8)   Yes   NULL
SIDE      varchar(1)   Yes   NULL
SUBURB    varchar(50)  Yes   NULL
CEP       int(8)       Yes   NULL
CITY      varchar(38)  Yes   NULL
CITYID    varchar(5)   Yes   NULL
STATE     varchar(2)   Yes   NULL
ESPECIAL  varchar(70)  Yes   NULL

Indexes:
Keyname Type Cardinality Field
PRIMARY  PRIMARY  559819  ID 
CEP  INDEX  559819  CEP 

Space usage:
Type Usage
Data 39,707 KB
Index 11,797 KB
Total 51,504 KB

Row Statistics:
Statements Value
Format dynamic 
Rows 559,819 
Row length ø 72 
Row size  ø 94 Bytes 
Next Autoindex 559,820 
Creation Oct 02, 2006 at 06:30 PM 
Last update Oct 02, 2006 at 06:30 PM 
Last check Oct 02, 2006 at 06:30 PM 


500K is not a big deal... there are a few things you can do to speed things up, though. 

First, fixed-width rows are faster to query... consider changing your VARCHARs to CHARs.
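A sketch of what that change might look like, using the column names from the schema posted above (converting every variable-width column, since MyISAM only uses the fixed row format once no VARCHARs remain):

```sql
-- With all columns fixed-width, every row is the same size,
-- so the storage engine can seek to a row by arithmetic
-- instead of an offset lookup. Trade-off: CHAR pads with
-- spaces, so the data file will grow.
ALTER TABLE addresses
    MODIFY TYPE     char(20),
    MODIFY ADDRESS  char(60),
    MODIFY NUMBSTRT char(8),
    MODIFY NUMBEND  char(8),
    MODIFY SIDE     char(1),
    MODIFY SUBURB   char(50),
    MODIFY CITY     char(38),
    MODIFY CITYID   char(5),
    MODIFY STATE    char(2),
    MODIFY ESPECIAL char(70);
```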

Second, the index is very large -- you can make it smaller by (A) having the CEP serve as the PK (especially since, if you ZEROFILL it, it can stay an INT) and (B) using a short prefix. If you can load the index into memory, it will be much faster to query.
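Point (A) might look like the sketch below. This assumes the auto-increment ID column can be dropped; the CEP index cardinality shown above (559,819) equals the row count, which suggests CEP is already unique:

```sql
-- ZEROFILL preserves the leading zeros of codes like 01001000
-- while the column stays a compact INT. Dropping ID and the
-- separate CEP index removes two indexes; the new PK on CEP
-- replaces both.
ALTER TABLE addresses
    DROP COLUMN ID,
    DROP INDEX CEP,
    MODIFY CEP int(8) unsigned zerofill NOT NULL,
    ADD PRIMARY KEY (CEP);
```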

Third, if you don't always need every column back (i.e., not SELECT *), you can make a multi-column index and never have to read the table for the one column you need back, which will be really fast.
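For example, if the application only ever needs the street address back, a covering index would let MySQL answer the query from the index alone (the two-column choice here is an assumption about what the application needs):

```sql
-- A composite index on (CEP, ADDRESS) "covers" this query:
-- MySQL reads only the index, never the data file.
CREATE INDEX cep_addr ON addresses (CEP, ADDRESS);

SELECT ADDRESS FROM addresses WHERE CEP = '01001000';
-- EXPLAIN should show "Using index" in the Extra column.
```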

Of course, splitting the table horizontally may also be required, but I wouldn't suggest it just yet.

Fenway,

You said to (B) use a short prefix. Do you mean that I should make a column of (e.g.) the first two or three digits of the CEP?

Well, since it's a 5+3 postal code, I'd say 5 would be a good length to choose ;-)
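To be clear, in MySQL an index prefix is declared on the index itself, not as a separate column. Assuming the CEP were stored as CHAR(8) (prefix indexes only apply to string columns), it would look like:

```sql
-- Index only the first 5 characters of the postal code; the
-- prefix (the region part of a 5+3 CEP) is still selective,
-- and the index shrinks accordingly.
CREATE INDEX cep_prefix ON addresses (CEP(5));
```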

If the number of sleeping processes is large, then you may have to check the wait_timeout variable in MySQL and reduce it if it's set too high. Ideally, if there are too many waiting processes, the problem is not this query but establishing the MySQL connection. Are you getting the 'Too many connections' error?
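Checking and lowering the timeout might look like this (the 60-second value is only an example; tune it to your traffic):

```sql
-- See how long idle connections are allowed to sleep
SHOW VARIABLES LIKE 'wait_timeout';

-- Drop it to 60 seconds for new connections (example value)
SET GLOBAL wait_timeout = 60;
```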

[quote author=Vikas Jayna link=topic=110362.msg446698#msg446698 date=1159976888]
Are you getting the 'Too many connections' error?
[/quote]
Yes, sometimes...

I made a "serial-killer" program... lol

It kills the processes that sleep for too long.
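For reference, the same thing can be done by hand from the MySQL console (the process id 123 below is just a placeholder):

```sql
-- List connections; idle ones show Command = 'Sleep' with
-- their idle time in the Time column
SHOW PROCESSLIST;

-- Kill a connection that has been sleeping too long
KILL 123;
```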

