peterjc Posted May 25, 2011
I would like to know how you guys handle a database table's primary key when it reaches its maximum limit. For example, a program needs to insert a lot of records but may delete them again after a few days, so the primary key value grows very quickly, and of course after a few months or years it will reach the maximum limit, right? So how do you handle this kind of case? Thanks.
DaiLaughing Posted May 25, 2011
Change the data type or length. Use BIGINT instead of INT, for example.
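A minimal sketch of the kind of ALTER statement being described here; the table and column names (orders, id) are hypothetical:

    -- Widening the key column keeps the existing values and raises the ceiling.
    ALTER TABLE orders MODIFY id BIGINT NOT NULL AUTO_INCREMENT;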
gizmola Posted May 25, 2011
Well, there are a couple of things to this. The first is the type of the primary key column, as DaiLaughing pointed out. What is VERY important is that your column be declared UNSIGNED. Even if it is not currently unsigned, you can safely alter the table with no loss of data. If the column is not UNSIGNED then you only have access to half the available numbers, because a bit is reserved for the sign and half the potential range of values is allocated to negative integers. The rest comes down to the datatype, which determines how many bytes are stored: TINYINT (1 byte), SMALLINT (2 bytes), MEDIUMINT (3 bytes), INT (4 bytes) and even BIGINT (8 bytes). Very few sites will ever exceed the number of ids available from an unsigned INT (4,294,967,295).
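A sketch of declaring the key UNSIGNED, again with hypothetical table and column names; the ranges in the comments are the standard MySQL INT limits:

    -- Signed INT range:   -2,147,483,648 to 2,147,483,647
    -- Unsigned INT range:  0 to 4,294,967,295
    ALTER TABLE orders MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;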
The Little Guy Posted May 25, 2011
Unsigned BIGINT: 18,446,744,073,709,551,615
http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
From what I have read, using a BIGINT can slow down search speeds; I don't know if that is true or not. Using database normalization will also help you in this situation.
Database normalization: "The goal of database normalization is to decompose relations with anomalies in order to produce smaller, well-structured relations. Normalization usually involves dividing large tables into smaller (and less redundant) tables and defining relationships between them."
So, basically, the less redundant data you have, the more rows you can save. A good way to do this is key/value relations (Facebook does this).
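A minimal sketch of the key/value layout mentioned above, with made-up table and column names; attributes become rows instead of wide columns, and the composite primary key avoids an auto_increment counter entirely:

    CREATE TABLE entity_attributes (
        entity_id  INT UNSIGNED NOT NULL,   -- id of the thing being described
        attr_name  VARCHAR(64)  NOT NULL,   -- e.g. 'email', 'city'
        attr_value TEXT,
        PRIMARY KEY (entity_id, attr_name)
    );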
gizmola Posted May 25, 2011 Share Posted May 25, 2011 Yes a bigint is going to be slower because it's 8 bytes vs 4. More data = larger files and larger indexes = slower. Facebook and sites like it use a variety of techniques including sharding, distrubuted data caching, and the use of guids for keys and are basically required to have different architectures than the average site. Sites like that do not use auto_increment. Going back to the OP's question, it is unlikely that he will have even a billion rows. Once you get into datasets with rows numbering in the 100's of millions you start to see significant hardware related performance issues. Usually those companies are successful enough that they can hire people and buy hardware to address those issues, but trying to design for exceptions is a huge waste of time unless you have a reasonable expectation in advance that you will be needing that type of architecture. Quote Link to comment https://forums.phpfreaks.com/topic/237411-database-table-primary-key-reach-it-limit/#findComment-1220269 Share on other sites More sharing options...
peterjc Posted May 26, 2011 (Author)
Thanks for the reply.
"Facebook and sites like it use a variety of techniques, including sharding, distributed data caching, and the use of GUIDs for keys, and are basically required to have different architectures than the average site. Sites like that do not use auto_increment."
Even though I won't use the techniques above, they look interesting, so I will do more searching on them. And if anyone knows a site that provides good information about these techniques, I hope you could post the link. Thanks.
gizmola Posted May 26, 2011
If you Google you can find out more. Facebook is a big user of memcached. They also have a compiler project that takes their PHP code and compiles it into binaries. There are a number of books on the topic, including this one by the lead dev from Flickr: http://www.amazon.com/Building-Scalable-Web-Sites-Henderson/dp/0596102356
fenway Posted May 27, 2011
BIGINT is crazy big -- INT will suffice, and is faster as a primary key index anyway (half the size).