
MySQL shell works. PHP doesn't. Same SQL string.


Consolas


Hi everyone.

 

I didn't know where to ask this question, but since this is a "multitasking" forum I thought there was no harm in including both MySQL and PHP in this topic.

 

I have the following scenario.

 

Every day between 17:00 and 18:00 I've got a show running during which more than 150,000 records get inserted. During that show I have to refresh, on an almost per-minute basis, a PHP page that counts how many records have been collected so far (during the day).

 

Furthermore: around 18:00 I have to get, out of the ~170,000 records, the last 10 that were inserted OR the following ten starting at a specific record.
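For the record, the reads described above can be written as plain SELECTs. This is a sketch only - `Incoming` and `receiveDate` stand in for the real table and column names (the real names appear later in the thread):

```sql
-- Today's running count, refreshed about once a minute
SELECT COUNT(*) FROM Incoming
WHERE receiveDate >= '2007-12-20 00:00:00';

-- The last 10 records inserted
SELECT * FROM Incoming
ORDER BY receiveDate DESC
LIMIT 10;

-- Ten records starting at a specific position
-- (LIMIT offset,count: offset 79999 returns rows 80,000 through 80,009)
SELECT * FROM Incoming
ORDER BY receiveDate ASC
LIMIT 79999, 10;
```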

 

Everything worked fine, but today I couldn't get the last-ten-records result.

 

I tried running exactly the same SQL string in the mysql shell and it took around 1 second to show the result!

 

I have a table of about 45 million records (45,000,000).

 

Could this be it?

 

Plus: I have a replication setup running. What do you think I should do in order to "start" a new table? Stop replication every day between 17:00 and 18:00, insert the records into a temporary table (holding only that day's records), write the records back to the main table after 18:00, and then restart the slave?

 

Thanks a lot for your help!!


If you're selecting records, use EXPLAIN in front of the query. MySQL won't perform the actual query, but instead will show you a 'map' (for lack of a better layman's term) of how it would go about retrieving the requested records. This will reveal if there is a missing index or some other problem that might be slowing your query down. If you post the EXPLAIN results here, we might have a more informed opinion. Please provide the actual query too...

 

PhREEEk


mysql> EXPLAIN SELECT number FROM Table WHERE brandMr='123456' AND receiveDate>'2007-12-20 00:00:01' AND receiveDate<='2007-12-20 23:59:59' ORDER BY receiveDate DESC LIMIT 0,10\G;

*************************** 1. row ***************************

          id: 1

  select_type: SIMPLE

        table: Table

        type: range

possible_keys: indice2,indice3,indice4,indice7,alias,dataSource

          key: alias

      key_len: 62

          ref: NULL

        rows: 667126

        Extra: Using where; Using index

1 row in set (0.03 sec)

 

ERROR:

No query specified

 

 

 

And in the shell this takes a few seconds, but through the web it just doesn't output anything. Could this be related to some buffer or setting I should configure?
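When the same query returns in the shell but the page shows nothing, the usual web-side suspects are PHP's max_execution_time (30 seconds by default for web requests) and memory_limit, rather than a MySQL buffer. A hedged sketch using the old mysql_* extension of that era - the credentials, database, and query are made up:

```php
<?php
// Sketch only - hypothetical credentials and names; assumes the mysql_* extension.
set_time_limit(0);                 // web requests default to a 30-second limit
ini_set('memory_limit', '256M');

$link = mysql_connect('localhost', 'user', 'pass') or die(mysql_error());
mysql_select_db('mydb', $link)                      or die(mysql_error());

// Unbuffered query: rows are streamed one at a time instead of being
// loaded into PHP's memory all at once.
$result = mysql_unbuffered_query(
    "SELECT number FROM Incoming
      WHERE receiveDate >  '2007-12-20 00:00:01'
        AND receiveDate <= '2007-12-20 23:59:59'
      ORDER BY receiveDate DESC
      LIMIT 0,10", $link) or die(mysql_error());

while ($row = mysql_fetch_assoc($result)) {
    echo $row['number'], "<br />\n";
}
?>
```

If the page still dies silently, the Apache error log should say whether the script timed out or ran out of memory.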

 

Apache crashed in the morning; I gracefully restarted it and it worked fine...

 


Hi again, Mr Phreeek

 

Sure I can. Here it is!

 

+------------------+------------------+------+-----+---------+----------------+
| Field            | Type             | Null | Key | Default | Extra          |
+------------------+------------------+------+-----+---------+----------------+
| idIncoming       | int(9)           | NO   | PRI | NULL    | auto_increment |
| idOperador       | int(10) unsigned | NO   |     | 0       |                |
| lenght           | int(10) unsigned | NO   |     | 0       |                |
| number           | bigint(20)       | NO   |     | 0       |                |
| deliveryDate     | datetime         | YES  | MUL | NULL    |                |
| receiveDate      | datetime         | YES  | MUL | NULL    |                |
| laNumber         | bigint(20)       | YES  |     | NULL    |                |
| alias            | varchar(50)      | YES  | MUL | NULL    |                |
| messageReceived  | varchar(255)     | YES  |     | NULL    |                |
| idSource         | int(10) unsigned | YES  |     | NULL    |                |
| lida             | tinyint(1)       | YES  |     | 0       |                |
| deleted          | tinyint(1)       | YES  |     | 0       |                |
| audioFile        | varchar(40)      | YES  |     | NULL    |                |
| ConnectRadius    | varchar(80)      | YES  |     | NULL    |                |
| DisconnectRadius | varchar(80)      | YES  |     | NULL    |                |
| callDuration     | bigint(10)       | YES  |     | NULL    |                |
+------------------+------------------+------+-----+---------+----------------+

16 rows in set (0.01 sec)

 

 

I know I have to make a backup of the table; it's over 45 million records so far...

 

Since it is MyISAM, when I dump it, will it take a read lock or a write lock?


SELECT number FROM Table WHERE brandMr='123456' AND receiveDate>'2007-12-20 00:00:01' AND receiveDate<='2007-12-20 23:59:59' ORDER BY receiveDate DESC LIMIT 0,10\G;

 

What is `number` and `brandMr`? I do not see those in the structure for the table... It's good to see an index for receiveDate since you are using them in the WHERE clause, but again, I would like to know what the deal is with `number` and `brandMr`. Can you please run one more command so we can see more information about the indexes? I need:

 

mysql> SHOW INDEX FROM Table;

 

Without knowing what this database is for, and how often 'old' records are needed, it's hard to advise on how to split it up. Obviously it would be nice to archive off portions of it. If older portions are not needed very often, you could archive parts to SQL files (since text files can be compressed quite well). If you need fairly regular access, then you could archive off portions to other tables (archive tables). I'm thinking that might make more sense than saving new data to a temp table and then rewriting to the permanent table. So all of these options depend on what you do with this database, and how often you do it. Can we archive off by the month (receive/deliveryDate)? If ALL of the data is needed all of the time, then we will have a different problem and will need a different approach, so please advise.
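Archiving to compressed SQL files is a one-liner per table. A sketch with made-up credentials, database name, and an archive table called `Incoming_2006`:

```shell
# Dump one archive table and compress it (gzip works well on SQL text)
mysqldump -u backup_user -p mydb Incoming_2006 | gzip > Incoming_2006.sql.gz

# Restoring it later is the reverse pipe
gunzip < Incoming_2006.sql.gz | mysql -u backup_user -p mydb
```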

 

Paging Mr. Fenway! Any thoughts on this, amigo? = )

 

PhREEEk


Hello!

 

Sorry - brandMr and number were examples only, they are not the real fields. My apologies.

 

And I liked your idea of saving old portions of the database.

 

I guess I'm going to dump all the records up to year 2007 and then delete them from the database.

 

The problem is between 17:00 and 18:00 every day. I have more than 100,000 records inserted in that period, and during that same period I have to frequently - on an almost per-minute basis - run a new query to check how many records have been inserted. Then, near 18:00, I have to run a query for the last ten records inserted, and sometimes for ten records starting at a specific record: for instance, the records starting at record 80,000 - I would get 80,000, 80,001, and so forth up to 80,010.

 

So here I think a temporary table would do the trick... no?

 

I know that inserting the records into two different tables at the same time, if possible, could slow down the whole insert process... but it would be perfect, because I would run the read queries against the temp table, which would hold 170,000 records at most. After 18:00 that table would be cleared.
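The daily-table scheme described above could look something like this. A sketch only - table names are placeholders, and the auto_increment primary key would need care when copying back (insert NULL for it so the main table assigns its own IDs):

```sql
-- Before 17:00: start with an empty table that mirrors Incoming
CREATE TABLE IF NOT EXISTS IncomingToday LIKE Incoming;
TRUNCATE TABLE IncomingToday;

-- During the show, insert into IncomingToday and run the frequent
-- counts against it: at most ~170,000 rows instead of 45 million
SELECT COUNT(*) FROM IncomingToday;

-- After 18:00: copy the day's rows into the main table, then clear out
INSERT INTO Incoming SELECT * FROM IncomingToday;
TRUNCATE TABLE IncomingToday;
```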

 

What do you think?


A temporary table would absolutely work, and if that is the acceptable solution to you, then go with that. However, if the database is archived up front, and then archived regularly after that, you should be able to retain the old way of doing things.

 

A suggestion would be to move the records > (older than) Jan 1 2007, into a NEW table and begin using that new table the way you normally would. Start a new table on Jan 1 2008. That would seem easier than moving all of the old records. You can also easily rename tables or just alter the table name in your script. If you need any script help to do all of this, just let us know...
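Splitting the table by date can be done with a CREATE ... LIKE plus INSERT ... SELECT. A sketch - the cutoff date and the archive table name are examples only:

```sql
-- Move pre-2007 rows into an archive table
CREATE TABLE Incoming_pre2007 LIKE Incoming;

INSERT INTO Incoming_pre2007
SELECT * FROM Incoming WHERE receiveDate < '2007-01-01';

DELETE FROM Incoming WHERE receiveDate < '2007-01-01';

-- MyISAM: reclaim the space freed by the mass delete
OPTIMIZE TABLE Incoming;
```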

 

PhREEEk


Hello!!

 

Wow, never thought of that - good idea! Moving the 'old data' to another table, or even to an 'old data' database!

 

I could do just that instead of just deleting the data.

 

What I was thinking is: since I can take a mysqldump of all the data up to year 2007, I could delete the data from the table and keep the dump files - but of course, that way I couldn't access the data as 'easily'.

 

So, what do you think of me moving the old data to a new database, which would hold the old records?

 

This way the REAL and RECENT database would keep just one year of records, and therefore far fewer records.

 

Just one thing: since I have the slave system implemented, I always read from the slave database and its tables. Still, doing this on the master will replicate to the slave, so I would read info from the MAIN table, which would hold the records of, I don't know, the last 6 months, while the OLD database would hold ALL the records.

 

What do you think? Is this okay?

I prefer this to the temporary table scheme.. eh eh eh

 


Yes, I think just archiving what is not used very often, and then retaining your current way of storing/retrieving new data would be the easiest solution based on what you've revealed so far. Again, depending on how often you access 'old' data, you could certainly archive off old data earlier than 1 year if you wanted. Maybe 1/2 year intervals? Maybe even each month? Only you would know what would work best there.

 

Since you have access to shell mysql, I would get the archives correctly spread across n number of tables. I would then use mysqldump to archive the old tables to .sql files, then tar and gzip them with an appropriate archive name. I would then delete those old tables. Going forward, I would use the current table as usual, then archive it when necessary (most likely using a CRON job). If I ever needed to view some old data, using mysqldump to restore an archive would only take a few seconds. So, hopefully, out of all of these ideas, you'll end up with something that works better for you and isn't such a burden on your browser. = )
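The CRON part might look something like this - the paths, credentials, schedule, and table naming are all assumptions:

```shell
# /etc/crontab entry: archive last year's table at 03:00 on Jan 1st
# 0 3 1 1 * root /usr/local/bin/archive_incoming.sh

# archive_incoming.sh - dump, compress, and drop last year's table
YEAR=$(( $(date +%Y) - 1 ))
mysqldump -u backup_user -p'secret' mydb "Incoming_${YEAR}" \
    | gzip > "/backups/Incoming_${YEAR}.sql.gz" \
  && mysql -u backup_user -p'secret' mydb \
       -e "DROP TABLE Incoming_${YEAR};"
```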

 

My best of luck to you, and a very happy holidays wherever you may be...

 

PhREEEk


Hello once more

 

Thank you SO MUCH for all your help and insight! Thank you very very much!

 

I don't access old information that often, so deleting the older tables may be a good idea! Even if I do need the old tables, restoring the records into a new table isn't that hard, although I'm guessing it would take some time (if we're talking about a complete year's information: more than 25,000,000 records).

 

I will retain a complete year of info, because I need to access that information on a monthly basis.

 

However, let's say I need some information from an earlier year... how much time does restoring 25,000,000 records take, on average?

 

Well, that is the kind of request we can get solved within 24 hours. The client is informed that it will take some time to retrieve older information. :)

 

What do you think?

 

Let me summarize, as best as I can, what I have and what I need.

 

I have some campaigns, and each one receives calls that get inserted into the MySQL table by the RADIUS server.

 

For each call, a record is inserted into a specific table called, let me see, "Incoming". So this table has LOTS of records, because over the years lots and lots of calls have been made, and all the records go into Incoming. I don't like how this is done - I would give the larger campaigns their own isolated tables and let the smaller campaigns share one - but I inherited this scheme, so I've got to do the best I can with it.

 

So, we have a table, Incoming, that has been getting LOTS of records since 2005.

 

So I'm thinking, before I erase all the old data, of dumping it (I would have to change the dump target to a TempTable - or maybe first move the records to a TempTable and then dump - just to make sure that when I DO restore the data into a table, I don't drop or delete an existing table). Restoring the data into a table would take some time if we're talking about millions of records, but since I won't need that kind of information very often, I think this is a good idea.

 

And if a client needed information from old data, I would restore the data into a table and the next day, for instance, I would have the table ready for him to check out - I'm guessing restoring the information into a table would take several minutes... I don't know.

 

Well, your last post was excellent and I think we have found the best solution.

 

Just tell me, if you can and it isn't too much of a burden, what you think after reading this post.

 

Again, and it can't be said enough times: thank you very, VERY much!!!

 

Ricardo, from Portugal

 

Happy Holidays!!!

 

 

