
Managing MySQL INSERTS & UPDATES Along with Queries


n1concepts


Would like to get some insight on how best to design the database schema to handle a constant, heavy load of MySQL INSERTs and/or UPDATEs. The INSERTs will be steady, averaging about 5,000 (simple) INSERTs per hour, and this will gradually increase over time. Note: the UPDATEs will be random, so that load won't be too heavy, but a query - as you might imagine - will be required to identify the existing record for each UPDATE (expect UPDATEs for about 30% of the INSERTs, so roughly 1,500 per hour).

 

With that said, should this design consist of TWO databases - the 1st db allocated strictly to INSERTs, with a COPY of the same data passed over to a 2nd db that would manage the UPDATEs & queries?

The objective is to reduce latency and improve performance across the board for all INSERTs, UPDATEs, & queries.

 

Yes, I know a data warehouse is the ultimate answer, but I need a quick solution until that design is complete.

Note: I have a beefed-up db server that could handle this - initially - but I'm thinking 'long-term' because once this is in production, it will be hard to modify without serious downtime.

 

Any input from those who have addressed this issue is appreciated.


To be honest, what you are asking doesn't make sense to me. Either you aren't aware of some basic database handling or the questions are not accurate.

 

. . . a query - as you might imagine - will be required to identify the existing record for each UPDATE (expect UPDATEs for about 30% of the INSERTs, so roughly 1,500 per hour).

What is this supposed to mean? Are you saying you will do a SELECT query to see if a record already exists before deciding to do an UPDATE vs. an INSERT query? If so, you are doing it wrong. You should be using an INSERT with an ON DUPLICATE KEY UPDATE clause.
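
For example - a minimal sketch, assuming a hypothetical page_hits table keyed on url - one statement covers both the new-record and existing-record cases:

    -- Hypothetical table for illustration: one row per URL, keyed on the primary key.
    CREATE TABLE page_hits (
        url VARCHAR(255) NOT NULL,
        hit_count INT UNSIGNED NOT NULL DEFAULT 1,
        last_seen DATETIME NOT NULL,
        PRIMARY KEY (url)
    );

    -- Insert a new row, or bump the existing one when the key already exists.
    -- No separate SELECT is needed to decide between INSERT and UPDATE.
    INSERT INTO page_hits (url, hit_count, last_seen)
    VALUES ('/some/page', 1, NOW())
    ON DUPLICATE KEY UPDATE
        hit_count = hit_count + 1,
        last_seen = NOW();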

 

With that said, should this design consist of TWO databases - the 1st db allocated strictly to INSERTs, with a COPY of the same data passed over to a 2nd db that would manage the UPDATEs & queries?

This isn't making any sense. I see no performance gain and it would only cause data to get out of sync.


Wow, I'll take this back to Oracle and let them know they are clueless and should join this forum... :)

Seriously, I understand your point but clearly you missed mine...

 

We're in the process of setting up a datamart, but - for short-term goals - the objective is to deal with a latency issue that's hammering an otherwise adequate db server.

Some of the thinking - given the high current and future volume of INSERTs (new data) - is to let the 1st db log all raw data, with no validation or scrubbing, and then, using transaction logging, pass that exact same data over to the 2nd db, where the UPDATEs for each specified record will be performed.
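
For reference, a rough sketch of that hand-off using MySQL's built-in binlog replication (the host, account, and log coordinates below are placeholders - the real values come from your own setup):

    -- On the 1st (raw, insert-only) server: my.cnf must enable the binary log,
    -- e.g. server-id=1 and log-bin=mysql-bin, then restart mysqld.

    -- On the 2nd (updates & queries) server, point it at the 1st and start replicating:
    CHANGE MASTER TO
        MASTER_HOST = 'raw-db.example.com',   -- placeholder host
        MASTER_USER = 'repl',                 -- placeholder replication account
        MASTER_PASSWORD = '********',
        MASTER_LOG_FILE = 'mysql-bin.000001', -- from SHOW MASTER STATUS on the 1st db
        MASTER_LOG_POS = 4;
    START SLAVE;

    -- Verify replication is running:
    SHOW SLAVE STATUS\G

The binlog carries every INSERT from the 1st db, so the 2nd db receives the exact same raw rows to manage from there.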

 

Now, the reason the 1st db will NEVER take UPDATEs is because that raw data is simply that - raw - and will be used to form trends/analysis long-term.

The actual record management will all be handled from the 2nd db, and as for the reason for the two: there will be no data integrity issues, as we've done something similar for years and it works just fine.
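
As a rough illustration of the split (table and column names here are made up for the example):

    -- 1st db: append-only raw log, never updated.
    CREATE TABLE raw_events (
        event_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        payload VARCHAR(1024) NOT NULL,     -- unvalidated, unscrubbed input
        received_at DATETIME NOT NULL,
        PRIMARY KEY (event_id)
    );

    -- 2nd db: the managed copy, where records are identified and updated.
    CREATE TABLE managed_records (
        event_id BIGINT UNSIGNED NOT NULL,  -- same id as the raw row it came from
        payload VARCHAR(1024) NOT NULL,     -- cleaned/validated version
        status ENUM('new','processed') NOT NULL DEFAULT 'new',
        updated_at DATETIME NOT NULL,
        PRIMARY KEY (event_id)
    );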

 

I appreciate your response - full of emotion (smile) - but I think I already have my answer.

I'll use threading and cURL to handle everything.

 

Problem solved - thx!
