
Backing up MySQL


jaymc


I run a MySQL backup every night at 6am.

 

Sometimes it causes the entire server to crash

 

The backup covers over 10 million rows across numerous tables, and the SQL dump file is around 600MB.

 

I don't think this is the best way to back up; even though it's run at 6am, it still takes around 15 minutes.
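For illustration, the job in question is presumably something along these lines (a rough sketch only, not the exact script; the user, password, and paths are placeholders):

# crontab entry -- run the backup script at 6am every day
0 6 * * * /root/bin/mysql_backup.sh

# /root/bin/mysql_backup.sh
#!/bin/sh
# --quick streams rows to the dump instead of buffering whole tables in memory
# (it is part of the default --opt behaviour on most versions, shown explicitly here)
mysqldump --quick --all-databases -u backup -p'secret' | gzip > /backup/mysql-`date +%F`.sql.gz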

 

What are the alternatives?

 

I was thinking of copying the raw MySQL files. Of course, if they are being written to at the time this causes corruption, but perhaps I could:

 

1: Stop MySQL

2: Copy the tables

3: Start MySQL

 

Then it's just the time taken to copy 600MB, which should be about 10 seconds?

 

Is this the best answer? I don't like the idea of stopping MySQL and then starting it via a cron job though, in case it doesn't come back up! (A rough version is sketched below.)
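For what it's worth, a stop/copy/start job along those lines might look roughly like this (a sketch only; the paths, init script name, and alert address are placeholders), including a check that mysqld actually came back up:

#!/bin/sh
# stop MySQL so nothing is writing to the data files during the copy
/etc/init.d/mysql stop

# copy the data directory (placeholder paths)
cp -a /var/lib/mysql /backup/mysql-`date +%F`

# bring MySQL back up
/etc/init.d/mysql start

# give mysqld a few seconds, then check it answers; send an alert if it does not
sleep 10
if ! mysqladmin ping > /dev/null 2>&1 ; then
    echo "mysqld did not come back after the backup" | mail -s "MySQL backup alert" admin@example.com
fi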

 

Advice please! Thanks


Is it a known problem for mysqldump to give in when it's given a hell of a lot to do?

 

If so, what about companies that have billions of rows? What's the industry solution?

 

I really don't want to be shutting down and restarting the MySQL server:

 

1: If it doesn't come back up I won't know, as it will be the early hours

2: It destroys cached/saved variables that are useful


Yeah, when it runs, MySQL basically dies for the duration of the backup, and if it's getting hammered while it's backing up then the server will spiral out of control; today it caused a total lockout.

 

It's a bugger to import as well, especially a 600MB file.

 

I actually just tried a raw copy; that itself took 2 minutes, and because the drives go sluggish during a big copy it affects the server load, although I guess that's just the way it is.
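One way to soften the I/O hit from a raw copy (a sketch, assuming the usual Linux tools are installed; paths and the bandwidth cap are placeholders) is to run it at idle I/O priority or throttle it:

# run the copy at idle I/O priority so normal queries win any disk contention (CFQ scheduler)
ionice -c3 cp -a /var/lib/mysql /backup/mysql-`date +%F`

# or throttle the copy with rsync; --bwlimit is in KB/s, so this is roughly 10MB/s
rsync -a --bwlimit=10000 /var/lib/mysql/ /backup/mysql/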

 

Any ideas?


Dual Opteron 142 (single core)

3GB ECC RAM

2 SATA drives: one hosts the system files, the other hosts MySQL and www

 

[mysqld]
datadir=/var/lib/mysql
skip-locking
skip-innodb
skip-networking
skip-name-resolve
safe-show-database
query_cache_limit=4M
query_cache_size=128M ## 32MB for every 1GB of RAM
query_prealloc_size=65536
query_cache_type=1
max_user_connections=250
max_connections=500
#interactive_timeout=20 ## can kill backups
wait_timeout=11
connect_timeout=10
thread_cache_size=384
key_buffer=756M ## 128MB for every 1GB of RAM
join_buffer=6M
max_connect_errors=20
max_allowed_packet=16M
table_cache=1800
record_buffer=16M
tmp_table_size=64M
sort_buffer_size=6M ## 1MB for every 1GB of RAM
read_buffer_size=6M ## 1MB for every 1GB of RAM
read_rnd_buffer_size=6M  ## 1MB for every 1GB of RAM
bulk_insert_buffer_size = 8M
thread_concurrency=4 ## Number of CPUs x 2
myisam_sort_buffer_size=64M
server-id=1


Not as much as they could be, but that's in terms of queries. For instance, I have a 5 million row table which is the most accessed! I'm currently splitting that up.

 

But that wouldn't affect the dump.

 

1: mysqldump = out of the question, as it's causing crashes

2: Flat copy while MySQL is stopped = not too keen on this, due to relying on it restarting successfully when automated in the early hours

 

Just looking for more options to see which is best.
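For MyISAM tables (the config above has skip-innodb), one middle ground worth sketching is mysqlhotcopy, the Perl script shipped with MySQL: it flushes and read-locks the tables, copies the table files, then releases the lock, all without stopping mysqld. Roughly (the database name, credentials, and target directory are placeholders):

# locks and flushes the tables, copies the .frm/.MYD/.MYI files, then unlocks --
# MyISAM only; the server keeps running, reads still work, writes wait for the lock
mysqlhotcopy --user=backup --password=secret mydatabase /backup/mysql/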

