Hello all,

I am new to this board and fresh out of ideas.  I have a two-server setup: one box running Apache, the other running MySQL.

 

The MySQL server is running EXTREMELY slowly, with load averages exceeding 8 at times.  We are not getting THAT many hits, only a few hundred per day.  I am using a database wrapper, and in it I log every SELECT query into another table.

 

I then wrote a script that runs EXPLAIN on all of the logged queries, as follows:

 

// Fetch every logged SELECT, grouped by query text, via the site's DB wrapper.
$all_queries = super_basic_search( "tmp_sql_analyze",
    "tmp_id, sql_query, count( sql_query ) AS counter",
    "1", "1", "0", "sql_query" );

echo "Clearing explanation table..\n";
super_basic_query( "TRUNCATE TABLE tmp_explain_analyze" );

$total = count( $all_queries );
echo "Analyzing " . $total . " queries...\n";

$cnt = 0;
foreach ( $all_queries as $query )
{
    // Run EXPLAIN on the logged query and store each row of the plan.
    $explain = super_basic_query( "EXPLAIN " . $query['sql_query'] );
    echo "Doing query #" . $cnt++ . " / " . $total . "\n";
    echo "Tmp ID: " . $query['tmp_id'] . "\n";

    foreach ( $explain as $explanation )
    {
        $new_array = array(
            'tmp_id'            => $query['tmp_id'],
            'exp_select_type'   => $explanation['select_type'],
            'exp_table'         => $explanation['table'],
            'exp_type'          => $explanation['type'],
            'exp_possible_keys' => $explanation['possible_keys'],
            'exp_key'           => $explanation['key'],
            'exp_key_len'       => $explanation['key_len'],
            'exp_ref'           => $explanation['ref'],
            'exp_rows'          => $explanation['rows'],
            'exp_extra'         => $explanation['extra'],
            'exp_sql_query'     => $query['sql_query'],
            'counter'           => $query['counter']
        );
        super_basic_add( "tmp_explain_analyze", $new_array );
    }
}

 

The results in the tmp_explain_analyze table show nothing abnormal.  NO query examines more than 4,000 rows, which is pretty low considering the main table in the database has almost 200,000 rows.
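A quick way to spot the worst offenders in that table is to rank the logged queries by total estimated work. This is just a sketch against tmp_explain_analyze as built by the script above (exp_sql_query, exp_rows and counter are the columns it writes):

-- Estimated rows examined per run, multiplied by how often the query was logged.
SELECT exp_sql_query,
       exp_rows,
       counter,
       exp_rows * counter AS total_estimated_rows
FROM tmp_explain_analyze
ORDER BY total_estimated_rows DESC
LIMIT 20;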

 

What else can there be?  Any help would be GREATLY appreciated.

 

Thanks all.


With long_query_time set to 10 seconds, here's the output of the slow query log:

/usr/libexec/mysqld, Version: 5.0.22-log. started with:
Tcp port: 3306  Unix socket: /var/lib/mysql/mysql.sock
Time                 Id Command    Argument
# Time: 071020 20:19:02
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 18  Lock_time: 0  Rows_sent: 100  Rows_examined: 56077
use bsdb;
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM books b, books_to_categories AS btc
			WHERE btc.node_id='21' AND btc.book_id=b.book_id GROUP BY b.book_id ORDER BY copies_effective DESC, publication_date DESC  LIMIT 0, 100;
# Time: 071020 20:19:16
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 59  Lock_time: 0  Rows_sent: 100  Rows_examined: 471490
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM books b
			WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC  LIMIT 0, 100;
# Time: 071020 20:20:24
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 17  Lock_time: 0  Rows_sent: 100  Rows_examined: 189127
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM books b, books_to_categories AS btc
			WHERE btc.node_id='17' AND btc.book_id=b.book_id GROUP BY b.book_id ORDER BY copies_effective DESC, publication_date DESC  LIMIT 0, 100;
# Time: 071020 22:03:15
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 20  Lock_time: 0  Rows_sent: 10  Rows_examined: 471400
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM books b
			WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC  LIMIT 0, 10;
# Time: 071020 22:16:01
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 13  Lock_time: 0  Rows_sent: 100  Rows_examined: 79711
SELECT count(si.book_id) as counter, b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM search_index AS si, books b
			WHERE match(si.search_term) AGAINST ('a world without time') AND b.book_id=si.book_id  GROUP BY si.book_id ORDER BY si.search_strength DESC, counter DESC, copies_effective DESC, publication_date DESC  LIMIT 0, 100;
# Time: 071020 23:57:19
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 18  Lock_time: 0  Rows_sent: 1  Rows_examined: 0
SELECT COUNT(queue_id) 
			FROM queue
			WHERE member_id='11200' AND status='1';
# Time: 071020 23:57:27
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 50  Lock_time: 0  Rows_sent: 478197  Rows_examined: 956394
SELECT *
			FROM isbn_similarities
			WHERE 1 ORDER BY 1;
# Time: 071021  0:01:31
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 11  Lock_time: 0  Rows_sent: 100  Rows_examined: 76097
SELECT count(si.book_id) as counter, b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM search_index AS si, books b
			WHERE match(si.search_term) AGAINST ('love life') AND b.book_id=si.book_id  GROUP BY si.book_id ORDER BY si.search_strength DESC, counter DESC, copies_effective DESC, publication_date DESC  LIMIT 0, 100;
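
(For reference, the "setting of 10" above is long_query_time, in whole seconds. A minimal sketch of checking and adjusting that threshold on MySQL 5.0; the slow log file itself is enabled at startup with the log-slow-queries option in my.cnf:)

-- Current slow-query threshold (integer seconds in MySQL 5.0).
SHOW VARIABLES LIKE 'long_query_time';

-- Change it at runtime; statements taking longer than this get logged.
SET GLOBAL long_query_time = 10;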

I have run EXPLAINs on all of these.  Nothing out of the ordinary.

Really?

 

This is terrible:

SELECT *
			FROM isbn_similarities
			WHERE 1 ORDER BY 1;

Why do you need 500K rows?
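
If the page only ever shows a handful of similar titles at a time, fetching just those rows would avoid dragging half a million rows across the wire. A sketch only; the column names (isbn, related_isbn, score) are made up for illustration and would need to match the real isbn_similarities schema:

-- Hypothetical columns: isbn, related_isbn, score.
-- Pull only the similarities for the one title being displayed.
SELECT related_isbn, score
FROM isbn_similarities
WHERE isbn = '0385340990'
ORDER BY score DESC
LIMIT 20;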

 

Also, you have ORDER BY yourField DESC all over the place, which means those sorts aren't using an index and each one turns into an expensive filesort operation!
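
For the books queries in the log above, an index that matches the sort order would let MySQL read rows already in order and stop at the LIMIT instead of filesorting the whole table. A sketch using the column names from the slow log; whether it actually gets used depends on the rest of each query:

-- Composite index matching "ORDER BY copies_effective DESC, publication_date DESC".
-- Both columns sort DESC, so MySQL 5.0 can walk the index backwards;
-- mixing ASC and DESC in the ORDER BY would still force a filesort.
ALTER TABLE books
    ADD INDEX idx_copies_pubdate (copies_effective, publication_date);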

 

My slow log has quite a few queries that are "SET".  Not sure what that does.  Any thoughts?

 

# Time: 071024  9:31:46
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 19  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
SET insert_id=7266,timestamp=1193232706;
INSERT INTO tmp_sql_analyze (where_sql, group_sql, tables, sql_query, calling_file, calling_function, ip_address) VALUES (@'0', @'1', @'2', @'3', @'4', @'5', @'6');
# Time: 071024  9:32:03
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 17  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
UPDATE search_log SET search_terms = @'0', search_date = @'1', book_id = @'2', book_title = @'3' WHERE member_id = -1 AND search_terms = 'Carrie Karasyov';
# Time: 071024  9:32:06
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 16  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
SET insert_id=4646,timestamp=1193232726;
INSERT INTO entries (book_id, isbn, copy_number, human_code, purgatory, location, price_purchased, value, entry_status, notes, print_status, date_modified) VALUES (@'0', @'1', @'2', @'3', @'4', @'5', @'6', @'7', @'8', @'9', @'10', @'11');
# Time: 071024  9:32:22
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 69  Lock_time: 0  Rows_sent: 100  Rows_examined: 483133
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM books b
			WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC  LIMIT 0, 100;

# Time: 071024 23:44:40
# Query_time: 67  Lock_time: 0  Rows_sent: 100  Rows_examined: 489115
use bsdb;
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
			FROM books b
			WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC  LIMIT 0, 100;

 

Why would this need to examine all 489K rows if I just want to return 100?
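
One likely reason (a guess, assuming book_id is the primary key of books): the GROUP BY book_id is redundant, but it forces MySQL to group and sort every row before the LIMIT is applied. With the GROUP BY dropped and a (copies_effective, publication_date) index like the one sketched earlier, the ORDER BY can be resolved from the index and the scan can stop after 100 rows:

EXPLAIN SELECT b.title, b.title_html, b.authors, b.description,
       b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating
FROM books b
ORDER BY copies_effective DESC, publication_date DESC
LIMIT 0, 100;
-- With the index in place, this should report the index in the "key" column
-- and no "Using filesort" under Extra.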
