shamoon Posted October 19, 2007

Hello all, I am new to this board and fresh out of ideas. I have a two-server setup: 1. Apache, 2. MySQL. The MySQL server is running EXTREMELY slowly, with load averages exceeding 8 at times. We are not getting THAT many hits, only a few hundred per day. I am using a database wrapper, and in it I am logging all "SELECT" queries into another table. I then ran a script to execute an "EXPLAIN" command on each of the logged queries, as follows:

// Fetch every logged SELECT, grouped by query text, with a count of how often each ran
$all_queries = super_basic_search( "tmp_sql_analyze", "tmp_id, sql_query, count( sql_query ) AS counter", "1", "1", "0", "sql_query" );

echo "Clearing explanation table..\n";
super_basic_query( "TRUNCATE TABLE tmp_explain_analyze" );

$total = count( $all_queries );
echo "Analyzing " . $total . " queries...\n";

$cnt = 0;
foreach ( $all_queries as $query ) {
    // Run EXPLAIN on the logged query and store each row of its output
    $explain = super_basic_query( "EXPLAIN " . $query['sql_query'] );
    echo "Doing query #" . $cnt++ . " / " . $total . "\n";
    echo "Tmp ID: " . $query['tmp_id'] . "\n";
    foreach ( $explain as $explanation ) {
        $new_array = array(
            'tmp_id'            => $query['tmp_id'],
            'exp_select_type'   => $explanation['select_type'],
            'exp_table'         => $explanation['table'],
            'exp_type'          => $explanation['type'],
            'exp_possible_keys' => $explanation['possible_keys'],
            'exp_key'           => $explanation['key'],
            'exp_key_len'       => $explanation['key_len'],
            'exp_ref'           => $explanation['ref'],
            'exp_rows'          => $explanation['rows'],
            'exp_extra'         => $explanation['extra'],
            'exp_sql_query'     => $query['sql_query'],
            'counter'           => $query['counter']
        );
        super_basic_add( "tmp_explain_analyze", $new_array );
    }
}

The results in the tmp_explain_analyze table show nothing abnormal. There are NO queries that examine more than 4,000 rows, which is pretty low considering the main table in the database has almost 200,000 rows. What else could it be? Any help would be GREATLY appreciated. Thanks all.
fenway Posted October 19, 2007

What does the slow query log have to say?
shamoon Posted October 19, 2007

How can I see the slow query log?
fenway Posted October 19, 2007

Quoting shamoon: "How can I see the slow query log?"

You have to set it up first.
shamoon Posted October 19, 2007

Just enabled it. No output file yet.

log-slow-queries=/var/log/slow-queries.log
long_query_time = 1
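A quick way to confirm the slow log is actually active (a sketch; on MySQL 5.0 these my.cnf settings only take effect after the server is restarted, and the log file must be writable by the mysqld user):

SHOW VARIABLES LIKE 'log_slow_queries';
-- should report ON once the server has been restarted with the new my.cnf
SHOW VARIABLES LIKE 'long_query_time';
-- should report the threshold you configured (1 second here)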
shamoon Posted October 19, 2007

Hmm... it doesn't seem to be writing to the log. Any thoughts?
fenway Posted October 19, 2007

Quoting shamoon: "Hmm... it doesn't seem to be writing to the log. Any thoughts?"

You have the query time set very low, so it should be catching lots of things.
shamoon Posted October 20, 2007

What's a more appropriate length of time?
shamoon Posted October 21, 2007

With a setting of 10, here's the output of the slow log:

/usr/libexec/mysqld, Version: 5.0.22-log. started with:
Tcp port: 3306  Unix socket: /var/lib/mysql/mysql.sock
Time                 Id Command    Argument
# Time: 071020 20:19:02
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 18  Lock_time: 0  Rows_sent: 100  Rows_examined: 56077
use bsdb;
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM books b, books_to_categories AS btc WHERE btc.node_id='21' AND btc.book_id=b.book_id GROUP BY b.book_id ORDER BY copies_effective DESC, publication_date DESC LIMIT 0, 100;
# Time: 071020 20:19:16
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 59  Lock_time: 0  Rows_sent: 100  Rows_examined: 471490
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM books b WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC LIMIT 0, 100;
# Time: 071020 20:20:24
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 17  Lock_time: 0  Rows_sent: 100  Rows_examined: 189127
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM books b, books_to_categories AS btc WHERE btc.node_id='17' AND btc.book_id=b.book_id GROUP BY b.book_id ORDER BY copies_effective DESC, publication_date DESC LIMIT 0, 100;
# Time: 071020 22:03:15
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 20  Lock_time: 0  Rows_sent: 10  Rows_examined: 471400
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM books b WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC LIMIT 0, 10;
# Time: 071020 22:16:01
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 13  Lock_time: 0  Rows_sent: 100  Rows_examined: 79711
SELECT count(si.book_id) as counter, b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM search_index AS si, books b WHERE match(si.search_term) AGAINST ('a world without time') AND b.book_id=si.book_id GROUP BY si.book_id ORDER BY si.search_strength DESC, counter DESC, copies_effective DESC, publication_date DESC LIMIT 0, 100;
# Time: 071020 23:57:19
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 18  Lock_time: 0  Rows_sent: 1  Rows_examined: 0
SELECT COUNT(queue_id) FROM queue WHERE member_id='11200' AND status='1';
# Time: 071020 23:57:27
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 50  Lock_time: 0  Rows_sent: 478197  Rows_examined: 956394
SELECT * FROM isbn_similarities WHERE 1 ORDER BY 1;
# Time: 071021 0:01:31
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 11  Lock_time: 0  Rows_sent: 100  Rows_examined: 76097
SELECT count(si.book_id) as counter, b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM search_index AS si, books b WHERE match(si.search_term) AGAINST ('love life') AND b.book_id=si.book_id GROUP BY si.book_id ORDER BY si.search_strength DESC, counter DESC, copies_effective DESC, publication_date DESC LIMIT 0, 100;
fenway Posted October 22, 2007

Lots of rows examined for very few rows returned... run EXPLAIN on each one of those.
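For example, prefixing the worst offender from the log above with EXPLAIN (a sketch; the actual plan output depends on the table and index definitions):

EXPLAIN SELECT b.title, b.title_html, b.authors, b.description, b.book_id,
       b.cover_id, b.publication_date, b.avg_rating AS rating
FROM books b
WHERE 1
GROUP BY book_id
ORDER BY copies_effective DESC, publication_date DESC
LIMIT 0, 100;

Something like "Using temporary; Using filesort" in the Extra column would point to the sort being done without an index.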
shamoon Posted October 22, 2007

I have run EXPLAINs. Nothing out of the ordinary.
fenway Posted October 22, 2007

Quoting shamoon: "I have run EXPLAINs. Nothing out of the ordinary."

Really? This is terrible:

SELECT * FROM isbn_similarities WHERE 1 ORDER BY 1;

Why do you need 500K rows? Also, you have ORDER BY yourField DESC all over the place -- which means you're not using any indexes on these expensive filesort operations!
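If the application only ever needs the similarities for one book at a time, filtering on the server instead of pulling the whole table would be the usual fix (a sketch only; the column name isbn_a is hypothetical, since the thread never shows this table's schema):

-- hypothetical: assumes isbn_similarities has an indexed isbn_a column
SELECT * FROM isbn_similarities WHERE isbn_a = '0123456789';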
shamoon Posted October 22, 2007

Hmm... that one query seems a bit odd. I'll have to do some research. But what's wrong with the ORDER BYs?
fenway Posted October 22, 2007

Nothing wrong with the ORDER BY... but with DESC, it can't use the index.
shamoon Posted October 22, 2007

Hmm... so then what's a good way to order returned rows?
fenway Posted October 23, 2007

There is no "good way" -- the docs say that you can request an index to be made ASC or DESC, but AFAIK, this has not yet been implemented... there are a variety of hacks available.
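One common hack (a sketch, not from the thread itself: the extra column and index names are illustrative) is to store a pre-negated copy of the sort column and index that, so an ascending index scan returns rows in descending order of the original value:

-- hypothetical helper column; kept in sync by the application or a trigger
ALTER TABLE books ADD COLUMN copies_effective_neg INT;
UPDATE books SET copies_effective_neg = -copies_effective;
ALTER TABLE books ADD INDEX idx_copies_neg (copies_effective_neg);

-- ORDER BY ... ASC on the negated column = ORDER BY copies_effective DESC
SELECT book_id, title FROM books ORDER BY copies_effective_neg ASC LIMIT 100;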
shamoon Posted October 24, 2007

My slow log has quite a few queries that are "SET". Not sure what that does. Any thoughts?

# Time: 071024 9:31:46
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 19  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
SET insert_id=7266,timestamp=1193232706;
INSERT INTO tmp_sql_analyze (where_sql, group_sql, tables, sql_query, calling_file, calling_function, ip_address) VALUES (@'0', @'1', @'2', @'3', @'4', @'5', @'6');
# Time: 071024 9:32:03
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 17  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
UPDATE search_log SET search_terms = @'0', search_date = @'1', book_id = @'2', book_title = @'3' WHERE member_id = -1 AND search_terms = 'Carrie Karasyov';
# Time: 071024 9:32:06
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 16  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
SET insert_id=4646,timestamp=1193232726;
INSERT INTO entries (book_id, isbn, copy_number, human_code, purgatory, location, price_purchased, value, entry_status, notes, print_status, date_modified) VALUES (@'0', @'1', @'2', @'3', @'4', @'5', @'6', @'7', @'8', @'9', @'10', @'11');
# Time: 071024 9:32:22
# User@Host: bookswim[bookswim] @ D3350.servadmin.com [12.47.45.169]
# Query_time: 69  Lock_time: 0  Rows_sent: 100  Rows_examined: 483133
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM books b WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC LIMIT 0, 100;
fenway Posted October 24, 2007

Depends on how many indexes have to be updated...
shamoon Posted October 24, 2007

Any way to speed that up?
fenway Posted October 24, 2007

Quoting shamoon: "Any way to speed that up?"

Well, at least for the INSERTs, you can probably use DELAYED / IGNORE. For the UPDATEs, you can use LOW_PRIORITY.
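For reference, the syntax is just a keyword after the verb (sketches reusing tables from the logs above; note that DELAYED only works on MyISAM, MEMORY, and ARCHIVE tables, and the client gets no result back because the row is queued and written later):

INSERT DELAYED INTO tmp_sql_analyze (sql_query, ip_address) VALUES ('...', '...');
UPDATE LOW_PRIORITY search_log SET search_date = NOW() WHERE member_id = -1;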
shamoon Posted October 24, 2007

Hmm... what's the command for that?
fenway Posted October 24, 2007

Quoting shamoon: "Hmm... what's the command for that?"

Which one?
shamoon Posted October 24, 2007

Well... for delayed inserts, it seems I can only use them some of the time, because sometimes the next section of code depends on the result; same with UPDATE LOW_PRIORITY. Is there much of a savings in doing this?
fenway Posted October 24, 2007

True enough... if you're doing many inserts/updates that are for "logging" only, then yes.
shamoon Posted October 25, 2007

# Time: 071024 23:44:40
# Query_time: 67  Lock_time: 0  Rows_sent: 100  Rows_examined: 489115
use bsdb;
SELECT b.title, b.title_html, b.authors, b.description, b.book_id, b.cover_id, b.publication_date, b.avg_rating AS rating FROM books b WHERE 1 GROUP BY book_id ORDER BY copies_effective DESC, publication_date DESC LIMIT 0, 100;

Why would this need to examine all 489K rows if I just want to return 100?
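One likely explanation, sketched here rather than taken from the thread: LIMIT is applied only after GROUP BY and ORDER BY, so the server has to read and sort every row before it can hand back the top 100. If book_id is the primary key of books, the GROUP BY is redundant (each group is a single row), and dropping it plus adding an index that matches the sort might let the optimizer walk the index and stop early:

-- hypothetical composite index matching the ORDER BY columns
ALTER TABLE books ADD INDEX idx_copies_pub (copies_effective, publication_date);

SELECT title, title_html, authors, description, book_id, cover_id,
       publication_date, avg_rating AS rating
FROM books
ORDER BY copies_effective DESC, publication_date DESC
LIMIT 0, 100;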