optimization for a 1 gigabyte table - split into 2?



I have a table for guitar tabs. This table has about 270,000 rows, totalling about 1,200 megabytes.

 

It's causing page loads to take 5 seconds or more.

 

The structure is as follows:

 

CREATE TABLE IF NOT EXISTS `tabs` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `song` text NOT NULL,
  `song_slug` text NOT NULL,
  `band` text NOT NULL,
  `band_slug` text NOT NULL,
  `tab` longtext NOT NULL,
  `version` int(10) unsigned NOT NULL,
  `rating` int(11) NOT NULL default '0',
  `views` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=267636;

 

In my select query I only pull the fields I actually use on the page, and I fetch the results with mysql_fetch_assoc.

 

If I split this table into two, one with all these fields minus "tab", and one solely with "tab" and "tab_id", would it improve my page load time?
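
To be clear, the split I have in mind would look something like this (the new table names are just placeholders):

CREATE TABLE IF NOT EXISTS `tabs_meta` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `song` text NOT NULL,
  `song_slug` text NOT NULL,
  `band` text NOT NULL,
  `band_slug` text NOT NULL,
  `version` int(10) unsigned NOT NULL,
  `rating` int(11) NOT NULL default '0',
  `views` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

-- the big longtext column lives on its own, keyed by the tab's id
CREATE TABLE IF NOT EXISTS `tab_bodies` (
  `tab_id` int(10) unsigned NOT NULL,
  `tab` longtext NOT NULL,
  PRIMARY KEY (`tab_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;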

Hrmmmm....  Each song only has 1 row in that table, yes?  If so, I don't think you would gain anything by breaking it into more tables, and I think that might actually cause a performance hit because of a JOIN (not sure though).
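
If you did split it like that, any page that actually needs the tab body would have to do something like this (using the placeholder table names from your sketch above):

-- hypothetical query for a single tab page against the split tables
SELECT t.song, t.band, b.tab
FROM tabs_meta t
JOIN tab_bodies b ON b.tab_id = t.id
WHERE t.id = 12345;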

 

 

What are your indexes, and what does an example SELECT query look like?

I don't know anything about indexes.

 

Here's an example of my select query for the band page (the page that lists all tabs for a band). This is the slowest page to load.

 

$esc_band is simply the $_GET['band_slug'] variable protected with mysql_real_escape_string.

 

<?php
// fetch every tab for this band, sorted by song title and then version
$result = mysql_query("select song, song_slug, band, band_slug, version, rating, views from tabs where band_slug='$esc_band' order by song, version asc");

while ($row = mysql_fetch_assoc($result))
{
  $song = $row['song'];
  $song_slug = $row['song_slug'];
  $band = $row['band'];
  $band_slug = $row['band_slug'];
  $version = $row['version'];
  $rating = $row['rating'];
  $views = $row['views'];

  // additional coding using the variables here
}
?>

You don't turn a column into an index; you create an index on the column.

 

 

And yes, if you plan on searching by a column, you should almost always have an index on that field.

 

Without an index, MySQL has to scan every row in the table looking for ones where the column matches that value, all the way to the end (well... unless there's a LIMIT clause that lets it stop early).
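
You can see this for yourself by sticking EXPLAIN in front of the query ('some-band' here is just a placeholder slug):

EXPLAIN SELECT song, song_slug, band, band_slug, version, rating, views
FROM tabs
WHERE band_slug = 'some-band'
ORDER BY song, version;

-- "type: ALL" in the output means a full table scan, and "rows" is roughly
-- how many rows MySQL expects to read. Once the index below exists, you
-- should see "type: ref" and a much smaller rows estimate.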

 

 

Do something like:

CREATE INDEX band_slug_idx ON tabs(band_slug(100));

And see if that improves performance a little ;p. Note that because band_slug is a TEXT column, MySQL requires a prefix length on the indexed part; 100 characters should be more than enough for a slug.

 

 

 

I should note by the way that indexes can take a substantial amount of disk space.  They are necessary for quickly finding data in a table though.
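
If you're curious how much room the data and the indexes are actually taking, MySQL will tell you:

-- Data_length = bytes used by the row data,
-- Index_length = bytes used by all indexes on the table
SHOW TABLE STATUS LIKE 'tabs';

-- lists every index on the table with its columns and cardinality
SHOW INDEX FROM tabs;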
