
markwillis82

Members
  • Posts

    31
  • Joined

  • Last visited

    Never

Contact Methods

  • Website URL
    http://mwillis.co.uk

Profile Information

  • Gender
    Not Telling

markwillis82's Achievements

Newbie (1/5)

Reputation: 0

  1. Decompressing the CSV first would be the easiest way, as MySQL can't handle the decompression itself. You could then wrap the whole process in a BASH/DOS script (see the LOAD DATA sketch after this list).
  2. SELECT `table`.*,
            ROUND(SQRT(POW((69.1 * (50.734501 - `table`.`lat`)), 2) + POW((53 * (-2.98377 - `table`.`long`)), 2)), 1) AS `distance`
     FROM `table`
     WHERE `table`.`lat` < (50.734501 + 0.12)
       AND `table`.`lat` > (50.734501 - 0.12)
       AND `table`.`long` < (-2.98377 + 0.12)
       AND `table`.`long` > (-2.98377 - 0.12)
     ORDER BY `distance` ASC
     If you have the lat/long of the entered postcode, replace (50.734501, -2.98377) with your own. The 0.12 is roughly 10 miles, so 5 miles would be 0.06. The `distance` column is in miles. Hope this helps, Mark Willis
  3. exec("convert -size {$size[0]}x{$size[1]} $original_image -thumbnail $max_widthx$max_height $new_image"); add the new absolute directory for new image exec("convert -size {$size[0]}x{$size[1]} $original_image -thumbnail $max_widthx$max_height /var/www/html/$new_image");
  4. That's much better. Do you have any other indexes on that table? You may be able to modify an old index instead of creating a new one. Glad I could help.
  5. If you want, try adding one index to each table containing the four WHERE columns - so the index would be on telekurs, sxc, ccy and price (see the index sketch after this list). Run the query again, time it, and show us an EXPLAIN again.
  6. mysqldump is on Windows too (look in the bin directory of the MySQL installation). If both servers are on a network then just use COPY through a network share (a batch file instead of BASH). It will still be 3 commands.
  7. ah - after more searching on the member list, my username was kram
  8. Can you place EXPLAIN at the start of the query and post back the results? You may not be using indexes (resulting in lots of full table scans). (See the EXPLAIN sketch after this list.)
  9. Your db connection isn't "visible" from inside the function. Fix it by adding a global declaration for the connection inside print_tables():
     function print_tables($table_name, $name, $cols) {
         global $db_connect_dest; // make the db connection visible inside the function
         $sqlcols = explode(",", $cols);
         for ($i = 0; $i < count($sqlcols); $i++) {
             $sqlcols[$i] = trim($sqlcols[$i]);
         }
         while ($nt = mysql_fetch_row($table_name)) {
             $data = null;
             $x = 0;
             $sql = "insert into " . $name . " ";
             $scols = "(";
             $svals = "(";
             foreach ($nt as $info) {
                 if ($info) {
                     if (is_numeric($info)) {
                         $scols .= $sqlcols[$x] . ", ";
                         $svals .= $info . ", ";
                     } else {
                         $data = addslashes($info);
                         $scols .= $sqlcols[$x] . ", ";
                         $svals .= "'" . $data . "', ";
                     }
                 }
                 $x++;
             }
             // replace the trailing ", " with a closing ")"
             $scols = substr_replace($scols, ")", strlen($scols) - 2, 2);
             $svals = substr_replace($svals, ")", strlen($svals) - 2, 2);
             $sql .= $scols . " values " . $svals;
             //print $sql . "<br />";
             $result = mysql_query($sql, $db_connect_dest) or die(mysql_error());
         }
     }
     Alternatively, if you're on two Linux boxes, would it not be easier to use a BASH script and scp (file transfer) and do:
     mysqldump #DATABASE# #TABLE# > dump.sql
     scp dump.sql user@server2.co.uk:~/dump.sql
     ssh user@server2.co.uk -C "mysql < dump.sql"
     The last ssh command may not be correct (running off memory here). Hope this helps, Mark Willis
  10. Yep - although I would make group_user a composite primary key of uid,gid (both columns). This means each user can only be in each group once, and allows the use of indexes (see the schema sketch after this list). Also, for "password": if you're storing a hash (md5) you can make the column CHAR(32) instead of VARCHAR (but this depends on what you are storing).
  11. COUNT(*) without a WHERE clause will be slower on InnoDB, and if you are joining tables across multiple engine types (MyISAM to InnoDB) you may see a small performance decrease. I would suggest moving the hit count out of the main table for 3 reasons:
      1) Table-level locking - keeping the main table as MyISAM gives excellent SELECT speed, but at the cost of table-level locking, so connections will be left waiting while updates are processed.
      2) Query cache - if your application runs the same queries on the table constantly, they will be served from the query cache; but if the underlying table is constantly being updated, each update invalidates the cached query, rendering it useless, and the next time the query runs it has to hit the table again.
      3) Faster updates - as the hit count table will be very small, the updates will run much quicker; and as the table will have a fixed row length, it will be quicker to find rows.
      Hope this helps, Mark Willis
  12. I've been working on something like this for a while (see: crawler catcher, on the link in my signature). It depends on how much data you wish to store. For example, showing how many hits each month for x users is great, but if you are also including the current month (reading rows that are being updated) you may get longer query times due to lock waits (more a MyISAM problem than an InnoDB problem); this can be a problem if LOTS of people are accessing the table. One solution would be to use two tables: a fill table and a log table. The page hits go into the fill table, then every hour / every day you aggregate those rows (GROUP BY ip/userID) and update the "log" table via a cron (see the aggregation sketch after this list). This would mean you are only performing SELECTs on the log table (MyISAM is best for this), and any aggregate functions will be much faster. Just make sure that in the "main" tables you have a fixed row length, and ideally only store id numbers. So if you store user agents, store them in a separate table, check whether the user agent already exists, and store the ID of that user agent. My "test" log table (slightly different to the above example) stores: IP, proxy, useragent, domain, page, timestamp, hit count - each row is 123 bytes, so 1Mil (ish) rows is 100MB.
  13. slightly off topic - kickstart... are you also on BCF? I used to be a member (not sure if my account is still up) it was "drummer"
  14. If you want to get each number (that is, each number between spaces): load the file into an array, loop through each line, explode each line by " ", then loop through each member of the new array. That will give you your numbers, and you can perform whatever function you want on each one.
  15. OK - found a few bugs... In track.php line 8:
      if (isset($image) {
      should be
      if (isset($image)) {
      That should then allow the tracker image to work: http://www.theelemethian.com/tracker/track.php?id=1 (it currently displays an error message). This code:
      <?
      $id = $_GET['id'];
      $myFile = "log/$id.htm";
      $fh = fopen($myFile, 'r');
      $theData = fread($fh, filesize($myFile));
      fclose($fh);
      echo $theData;
      ?>
      is returning an error because the file does not exist (if track.php isn't working, it's not creating the file), so it might be worth putting a try/catch block around the code, or alternatively using file_exists() in an if block. But if you are using this as a form of tracking, you will quickly generate a lot of files. It may be worth using a database backend (quicker to access and a lot less disk IO). If you want to track every page hit, etc. (including Googlebot and other crawlers), I'm currently building an open source project, "crawler catcher", which is currently in alpha stage; once I'm happy with the code base I will release it in BETA to hopefully get some more coverage. http://mwillis.co.uk/crawler-catcher/ <- link to more info. I hope this helps, Mark
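
LOAD DATA sketch (for post 1): a minimal sketch of the load step, assuming a hypothetical file /tmp/data.csv (already decompressed by the wrapping script, e.g. with gunzip) and a hypothetical table my_table whose columns match the CSV:

LOAD DATA INFILE '/tmp/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip a header row, if the CSV has one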
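
Index sketch (for post 5): what the suggested composite index could look like, assuming a hypothetical table name prices (the real table name isn't shown in the thread):

-- one index covering all four WHERE columns
ALTER TABLE prices
    ADD INDEX idx_where_cols (telekurs, sxc, ccy, price);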
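
EXPLAIN sketch (for post 8): what "place EXPLAIN at the start of the query" means, shown with a hypothetical query:

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- in the output, key = NULL means no index is being used,
-- and type = ALL means a full table scan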
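
Schema sketch (for post 10): the two changes the post describes, with hypothetical table and column definitions around the parts it specifies (a composite primary key on uid,gid and CHAR(32) for an md5 hash):

CREATE TABLE group_user (
    uid INT NOT NULL,
    gid INT NOT NULL,
    PRIMARY KEY (uid, gid)  -- each user can be in each group only once
);

-- assuming a users table with a VARCHAR password column holding md5 hashes
ALTER TABLE users MODIFY password CHAR(32) NOT NULL;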
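
Aggregation sketch (for post 12): one way the hourly/daily cron step could look, with hypothetical table and column names (fill_hits for the raw hits, log_hits keyed on user_id + hit_date); a real job would bound both statements by timestamp so hits arriving between them aren't lost:

-- roll the raw hits up into the log table
INSERT INTO log_hits (user_id, hit_date, hits)
SELECT user_id, DATE(hit_time), COUNT(*)
FROM fill_hits
GROUP BY user_id, DATE(hit_time)
ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits);

-- then clear the processed rows
TRUNCATE TABLE fill_hits;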