For number 1, I don't have any at this point. The script is a log reader/parser called Ultrastats for COD4/5. I noticed in the code that it does exactly what I didn't want it to do: it goes through the file 100% until it reaches the line it left off at, and then resumes, while the big file just keeps getting bigger. That's bad, as it bogs down the server. I guess what I'm asking is: if I resume filex.log from position 25000 and the local filex.log is 0 bytes, will it just append 25001 and up, making this new file a much smaller file? (There's a rough sketch of the kind of resume I mean below the edit.)
2. They're logs on some game servers for COD4 and 5. In both games, the guy I'm trying to help runs some of the more popular servers, and the logs grow a huge amount very quickly. As it is, with just two months of logging on COD5, he has one that's running about 2 GB. I already know I'm going to take these files down, split them into smaller portions, and parse them one by one (roughly as sketched below the edit). What I'm worried about is that when all is said and done with the logs and I go to rerun Ultrastats, it'll redownload the whole file and then try to parse it. Bad times, as it'll not only time out but abuse the server to all hell.
Edit: Also, I may eventually, somehow, talk him into stopping the server to clear the logs, but I know talking him into doing it more than once won't happen. So the logs will eventually get huge again and I'll still be in the same boat.
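To make the download side clearer, here's the kind of thing I'm picturing. It's only a rough, untested sketch that assumes plain FTP access to the box; the host, login, file names and the lastpos.txt idea are all placeholders I made up, not anything Ultrastats actually does:

<?php
// Rough sketch: resume the remote log from the byte offset we stopped at
// last time, into a fresh (empty) local file, so only the new tail comes down.
// Everything here (lastpos.txt, games_mp.log, the FTP details) is made up.

$posfile = 'lastpos.txt';
$offset  = file_exists($posfile) ? (int) file_get_contents($posfile) : 0;

$ftp = ftp_connect('ftp.example.com');
ftp_login($ftp, 'user', 'pass');

// Grab only bytes $offset and up; the local file stays small.
$local = fopen('filex.log', 'wb');
ftp_fget($ftp, $local, 'games_mp.log', FTP_BINARY, $offset);
fclose($local);

// Next run starts right after the last byte we actually pulled down.
clearstatcache();
file_put_contents($posfile, $offset + filesize('filex.log'));

ftp_close($ftp);
?>

If resume works the way I think it does, the local copy would only ever hold what was added since the previous run, which is exactly what I'm after.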
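For the 2 GB monsters I already have, I figure splitting them could go something like this, one line at a time so nothing huge ever sits in memory (again just a sketch; the file names and the 500k-lines-per-chunk size are arbitrary):

<?php
// Sketch: split a huge log into numbered chunks of 500,000 lines each,
// streaming it so the whole file never has to fit in memory.

$in    = fopen('games_mp_full.log', 'rb');
$out   = null;
$lines = 0;
$part  = 0;

while (($line = fgets($in)) !== false)
{
    if ($lines % 500000 === 0)   // start a new chunk every 500k lines
    {
        if ($out) fclose($out);
        $part++;
        $out = fopen(sprintf('games_mp_part%03d.log', $part), 'wb');
    }
    fwrite($out, $line);
    $lines++;
}

if ($out) fclose($out);
fclose($in);
?>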
Here's the problem in number 1 with how it parses the file, and what I'm hoping to either work around or recode:
while (!feof($myhandle))
{
    // A logline was never more than 1024 bytes, so it's enough buffer
    $gl_linebuffer = fgets($myhandle, 1024);
    if ( $currentline < $db_lastlogline )
    {
        // Repeat until new file position is reached
        $currentline++;
        continue;
    }
    // ... the actual parsing of $gl_linebuffer follows from here ...
}
It will go until it times out on larger files and abuses the server like nothing else. What I want to do is either get it to portion the files, or, upon downloading, place everything from the last log line through the end into a new file, so instead of 2 GB it's only getting and holding the last 250 MB or so that was added since its last download.
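What I keep coming back to is dropping the line counting completely and remembering a byte offset instead, something along these lines (untested; $db_lastoffset is a name I made up and would be stored the same way $db_lastlogline is now):

<?php
// Sketch: jump straight to where the last run stopped instead of fgets()ing
// through every already-parsed line. $db_lastoffset is hypothetical and
// would be loaded/saved wherever $db_lastlogline lives today.

$db_lastoffset = 0;               // really: load from the database
$myhandle = fopen('filex.log', 'rb');

fseek($myhandle, $db_lastoffset); // skip everything already parsed

while (!feof($myhandle))
{
    // A logline was never more than 1024 bytes, so it's enough buffer
    $gl_linebuffer = fgets($myhandle, 1024);
    if ($gl_linebuffer === false) break;

    // ... parse $gl_linebuffer exactly like the script does now ...
}

// Save the new position for the next run, instead of a line count.
$db_lastoffset = ftell($myhandle); // really: write back to the database
fclose($myhandle);
?>

That way a rerun only ever touches the part of the log that showed up since the last parse, no matter how big the file gets.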
Thanks for the response,
--pyr0