drifter

Members
  • Posts: 189
  • Joined
  • Last visited: Never

Everything posted by drifter

  1. You need to send a header with the message telling the mail client that it is an HTML email. From php.net:
[code]
$headers  = 'MIME-Version: 1.0' . "\r\n";
$headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
[/code]
(A full mail() example follows after this list.)
  2. Change it from an INT field to a CHAR or VARCHAR, or add the leading zeros back when you get the number out of the db (there is a small padding example after this list).
  3. Well, when you create the username directory, do you see it in your file system? Also, try 777 - some hosting companies I have been with have PHP running under different users such as nobody, so I need different permissions at different places.
  4. file_get_contents() and preg_match() (there is a short example combining the two after this list).
  5. Well, for one you need a % wildcard when you use LIKE, and LIKE searches are usually slow. Also, depending on what you are doing, you can visit pear.php.net and get a DB class - that will help you a lot. (A small LIKE example follows after this list.)
  6. [code]
mkdir($_POST['user'], 0700);
mkdir($_POST['user'] . "/images/", 0700);
[/code]
and make sure the uploads directory is writeable (a fuller sketch follows after this list).
  7. EDIT: SHOOT - I answered for one random poll... (For several: do a while loop and array_push() - that will give you all of them in an array - then do a shuffle() and take the first 10.) There is an ORDER BY RAND() in MySQL, however it is kind of slow. If you know you have at least 20 polls in there, you can do $num = rand(0, 19) (get a random number) with ORDER BY id DESC (to get the most recent) and LIMIT $num,1. That would get you a random poll out of your last 20. Or do two queries: one with a COUNT, then choose a random offset from that number, but that is a bit slower again. Hope this helps... (A sketch of the LIMIT approach follows after this list.)
  8. I was reading that tables are a lot faster for searches if all the columns are fixed width. I have a table with about 150,000 rows and maybe 60 fields. I can easily make them all fixed width except one - the description - which is a TEXT field. I would have to move that out to a new table... join... etc. My question is: how much of a speed boost do you really get? That table is populated by 15 data importers from different sources, and I would need to rewrite them all, so I just need to know if searches would make a noticeable difference. NOTE: all MyISAM tables. Thanks, Scott
  9. Well, I use this function to put "..." after a word once the text passes a certain number of characters.
[code]
function myfragment($s, $n) {
    $scan = 0;
    while ($scan == 0) {
        if (substr($s, $n, 1) == ' ' || strlen($s) < $n) {
            $scan = 1;
        } else {
            $n++;
        }
    }
    return substr($s, 0, $n) . "...";
}
[/code]
Let's try this (not tested, just a guess):
[code]
function myfragments($s, $n, $ad) {
    $scan = 0;
    while ($scan == 0) {
        if (substr($s, $n, 1) == ' ' || strlen($s) < $n) {
            $scan = 1;
        } else {
            $n++;
        }
    }
    return substr($s, 0, $n) . "<br />" . $ad . "<br />" . substr($s, $n);
}

// Insert $myad in $article after the first word that ends after 500 characters
echo myfragments($article, 500, $myad);
[/code]
Hope this works, or at least gives you an idea.
  10. Well, just looking at this array:
[code]
Array
(
    [0] => Array ( [0] => Stubbs [char_name] => Stubbs )
    [1] => Array ( [0] => Ic3m4n [char_name] => Ic3m4n )
    [2] => Array ( [0] => Bawlz [char_name] => Bawlz )
    [3] => Array ( [0] => CrispinxLongbow [char_name] => CrispinxLongbow )
    [4] => Array ( [0] => Lilice [char_name] => Lilice )
)
[/code]
your foreach with $key => $name would give you
[code]
$key = 0
$name = Array ( [0] => Stubbs [char_name] => Stubbs )
[/code]
so you would need to do $name[0] or $name['char_name']. This also means you do not need the key, so you can do
[code]
foreach ($online as $name) {
    echo $name['char_name'];
}
[/code]
  11. I have a script I need to run daily that downloads a bunch of .gz files and then parses them, etc. One of the files is currently 98MB - when I run the script it gets about 45MB-47MB downloaded and just dies. The script works fine with the other files if I comment out the biggest one. I have tried ftp_get(), ftp_nb_get(), etc. I can resume the file download, but one day I will be setting this up as a cron job, and the script does not know when it fails - it just stops, with no errors or anything. I am just wondering: is there some setting somewhere that I can pass, or something in php.ini or elsewhere, that will allow me to fetch very large files? Thanks, Scott
  12. I have a very large data file that I need to parse every day - somewhere around 5MB. Each line is tab delimited, so I must explode it, and then I use MySQL to insert or update a database. I know that realistically only about 5% of the lines change each day, so I was wondering if I could make this faster by storing yesterday's file and then using arrays... Basically, I have never done much speed testing with large arrays. I have a 5MB file; I read it into an array and loop through it to handle each line. Would it be faster to just update the DB for every line, or to search another very large array that contains yesterday's data and only process a line if there is a change? To make it even more simple... how fast is in_array() with a 5MB array? Faster than just exploding each line in the array and writing it to the database? (See the sketch after this list.)
  13. OK - I have a table with users...
userid - name - etc - etc
1 - scott
2 - angela
3 - perte
4 - jamie
I have a second table with hobbies...
id - userid - hobby
1 - 1 - golf
2 - 1 - swimming
3 - 2 - golf
4 - 4 - tennis
5 - 1 - bowling
I want to find all users that have both golf and swimming as hobbies. How do I do this? Really, I just need to get pointed in the right direction. (There is a query sketch after this list.) Thanks in advance, Scott
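A minimal sketch of the HTML email headers from post 1 inside a complete mail() call. The recipient, subject, and body below are made-up placeholders, not values from the original post.
[code]
<?php
// Headers that mark the message as HTML (as in post 1)
$headers  = 'MIME-Version: 1.0' . "\r\n";
$headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";

// Hypothetical recipient, subject, and body - replace with your own values
$to      = 'someone@example.com';
$subject = 'Test HTML email';
$message = '<html><body><h1>Hello</h1><p>This is an <b>HTML</b> email.</p></body></html>';

mail($to, $subject, $message, $headers);
?>
[/code]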
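For the leading zeros in post 2, a quick sketch of padding the number back out after reading it from an INT column; the width of 5 is only an assumed example.
[code]
<?php
// Suppose the database returned 42 but it should display as 00042
$number = 42;

// str_pad() puts the zeros back on the left...
echo str_pad($number, 5, '0', STR_PAD_LEFT);  // 00042

// ...and sprintf() does the same with a format string
echo sprintf('%05d', $number);                // 00042
?>
[/code]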
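Post 4 only names the two functions, so here is a minimal sketch of using them together. The URL and the pattern (grabbing the page <title>) are assumptions for illustration.
[code]
<?php
// Fetch the remote page into a string (requires allow_url_fopen)
$html = file_get_contents('http://www.example.com/');

// Pull the contents of the <title> tag out of the HTML
if (preg_match('/<title>(.*?)<\/title>/is', $html, $matches)) {
    echo $matches[1];
} else {
    echo 'No title found';
}
?>
[/code]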
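A small sketch of the % wildcard mentioned in post 5. The table and column names (products, name) and the old mysql_* calls are assumptions chosen for illustration, not part of the original post.
[code]
<?php
// Assumes an open mysql_connect() connection.
// Escape the user's term, then wrap it in % wildcards for a partial match
$term = mysql_real_escape_string($_GET['q']);
$sql  = "SELECT id, name FROM products WHERE name LIKE '%" . $term . "%'";

$result = mysql_query($sql);
while ($row = mysql_fetch_assoc($result)) {
    echo $row['name'] . "<br />";
}
?>
[/code]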
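A slightly fuller sketch of the directory creation in post 6. The basename() sanitising, the uploads/ subdirectory name, and the 0777 fallback are assumptions added for illustration.
[code]
<?php
// Strip anything path-like out of the submitted username before using it as a directory name
$user = basename($_POST['user']);

// Create the user's directory and an images/ subdirectory, as in post 6
mkdir($user, 0700);
mkdir($user . "/images/", 0700);

// Create an uploads directory and make sure PHP can write to it
mkdir($user . "/uploads/", 0700);
if (!is_writable($user . "/uploads/")) {
    chmod($user . "/uploads/", 0777);  // looser permissions, as suggested in post 3
}
?>
[/code]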
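A sketch of the "random poll out of the last 20" idea from post 7, written with the old mysql_* API; the polls table and its question column are assumed names.
[code]
<?php
// Assumes an open mysql_connect() connection.
// Pick a random offset between 0 and 19
$num = rand(0, 19);

// Order by id descending so the offset lands within the 20 most recent polls,
// then take exactly one row at that offset
$sql    = "SELECT * FROM polls ORDER BY id DESC LIMIT " . $num . ",1";
$result = mysql_query($sql);
$poll   = mysql_fetch_assoc($result);

echo $poll['question'];
?>
[/code]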
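For the question in post 12, a sketch of one common way to skip unchanged lines: flip yesterday's lines into array keys and test membership with isset(), which scales much better than in_array() on a file this size. The file names and the processing step are placeholders.
[code]
<?php
// Load yesterday's file with each full line as an array key
// (isset() key lookups are far faster than in_array() scans on a 5MB data set)
$yesterday = array_flip(file('yesterday.txt', FILE_IGNORE_NEW_LINES));

// Walk today's file and only touch the database for new or changed lines
foreach (file('today.txt', FILE_IGNORE_NEW_LINES) as $line) {
    if (isset($yesterday[$line])) {
        continue;  // identical line existed yesterday - nothing to update
    }
    $fields = explode("\t", $line);
    // ... INSERT or UPDATE using $fields here ...
}
?>
[/code]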
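One common way to approach the question in post 13: join the two tables, keep only the two hobbies of interest, and require both to be present per user. The table names users and hobbies are assumed from the column layout shown in the post.
[code]
<?php
// Assumes an open mysql_connect() connection.
// Users who have BOTH golf and swimming in the hobbies table
$sql = "SELECT u.userid, u.name
        FROM users u
        INNER JOIN hobbies h ON h.userid = u.userid
        WHERE h.hobby IN ('golf', 'swimming')
        GROUP BY u.userid, u.name
        HAVING COUNT(DISTINCT h.hobby) = 2";

$result = mysql_query($sql);
while ($row = mysql_fetch_assoc($result)) {
    echo $row['name'] . "<br />";
}
?>
[/code]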