
#1 sungo

Posted 23 June 2006 - 02:16 AM

Hi there, I am wondering about searching through a website's content and snipping things such as news out of it to be displayed on my own website. I'm new to PHP, so I looked around for some scripts to tinker with. The only one I could find was:

$url = 'http://txt.lyricz.info/test.cgi/rip/dan.txt';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
eregi("<pre>(.*)</pre>", $lines_string, $head);
$lines = split("\n", $head[0]);
$x = count($lines);
for ($i=0;$i<$x;$i++) {
  echo $lines[$i];
  echo "\n";
}

As far as I can see, all this does is look at the source of $url and clip out the stuff between <pre> and </pre>. The one problem with this is that <pre> and </pre> are still included in the snippet. I also cannot seem to get this code to work with anything other than HTML tags (i.e. <pre>, <center>, <head>, <body>, etc.). I was wondering if there was a better way to go about searching through a website's content and snipping out a bit of info to be displayed elsewhere?
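To show what I mean about the tags being included: from poking around the manual it looks like preg_match with a capture group might do it, since $match[1] would hold only what sits between the tags. This is just a sketch I put together (the $html string here is made-up sample data, not the real page), so I'm not sure it's the right approach:

```php
<?php
// Sample HTML standing in for the fetched page (an assumption for testing):
$html = "<html><body><pre>today's news item</pre></body></html>";

// The (.*?) capture group grabs only the text *between* <pre> and </pre>,
// so the tags themselves are not part of the snippet.
// The "is" modifiers make the match case-insensitive and let . span newlines.
if (preg_match('#<pre>(.*?)</pre>#is', $html, $match)) {
    echo $match[1];   // prints: today's news item
}
?>
```

Swapping <pre> in the pattern for <center>, <body>, and so on would presumably work the same way, which is where the eregi version kept failing for me.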

Ideally I would like to be able to snip out whatever falls between arbitrary strings such as " ?sn= " and " '); ".
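For arbitrary strings like those (which contain characters such as ? and parentheses that seem to be special in patterns), I gather preg_quote is meant to escape them first. Here is my rough attempt at a helper; the function name snip and the sample input are my own inventions, so treat this as a sketch rather than something I know works in all cases:

```php
<?php
// Clip the text between two arbitrary delimiter strings.
// preg_quote() escapes pattern metacharacters (?, ), ;, etc.) in the
// delimiters; '#' is passed so the pattern delimiter is escaped too.
function snip($haystack, $start, $end) {
    $pattern = '#' . preg_quote($start, '#') . '(.*?)' . preg_quote($end, '#') . '#s';
    if (preg_match($pattern, $haystack, $match)) {
        return $match[1];   // only what sits between the two delimiters
    }
    return false;           // delimiters not found
}

// Made-up example line resembling the "?sn=" case from above:
echo snip("foo.php?sn=12345');", "?sn=", "');");   // prints: 12345
?>
```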

Thanks in advance!
