donatello

Members
  • Posts: 14
  • Joined
  • Last visited: Never

About donatello

  • Birthday: 08/09/1959

Contact Methods

  • Website URL: http://www.robotsgenerator.com

Profile Information

  • Gender: Male
  • Location: Warsaw, Poland

donatello's Achievements

Newbie (1/5)

Reputation: 0

  1. OK. I turned on the error reporting like so:

     ini_set('display_errors', 1);
     error_reporting(E_ALL | E_STRICT);

     No help. Here is the page: http://www.killersmiley.com/test/error.php
  2. Also, Ajax tabs like they use on the Yahoo homepage.
  3. Use an accordion script. http://www.javascriptsearch.org/results.html?cx=016703912148653225111%3A2snawkdvrjq&cof=FORID%3A9&q=accordion+script#915
  4. Actually, no... I'm a total novice with PHP as this is a hobby... Can I do that for a single directory?
  5. I have had a problem with some scripts I wrote (screen scrapers) that worked great in PHP4 but stopped working the minute I upgraded to PHP5. I can change all of my filenames to have the .php4 extension and this solves the problem, but since this encompasses a number of sites, internal links and hundreds of files, this is not my first-choice solution. Here is the scraper. What it does is take items from the Zazzle results page by category, strip out the formatting, and add my affiliate ID, and then I can present these items on my page.

     <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
     <html xmlns="http://www.w3.org/1999/xhtml">
     <head>
     <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
     <title>Test of scrape</title>
     <link rel="stylesheet" type="text/css" href="/css/scraper.css" />
     <script type='text/javascript' src='http://www.zazzle.com/js/logging/omniture/s_code.zjs/r-52.78223/site-zazzle.js'></script>
     </head>
     <body>
     <div class="gridCell " id="page_productsGrid_assetCell1">
     <?php
     $page = file_get_contents("http://www.zazzle.com/cool+smiley+gifts");
     // comment out the <span> tags completely
     $page = preg_replace('/<span/', "<!-- <span", $page);
     $page = preg_replace('/<\/span>/', "<\/span> -->", $page);
     $page = preg_replace('/<a /', "<a rel=\"nofollow\" ", $page);
     $rf_id = "238219236805025733";
     // Regular expression to parse "&rf=" and the $rf_id into the existing link
     $page = preg_replace("/(.*?)(href\s*=+\s*[\"\'])(.*?)([\"\'])(.*?)/is", "$1$2$3?rf=$rf_id$4$5", $page);
     $test = explode('<div style="position:relative" class="clearfix">', $page);
     for ($t = 1; $t <= count($test) - 2; $t++) {
         print "<div class=\"gridCellInfo\" id=\"page_products\">";
         print $test[$t];
     }
     ?>
     </body>
     </html>

     This works fine in PHP4 and not at all in PHP5 (a diagnostic sketch appears after this list). My ideal solution would be an .htaccess file that I could put in any directory under PHP5 to make it default to PHP4. I have tried this, to no avail (.htaccess):

     <IfModule mod_rewrite.c>
     RewriteEngine On
     AddType text/html .php4
     AddHandler php4-script .php .html php5
     </IfModule>

     I have also tried a few alternatives... this appears to be a common problem, and I have scoured the web and found no solution. Here are the two pages in PHP4 and PHP5:
     php4: http://www.killersmiley.com/test/cool-smiley.php4
     php5: http://www.killersmiley.com/test/cool-smiley.php
  6. I have a simple script which records search terms and writes them to an external file. I would like to limit this file's size. The use case: I have a search page where people can search for PDF files, and I would like a tag cloud or list of the most recently searched terms at the bottom of the page. Here is my script, which works beautifully:

     <?php
     $pattern = "/filetype:(\w+)/"; // filetype:(wildcard for word) to grab the file extension along with the word filetype:
     if ($_GET['q'] == "") {
         $term = "";
     } else {
         $term = preg_replace("$pattern", '', $_GET['q']); // get rid of the filetype parameter
     }
     $searched = $term . ", ";
     $fopen = fopen("searched.html", "a");
     fwrite($fopen, $searched);
     fclose($fopen);
     ?>

     The above code grabs the search term when the SERP page is opened and writes it to a file. I will later use a PHP include to put the contents of that file at the bottom of my search engine page. The problem is that after a million searches this file will be huge! Question: how can I limit the file size and organize these search terms so that the most recent ones appear on the page? (One possible approach is sketched after this list.)
  7. http://www.TheGuitarTuner.com Please review. The content is a bit thin at the moment as I am producing a series of instructional videos which will go up on the site. Still working out a new Flash tuner... but in general, this will be the look and feel.
  8. With regard to the theme, fair enough; my changes were minimal. But I was thinking more of the search algorithms and the content aspect of the site, as this is a programming-oriented forum.
  9. http://www.Video-Search.org Please review. I originally created this design for a TV related site, and decided to use it for a collection of video search algorithms and video search engines. There is a collection of other video resources on the site. I was considering moving it over to Wordpress, but for a hobby site it's probably not worth the effort...
  10. http://www.FontSearch.org Please review my new site. I used WordPress as a CMS and set it to be a static-page site. The search algorithms I wrote myself using the Google CSE and JavaScript. Some custom PHP is in there too, but not a lot as I am not a wizard but more of a dilettante. I wrote a few of these search algorithms when I was looking for fonts for myself when I had to edit some tricky PDF files with embedded, esoteric fonts. Afterwards, I decided to put them online for others to use too. I was surprised to be able to get a great domain... FontSearch.org
  11. I posted a URL extractor script earlier today... here's the thread: http://www.phpfreaks.com/forums/php-coding-help/email-extractor-script/
  12. Could you have a required piece of the code reside on your server? Say, use a PHP include to pull in a single line of code that the script retrieves from your server... Just a thought. I've done this for my own code so that I do not have to update multiple files across affiliate sites. You can use cURL; a rough sketch appears after this list.
  13. Yes. I was looking to combine the two scripts and after several unsuccessful attempts am pleading for help... The final script should be able to pull all of the links out of the page, as the link extractor I posted above already does. THEN, it should parse each of the found pages for email addresses and print them. I'm not sure how to combine these two scripts to make this work.
  14. I am working on an email extractor script that will extract emails from a site. I have a working script that will extract them from a single URL, but what I need it to do is to follow the links on the page. Here is my email script:

      <?php
      $the_url = isset($_REQUEST['url']) ? htmlspecialchars($_REQUEST['url']) : '';
      ?>
      <form method="post">
      Please enter full URL of the page to parse (including http://):<br />
      <input type="text" name="url" size="65" value="http://<?php echo str_replace('http://', '', $the_url); ?>"/><br />
      or enter text directly into textarea below:<br />
      <textarea name="text" cols="50" rows="15"></textarea>
      <br />
      <input type="submit" value="Parse Emails" />
      </form>
      <?php
      if (isset($_REQUEST['url']) && !empty($_REQUEST['url'])) {
          // fetch data from specified url
          $text = file_get_contents($_REQUEST['url']);
      } elseif (isset($_REQUEST['text']) && !empty($_REQUEST['text'])) {
          // get text from text area
          $text = $_REQUEST['text'];
      }
      // parse emails
      if (!empty($text)) {
          $res = preg_match_all(
              "/[a-z0-9]+([_\\.-][a-z0-9]+)*@([a-z0-9]+([\.-][a-z0-9]+)*)+\\.[a-z]{2,}/i",
              $text,
              $matches
          );
          if ($res) {
              foreach (array_unique($matches[0]) as $email) {
                  echo $email . "<br />";
              }
          } else {
              echo "No emails found.";
          }
      }
      ?>
      <!-- Email Extractor END -->

      It's a bit rough and quirky, but it works for a single URL. Here is the email extractor in action: http://www.site-search.org/email-extractor.php

      My ideal solution would be to combine this script with my URL extractor/link-extractor script:

      <!-- URL Extractor BEGIN -->
      <?php
      // findlinks.php
      // php code example: find links in an html page
      // mallsop.com 2006 gpl
      echo "<form method=post action=\"$PHP_SELF\"> \n";
      echo "<p><table align=\"absmiddle\" width=\"100%\" bgcolor=\"#cccccc\" name=\"tablesiteopen\" border=\"0\">\n";
      echo "<tr><td align=left>";
      if ($_POST["FindLinks"]) {
          $urlname = trim($_POST["urlname"]);
          if ($urlname == "") {
              echo "Please enter a URL. <br>\n";
          } else {
              // open the html page and parse it
              $page_title = "n/a";
              $links[0] = "n/a";
              //$meta_descr = "n/a";
              //$meta_keywd = "n/a";
              if ($handle = @fopen($urlname, "r")) { // must be able to read it
                  $content = "";
                  while (!feof($handle)) {
                      $part = fread($handle, 1024);
                      $content .= $part;
                      // if (eregi("</head>", $part)) break;
                  }
                  fclose($handle);
                  $lines = preg_split("/\r?\n|\r/", $content); // turn the content into rows
                  // boolean
                  $is_title = false;
                  //$is_descr = false;
                  //$is_keywd = false;
                  $is_href = false;
                  $index = 0;
                  //$close_tag = ($xhtml) ? " />" : ">"; // new in ver. 1.01
                  foreach ($lines as $val) {
                      if (eregi("<title>(.*)</title>", $val, $title)) {
                          $page_title = $title[1];
                          $is_title = true;
                      }
                      if (eregi("<a href=(.*)</a>", $val, $alink)) {
                          $newurl = $alink[1];
                          $newurl = eregi_replace(' target="_blank"', "", $newurl);
                          $newurl = eregi_replace(' rel="nofollow"', "", $newurl);
                          $newurl = eregi_replace(" title=\"(.*)\"", "", $newurl);
                          $newurl = trim($newurl);
                          $pos1 = strpos($newurl, "/>");
                          if ($pos1 !== false) {
                              $newurl = substr($newurl, 1, $pos1);
                          }
                          $pos2 = strpos($newurl, ">");
                          if ($pos2 !== false) {
                              $newurl = substr($newurl, 1, $pos2);
                          }
                          $newurl = eregi_replace("\"", "", $newurl);
                          $newurl = eregi_replace(">", "", $newurl);
                          //if (!eregi("http", $newurl)) { // local
                          //    $newurl = "http://".$_SERVER["HTTP_HOST"]."/".$newurl;
                          //}
                          if (!eregi("http", $newurl)) { // local
                              $pos1 = strpos($newurl, "/");
                              if ($pos1 == 0) {
                                  $newurl = substr($newurl, 1);
                              }
                              $newurl = $urlname."/".$newurl;
                          }
                          // put in array of found links
                          $links[$index] = $newurl;
                          $index++;
                          $is_href = true;
                      }
                  } // foreach lines done
                  echo "<h2>Extracted Links</h2>\n";
                  echo "<p><b>Page Summary</b><br>\n";
                  echo "<b>Url:</b> ".$urlname."<br>\n";
                  if ($is_title) {
                      echo "<b>Title:</b> ".$page_title."<br>\n";
                  } else {
                      echo "No title found<br>\n";
                  }
                  echo "<b>Links:</b><br>\n";
                  if ($is_href) {
                      foreach ($links as $myval) {
                          echo "<a href=\"$myval\">".$myval."</a><br>\n";
                      }
                  } else {
                      echo "No links found<br>\n";
                  }
                  echo "End</p>\n";
              } // fopen handle ok
              else {
                  echo "<br>The url $urlname does not exist or there was an fopen error.<br>";
              }
              echo "<br /><br /><h4><a href=\"http://www.site-search.org/url-extractor.php\" title=\"Link Extractor\">Try Again</a></h4>";
          } // end else urlname given
      } // else find links now submit
      else {
          $urlname = ""; // or whatever page you like
          echo "<br /><br />\n";
          echo "<p><h2>Link Extractor</h2><br>\n";
          echo "File or URL: <input type=\"TEXT\" name=\"urlname\" value=\"http://\" maxlength=\"255\" size=\"80\">\n";
          echo "<input type=\"SUBMIT\" name=\"FindLinks\" value=\"Extract Links\"></font><br></p> \n";
          echo "<br /><br />\n";
      }
      echo "</td></tr>";
      echo "</table></p>";
      echo "</form></BODY></HTML>\n";
      ?>
      <!-- URL Extractor END -->

      Here is the script in action: http://www.site-search.org/url-extractor.php (A sketch of one way to combine the two scripts appears after this list.)
  15. I'm a total newbie at PHP... I manage to code some things and they actually work - usually... although I'm still surprised when a script I write works on the first try.
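
For the PHP4-to-PHP5 scraper question in item 5: one possible culprit (an assumption, not something confirmed in the post) is that allow_url_fopen is disabled on the PHP5 install, which makes file_get_contents() on a remote URL fail silently. The sketch below, with the made-up helper name fetch_page(), checks that setting and falls back to cURL, assuming the cURL extension is available.

    <?php
    // Hypothetical helper: fetch a remote page under PHP5.
    // If allow_url_fopen is off (a frequent cause of file_get_contents()
    // returning nothing after an upgrade), fall back to cURL.
    function fetch_page($url)
    {
        if (ini_get('allow_url_fopen')) {
            $html = @file_get_contents($url);
            if ($html !== false) {
                return $html;
            }
        }
        // cURL fallback (assumes the extension is compiled in).
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);
        $html = curl_exec($ch);
        if ($html === false) {
            echo "Fetch failed: " . curl_error($ch);
        }
        curl_close($ch);
        return $html;
    }

    // Drop-in replacement for the scraper's first line:
    $page = fetch_page("http://www.zazzle.com/cool+smiley+gifts");
    ?>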
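One possible answer to the question in item 6, keeping only the most recent terms: instead of appending forever, read the existing terms back in, put the new term at the front, and rewrite the file capped at a fixed count. The file name searched.html and the filetype-stripping pattern come from the original script; the cap of 50 terms and everything else in the sketch are assumptions.

    <?php
    // Sketch: keep only the most recent $max_terms search terms, newest first,
    // in the same searched.html file the original script appends to.
    $max_terms = 50;            // assumed cap; adjust to taste
    $file      = "searched.html";

    $pattern = "/filetype:(\w+)/";
    $term = isset($_GET['q']) ? trim(preg_replace($pattern, '', $_GET['q'])) : '';

    if ($term != '') {
        // Read the existing comma-separated list (if any) into an array.
        $existing = array();
        if (file_exists($file)) {
            $existing = array_filter(array_map('trim', explode(',', file_get_contents($file))));
        }
        // Put the newest term first and cap the list length.
        array_unshift($existing, $term);
        $existing = array_slice($existing, 0, $max_terms);

        // Rewrite the whole file instead of appending, so it never grows past the cap.
        $fp = fopen($file, "w");
        fwrite($fp, implode(", ", $existing));
        fclose($fp);
    }
    ?>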
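A rough sketch of the idea in item 12: one snippet lives on a central server and every affiliate site fetches it at runtime with cURL, so updating the central copy updates all sites at once. The URL below is a placeholder, and the fallback behaviour is an assumption.

    <?php
    // Sketch: pull a centrally hosted snippet at runtime. Placeholder URL.
    $ch = curl_init("http://www.example.com/central/affiliate-snippet.html");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // capture output instead of printing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $snippet = curl_exec($ch);
    curl_close($ch);

    // Fall back to nothing (or a cached copy) if the central server is unreachable.
    echo ($snippet !== false) ? $snippet : '';
    ?>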
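For items 13 and 14, a minimal sketch of one way to combine the two scripts: pull the links out of a starting page with a regular expression, then fetch each linked page and run the same email pattern the posted email extractor already uses over it. It uses preg_match_all() throughout (the eregi() calls in the posted link extractor are deprecated in newer PHP), and the page cap and absolute-URL-only rule are assumptions to keep the crawl small.

    <?php
    // Sketch: crawl the links on one page and extract email addresses from each
    // linked page. Assumptions: only absolute http(s) links are followed, and
    // the crawl is capped at $max_pages to keep it small and polite.
    $start_url = isset($_REQUEST['url']) ? $_REQUEST['url'] : 'http://www.example.com/';
    $max_pages = 10;

    // Same email pattern as the posted email extractor.
    $email_pattern = "/[a-z0-9]+([_\\.-][a-z0-9]+)*@([a-z0-9]+([\.-][a-z0-9]+)*)+\\.[a-z]{2,}/i";

    // 1. Collect the links from the starting page.
    $html  = @file_get_contents($start_url);
    $links = array($start_url);
    if ($html !== false && preg_match_all('/<a\s[^>]*href=["\']?(http[^"\'\s>]+)/i', $html, $m)) {
        $links = array_merge($links, $m[1]);
    }
    $links = array_slice(array_unique($links), 0, $max_pages);

    // 2. Fetch each linked page and harvest emails from it.
    $emails = array();
    foreach ($links as $link) {
        $page = @file_get_contents($link);
        if ($page !== false && preg_match_all($email_pattern, $page, $found)) {
            $emails = array_merge($emails, $found[0]);
        }
    }

    // 3. Print the unique addresses, one per line.
    foreach (array_unique($emails) as $email) {
        echo htmlspecialchars($email) . "<br />\n";
    }
    ?>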