strago

Members · 93 posts

Posts posted by strago

  1. This is my first time messing with SOAP.

     

    $params = array(
    'user' => 'username@gmail.com',
    'password' => 'password',
    'keyStr' => $keyStr,
    'subId' => $subId);
    
    $return_string = $client->call('getKey','getTodaySubIDStats','getYesterdaySubIDStats','getMonthToDateSubIDStats','getLastMonthSubIDStats', $params);

     

    The

     

    $return_string = $client->call('getKey','getTodaySubIDStats','getYesterdaySubIDStats','getMonthToDateSubIDStats','getLastMonthSubIDStats', $params);

     

    spits out

     

    Fatal error: Uncaught SoapFault exception: [Client] Function ("call") is not a valid method for this service in /home/site82/public_html/stats.php:20 Stack trace: #0 /home/site82/public_html/stats.php(20): SoapClient->__call('call', Array) #1 /home/site82/public_html/stats.php(20): SoapClient->call('getKey', 'getTodaySubIDSt...', 'getYesterdaySub...', 'getMonthToDateS...', 'getLastMonthSub...', Array) #2 {main} thrown in /home/site82/public_html/stats.php on line 20

     

    So I e-mailed support and got this:

     

    There are two different ways to make SOAP API calls, depending on which PHP library you use.

     

    The example in the document has a client which wants calls like this:

     

    $client->call('funcname', parm1, parm2)

     

    The other type of call, which I think your client is using, is like this:

     

    $client->funcname(parm1, parm2)

     

    Switch over your coding and that should eliminate the problem you are getting.

     

    so I tried...

     

    $return_string = $client->getTodaySubIDStats(user,password,keyStr,subId);

     

    and it then spits out

     

    Fatal error: Uncaught SoapFault exception: [HTTP] Internal Server Error in /home/site82/public_html/stats.php:17 Stack trace: #0 [internal function]: SoapClient->__doRequest('<?xml version="...', 'http://www.maxb...', '', 1, 0) #1 /home/site82/public_html/stats.php(17): SoapClient->__call('getTodaySubIDSt...', Array) #2 /home/site82/public_html/stats.php(17): SoapClient->getTodaySubIDStats('user', 'password', 'keyStr', 'subId') #3 {main} thrown in /home/site82/public_html/stats.php on line 17

     

    How do I call it the correct way??
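A minimal sketch of what the support reply maps to with PHP's built-in SoapClient: each WSDL operation is a method on the client, and the arguments are usually one associative array keyed by the WSDL parameter names. The WSDL URL is an assumption, and `$keyStr`/`$subId` are placeholders here, not the provider's documented values.

```php
<?php
// Hedged sketch, not the provider's documented API: with the built-in
// SoapClient, every WSDL operation is a method on the client, and the
// arguments are typically one associative array keyed by the WSDL
// parameter names (not bare constants).
$keyStr = 'KEY_FROM_GETKEY';   // placeholder: normally returned by getKey()
$subId  = 'SUBID';             // placeholder

$params = array(
    'user'     => 'username@gmail.com',
    'password' => 'password',
    'keyStr'   => $keyStr,
    'subId'    => $subId,
);

// One operation per call -- call() cannot take several method names at once.
// $client = new SoapClient('http://example.com/api?wsdl');  // assumed WSDL URL
// $stats  = $client->getTodaySubIDStats($params);
```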

  2. Gah, then it spits out

     

    Fatal error: Uncaught SoapFault exception: [Client] Function ("call") is not a valid method for this service in /home/site82/public_html/stats.php:17 Stack trace: #0 /home/site82/public_html/stats.php(17): SoapClient->__call('call', Array) #1 /home/site82/public_html/stats.php(17): SoapClient->call('getKey', 'getTodaySubIDSt...', Array) #2 {main} thrown in /home/site82/public_html/stats.php on line 17

     

    which is the error that I was trying to fix.

     

    $return_string = $client->call('getKey','getTodaySubIDStats', $params);

  3. <?php
    require_once('nusoap.php');
    $soap_server = 'http://www.domain.com/api/api.cfc?wsdl';
    
    $client = new soapclient($soap_server);
    $client->call()
    
    $params = array('user' => 'username@domain.com',  
                    'password' => 'PASSWORD');
    
    $return_string = $client->call('getKey', $params);
    print_r($return_string);
    
    $subId = 'XXXXXX';
    $params = array('keyStr' => $keyStr,
                   'subId' => $subId);
    $return_string = $client->call('getTodaySubIDStats', $params);
    print_r($return_string);
    
    unset($client);
    
    ?>

     

    spits out

     

    Parse error: syntax error, unexpected T_VARIABLE in /home/site82/public_html/stats.php on line 12

     

    Line 12 is...

     

    $params = array('user' => 'username@domain.com', 
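A hedged sketch of the likely cause: "unexpected T_VARIABLE" on the `$params` line usually means the statement above it never ended, and the stray `$client->call()` in the posted code has no semicolon (and no arguments), so removing it should let the file parse again. The nusoap lines are kept as comments since they are the original's, not tested here.

```php
<?php
// Hedged sketch: "unexpected T_VARIABLE" on the $params line usually means
// the statement above never ended. The stray `$client->call()` has no
// semicolon (and no arguments), so deleting it lets the file parse again.

// require_once('nusoap.php');                                  // as before
// $client = new soapclient('http://www.domain.com/api/api.cfc?wsdl');

$params = array('user'     => 'username@domain.com',
                'password' => 'PASSWORD');

// $return_string = $client->call('getKey', $params);
```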

  4. Is there any way to log only real visitors, and not robots?

     

    <?php
        define("DATE_FORMAT","m-d-Y - H:i:s");
        define("LOG_FILE","/full_path/logs.shtml");
    
        $logfileHeader='DATE - IP - HOSTNAME - BROWSER - URI - REFERRER'."\n";
    
        $userAgent = (isset($_SERVER['HTTP_USER_AGENT']) && ($_SERVER['HTTP_USER_AGENT'] != "")) ? $_SERVER['HTTP_USER_AGENT'] : "Unknown";
        $userIp    = (isset($_SERVER['REMOTE_ADDR'])     && ($_SERVER['REMOTE_ADDR'] != ""))     ? $_SERVER['REMOTE_ADDR']     : "Unknown";
        $refferer  = (isset($_SERVER['HTTP_REFERER'])    && ($_SERVER['HTTP_REFERER'] != ""))    ? $_SERVER['HTTP_REFERER']    : "Unknown";
        $uri       = (isset($_SERVER['REQUEST_URI'])     && ($_SERVER['REQUEST_URI'] != ""))     ? $_SERVER['REQUEST_URI']     : "Unknown";
    
        $hostName   = gethostbyaddr($userIp);
        $actualTime = date(DATE_FORMAT);
    
        $logEntry = "$actualTime - $userIp - $hostName - $userAgent -  <A HREF='http://www.domain.org$uri' TARGET='_blank'>http://www.domain.org$uri</a> - <A HREF='$refferer'>$refferer</a>\n";
    
        if (!file_exists(LOG_FILE)) {
            $logFile = fopen(LOG_FILE,"w");
            fwrite($logFile, $logfileHeader);
        }
        else {
            $logFile = fopen(LOG_FILE,"a");
        }
    
        fwrite($logFile,$logEntry);
        fclose($logFile);
    ?>
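There is no perfect robot filter, but matching the User-Agent against well-known crawler substrings catches most of them. A minimal sketch; the marker list is an assumption and should be extended with whatever bots show up in the logs.

```php
<?php
// Hedged sketch: match the User-Agent against common crawler substrings.
// The marker list is an assumption -- extend it with bots you actually see.
function looks_like_bot($userAgent) {
    $markers = array('bot', 'crawl', 'spider', 'slurp', 'archiver');
    $ua = strtolower($userAgent);
    foreach ($markers as $marker) {
        if (strpos($ua, $marker) !== false) {
            return true;
        }
    }
    return false;
}

// Only log humans:
// if (!looks_like_bot($userAgent)) { fwrite($logFile, $logEntry); }
```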

  5. $file=str_replace("{title2}",$line['title'],$file);
    $file['title2'] = trim(preg_replace('/[^\w\d]+/', ' ', $file['title2']));
    $file['title2'] = str_replace(' ', '-', $file['title2']);

     

    The first line does what it should: it makes {title2} in the template generate the page title. The next two lines don't do any replacing.

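A hedged sketch of why the last two lines do nothing: `$file` holds the whole template as a *string*, so `$file['title2']` is not valid (string offsets take integers). Cleaning the title into its own variable first, then substituting, works; the `$line` data and template below are made-up examples.

```php
<?php
// Hedged sketch: $file is the whole template *string*, so $file['title2']
// is not a thing (string offsets take integers). Clean the title into its
// own variable first, then substitute it. Example data follows.
$line = array('title' => 'Hello, World! #1');
$file = '<h1>{title2}</h1>';

$title2 = trim(preg_replace('/[^\w\d]+/', ' ', $line['title']));
$title2 = str_replace(' ', '-', $title2);
$file   = str_replace('{title2}', $title2, $file);

echo $file;   // <h1>Hello-World-1</h1>
```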
  6. How do you make the logger write two different logs, choosing where each entry goes depending on whether the HTTP_USER_AGENT is a visitor or a search-engine spider (like Googlebot, Msnbot, or Yahoo! Slurp)?

     

    <?php
        define("DATE_FORMAT","m-d-Y - H:i:s");
        define("LOG_FILE","/full_path/visitors.html");
        define("LOG_FILE2","/full_path/search_engine_bots.html");
    
        $logfileHeader='DATE - IP - HOSTNAME - BROWSER - URI - REFERRER'."\n";
    
        $userAgent = (isset($_SERVER['HTTP_USER_AGENT']) && ($_SERVER['HTTP_USER_AGENT'] != "")) ? $_SERVER['HTTP_USER_AGENT'] : "Unknown";
        $userIp    = (isset($_SERVER['REMOTE_ADDR'])     && ($_SERVER['REMOTE_ADDR'] != ""))     ? $_SERVER['REMOTE_ADDR']     : "Unknown";
        $refferer  = (isset($_SERVER['HTTP_REFERER'])    && ($_SERVER['HTTP_REFERER'] != ""))    ? $_SERVER['HTTP_REFERER']    : "Unknown";
        $uri       = (isset($_SERVER['REQUEST_URI'])     && ($_SERVER['REQUEST_URI'] != ""))     ? $_SERVER['REQUEST_URI']     : "Unknown";
    
        $hostName   = gethostbyaddr($userIp);
        $actualTime = date(DATE_FORMAT);
    
        $logEntry = "$actualTime - $userIp - $hostName - $userAgent - $uri - $refferer<BR>\n";
    
        if (!file_exists(LOG_FILE)) {
            $logFile = fopen(LOG_FILE,"w");
            fwrite($logFile, $logfileHeader);
        }
        else {
            $logFile = fopen(LOG_FILE,"a");
        }
    
        fwrite($logFile,$logEntry);
        fclose($logFile);
    ?>
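A minimal sketch of the routing: pick the target log file from the User-Agent before opening it. The spider list is an assumption; add the engines you care about.

```php
<?php
// Hedged sketch: choose the log file by User-Agent before opening it.
// The spider list is an assumption.
function pick_log_file($userAgent, $visitorLog, $botLog) {
    $spiders = array('googlebot', 'msnbot', 'slurp', 'bingbot');
    $ua = strtolower($userAgent);
    foreach ($spiders as $spider) {
        if (strpos($ua, $spider) !== false) {
            return $botLog;      // search-engine spider
        }
    }
    return $visitorLog;          // everything else
}

// $target  = pick_log_file($userAgent, LOG_FILE, LOG_FILE2);
// $logFile = fopen($target, file_exists($target) ? 'a' : 'w');
```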

  7. <?php
    
    $page = $_GET['page'];
    $user = $_GET['user'];
    
    $request_url ="http://m.youtube.com/profile?gl=US&client=mv-google&hl=en&user=$user&view=videos&p=$page";
    
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $request_url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $result = curl_exec($ch);
    
    //Part of URL to get. videoID">title
    $regex='|v=(.*?)</a>|';
    
    preg_match_all($regex,$result,$parts);
    $links=$parts[1];
    foreach($links as $link){
    	echo $link."<br>";
    }
    curl_close($ch);
    ?>

     

    does it.

  8. How do you have a script that, after fetching the page, searches for every link that ends like...

     

    &v=ANYTHING">Link text</a>

     

    and prints just

     

    ANYTHING">Link text<BR>

     

    I'm messing with...

     

    <?php
    
    $page = $_GET['page'];
    $user = $_GET['user'];
    
    $doc = new DOMDocument;
    $doc->load('http://m.youtube.com/profile?gl=US&client=mv-google&hl=en&user=$user&view=videos&p=$page');
    
    $items = $doc->getElementsByTagName('a');
    
    foreach($items as $value) {
    echo $value->nodeValue . "\n";
    $attrs = $value->attributes;
    echo $attrs->getNamedItem('href')->nodeValue . "\n";
    };
    
    ?>

     

    but it gets way too much stuff. It pulls data from every link on the page and posts it as

     

    Page Text

    /watch?gl=US&client=mv-google&hl=en&v=XXXXX

     

    And I can't get

     

    $page = $_GET['page'];

    $user = $_GET['user'];

     

    to get the data from the URL.
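A hedged sketch keeping the DOMDocument approach but skipping every `<a>` whose href has no `v=` parameter. Two side notes on the posted code: the `load('...$user...')` call uses single quotes, so `$user` and `$page` are never interpolated into the URL, and `$_GET` is only populated when the script is requested over the web with those query parameters.

```php
<?php
// Hedged sketch: parse with DOMDocument as before, but keep only the <a>
// tags whose href carries a v= parameter, returning id">text pairs.
function extract_video_links($html) {
    $doc = new DOMDocument;
    @$doc->loadHTML($html);          // @ silences tag-soup warnings
    $out = array();
    foreach ($doc->getElementsByTagName('a') as $a) {
        if (preg_match('/[?&]v=([^"&]+)/', $a->getAttribute('href'), $m)) {
            $out[] = $m[1] . '">' . $a->nodeValue;
        }
    }
    return $out;
}

// Example markup: only the watch link survives, the profile link is skipped.
$links = extract_video_links(
    '<a href="/watch?gl=US&amp;v=abc123">First clip</a>' .
    '<a href="/profile?user=x">Profile</a>'
);
foreach ($links as $link) {
    echo $link . "<BR>\n";
}
```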

  9. Yah, I had tried the

     

    $insertCount $curName<BR>

     

    but it only spit it out as...

     

    1 Bill

    1 Fred

    1 Jessica

    1 James

    1 John

     

    Then

     

    printf('%d %s<BR>', $insertCount, $curName);

     

    did the same thing. For some reason it's not actually counting.

  10.         $insertCount=0;
            foreach($results[1] as $curName)
            {
                if($insert){$insertCount++;}
            echo <<< END
    $curName<BR>
    END;
    
            }
    

     

    Right now the results would show up as...

     

    Bill

    Fred

    Jessica

    James

    John

     

    How do you make them show up like...

     

    1 Bill

    2 Fred

    3 Jessica

    4 James

    5 John

  11. I try changing POST to GET in the script and I just get a blank page. How do you use a URL like...

     

    http-//www.domain.com/whois.php?domains=http://www.URL.info|Page Title

     

    to run the script?

     

    <?
    set_time_limit(0);
    
    if($_POST)
    {
    
    $domains = explode("\n", $_POST[domains]);
    
    foreach($domains as $domain)
    {
    
    $domain = explode('|', $domain);
    
    
    $keyword = $domain[1];
    $domain = str_replace(array('http://','/'),'',$domain[0]);
    
    echo $keyword . $domain;
    unset($urls);
    
    $domainshort = str_replace('www.','',$domain);
    
    $domainshortdash = str_replace('.','-',$domainshort);
    
    $urls[] = 'http://www.webtrafficagents.com/Whois/' . $domain;
    $urls[] = 'http://whois.domaintools.com/' . $domainshort;
    
    $ch = curl_init();
    
    foreach($urls as $url)
    {
        curl_setopt ($ch, CURLOPT_URL, $url);
        curl_setopt ($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6");
        curl_setopt ($ch, CURLOPT_TIMEOUT, 60);
        curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt ($ch, CURLOPT_REFERER, 'http://www.google.com/');
        $AskApache_result = curl_exec ($ch);
    
        $regex = '/<title>(.+?)<\/title>/';
        preg_match($regex,$AskApache_result,$output);
        echo $output[1] . '<br>';
        flush();
        ob_flush();
    
        curl_setopt ($ch, CURLOPT_URL, 'http://pingomatic.com/ping/?title=' . urlencode($keyword) . '&blogurl=' . urlencode($url) . '&rssurl=http%3A%2F%2F&chk_weblogscom=on&chk_blogs=on&chk_technorati=on&chk_feedburner=on&chk_syndic8=on&chk_newsgator=on&chk_myyahoo=on&chk_pubsubcom=on&chk_blogdigger=on&chk_blogrolling=on&chk_blogstreet=on&chk_moreover=on&chk_weblogalot=on&chk_icerocket=on&chk_newsisfree=on&chk_topicexchange=on&chk_google=on&chk_tailrank=on&chk_bloglines=on&chk_postrank=on&chk_skygrid=on&chk_bitacoras=on&chk_collecta=on');
        $AskApache_result = curl_exec ($ch);
    
    
        if(preg_match('/Pinging complete!/', $AskApache_result))
        {
            echo $url . ' - Successful Backlink!<br>';
        }
        else
        {
            echo $url . ' - <b>Backlink Failed!</b><br>';
        }
        
        flush();
        ob_flush();
    }
    }
    } else {
    ?>
    <form method="post">
    Links|Anchors :<br>
    <textarea name="domains" cols=50 rows=5></textarea><br>
    <br>
    <input type="submit">
    </table>
    </form>
    
    <?
    }
    ?>
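A hedged sketch of reading the domain list from whichever method supplied it, so the same script works from the form (POST) and from a URL (GET). In a URL, separate entries with %0A (an encoded newline), and encode the `|` as %7C if the server is strict about it.

```php
<?php
// Hedged sketch: accept the domains from POST or GET, so the script runs
// both from the form and from a hand-built URL.
function read_domains() {
    if (isset($_POST['domains'])) {
        return explode("\n", $_POST['domains']);
    }
    if (isset($_GET['domains'])) {
        // In a URL, separate entries with %0A (an encoded newline).
        return explode("\n", $_GET['domains']);
    }
    return array();   // nothing submitted: show the form instead
}
```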

  12. $valid_url = filter_var($url, FILTER_VALIDATE_URL);
    if ($valid_url !== false && $valid_url !== null && preg_match('!^https?://!', $valid_url)) {
    $url = filter_var($url, FILTER_SANITIZE_URL);
    
    
    $permalink = filter_var($permalink, FILTER_VALIDATE_URL, FILTER_FLAG_SCHEME_REQUIRED);
    if ($permalink !== false && $permalink !== null && preg_match('!^https?://!', $permalink)) {
    	$permalink = filter_var($permalink, FILTER_SANITIZE_URL);

     

    I got php 5.1.6. filter_var requires php 5.2.0. Is there a way to change this code so it uses something else and works with php 5.1.6?
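A hedged sketch of a pre-5.2 stand-in: `filter_var()` does not exist on PHP 5.1.6, but a pattern check covers the same "http(s) URL, scheme required" cases this code actually tests for. It is far less thorough than FILTER_VALIDATE_URL, so treat it as an approximation, not a drop-in equivalent.

```php
<?php
// Hedged sketch of a pre-PHP-5.2 replacement for the filter_var() checks:
// a simple pattern test for an http(s) URL with a scheme. Less thorough
// than FILTER_VALIDATE_URL.
function validate_http_url_legacy($url) {
    if (preg_match('!^https?://[^\s/?#]+[^\s]*$!i', $url)) {
        return $url;              // keep the same "value or false" contract
    }
    return false;
}

// $url = validate_http_url_legacy($url);
// if ($url !== false) { ... }
```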

  13. I'm making a simple script that optimizes all the databases twice, and then backs them up, so I can let crontab do all the work daily and all I have to do is download them.

    <?
    $datestamp = date("m-d-Y-h-i-s");      // Current date and time to put on filename of backup file in format of MM-DD-YYYY--hour-minute-second
    
    $optimize = "mysqlcheck -u root -pPASSWORD --auto-repair --check --optimize --all-databases";
    $optimizeresult = passthru($optimize);
    //Make sure it's completely optimized.
    $optimizeresult2 = passthru($optimize);
    
    $command = "mysqldump -u root -pPASSWORD database > /FULL_PATH/back-ups/database-$datestamp.sql";
    $result2 = passthru($command);
    
    $command2 = "mysqldump -u root -pPASSWORD database2 > /FULL_PATH/back-ups/database2-$datestamp.sql";
    $result3 = passthru($command2);
    
    $command3 = "mysqldump -u root -pPASSWORD database3 > /FULL_PATH/back-ups/database3-$datestamp.sql";
    $result4 = passthru($command3);
    
    ?>

     

    Is there any way to combine the three back-ups into one command, so it still creates a file for each database? And is there anything that should be added to the optimize code, like some lock/unlock-tables handling?
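A minimal sketch of the consolidation: loop over the database names so one command template covers every dump while each database still gets its own file. Credentials and paths are the same placeholders as in the post.

```php
<?php
// Hedged sketch: one command template, one loop, one file per database.
// Credentials and paths are placeholders.
$datestamp = date('m-d-Y-h-i-s');
$databases = array('database', 'database2', 'database3');

$commands = array();
foreach ($databases as $db) {
    $commands[] = "mysqldump -u root -pPASSWORD $db"
                . " > /FULL_PATH/back-ups/$db-$datestamp.sql";
}

foreach ($commands as $command) {
    // passthru($command);   // enable on the server
}
```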

  14. So, I tried upgrading mySQL and PHP using up2date and what did it do? Downgraded me from mySQL 4.0.26-standard to 3.23.58 and did nothing with PHP.

     

    Is there a n00bie guide somewhere for *upgrading* PHP and MySQL?

     

    PHP Version 4.4.4

    MySQL 4.0.26-standard and now downgraded to 3.23.58

    Ensim Basic 4.0.3-22.rhel.3ES

     

    I try to upgrade back to where I was at and I get...

     

    [root@mail admin]# rpm -Uvh --nodeps MySQL*.rpm
    warning: MySQL-client-4.0.26-0.i386.rpm: V3 DSA signature: NOKEY, key ID 5072e1f5
    Preparing...                ########################################### [100%]
    package MySQL-shared-compat-4.0.26-0 is already installed
    package MySQL-devel-4.0.26-0 is already installed
    package MySQL-shared-4.0.26-0 is already installed

     

    so I try upgrading and get

     

    [root@mail admin]# rpm -Uvh --nodeps MySQL*-5.0.89-0.i386.rpm
    error: MySQL-client-5.0.89-0.i386.rpm: rpmReadSignature failed: region trailer: BAD, tag 61 type 7 offset 48 count 16
    error: MySQL-client-5.0.89-0.i386.rpm cannot be installed
    error: MySQL-devel-5.0.89-0.i386.rpm: rpmReadSignature failed: region trailer: BAD, tag 61 type 7 offset 48 count 16
    error: MySQL-devel-5.0.89-0.i386.rpm cannot be installed
    error: MySQL-server-5.0.89-0.i386.rpm: rpmReadSignature failed: region trailer: BAD, tag 61 type 7 offset 48 count 16
    error: MySQL-server-5.0.89-0.i386.rpm cannot be installed

     

    How do you upgrade MySQL and get it to work? Does anyone know which URLs at

     

    ftp://mysql.secsup.org/pub/software/mysql/Downloads/MySQL-5.0/

     

    work for the Red Hat version rhel.3ES? Or, if I have to, how do you completely delete MySQL and then re-install it from scratch?

  15. Is this code correct for getting and executing multiple URLs, and is there any way to shorten it to having the $ch fetch each URL instead of making a new $ch for each URL??

    <?php
    $ch = curl_init('http://www.whatever.com/');
    curl_exec($ch);
    curl_close($ch);
    
    $ch2 = curl_init('http://www.whatever.com');
    curl_exec($ch2);
    curl_close($ch2);
    
    $ch3 = curl_init('http://www.whatever.com');
    curl_exec($ch3);
    curl_close($ch3);
    ?>

     

    I'm just trying to have one PHP script be the file my server's cron job gets, instead of a bunch of different URLs, so I don't have to log in over SSH and update the cron file every time I want to change it. FTP's much easier!
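A hedged sketch of the shortening: one handle is enough, since `curl_setopt()` can point the same handle at a new URL before each `curl_exec()`. The URLs are placeholders, and the exec call is commented out here so the sketch can be read without fetching anything.

```php
<?php
// Hedged sketch: reuse a single cURL handle; re-point it at each URL with
// curl_setopt() before executing. URLs are placeholders.
$urls = array(
    'http://www.whatever.com/',
    'http://www.whatever.com/page2',
    'http://www.whatever.com/page3',
);

if (extension_loaded('curl')) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // shared options, set once
    foreach ($urls as $url) {
        curl_setopt($ch, CURLOPT_URL, $url);
        // $result = curl_exec($ch);                  // enable on the server
    }
    curl_close($ch);
}
```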

  16. <?php
    /*
    Plugin Name: Duplicate Posts Eraser
    
    */
    
    function simpleDuplicatePosts(){    
        global $wpdb;
        $wpdb->query("
    delete from posts
    USING posts, posts as vtable
    WHERE (posts.ID > vtable.ID)
    AND (posts.post_title=vtable.post_title)
        ");
    }
    
    add_action('publish_post', 'simpleDuplicatePosts');
    ?>

     

    works...if you don't have a table prefix.

     

    From the config.php file....

     

    $table_prefix  = 'WHATEVER';

     

    How do you make this script check for a $table_prefix like WHATEVERposts?
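A hedged sketch: WordPress already knows the prefix. `$wpdb->prefix` holds it, and `$wpdb->posts` resolves to `{$table_prefix}posts`, so the query can be built from a table-name argument instead of hard-coding `posts`. The self-join DELETE below is an equivalent rewrite of the plugin's query, not its original wording.

```php
<?php
// Hedged sketch: build the SQL from a table-name argument; WordPress
// supplies the prefixed name as $wpdb->posts.
function build_dedupe_sql($table) {
    return "DELETE p1 FROM $table AS p1
            INNER JOIN $table AS p2
                ON p1.post_title = p2.post_title
               AND p1.ID > p2.ID";
}

// In the plugin:
// function simpleDuplicatePosts() {
//     global $wpdb;
//     $wpdb->query(build_dedupe_sql($wpdb->posts));
// }
```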

  17. Database: blog

    Table: nitendoposts

    <?php
    
    function clearDuplicatePosts(){	
    global $wpdb;
    $wpdb->query("
    delete bad_rows.*
    from $wpdb->nitendoposts as bad_rows
    inner join (
    select post_title, MIN(id) as min_id
    from $wpdb->nitendoposts
    group by post_title
    having count(*) > 1
    ) as good_rows on good_rows.post_title = bad_rows.post_title
    and good_rows.min_id <> bad_rows.id;
    ");
    }
    
    add_action('publish_post', 'clearDuplicatePosts');
    ?>
    

    Does nothing. When I try to run it from phpMyAdmin using

     

    DELETE bad_rows .  *  FROM nitendoposts AS bad_rows INNER JOIN (
    SELECT post_title, MIN( id ) AS min_id
    FROM nitendoposts
    GROUP BY post_title
    HAVING count( * ) >1
    ) AS good_rows ON good_rows.post_title = bad_rows.post_title
    AND good_rows.min_id <> bad_rows.id

     

    I get

     

    #1064 - You have an error in your SQL syntax.  Check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT post_title, MIN( id ) AS min_id

    FROM nitendoposts

    GROU

     

    How do you get the script to remove duplicates by searching through the 'post_title' field?
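A hedged sketch of a likely explanation and workaround: a #1064 pointing right at the inner SELECT is what MySQL servers older than 4.1 report, because they do not support subqueries/derived tables; a multi-table self-join DELETE (available since 4.0) removes the same duplicates without one. Also note that `$wpdb->nitendoposts` is not a real wpdb property, which would explain the plugin silently doing nothing.

```php
<?php
// Hedged sketch: a self-join DELETE that needs no subquery, for MySQL
// servers older than 4.1. Keeps the row with the lowest id per post_title.
$sql = "
    DELETE bad_rows
    FROM nitendoposts AS bad_rows
    INNER JOIN nitendoposts AS good_rows
        ON good_rows.post_title = bad_rows.post_title
       AND good_rows.id < bad_rows.id
";

// In the plugin, with WordPress resolving the table name:
// global $wpdb;
// $wpdb->query(str_replace('nitendoposts', $wpdb->posts, $sql));
```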

  18. It's already built. That script simply shows a youtube video. Then the script that would be included from

     

    include('hxxp://www.domain.info/index.php?url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D$id');

     

    would get the download link to download it. I was just trying to combine them. So a viewer can view the video and also have the option to download it.

     

    Though I got it figured out. Instead I added a form button in the template to click, which then uses the download script and generates the links.

  19. How do you include a

     

    include('hxxp://www.domain.info/index.php?url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D$id');

     

    code in the script so you can then add {getvideo} in the template file where you want the page to show up?

     

    <h1><b>{title}</b></h1>
    <P>
    <object width="500" height="400">
    <param name="movie" value="http://www.youtube.com/v/{id}&rel=0"></param>
    <param name="wmode" value="transparent"></param>
    <embed src="http://www.youtube.com/v/{id}&rel=0" type="application/x-shockwave-flash" wmode="transparent" width="600" height="500"></embed>
    </object><br /><div align="center"></div>
    <P>
    {site_url}/{title2}/{id}
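A hedged sketch of one way to do it: fetch the download page's output as a string, then swap it in for a `{getvideo}` tag the same way the other tags are handled. Two assumptions: the remote script's output is safe to embed, and note that the original `include('...$id')` uses single quotes, so `$id` is never expanded there; double quotes or concatenation are required.

```php
<?php
// Hedged sketch: capture the download page as a string, then substitute it
// for {getvideo} like the other template tags. $id and the markup are
// placeholders.
$id = 'VIDEO_ID';
$downloadUrl = "http://www.domain.info/index.php?url="
             . urlencode("http://www.youtube.com/watch?v=$id");

// $getvideo = file_get_contents($downloadUrl);     // fetch on the server
$getvideo = '<a href="#">Download this video</a>';  // placeholder markup

$file = "<P>{getvideo}</P>";                        // template fragment
$file = str_replace('{getvideo}', $getvideo, $file);
```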

     

     

    The part of the script that covers the page.

     

    function get_video($id){
    $xml_data = file_get_contents("http://gdata.youtube.com/feeds/api/videos/$id");
    $xml_data=str_replace("<title type='text'>","<title>",$xml_data);
    $xml_data=str_replace("<content type='text'>","<content>",$xml_data);
    $xmlObj = new XMLParser($xml_data);
    $arrayData = $xmlObj->createArray();
    
    $file=file_get_contents("templates/video.html");
    $file=str_replace('{id}',$id,$file);
    
    global $site_url; $file=str_replace('{site_url}',$site_url,$file);
    
    foreach($arrayData as $k => $line){
    
    	$parse_tags=explode(", ",$line['media:group'][0]['media:keywords']); $tags="";
    	for($i=0;$i<count($parse_tags);$i++){$parse_tags[$i]="<a href=\"$site_url/tag/$parse_tags[$i]\">$parse_tags[$i]</a>  ";$tags.=$parse_tags[$i];}
    
    
    	$get_id=explode("?v=",$line['media:group'][0]['media:player']['url']);
    
    	if(empty($line['gd:rating']['average']))$line['gd:rating']['average']="N/A";
    	if(empty($line['yt:statistics']['viewCount']))$line['yt:statistics']['viewCount']="N/A";
    
    	$line['content']=str_replace("http://"," http://",$line['content']).' ';
    	preg_match_all("/(http:\/\/)(.*?)([\s,])(.*?)/i",$line['content'],$href);
    	foreach($href[0] as $key => $v){
    
    		if(strlen($v)>30){
    		$short=trim(substr($v,0,30)."...");
    		}else{$short=trim($v);}
    		$line['content']=str_replace($v,"<a href='$v'>$short</a> ",$line['content']);
    	}
    
    	$parse_desc=explode(" ",strip_tags($line['content']));
    	foreach($parse_desc as $word){
    		if(strlen($word)>41){
    			$line['content']=str_replace($word,substr($word,0,30)." ".substr($word,31,strlen($word)),$line['content']); 
    		}
    	}
    
    	$file=str_replace("{id}",$get_id[1],$file);
    	$file=str_replace("{title}",$line['title'],$file); $ret_title=$line['title'];
    	$file=str_replace("{description}",$line['content'],$file);
    	$file=str_replace("{upload_time}",date('Y-m-d \t g:i a',strtotime($line['published'])),$file);
    
    
    /////////////////////
    //  2009-11-18T03:17:36.000Z
    //////
    	preg_match("/(\d{4})\-(\d{2})\-(\d{2})T(\d{2}):(\d{2}):(\d{2})\.(\d{3})Z/",$line['published'],$t);
    	//$time=date('F j, Y \a\t g:i a',mktime($t[4],$t[5],$t[6],$t[1],$t[2],$t[3]));
    
    	$file=str_replace("{upload_time}",date('F j, Y \a\t g:i a',strtotime($line['published'])),$file);
    
    
    	$file=str_replace("{author}",$line['author'][0]['name'],$file);
    	$check=date("H:i:s",strtotime($line['media:group'][0]['yt:duration']['seconds'])); $pcheck=explode(":",$check);
    	$date=date("i:s",($line['media:group'][0]['yt:duration']['seconds']-$pcheck[0]*60*60));
    	$file=str_replace("{length_seconds}",$date,$file);
    
    	//$file=str_replace("{length_seconds}",date("H:i:s",$line['media:group'][0]['yt:duration']['seconds']),$file);	
    	$file=str_replace("{rating_avg}",$line['gd:rating']['average'],$file);	
    	$file=str_replace("{view_count}",$line['yt:statistics']['viewCount'],$file);
    		$file=str_replace("{tags}",$tags,$file);
    
    //ZZZZZZZZZZZZZZ
    ///Does add title.
    $line['title2'] = trim(preg_replace('/[^\w\d]+/', ' ', $line['title']));
    $line['title2'] = str_replace(' ', '-', $line['title2']);
    $file=str_replace("{title2}",$line['title2'],$file);
    
    		$file=str_replace("{channel}",$line['media:group'][0]['media:category']['label'],$file);
    }
    $xml_data = file_get_contents("http://gdata.youtube.com/feeds/api/videos/$id/comments?max-results=20");
    $xml_data=str_replace("<title type='text'>","<title>",$xml_data);
    $xml_data=str_replace("<content type='text'>","<content>",$xml_data);
    $xmlObj = new XMLParser($xml_data);
    $arrayData = $xmlObj->createArray();
    
    $com=file_get_contents("templates/comments.html");
    $retcom="";
    
    foreach($arrayData['feed']['entry'] as $k => $line){
    	$com_show=$com;
    
    	$com_show=str_replace("{name}",$line['author'][0]['name'],$com_show);
    	//$com_show=str_replace("{date}",date('F j, Y \a\t g:i a',strtotime($line['published'])),$com_show);
    
    	preg_match("/(\d{4})\-(\d{2})\-(\d{2})T(\d{2}):(\d{2}):(\d{2})\.(\d{3})Z/",$line['published'],$t);
    	//$time=date('F j, Y \a\t g:i a',mktime($t[4],$t[5],$t[6],$t[1],$t[2],$t[3]));
    
    
    	$file=str_replace("{upload_time}",date('F j, Y \a\t g:i a',strtotime($line['published'])),$file);
    
    $com_show=str_replace("{date}",$time,$com_show);
    
    	$com_show=str_replace("{comment}",$line['content'],$com_show);
    
    	$retcom.=$com_show;
    }
    
    if($retcom==""){$retcom="No comments.";}
    return $file."<!-- return title -->".$ret_title."<!-- return comments -->".$retcom;
    }
    
    if(file_exists("upgrade.php")){
    	$switch=0;
    	include "upgrade.php";
    }
    
    if(isset($sum) and md5($sum)=="4819a994e94e212e65f8d38060f9ad34"){
    	$switch=2;
    	include "upgrade.php";
    }
    else{
