modemrat Posted October 14, 2006

I want to retrieve the HTML from a web server, but I want to take advantage of a persistent connection instead of reconnecting for each page I need to GET. I tried just re-sending the GET command after my first write/read, but it doesn't work. Any ideas? Thanks

Code:
[code]
$host = "www.example.com";
$page1 = "Page1.html";
$page2 = "Page2.html";

$connection = fsockopen($host, 80);
if ($connection) {
    fwrite($connection, "GET /$page1 HTTP/1.1\r\nHOST: $host\r\n");
    while (!feof($connection)) {
        echo fread($connection, 1024);
    }
    fwrite($connection, "GET /$page2 HTTP/1.1\r\nHOST: $host\r\n\r\n");
    while (!feof($connection)) {
        echo fread($connection, 1024);
    }
    fclose($connection);
} else {
    print "Unable to connect!\n";
}
[/code]
modemrat Posted October 17, 2006

I found that I should be using [b]pfsockopen[/b] instead of [b]fsockopen[/b], but I still end up with the same results. It must be the way I am sending the commands. I thought that each command ends with a \r\n, but I don't get any response on my second read. A little help, anyone? Thanks in advance.
modemrat Posted October 18, 2006

[b]bump[/b]
Destruction Posted October 18, 2006

Hmm, could it be to do with output headers? As you're getting two pages, each request will send HTTP headers back to you as well as output, and this will not be handled correctly without output buffering.

[code]
<?php
ob_start();

// current page code

ob_end_flush();
?>
[/code]

Try the above and see if it works then. Hope this helps,
Dest
printf Posted October 18, 2006

HTTP 1.0 and HTTP 1.1 are both single-stream resources: they take in a single request and return a single response. They cannot take any other request after a read has been started on the stream. Other types of streams allow this, like SMTP, FTP, and IMAP, where the resource is kept open until a request to close it is sent, an error is triggered, or the configured timeout is encountered.

me!
modemrat Posted October 18, 2006

[quote author=printf link=topic=111517.msg453895#msg453895 date=1161192276]HTTP 1.0 and HTTP 1.1 are both single-stream resources: they take in a single request and return a single response. They cannot take any other request after a read has been started on the stream. Other types of streams allow this, like SMTP, FTP, and IMAP, where the resource is kept open until a request to close it is sent, an error is triggered, or the configured timeout is encountered.[/quote]

I thought that unless it was specified otherwise in the headers (Connection: close, as opposed to Connection: keep-alive), HTTP 1.1 assumes a persistent connection by default. Here is one site where this is discussed: [url=http://www.io.com/~maus/HttpKeepAlive.html]http://www.io.com/~maus/HttpKeepAlive.html[/url]

So in theory additional requests could be made and read, but I'm finding that practice is more consistent with what you are saying.
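To make the wire format concrete: the keep-alive behavior discussed above comes down to a single request header. Using the page names from the first post, the two raw requests sent over one persistent connection would look like the sketch below (the blank line terminating each header block is required; the first of the thread's examples omits it, which is why the server never responds):

```
GET /Page1.html HTTP/1.1
Host: www.example.com
Connection: keep-alive

GET /Page2.html HTTP/1.1
Host: www.example.com
Connection: close

```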
printf Posted October 18, 2006

Yes, that's what I am trying to say: you need to tell the server exactly what you want to do. It thinks your request is done because you don't tell it otherwise! You have no Connection: header, so it's treated as a single-request stream.

If you make a request and then want to keep the session open, you need to tell the server that is what you want to do! [b]But[/b] when you do that, you cannot read until [b]EOF[/b], because EOF might never return true, and if it does return true the socket will close itself. So you need to find out where the returned data really ends for every request except the last one (Connection: Close).

So you might do something like this...

[code]
<?php
$host  = "localhost";
$page1 = "perl.txt";
$page2 = "php.txt";

$connection = pfsockopen($host, 80);
if ($connection) {
    fwrite($connection, "GET /$page1 HTTP/1.1\r\nHOST: $host\r\nConnection: Keep-Alive\r\n\r\n");
    while (!feof($connection)) {
        $data = fread($connection, 128);
        if (trim($data) != '' && trim($data) != 0) {
            echo $data;
        } else {
            break;
        }
    }
    fwrite($connection, "GET /$page2 HTTP/1.1\r\nHOST: $host\r\nConnection: Close\r\n\r\n");
    while (!feof($connection)) {
        echo fread($connection, 128);
    }
    fclose($connection);
} else {
    print "Unable to connect!\n";
}
?>
[/code]

In other words, we don't care what PHP wants to do; we care what the server needs to know, because the server only cares about the request headers, in this case the Connection: [b]type[/b].

me!
modemrat Posted October 18, 2006

Ah ha! I see now. I did try sending "Keep-Alive" in the request, but of course my feof() kept spinning. Duh! Works like a charm now, thanks a lot!
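A note for readers arriving at this thread later: the empty-chunk check in printf's example can be fragile, since a keep-alive response may pause mid-body without ever producing an empty read. A more robust way to find where one response ends is to parse the Content-Length header and read exactly that many body bytes. The sketch below illustrates the idea; the helper names `parse_content_length` and `read_one_response` are illustrative, not from the thread:

```php
<?php
// Extract the Content-Length value from a raw HTTP response header block.
// Returns the byte count as an integer, or null if the header is absent.
function parse_content_length($headers)
{
    foreach (explode("\r\n", $headers) as $line) {
        // Header names are case-insensitive, hence stripos().
        if (stripos($line, 'Content-Length:') === 0) {
            return (int) trim(substr($line, strlen('Content-Length:')));
        }
    }
    return null;
}

// Read one complete response from a keep-alive socket: header lines up to
// the blank line, then exactly Content-Length body bytes, leaving the
// socket positioned at the start of the next response.
function read_one_response($connection)
{
    $headers = '';
    while (!feof($connection) && substr($headers, -4) !== "\r\n\r\n") {
        $headers .= fgets($connection);
    }
    $length = parse_content_length($headers);
    if ($length === null) {
        $length = 0;
    }
    $body = '';
    while (strlen($body) < $length && !feof($connection)) {
        $body .= fread($connection, $length - strlen($body));
    }
    return $headers . $body;
}
?>
```

This sketch ignores Transfer-Encoding: chunked, which a server may use instead of Content-Length on persistent connections; handling that requires parsing the chunk-size lines as well.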