pepsimax123 Posted January 22, 2011

There are many options available for downloading a URL, but I'm stuck. I've looked through all the ones I know, and none seem to pay attention to partial content. I'm trying to retrieve a URL that returns the following header:

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-100000/631723

As you can see, it dishes out the file in 100,000-byte chunks. Trouble is, whatever method I use in PHP, whether file_get_contents, fopen, or even cURL, none of them continue on after retrieving the 100,000 bytes. End result: I have a 100,000-byte file. What I need is for the PHP script to grab all the data; in the example above, all 631,723 bytes. How can I do this?
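(For illustration, this is the kind of single request being described; the URL is a placeholder. Against a server that answers with 206 Partial Content, it only yields the first chunk.)

<?php
// A plain single request: the server replies "206 Partial Content",
// so only the first block of the file comes back.
$data = file_get_contents('http://example.com/file.dat');
echo strlen($data); // e.g. 100000 bytes rather than the full 631723
?>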
litebearer Posted January 23, 2011

Just a crazy thought, but have you tried looping in increments of 100,000 to grab each chunk, then merging them? Something along the lines of the sketch below.
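(A rough sketch of that idea, assuming the server honours the requested ranges; the URL, chunk size, and output filename are placeholders, not the poster's actual values.)

<?php
$url       = 'http://example.com/file.dat'; // placeholder URL
$chunkSize = 100000;
$offset    = 0;
$data      = '';

do {
    // Ask for the next 100,000-byte slice via a Range header.
    $context = stream_context_create(array(
        'http' => array(
            'method' => 'GET',
            'header' => 'Range: bytes=' . $offset . '-' . ($offset + $chunkSize - 1) . "\r\n",
        ),
    ));

    $chunk = file_get_contents($url, false, $context);
    if ($chunk === false) {
        break; // request failed, stop here
    }

    $data   .= $chunk;            // merge the chunk onto what we have so far
    $offset += strlen($chunk);    // advance by however much actually arrived
} while (strlen($chunk) >= $chunkSize); // a short chunk means we reached the end

file_put_contents('download.dat', $data);
?>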
pepsimax123 Posted January 23, 2011

I have indeed, though it's not something I had hoped to do. The server I'm retrieving data from is set up in such a poor way that I don't know whether it's deliberate or not, but it does make things a pain! Sending HEAD rather than GET returns the correct Content-Length, so I'm now fetching that first, then looping through in 100,000-byte parts and merging them, as you say. Sadly this is necessary because the server ignores any other ranges you give it; you can only go in 100,000-byte blocks, and requesting HTTP 1.0 still gives you HTTP 1.1 headers!
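(For reference, a minimal sketch of that workaround using cURL, assuming the HEAD response carries the full length as described above; the URL and output filename are hypothetical.)

<?php
$url       = 'http://example.com/file.dat'; // placeholder URL
$blockSize = 100000;                        // the only block size this server serves

// HEAD request first: per the post above, it reports the correct total length.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$total = (int) curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);

// Pull the file down in fixed 100,000-byte blocks and merge them.
$data = '';
for ($offset = 0; $offset < $total; $offset += $blockSize) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_RANGE, $offset . '-' . ($offset + $blockSize - 1));
    $data .= curl_exec($ch);
    curl_close($ch);
}

file_put_contents('download.dat', $data);
?>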