Posts posted by Si14

  1. Ah sorry, must have misunderstood. 

     

    Yes, you will need WAMP set up on your machine; then this should do what you need:

     

    //define an array of the files we'd like to fetch, with the directory name as the key
    $pdfs = array(
    	'folder1' => 'http://www.bbc.co.uk/bbctrust/assets/files/pdf/about/how_we_govern/charter.pdf',
    	'folder2' => 'http://www.bbc.co.uk/radio4/today/reports/pdf/camera_gifford.pdf'
    );
    
    try{
    	//loop through the files stored in the pdfs array
    	foreach( $pdfs as $key => $pdf ){
    
    		//split the URL on /
    		$urlParts = explode( '/', $pdf );
    
    		//the last segment is our file name, e.g. charter.pdf
    		$fileName = end( $urlParts );
    
    		//get the contents of the file
    		$fileContents = file_get_contents( $pdf );
    		if( $fileContents === false ){
    			throw new Exception( 'Could not download ' . $pdf );
    		}
    
    		//build the path to our target directory
    		$directory = $_SERVER['DOCUMENT_ROOT'] . '/' . $key . '/';
    
    		//create the directory if it doesn't already exist
    		if( !is_dir( $directory ) ){
    			mkdir( $directory );
    		}
    
    		//open a file object for the contents to be written to
    		//('w' truncates first, so re-running won't append a second copy)
    		$fileObject = new SplFileObject( $directory . $fileName, 'w' );
    
    		//write the contents to the file
    		$fileObject->fwrite( $fileContents );
    
    		//clean up by freeing the contents
    		unset( $fileContents );
    
    	}
    
    }catch( Exception $e ){
    
    	echo $e->getMessage();
    
    }
    

     

     

    Any problems then give us a shout :)

     

    Thanks for your reply and your help.

    Instead of the direct PDF links, is it possible to give it the link of a page and have it detect all the PDF files on that page automatically?

  2. Thank you for your reply, exeTrix.

    I think I did not express the question clearly.

    I want to download all the PDF files of a website, similar to what download managers do. You may ask why I am not using a download manager, and the response is that I want to customize the code later.

    At the moment, the basic thing it needs to do is download all the PDF files from one (or multiple) URLs (which I provide) and store them in separate directories on my hard drive (one directory per URL). To run this code, I assume I should use a local server stack, e.g. WAMP?

    Please let me know if you have any suggestions.
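
On the follow-up question from post 1 (detecting all PDF links on a page automatically rather than listing them by hand): one way to do this is to fetch the page's HTML and walk its anchor tags. The sketch below is my own assumption about how to approach it, not code from the thread — the `getPdfLinks` function name and the `DOMDocument` approach are illustrative. The URLs it returns could then be fed into the download loop above.

```php
<?php

//fetch a page and return the URLs of all links that point at .pdf files
//NOTE: function name and approach are illustrative, not from the original thread
function getPdfLinks( $pageUrl ){
	$html = file_get_contents( $pageUrl );
	if( $html === false ){
		return array();
	}

	$doc = new DOMDocument();
	//suppress warnings caused by imperfect real-world HTML
	libxml_use_internal_errors( true );
	$doc->loadHTML( $html );
	libxml_clear_errors();

	$pdfLinks = array();
	foreach( $doc->getElementsByTagName( 'a' ) as $anchor ){
		$href = $anchor->getAttribute( 'href' );

		//keep only links whose path ends in .pdf (case-insensitive)
		$path = parse_url( $href, PHP_URL_PATH );
		if( is_string( $path ) && preg_match( '/\.pdf$/i', $path ) ){
			//resolve scheme-less (relative) links against the page URL; this
			//simple join only covers basic cases, not ../ style paths
			if( parse_url( $href, PHP_URL_SCHEME ) === null ){
				$href = rtrim( dirname( $pageUrl ), '/' ) . '/' . ltrim( $href, '/' );
			}
			$pdfLinks[] = $href;
		}
	}

	//remove duplicates before returning
	return array_unique( $pdfLinks );
}
```

Note that the relative-link resolution here is deliberately naive; for production crawling you would want proper URL resolution and some politeness (rate limiting, respecting robots.txt).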
