
QuickOldCar - Thanks, I'm going to start studying up on cURL to see if it's what I need.

 

johnny86 - I want to extract information. For example, if a web page had a list of prices, I'd want my page to list some of those prices based on which items from that page I want to show.

 

I hope that makes sense.

Or you can just fetch the source code of the site with $source = file_get_contents('http://site.to.fetch.from/');

 

Then you could use preg_match to extract what you need, which would be a bit faster than using simplehtmldom. But you can also use simplehtmldom with your $source.
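
Roughly what that could look like, as a sketch (the URL and the regular expression here are placeholders; adjust them to whatever the markup of the page you're scraping actually is):

<?php
// Fetch the raw HTML of the remote page.
$source = file_get_contents('http://site.to.fetch.from/');
if ($source === false) {
    die('Could not fetch the page.');
}

// Example: pull out every price that looks like $12.34.
// preg_match_all() collects all matches rather than just the first one.
if (preg_match_all('/\$\d+(?:\.\d{2})?/', $source, $matches)) {
    foreach ($matches[0] as $price) {
        echo $price . "\n";
    }
}
?>

The same $source string can be handed to simplehtmldom's str_get_html() instead, if you'd rather walk the DOM than write regular expressions.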

I think $source = file_get_contents('http://site.to.fetch.from/'); will work for me, but I'm also going to look into simplehtmldom a little more. The first problem I have is that the site I'm trying to get info from requires a login, and of course when I try to log in it looks for the login page for the site on my server. Is there a workaround for this?

of course when I try to log in it looks for the login page for the site on my server

 

I don't understand.

 

cURL can store cookies. Do a request to the login page, passing the appropriate login parameters via POST. Then do another request to the page you want to access.
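
Something along these lines (the login URL, the 'username' and 'password' field names, and the page URLs are assumptions here; check the site's actual login form for the real ones):

<?php
// Store cookies in a temporary file so the login session carries over
// between the two requests.
$cookieFile = tempnam(sys_get_temp_dir(), 'cookies');

// 1) POST the login form; cURL writes any cookies the site sets to $cookieFile.
$ch = curl_init('http://site.to.fetch.from/login.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'username' => 'your_user',
    'password' => 'your_pass',
)));
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
curl_close($ch);

// 2) Request the page you actually want, sending the stored cookies back.
$ch = curl_init('http://site.to.fetch.from/prices.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile);
$source = curl_exec($ch);
curl_close($ch);

// $source now holds the logged-in page's HTML, ready for preg_match or simplehtmldom.
?>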
