That's what I'm doing; the program doesn't grab the information on the fly. An import is scheduled via cron and fired off once or twice a day, depending on the needs of the particular site. That import grabs the data from an XML feed, turns each entry into a page, and then retrieves the images. It's this image retrieval that's causing the script to run for an extremely long time. With the image import disabled, the script takes about 30 seconds on average to import all the data from the remote XML feed. That's roughly 500-700 pages of information, and grabbing the images adds anywhere from 1-18 seconds per page.
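To make the bottleneck concrete, the importer is shaped roughly like this (the feed structure and the helper functions here are illustrative placeholders, not the actual code):

<?php
// Rough sketch of the current import flow. Parsing the feed and building
// the pages is the fast part; the per-image remote fetch is the slow part.
$feed = simplexml_load_file('http://example.com/feed.xml');

foreach ($feed->page as $item) {
    // Fast: creating the page from the XML (whole feed takes ~30 seconds)
    $pageId = create_page_from_xml($item);

    // Slow: each page's remote images add anywhere from 1-18 seconds
    foreach ($item->image as $imageUrl) {
        $data = file_get_contents((string) $imageUrl);
        save_image_for_page($pageId, $data);
    }
}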
If we average 4 seconds per page (low-balling it) for the image import, 500-700 pages works out to over half an hour of execution time, and I'd rather not run a single process for that long. If I make the script sleep and restart every minute, I'm actually extending that total time, but I'm starting a new process every minute, so it can't eat a ridiculous amount of memory.
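If I went that route, the cron entry would just re-run a small worker each minute; something like this, assuming a hypothetical queue of pages still waiting on their images:

<?php
// Cron runs this once a minute. Each run handles a small batch and exits,
// so no single process runs long enough to hoard memory.
// get_pages_missing_images() and import_images_for_page() are placeholders
// for however the importer tracks which pages still need their images.
$batchSize = 20;

foreach (get_pages_missing_images($batchSize) as $pageId) {
    import_images_for_page($pageId);
}
// The next cron run picks up the next batch until nothing is left.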
If I fire off a separate request for each page to grab its images, the whole process may complete in far less time, but it would mean firing 500-700 PHP scripts in the space of 30 seconds... I'm leaning in this direction because it's not much different from a site getting 1,000-1,400 visitors per minute, which a properly configured server can handle easily. If it's scheduled during the lowest-traffic time of day, it will have the least competition, and there should be no visible effect on regular visitors on the front end.
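A minimal sketch of that fan-out, assuming a hypothetical import_images.php worker that takes a page ID; the importer fires a non-blocking request per page and doesn't wait for responses:

<?php
// Fire-and-forget: hit a worker script once per page so each image import
// runs in its own short-lived PHP process. The host, worker path, and
// $pageIds are placeholders for illustration.
function fire_image_worker($host, $pageId) {
    $fp = @fsockopen($host, 80, $errno, $errstr, 5);
    if (!$fp) {
        return false;
    }
    $path = '/import_images.php?page_id=' . urlencode($pageId);
    fwrite($fp, "GET $path HTTP/1.1\r\nHost: $host\r\nConnection: Close\r\n\r\n");
    fclose($fp); // close immediately; don't wait for the worker to finish
    return true;
}

foreach ($pageIds as $pageId) {
    fire_image_worker('example.com', $pageId);
    usleep(50000); // 50 ms stagger so hundreds of hits don't land at the exact same moment
}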
Unless, of course, there's a better solution! I googled this for about an hour earlier, and while I'm quite sure I'm not the only person who's had to pull in a large number of remote files, there don't seem to be many people interested in writing about the methodology they used to do it.