I think I found my solution, but it wasn't what I set out looking for.
I was looking in the wrong direction. As I surfed for a solution I stumbled on data mining, page scraping, and data harvesting. Most of the files I have to work with are .html, so I dug into how to use these methods and struck gold.
First I created a .html file with links to all the files... that was simpler than I thought it would be.
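In case it helps anyone, here's a rough sketch of how that index page could be generated with a short Python script. The folder path is just a placeholder for wherever your .html files live:

```python
import html
from pathlib import Path

# Hypothetical folder holding the .html files to be harvested.
SOURCE_DIR = Path("C:/reports")
INDEX_FILE = SOURCE_DIR / "index.html"

# Build one page that links to every .html file in the folder,
# so a crawler can treat the folder like a small website.
links = []
for page in sorted(SOURCE_DIR.glob("*.html")):
    if page.name == INDEX_FILE.name:
        continue  # don't link the index to itself
    name = html.escape(page.name)
    links.append(f'<li><a href="{name}">{name}</a></li>')

INDEX_FILE.write_text(
    "<html><body><ul>\n" + "\n".join(links) + "\n</ul></body></html>",
    encoding="utf-8",
)
print(f"Wrote {INDEX_FILE} with {len(links)} links")
```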
Once all the files were "linked" by the new file I created, I could run web-harvest (sourceforge) or any number of other tools available on the web. With everything linked, the program treated the collection as a site and crawled the entire thing, extracting the data I wanted. Web-Harvest took some playing around to configure, but it worked in the end.
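If you'd rather not configure Web-Harvest, the same crawl-and-extract idea can be scripted by hand. Here's a minimal Python sketch using only the standard library; it walks the linked .html files and dumps every table cell to a CSV, which is just an example of the kind of field you might pull out:

```python
import csv
from html.parser import HTMLParser
from pathlib import Path

SOURCE_DIR = Path("C:/reports")      # hypothetical folder of .html files
OUTPUT_CSV = Path("extracted.csv")   # where the harvested data goes

class CellCollector(HTMLParser):
    """Collects the text of every <td> cell in a page (example target)."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True
            self.cells.append("")

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.cells[-1] += data.strip()

with OUTPUT_CSV.open("w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "cell_text"])
    for page in sorted(SOURCE_DIR.glob("*.html")):
        parser = CellCollector()
        parser.feed(page.read_text(encoding="utf-8", errors="ignore"))
        for cell in parser.cells:
            if cell:
                writer.writerow([page.name, cell])
```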
That got me thinking: if you ever run into a website that has the information you need spread all through it, this same tactic would work perfectly. As a matter of fact, that is what these tools were really created for.
I'd recommend HTTrack or Web2Disk by Inspyder to capture the website's content, then run web-harvest to extract the data to a CSV, spreadsheet, or whatever you need.
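For the capture step, HTTrack also has a command-line binary that can be driven from a script. Here's a rough sketch; the URL and output folder are placeholders, and it assumes the httrack executable is installed and on your PATH:

```python
import subprocess
from pathlib import Path

# Hypothetical target site and local folder for the mirrored copy.
site_url = "http://www.example.com/"
mirror_dir = Path("C:/mirror/example")

# "-O" tells HTTrack where to write the mirror (pages, logs, cache).
subprocess.run(["httrack", site_url, "-O", str(mirror_dir)], check=True)

# The mirrored .html files in mirror_dir can then be fed into the same
# extraction step described above (Web-Harvest or the CSV script).
```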
All the tools mentioned are available on the web; some are free, and the ones that aren't are cheap just the same.
Keep this information in mind; it might come in handy some day!
Thank you to all who gave thought to my problem!
Jeff