I need two web sites spidered that contain job openings. The structure of both sites is quite simple: there is a list of job titles on one HTML page, and each job title is an HTML link to the detailed job description. Each detail page contains 5 to 8 items that need to be extracted, and the complete job description must be extracted in full.
The results of the spidering should be stored in a general XML file and a second XML file containing the job descriptions.
I have attached samples for both files and the links to the web sites.
I will also deliver detailed mapping rules (how to find out what to extract and where to put it) for both sites.
It must be possible to run each spider individually, because we will run them via cron. Our cron script will make sure the output files are deleted before the spiders run again, so you don't need to handle that.
This project might be the chance to start a longer-term relationship, since I have PHP jobs from time to time...
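To make the expected approach concrete, here is a minimal sketch of one spider step in PHP5, using only the standard DOM extension. It parses an inline stand-in for a detail page (real code would fetch the live URL) and writes both XML files. All selectors, field names, and file names are placeholders; the actual XPath expressions would come from the mapping rules.

```php
<?php
// Hypothetical sketch: extract fields from one detail page and write the
// two XML output files. Selectors and field names are placeholders only.

// Stand-in for a fetched detail page (real code: file_get_contents($url)).
$detailHtml = '<html><body>
  <h1 class="title">PHP Developer</h1>
  <span class="location">Berlin</span>
  <div class="description">Full job description text here.</div>
</body></html>';

$doc = new DOMDocument();
@$doc->loadHTML($detailHtml);   // suppress warnings on sloppy real-world HTML
$xp = new DOMXPath($doc);

// Mapping rules: field name => XPath (placeholders, replaced per site).
$mapping = array(
    'title'    => '//h1[@class="title"]',
    'location' => '//span[@class="location"]',
);
$descriptionPath = '//div[@class="description"]';

// Build the general XML file with the extracted items.
$out = new DOMDocument('1.0', 'UTF-8');
$out->formatOutput = true;
$jobs = $out->appendChild($out->createElement('jobs'));
$job  = $jobs->appendChild($out->createElement('job'));
foreach ($mapping as $field => $xpath) {
    $nodes = $xp->query($xpath);
    $value = $nodes->length ? trim($nodes->item(0)->textContent) : '';
    $job->appendChild($out->createElement($field, htmlspecialchars($value)));
}
$out->save('jobs.xml');

// Build the second XML file with the complete job description.
$descDoc = new DOMDocument('1.0', 'UTF-8');
$descDoc->formatOutput = true;
$descs = $descDoc->appendChild($descDoc->createElement('descriptions'));
$node  = $xp->query($descriptionPath)->item(0);
$descs->appendChild($descDoc->createElement('description',
    htmlspecialchars($node ? trim($node->textContent) : '')));
$descDoc->save('descriptions.xml');

echo "wrote jobs.xml and descriptions.xml\n";
```

Each spider would be a standalone script like this (one per site, looping over the links found on the list page), so cron can invoke them independently.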
## Deliverables
1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.
2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables):
a) For web sites or other server-side deliverables intended to only ever exist in one place in the Buyer's environment: deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment.
b) For all others, including desktop software or software the Buyer intends to distribute: a software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request.
3) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site per the coder's Seller Legal Agreement).
## Platform
linux
php5
xml