I have experience scraping financial websites using both simple scripts and automated (headless) browsers.
For this task in particular I would use a Python script built on the bs4 (BeautifulSoup) module, working as follows:
1. Navigate to the given URL.
2. Look for the elements of interest (product title, product URL and product price).
3. Handle pagination to extract all available information.
4. Write extracted information to an Excel spreadsheet.
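Steps 2 and 3 can be sketched roughly like this. The HTML structure, CSS selectors, and page URLs below are invented for illustration; a real scraper would fetch each page with `requests.get(url).text` instead of reading from an in-memory dict, and the resulting rows could then be written to an Excel file with openpyxl or pandas (step 4):

```python
from bs4 import BeautifulSoup

# Hypothetical in-memory "site": two listing pages, the first linking to the next.
# A real run would download these pages over HTTP.
PAGES = {
    "/products?page=1": """
        <div class="product">
          <a class="title" href="/p/widget-1">Widget One</a>
          <span class="price">$9.99</span>
        </div>
        <a class="next" href="/products?page=2">Next</a>
    """,
    "/products?page=2": """
        <div class="product">
          <a class="title" href="/p/widget-2">Widget Two</a>
          <span class="price">$19.99</span>
        </div>
    """,
}

def parse_page(html):
    """Step 2: extract title, URL and price from one listing page."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for card in soup.select("div.product"):
        link = card.select_one("a.title")
        rows.append({
            "title": link.get_text(strip=True),
            "url": link["href"],
            "price": card.select_one("span.price").get_text(strip=True),
        })
    # Step 3: report the "next page" link so the caller can paginate.
    nxt = soup.select_one("a.next")
    return rows, (nxt["href"] if nxt else None)

def scrape_all(start_url):
    """Follow pagination links until no "next" link remains."""
    url, rows = start_url, []
    while url:
        page_rows, url = parse_page(PAGES[url])
        rows.extend(page_rows)
    return rows

products = scrape_all("/products?page=1")
```

The selectors (`div.product`, `a.title`, `a.next`) are placeholders; they would be replaced with whatever markup each target site actually uses.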
Steps 2 and 3 depend on how each site is built, so we would need either a separate scraper for each site or one long scraper covering all of them. A third option is a modular design in which the scrapers share common functions; that version is easier to maintain, reusable, and scalable.
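The modular option could look roughly like the sketch below: a base class owns the shared pipeline (looping over pages, collecting rows), and each site only supplies its own parsing logic. The class names and the pipe-delimited page format are purely illustrative:

```python
from abc import ABC, abstractmethod

class BaseScraper(ABC):
    """Shared pipeline; site-specific scrapers only override parse_page()."""

    def scrape(self, pages):
        # 'pages' stands in for fetched documents; a real run would download them.
        rows = []
        for page in pages:
            rows.extend(self.parse_page(page))
        return rows

    @abstractmethod
    def parse_page(self, page):
        """Turn one page into a list of {'title', 'url', 'price'} dicts."""

class SiteAScraper(BaseScraper):
    # Hypothetical site where each product line reads "title|url|price".
    def parse_page(self, text):
        return [
            dict(zip(("title", "url", "price"), line.split("|")))
            for line in text.strip().splitlines()
        ]

rows = SiteAScraper().scrape(["Widget One|/p/w1|$5.00\nWidget Two|/p/w2|$7.50"])
```

Adding support for a new site then means writing one small subclass, while fetching, pagination handling, and the Excel export stay in one shared place.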