Respecting robots.txt

Many sites want to be crawled; it is inherent in the nature of the beast. Web hosts put content on their sites to be seen by humans, but it is also important that other computers see the content. A great example is search engine optimization (SEO): SEO is the process of designing your site to be crawled by spiders such as Google's, so you are actually encouraging scraping. At the same time, a publisher may want only specific parts of their site crawled, and may tell crawlers to keep their spiders off certain portions of the site, either because the content is not for sharing or because it is not important enough to be crawled and would waste web server resources.

The rules of what you are and are not allowed to crawl are usually contained ...
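Python's standard library includes a parser for these rules. As a minimal sketch (the `robots.txt` content and the `example.com` URLs below are hypothetical), you can check whether a given path is permitted before fetching it:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that allows crawling everywhere
# except the /private/ portion of the site.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch(user_agent, url) applies the rules for that agent.
print(rp.can_fetch("*", "https://example.com/index.html"))    # True
print(rp.can_fetch("*", "https://example.com/private/data"))  # False
```

In a real crawler you would typically call `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()` to download the live file, then consult `can_fetch()` before each request.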
