Controlling the depth of a crawl

The depth of a crawl can be controlled using Scrapy's DepthMiddleware. This middleware limits the number of link levels Scrapy will follow from the pages where the crawl starts. This option is useful for controlling how deep you go into a particular crawl, for keeping a crawl from running too long, and for cases where you know the content you are after lies within a certain number of degrees of separation from your starting pages.
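As a minimal sketch of how this works, the spider below sets the DEPTH_LIMIT setting, which DepthMiddleware reads to drop requests that exceed the allowed depth. The spider name and start URL here are illustrative choices, not values from this recipe:

```python
import scrapy

class DepthLimitedSpider(scrapy.Spider):
    # Hypothetical spider name and start URL, used only for illustration
    name = 'depth_limited'
    start_urls = ['http://quotes.toscrape.com/']

    # DEPTH_LIMIT is read by DepthMiddleware; pages more than two
    # link hops from the start URLs will not be requested.
    custom_settings = {
        'DEPTH_LIMIT': 2,
        'DEPTH_STATS_VERBOSE': True,  # log request counts per depth level
    }

    def parse(self, response):
        # DepthMiddleware records the current depth in response.meta
        yield {
            'url': response.url,
            'depth': response.meta.get('depth', 0),
        }
        # Follow every link on the page; requests beyond DEPTH_LIMIT
        # are filtered out by the middleware
        for href in response.css('a::attr(href)').getall():
            yield response.follow(href, callback=self.parse)
```

The same setting can also be supplied at run time without editing the spider, for example with scrapy crawl depth_limited -s DEPTH_LIMIT=2.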
