How it works

Crawl depth can be limited by setting the DEPTH_LIMIT parameter:

process = CrawlerProcess({
    'LOG_LEVEL': 'CRITICAL',
    'DEPTH_LIMIT': 2,
    'DEPTH_STATS': True
})
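
Note that this fragment only configures the crawler; the recipe's spider class still has to be registered and the process started. A minimal sketch, assuming the spider class is importable as DepthSpider (an illustrative name, sketched again after the output below):

process.crawl(DepthSpider)
process.start()   # blocks until the crawl has finished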

A depth limit of 1 means we crawl only one level deep: the URLs specified in start_urls are processed, along with any URLs found within those pages. With the DEPTH_LIMIT in place, we get the following output:

Parsing: <200 http://localhost:8080/CrawlDepth0-1.html>
Requesting crawl of: http://localhost:8080/CrawlDepth0-2.html
Requesting crawl of: http://localhost:8080/Depth1/CrawlDepth1-1.html
Parsing: <200 http://localhost:8080/Depth1/CrawlDepth1-1.html>
Requesting crawl of: http://localhost:8080/Depth1/CrawlDepth1-2.html
Requesting crawl of: http://localhost:8080/Depth1/depth1/CrawlDepth1-2.html
...
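
The "Parsing" and "Requesting crawl of" lines are printed from the spider's parse callback. The recipe's actual spider is not repeated here; the following is a minimal sketch of a spider that would produce output of this shape, assuming the test pages simply link to one another (the class name and the CSS-based link extraction are illustrative assumptions, not the book's exact code):

import scrapy

class DepthSpider(scrapy.Spider):
    name = 'depthspider'
    start_urls = ['http://localhost:8080/CrawlDepth0-1.html']

    def parse(self, response):
        # Produces lines such as: Parsing: <200 http://localhost:8080/CrawlDepth0-1.html>
        print('Parsing: %s' % response)
        for href in response.css('a::attr(href)').extract():
            url = response.urljoin(href)
            print('Requesting crawl of: %s' % url)
            yield scrapy.Request(url, callback=self.parse)

Requests whose depth exceeds DEPTH_LIMIT are silently dropped by Scrapy's built-in DepthMiddleware, which is why a URL can appear in a "Requesting crawl of" line without a matching "Parsing" line ever being printed for it.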
