How it works

Let's start with the code in the main script file, 08/05_wikipedia_scrapy.py.  It starts by creating a WikipediaSpider object and running the crawl:

process = CrawlerProcess({
    'LOG_LEVEL': 'ERROR',
    'DEPTH_LIMIT': 1
})
process.crawl(WikipediaSpider)
spider = next(iter(process.crawlers)).spider
process.start()

This tells Scrapy that we want to crawl only one level deep, and we grab an instance of the spider because we want to inspect the properties that hold the results of the crawl.  Those results are then printed with the following:

print("-"*60)
for pm in spider.linked_pages:
    print(pm.depth, pm.title, pm.child_title)
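For context, this driver code assumes that WikipediaSpider records what it finds on itself rather than passing items through Scrapy's pipeline. The spider's full definition lives with the recipe's source files; the following is only a minimal sketch of that general shape, and the start URL, CSS selectors, and the PageMeta name are assumptions for illustration rather than the book's code:

# Hypothetical sketch only; the recipe's actual WikipediaSpider differs in detail.
from collections import namedtuple

import scrapy

# One record per discovered link: the depth at which it was found, the title
# of the page it was found on, and the title of the linked page.
PageMeta = namedtuple('PageMeta', ['depth', 'title', 'child_title'])

class WikipediaSpider(scrapy.Spider):
    name = 'wikipedia'
    start_urls = ['https://en.wikipedia.org/wiki/Python_(programming_language)']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.linked_pages = []  # filled in as pages are parsed

    def parse(self, response):
        depth = response.meta.get('depth', 0)  # set by Scrapy's DepthMiddleware
        title = response.css('h1#firstHeading ::text').get(default='').strip()
        # Record each article link on the page and follow it; the DEPTH_LIMIT
        # setting in the driver script keeps the crawl to one level.
        for link in response.css("div#bodyContent a[href^='/wiki/']"):
            child_title = ''.join(link.css('::text').getall()).strip()
            self.linked_pages.append(PageMeta(depth, title, child_title))
            yield response.follow(link, callback=self.parse)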

Each result from the crawler is stored in the linked_pages property.  Each of those objects is represented by several ...
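Once the crawl finishes, those records can be examined however you like. Purely as an illustration, and assuming only the depth, title, and child_title attributes used in the print loop above, you could group the results by depth:

from collections import defaultdict

# Bucket each recorded link by the depth at which it was discovered.
by_depth = defaultdict(list)
for pm in spider.linked_pages:
    by_depth[pm.depth].append((pm.title, pm.child_title))

for depth in sorted(by_depth):
    print(f"depth {depth}: {len(by_depth[depth])} links")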
