Python Web Scraping Cookbook by Michael Heydt

Let's start with the code in the main script file, 08/05_wikipedia_scrapy.py. It begins by creating a CrawlerProcess, registering the WikipediaSpider, and running the crawl:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'LOG_LEVEL': 'ERROR',
    'DEPTH_LIMIT': 1
})
process.crawl(WikipediaSpider)
spider = next(iter(process.crawlers)).spider
process.start()

The DEPTH_LIMIT setting tells Scrapy to crawl only one level deep, and we grab the spider instance from the process's crawlers because we want to inspect its properties, which hold the results of the crawl.
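The WikipediaSpider referenced here isn't shown in this excerpt. As a minimal sketch, a spider that records each page-to-child-page link in a linked_pages list might look something like the following; the LinkedPage helper class, the start URL, and the CSS selectors are illustrative assumptions rather than the book's actual code:

import scrapy


class LinkedPage:
    """Holds one parent-to-child link found during the crawl (assumed shape)."""
    def __init__(self, depth, title, child_title):
        self.depth = depth              # crawl depth at which the link was found
        self.title = title              # title of the page containing the link
        self.child_title = child_title  # title of the page being linked to


class WikipediaSpider(scrapy.Spider):
    name = 'wikipedia'
    start_urls = ['https://en.wikipedia.org/wiki/Web_scraping']  # assumed start page

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.linked_pages = []  # read by the main script after the crawl finishes

    def parse(self, response):
        title = response.css('h1#firstHeading::text').get()
        depth = response.meta.get('depth', 0)  # set by Scrapy's DepthMiddleware
        # Record and follow each link to another article; titles are taken
        # from the URL path, so they may still be URL-encoded in this sketch.
        for href in response.css("#bodyContent a[href^='/wiki/']::attr(href)").getall():
            child_title = href.split('/wiki/')[-1]
            self.linked_pages.append(LinkedPage(depth, title, child_title))
            yield response.follow(href, callback=self.parse)

Storing the results on the spider instance is what makes the next(iter(process.crawlers)).spider step useful: once process.start() returns, the collected linked_pages can be read directly off that instance.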

print("-"*60)for pm in spider.linked_pages:    print(pm.depth, pm.title, pm.child_title)

Each result from the crawl is stored in the spider's linked_pages property. Each of those objects is represented by several ...
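From the print loop we know each of these objects exposes at least depth, title, and child_title. As a small, assumed example of post-processing the collected results, you could count how many links were recorded at each crawl depth:

from collections import Counter

# Tally the linked pages found at each crawl depth.
counts = Counter(pm.depth for pm in spider.linked_pages)
for depth, count in sorted(counts.items()):
    print(f"depth {depth}: {count} linked pages")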
