How to do it

Scrapy caches requests and responses using a downloader middleware named HttpCacheMiddleware. Enabling it is as simple as setting HTTPCACHE_ENABLED to True:

from scrapy.crawler import CrawlerProcess

# Enable Scrapy's HTTP cache for this crawl
process = CrawlerProcess({
    'HTTPCACHE_ENABLED': True
})
process.crawl(Spider)
process.start()
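
If you need finer control over the cache, the same settings dictionary can carry Scrapy's other HTTPCACHE_* options. The sketch below assumes the default filesystem storage backend; the expiration time and directory name are illustrative values, and Spider stands for the spider class defined earlier in the recipe.

from scrapy.crawler import CrawlerProcess

# A minimal sketch: tune the cache as well as enabling it.
process = CrawlerProcess({
    'HTTPCACHE_ENABLED': True,
    'HTTPCACHE_EXPIRATION_SECS': 3600,   # refetch responses older than an hour
    'HTTPCACHE_DIR': 'httpcache',        # stored under the project's .scrapy data dir
    'HTTPCACHE_STORAGE': 'scrapy.extensions.httpcache.FilesystemCacheStorage',
})
process.crawl(Spider)  # Spider: the spider class defined earlier in this recipe
process.start()

With HTTPCACHE_EXPIRATION_SECS left at its default of 0, cached responses never expire, which is convenient while developing a spider but usually not what you want in production.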
