How it works

The code is the same as in the previous NASA site crawlers, except that we add allowed_domains=['nasa.gov']:

import scrapy

class Spider(scrapy.spiders.SitemapSpider):
    name = 'spider'
    sitemap_urls = ['https://www.nasa.gov/sitemap.xml']
    allowed_domains = ['nasa.gov']

    def parse(self, response):
        print("Parsing: ", response)

The NASA site is fairly consistent about staying within its root domain, but it occasionally links out to other sites, such as content on boeing.com. The allowed_domains setting prevents the spider from following those external links (as shown in the sketch below).
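
As a quick way to see the filtering in action, the spider can be run from a plain script with Scrapy's CrawlerProcess. This is a minimal sketch, not part of the recipe itself; the LOG_LEVEL setting is an illustrative assumption, and it presumes the Spider class above is defined in the same module.

from scrapy.crawler import CrawlerProcess

# Illustrative settings; any standard Scrapy settings could go here.
process = CrawlerProcess(settings={'LOG_LEVEL': 'INFO'})
process.crawl(Spider)
process.start()  # blocks until the crawl finishes

With allowed_domains set, Scrapy's offsite filtering drops requests to hosts outside nasa.gov (for example, links to boeing.com), so only in-domain pages reach the parse method.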
