Python Web Scraping - Second Edition

Book description

Successfully scrape data from any website with the power of Python 3.x

About This Book

  • A hands-on guide to web scraping using Python with solutions to real-world problems
  • Create a number of different web scrapers in Python to extract information
  • Practical examples using popular, well-maintained Python libraries for your web scraping needs

Who This Book Is For

This book is aimed at developers who want to use web scraping for legitimate purposes. Prior programming experience with Python would be useful but is not essential. Anyone with general knowledge of programming languages should be able to pick up the book and understand the principles involved.

What You Will Learn

  • Extract data from web pages with simple Python programming
  • Build a concurrent crawler to process web pages in parallel
  • Follow links to crawl a website
  • Extract features from the HTML
  • Cache downloaded HTML for reuse
  • Compare concurrent models to determine the fastest crawler
  • Find out how to parse JavaScript-dependent websites
  • Interact with forms and sessions
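As a taste of the first few bullets (downloading pages, retrying failures, setting a user agent), here is a minimal sketch in the spirit of the book's early chapters. It is not the book's own code: the retry-on-5xx rule, the default `user_agent` string, and the injectable `opener` parameter (used so the function can be exercised without a network connection) are all assumptions made for illustration.

```python
import urllib.error
import urllib.request

def download(url, user_agent="wswp", num_retries=2, opener=urllib.request.urlopen):
    """Fetch a URL, retrying on 5xx server errors (a sketch, not the book's code).

    `opener` is injectable so the function can be tested offline.
    """
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        return opener(request).read()
    except urllib.error.HTTPError as e:
        # Retry only server-side errors; 4xx client errors won't improve on retry.
        if num_retries > 0 and 500 <= e.code < 600:
            return download(url, user_agent, num_retries - 1, opener)
        return None

# Exercise the retry logic with a fake opener that fails twice, then succeeds.
class _FakeResponse:
    def read(self):
        return b"<html>ok</html>"

_calls = {"n": 0}
def flaky_opener(request):
    _calls["n"] += 1
    if _calls["n"] < 3:
        raise urllib.error.HTTPError(request.full_url, 503, "Service Unavailable", {}, None)
    return _FakeResponse()

print(download("http://example.com", opener=flaky_opener))  # b'<html>ok</html>' after two retries
```

Injecting the opener is purely a testing convenience; in real use the default `urllib.request.urlopen` performs the actual HTTP request.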

In Detail

The Internet contains the most useful set of data ever assembled, most of which is publicly accessible for free. However, this data is not easily usable. It is embedded within the structure and style of websites and needs to be carefully extracted. Web scraping is becoming increasingly useful as a means to gather and make sense of the wealth of information available online.

This book is the ultimate guide to using the latest features of Python 3.x to scrape data from websites. In the early chapters, you'll see how to extract data from static web pages. You'll learn to use caching with databases and files to save time and manage the load on servers. After covering the basics, you'll get hands-on practice building more sophisticated crawlers that use browser rendering, parallel downloads, and concurrent scraping.
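To give a flavor of the static-page extraction covered in the early chapters, here is a minimal sketch using only the Python standard library. The HTML fragment, the `CountryParser` class, and the table layout are invented for illustration; the book itself compares several approaches (regular expressions, Beautiful Soup, lxml).

```python
from html.parser import HTMLParser

# A tiny HTML fragment standing in for a downloaded page (invented for illustration).
HTML = """
<table>
  <tr><td class="name">United Kingdom</td><td class="population">67,000,000</td></tr>
  <tr><td class="name">France</td><td class="population">65,000,000</td></tr>
</table>
"""

class CountryParser(HTMLParser):
    """Collect the text of every <td> cell, grouping cells into rows."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._cell = None  # accumulates text while inside a <td>

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append([])
        elif tag == "td":
            self._cell = []

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

    def handle_endtag(self, tag):
        if tag == "td" and self._cell is not None:
            self.rows[-1].append("".join(self._cell).strip())
            self._cell = None

parser = CountryParser()
parser.feed(HTML)
print(parser.rows)  # [['United Kingdom', '67,000,000'], ['France', '65,000,000']]
```

Dedicated parsing libraries such as Beautiful Soup or lxml make this far more concise and robust, which is exactly the comparison the book walks through.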

You'll determine when and how to scrape data from a JavaScript-dependent website using PyQt and Selenium. You'll get a better understanding of how to submit forms on complex websites protected by CAPTCHA. You'll find out how to automate these actions with Python packages such as mechanize. You'll also learn how to create class-based scrapers with the Scrapy library and apply what you've learned to real websites.
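At its core, submitting a form programmatically means encoding the form's fields into a POST body, which the book covers in depth. The sketch below shows that encoding step with the standard library only; the field names and URL are placeholders, and the request is deliberately never sent so the example stays offline.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical login fields; a real site's form defines its own field names.
form_data = {"email": "user@example.com", "password": "secret", "remember": "on"}
body = urlencode(form_data).encode("utf-8")

# Constructed but not sent, to keep the example offline.
req = Request(
    "http://example.com/login",  # placeholder URL
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(req.get_method())  # "POST" -- urllib infers POST when data is supplied
print(body)              # b'email=user%40example.com&password=secret&remember=on'
```

Real sites typically add hidden fields, CSRF tokens, and cookies to this picture, which is where session handling and tools like Selenium come in.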

By the end of the book, you will have explored testing websites with scrapers, remote scraping, best practices, working with images, and many other relevant topics.

Style and approach

This hands-on guide is full of real-life examples and solutions, starting simply and growing progressively more complex. Each chapter introduces a problem and then provides one or more possible solutions.

Table of contents

  1. Preface
    1. What this book covers
    2. What you need for this book
    3. Who this book is for
    4. Conventions
    5. Reader feedback
    6. Customer support
      1. Downloading the example code
      2. Errata
      3. Piracy
      4. Questions
  2. Introduction to Web Scraping
    1. When is web scraping useful?
    2. Is web scraping legal?
    3. Python 3
    4. Background research
      1. Checking robots.txt
      2. Examining the Sitemap
      3. Estimating the size of a website
      4. Identifying the technology used by a website
      5. Finding the owner of a website
    5. Crawling your first website
      1. Scraping versus crawling
      2. Downloading a web page
        1. Retrying downloads
        2. Setting a user agent
      3. Sitemap crawler
      4. ID iteration crawler
      5. Link crawlers
        1. Advanced features
          1. Parsing robots.txt
          2. Supporting proxies
          3. Throttling downloads
          4. Avoiding spider traps
          5. Final version
      6. Using the requests library
    6. Summary
  3. Scraping the Data
    1. Analyzing a web page
    2. Three approaches to scrape a web page
      1. Regular expressions
      2. Beautiful Soup
      3. Lxml
    3. CSS selectors and your Browser Console
    4. XPath Selectors
    5. LXML and Family Trees
    6. Comparing performance
    7. Scraping results
      1. Overview of Scraping
      2. Adding a scrape callback to the link crawler
    8. Summary
  4. Caching Downloads
    1. When to use caching?
    2. Adding cache support to the link crawler
    3. Disk Cache
      1. Implementing DiskCache
      2. Testing the cache
      3. Saving disk space
      4. Expiring stale data
      5. Drawbacks of DiskCache
    4. Key-value storage cache
      1. What is key-value storage?
      2. Installing Redis
      3. Overview of Redis
      4. Redis cache implementation
      5. Compression
      6. Testing the cache
      7. Exploring requests-cache
    5. Summary
  5. Concurrent Downloading
    1. One million web pages
      1. Parsing the Alexa list
    2. Sequential crawler
    3. Threaded crawler
    4. How threads and processes work
      1. Implementing a multithreaded crawler
      2. Multiprocessing crawler
    5. Performance
    6. Summary
  6. Dynamic Content
    1. An example dynamic web page
    2. Reverse engineering a dynamic web page
      1. Edge cases
    3. Rendering a dynamic web page
      1. PyQt or PySide
        1. Debugging with Qt
      2. Executing JavaScript
      3. Website interaction with WebKit
        1. Waiting for results
    4. The Render class
      1. Selenium
        1. Selenium and Headless Browsers
    5. Summary
  7. Interacting with Forms
    1. The Login form
      1. Loading cookies from the web browser
    2. Extending the login script to update content
    3. Automating forms with Selenium
      1. "Humanizing" methods for Web Scraping
    4. Summary
  8. Solving CAPTCHA
    1. Registering an account
      1. Loading the CAPTCHA image
    2. Optical character recognition
      1. Further improvements
    3. Solving complex CAPTCHAs
    4. Using a CAPTCHA solving service
      1. Getting started with 9kw
        1. The 9kw CAPTCHA API
      2. Reporting errors
      3. Integrating with registration
    5. CAPTCHAs and machine learning
    6. Summary
  9. Scrapy
    1. Installing Scrapy
    2. Starting a project
      1. Defining a model
      2. Creating a spider
        1. Tuning settings
        2. Testing the spider
    3. Different Spider Types
    4. Scraping with the shell command
      1. Checking results
      2. Interrupting and resuming a crawl
        1. Scrapy Performance Tuning
    5. Visual scraping with Portia
      1. Installation
      2. Annotation
      3. Running the Spider
      4. Checking results
    6. Automated scraping with Scrapely
    7. Summary
  10. Putting It All Together
    1. Google search engine
    2. Facebook
      1. The website
      2. Facebook API
    3. Gap
    4. BMW
    5. Summary

Product information

  • Title: Python Web Scraping - Second Edition
  • Author(s): Katharine Jarmul, Richard Lawson
  • Release date: May 2017
  • Publisher(s): Packt Publishing
  • ISBN: 9781786462589