Web Scraping with Python

Book Description

Successfully scrape data from any website with the power of Python

About This Book

  • A hands-on guide to web scraping with real-life problems and solutions

  • Techniques to download and extract data from complex websites

  • Create a number of different web scrapers to extract information

    Who This Book Is For

    This book is aimed at developers who want to use web scraping for legitimate purposes. Prior programming experience with Python is useful but not essential: anyone with a general knowledge of programming should be able to pick up this book and understand the principles involved.

    What You Will Learn

  • Extract data from web pages with simple Python programming

  • Build a threaded crawler to process web pages in parallel

  • Follow links to crawl a website

  • Cache downloads to reduce bandwidth

  • Use multiple threads and processes to scrape faster

  • Learn how to parse JavaScript-dependent websites

  • Interact with forms and sessions

  • Solve CAPTCHAs on protected web pages

  • Discover how to track the state of a crawl
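
Several of the points above (parallel downloads, threaded crawling) share one underlying pattern. As a rough, hypothetical sketch of that pattern only — the names `crawl_parallel` and `download` are illustrative, not the book's code — worker threads can pull URLs from a shared queue:

```python
# A minimal threaded-crawler skeleton: worker threads pull URLs from a
# shared queue and invoke a user-supplied download function in parallel.
import threading
from queue import Queue, Empty

def crawl_parallel(urls, download, num_threads=4):
    """Download every URL using num_threads worker threads.

    Returns a dict mapping each URL to download(url)'s result.
    """
    queue = Queue()
    for url in urls:
        queue.put(url)

    results = {}
    lock = threading.Lock()  # protects the shared results dict

    def worker():
        while True:
            try:
                url = queue.get_nowait()
            except Empty:
                return  # no work left, this thread exits
            result = download(url)
            with lock:
                results[url] = result

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
    return results
```

In a real crawler, `download` would perform an HTTP request; here any callable works, which also makes the skeleton easy to test.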

    In Detail

    The Internet contains the most useful set of data ever assembled, much of it publicly accessible for free. However, this data is not easily reusable: it is embedded within the structure and style of websites and needs to be carefully extracted before it becomes useful. Web scraping is an increasingly valuable way to gather and make sense of the wealth of information available online, and with an approachable language like Python, you can extract that information from even complex websites with relatively simple code.
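
To make "simple programming" concrete, here is a minimal, hypothetical sketch (not the book's code) using only the standard library: download a page with a polite `User-Agent` header and pull one field out of the HTML with a regular expression.

```python
# Download a page and extract its <title> using only the standard library.
import re
from urllib.request import Request, urlopen

def extract_title(html):
    """Return the text of the first <title> element, or None if absent."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html,
                      re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None

def fetch_title(url, user_agent="example-crawler"):
    """Download url, identifying ourselves via the User-Agent header."""
    request = Request(url, headers={"User-Agent": user_agent})
    html = urlopen(request).read().decode("utf-8", errors="replace")
    return extract_title(html)
```

Regular expressions are brittle against markup changes, which is exactly why the book compares them with Beautiful Soup and lxml in Chapter 2.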

    This book is the ultimate guide to using Python to scrape data from websites. The early chapters cover how to extract data from static web pages and how to use caching to manage the load on servers. After the basics, we'll get our hands dirty building a more sophisticated crawler with threads and explore more advanced topics. Learn step by step how to use Ajax URLs, employ the Firebug extension for monitoring, and indirectly scrape data. Discover the finer points of scraping, such as using the browser renderer, managing cookies, and submitting forms to extract data from complex websites protected by CAPTCHA. The book wraps up by showing how to create high-level scrapers with the Scrapy library and how to apply what you have learned to real websites.

    Style and approach

    This book is a hands-on guide with real-life examples and solutions, starting simple and becoming progressively more complex. Each chapter introduces a problem and then provides one or more possible solutions.

    Downloading the example code for this book: you can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

    Table of Contents

    1. Web Scraping with Python
      1. Table of Contents
      2. Web Scraping with Python
      3. Credits
      4. About the Author
      5. About the Reviewers
      6. www.PacktPub.com
        1. Support files, eBooks, discount offers, and more
          1. Why subscribe?
          2. Free access for Packt account holders
      7. Preface
        1. What this book covers
        2. What you need for this book
        3. Who this book is for
        4. Conventions
        5. Reader feedback
        6. Customer support
          1. Errata
          2. Piracy
          3. Questions
      8. 1. Introduction to Web Scraping
        1. When is web scraping useful?
        2. Is web scraping legal?
        3. Background research
          1. Checking robots.txt
          2. Examining the Sitemap
          3. Estimating the size of a website
          4. Identifying the technology used by a website
          5. Finding the owner of a website
        4. Crawling your first website
          1. Downloading a web page
            1. Retrying downloads
            2. Setting a user agent
          2. Sitemap crawler
          3. ID iteration crawler
          4. Link crawler
            1. Advanced features
              1. Parsing robots.txt
              2. Supporting proxies
              3. Throttling downloads
              4. Avoiding spider traps
              5. Final version
        5. Summary
      9. 2. Scraping the Data
        1. Analyzing a web page
        2. Three approaches to scrape a web page
          1. Regular expressions
          2. Beautiful Soup
          3. Lxml
            1. CSS selectors
          4. Comparing performance
            1. Scraping results
          5. Overview
          6. Adding a scrape callback to the link crawler
        3. Summary
      10. 3. Caching Downloads
        1. Adding cache support to the link crawler
        2. Disk cache
          1. Implementation
          2. Testing the cache
          3. Saving disk space
          4. Expiring stale data
          5. Drawbacks
        3. Database cache
          1. What is NoSQL?
          2. Installing MongoDB
          3. Overview of MongoDB
          4. MongoDB cache implementation
          5. Compression
          6. Testing the cache
        4. Summary
      11. 4. Concurrent Downloading
        1. One million web pages
          1. Parsing the Alexa list
        2. Sequential crawler
        3. Threaded crawler
          1. How threads and processes work
          2. Implementation
          3. Cross-process crawler
        4. Performance
        5. Summary
      12. 5. Dynamic Content
        1. An example dynamic web page
        2. Reverse engineering a dynamic web page
          1. Edge cases
        3. Rendering a dynamic web page
          1. PyQt or PySide
          2. Executing JavaScript
          3. Website interaction with WebKit
            1. Waiting for results
            2. The Render class
          4. Selenium
        4. Summary
      13. 6. Interacting with Forms
        1. The Login form
          1. Loading cookies from the web browser
        2. Extending the login script to update content
        3. Automating forms with the Mechanize module
        4. Summary
      14. 7. Solving CAPTCHA
        1. Registering an account
          1. Loading the CAPTCHA image
        2. Optical Character Recognition
          1. Further improvements
        3. Solving complex CAPTCHAs
          1. Using a CAPTCHA solving service
          2. Getting started with 9kw
            1. 9kw CAPTCHA API
          3. Integrating with registration
        4. Summary
      15. 8. Scrapy
        1. Installation
        2. Starting a project
          1. Defining a model
          2. Creating a spider
            1. Tuning settings
            2. Testing the spider
          3. Scraping with the shell command
          4. Checking results
          5. Interrupting and resuming a crawl
        3. Visual scraping with Portia
          1. Installation
          2. Annotation
          3. Tuning a spider
          4. Checking results
        4. Automated scraping with Scrapely
        5. Summary
      16. 9. Overview
        1. Google search engine
        2. Facebook
          1. The website
          2. The API
        3. Gap
        4. BMW
        5. Summary
      17. Index