How to do it

The code for this example is in the 07/02_tokenize.py file.  It extends the sentence splitter to demonstrate five different tokenization techniques.  Only the first sentence in the file is tokenized, to keep the output to a reasonable length:

  1. The first step is to simply use the built-in Python string .split() method.  This results in the following:

print(first_sentence.split())

['We', 'are', 'seeking', 'developers', 'with', 'demonstrable', 'experience', 'in:', 'ASP.NET,', 'C#,', 'SQL', 'Server,', 'and', 'AngularJS.']

The sentence is split on whitespace boundaries.  Note that punctuation such as ":" and "," is included in the resulting tokens.
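As a self-contained sketch of this step (the sentence is copied from the example output above; the variable name first_sentence matches the book's code, but the inline definition here is added so the snippet runs on its own):

```python
# The first sentence from the example file, defined inline so this runs standalone.
first_sentence = ("We are seeking developers with demonstrable experience "
                  "in: ASP.NET, C#, SQL Server, and AngularJS.")

# str.split() with no arguments splits on runs of whitespace,
# so punctuation stays attached to the adjacent words.
tokens = first_sentence.split()
print(tokens)
# ['We', 'are', 'seeking', 'developers', 'with', 'demonstrable', 'experience',
#  'in:', 'ASP.NET,', 'C#,', 'SQL', 'Server,', 'and', 'AngularJS.']
```

Because the split is purely whitespace-based, tokens like 'in:' and 'ASP.NET,' carry their punctuation along, which is exactly the limitation the later tokenizers address.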

  2. The following demonstrates using the tokenizers built ...
