The code for this example is in the 07/02_tokenize.py file. It extends the sentence splitter to demonstrate five different tokenization techniques. Only the first sentence in the file is tokenized, to keep the output to a reasonable length:
- The first step is to simply use the built-in Python string .split() method. This results in the following:
print(first_sentence.split())
['We', 'are', 'seeking', 'developers', 'with', 'demonstrable', 'experience', 'in:', 'ASP.NET,', 'C#,', 'SQL', 'Server,', 'and', 'AngularJS.']
The sentence is split on whitespace boundaries. Note that punctuation marks such as ":" and "," remain attached to the resulting tokens.
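As a minimal sketch of this first step, the following reconstructs the sentence from the output shown above (the variable name `first_sentence` matches the example; the literal string is inferred from the token list) and splits it with the built-in method:

```python
# The first sentence, reconstructed from the token output above.
first_sentence = ("We are seeking developers with demonstrable "
                  "experience in: ASP.NET, C#, SQL Server, and AngularJS.")

# str.split() with no arguments splits on any run of whitespace.
tokens = first_sentence.split()
print(tokens)
# ['We', 'are', 'seeking', 'developers', 'with', 'demonstrable',
#  'experience', 'in:', 'ASP.NET,', 'C#,', 'SQL', 'Server,',
#  'and', 'AngularJS.']
```

Because `str.split()` only looks at whitespace, it never separates punctuation from adjacent words, which is why tokens like `'in:'` and `'C#,'` appear in the result.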
- The following demonstrates using the tokenizers built ...