Improving our tokenization

The preceding simple approach produces a lot of tokens and does not filter out non-word characters (such as punctuation). Most tokenization schemes remove these characters. We can do this by splitting each raw document on non-word characters using a regular expression pattern:

val nonWordSplit = text.flatMap(t =>
  t.split("""\W+""").map(_.toLowerCase))
println(nonWordSplit.distinct.count)

This reduces the number of unique tokens significantly:

130126
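
To see why the count drops so much, it helps to look at what the \W+ pattern does to a small string. The following snippet is a standalone illustration on a made-up sample string, not part of the book's dataset:

// """\W+""" matches one or more characters outside [A-Za-z0-9_],
// so punctuation and whitespace both act as token boundaries.
val sample = "Re: Spark's tokenizer -- works well, doesn't it?"
println(sample.split("""\W+""").map(_.toLowerCase).mkString(","))
// re,spark,s,tokenizer,works,well,doesn,t,it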

If we inspect the first few tokens, we will see that we have eliminated most of the less useful characters in the text:

println(nonWordSplit.distinct.sample(true, 0.3, 50).take(100).mkString(","))

You will see the following result displayed:

jejones,ml5,w1w3s1,k29p,nothin,42b,beleive,robin,believiing,749, ...
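
Notice that tokens containing digits, such as ml5, 42b, and 749, are still present. As a rough sketch of how these might be filtered out as a further refinement (the digit regex and variable names here are assumptions for illustration, not taken from the text above), we could keep only tokens with no numbers in them:

// Illustrative sketch: keep only tokens that contain no digits.
val digit = """[0-9]""".r
val noNumbers = nonWordSplit.filter(t => digit.findFirstIn(t).isEmpty)
println(noNumbers.distinct.count)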
