Tokenization

Tokenization refers to the technique of splitting up your text into individual tokens or terms. Formally, a token is defined as a sequence of characters that represents a subset of the original text. Informally, tokens are typically just the individual words that make up the original text, segmented on whitespace and punctuation characters. For example, the sentence "Machine Learning with Apache Spark" may result in a collection of tokens persisted in an array or list expressed as ["Machine", "Learning", "with", "Apache", "Spark"].
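As a minimal sketch of this idea in Spark, the following PySpark snippet tokenizes the example sentence using the Tokenizer transformer from pyspark.ml.feature, which lower-cases the input and splits it on whitespace; the column names text and tokens and the application name are illustrative placeholders, not taken from the book's own examples:

from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer

spark = SparkSession.builder.appName("tokenization-example").getOrCreate()

# A one-row DataFrame holding the example sentence
df = spark.createDataFrame(
    [("Machine Learning with Apache Spark",)], ["text"]
)

# Tokenizer lower-cases the text and splits it on whitespace
tokenizer = Tokenizer(inputCol="text", outputCol="tokens")
tokens_df = tokenizer.transform(df)

tokens_df.select("tokens").show(truncate=False)
# [machine, learning, with, apache, spark]

For splitting on punctuation as well as whitespace, Spark also provides a RegexTokenizer that accepts a custom pattern.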
