How it works...

Word2Vec in Spark uses the skip-gram model, not continuous bag of words (CBOW); both are shallow neural network (NN) formulations. At its core, we are attempting to compute a distributed representation for each word. It is highly recommended that you understand the difference between a local representation (one symbol per word, such as a one-hot vector) and a distributed representation, in which a word's meaning is spread across the dimensions of a dense vector rather than read from the word itself.
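
To make this concrete, here is a minimal sketch of training the skip-gram model with Spark's org.apache.spark.ml.feature.Word2Vec estimator; the toy corpus, column names, and parameter values are illustrative assumptions, not the recipe's own:

```scala
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.sql.SparkSession

object Word2VecSketch extends App {
  val spark = SparkSession.builder
    .appName("Word2VecSketch")
    .master("local[*]")
    .getOrCreate()

  // Each row is one tokenized document; Word2Vec trains on these word sequences.
  val documentDF = spark.createDataFrame(Seq(
    "Hi I heard about Spark".split(" "),
    "I wish Java could use case classes".split(" "),
    "Logistic regression models are neat".split(" ")
  ).map(Tuple1.apply)).toDF("text")

  val word2Vec = new Word2Vec()
    .setInputCol("text")
    .setOutputCol("result")
    .setVectorSize(3)   // dimensionality of the distributed representation
    .setWindowSize(5)   // skip-gram context window
    .setMinCount(0)     // keep every word in this toy corpus

  val model = word2Vec.fit(documentDF)
  model.getVectors.show(false)  // one learned vector per vocabulary word
}
```

Once fitted, model.transform(documentDF) also averages the word vectors in each document into a single document-level feature vector.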

If we use a distributed vector representation for words, it is natural that similar words fall close together in the vector space, which is a desirable property for generalization, pattern abstraction, and manipulation (that is, we reduce the problem to vector arithmetic).
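
As a sketch of that vector arithmetic, the snippet below composes the classic king - man + woman analogy and asks the model for its nearest neighbors. It assumes a fitted Word2VecModel named model (for example, the one from the previous sketch, retrained on a corpus large enough to contain these words); the vectorOf helper is hypothetical, not part of Spark's API:

```scala
import org.apache.spark.ml.feature.Word2VecModel
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.functions.col

// Hypothetical helper: look up one word's learned vector in model.getVectors.
def vectorOf(model: Word2VecModel, word: String): Vector =
  model.getVectors.filter(col("word") === word).head().getAs[Vector]("vector")

// vec(king) - vec(man) + vec(woman) should land near vec(queen).
val king  = vectorOf(model, "king").toArray
val man   = vectorOf(model, "man").toArray
val woman = vectorOf(model, "woman").toArray

val analogy = Vectors.dense(
  king.indices.map(i => king(i) - man(i) + woman(i)).toArray)

// findSynonyms accepts a raw vector, so we can query the composed point directly.
model.findSynonyms(analogy, 5).show(false)
```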

What we want to do for a given set of words {Word1, Word2, ..., WordN} is to learn a vector for each word such that words appearing in similar contexts map to nearby points in the vector space. The skip-gram model does this by maximizing the probability of each word's surrounding context words, as formalized below.
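
Formally, given a training sequence of words w1, w2, ..., wT and a context window of size k, the skip-gram objective is to maximize the average log-likelihood (this is the standard formulation, also used in the Spark MLlib documentation):

```latex
\frac{1}{T} \sum_{t=1}^{T} \; \sum_{\substack{-k \le j \le k \\ j \ne 0}} \log p(w_{t+j} \mid w_t)
```

Here p(w_{t+j} | w_t) is a softmax over the vocabulary; because the exact softmax costs O(V) per training example, Spark's implementation uses hierarchical softmax, which reduces that cost to O(log V).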
