Word2Vec in Spark uses the skip-gram model rather than continuous bag of words (CBOW): skip-gram predicts the surrounding context words given a center word, whereas CBOW predicts the center word from its context. At its core, we are attempting to compute a vector representation for each word. The reader is strongly encouraged to understand the difference between a local representation (for example, a one-hot encoding) and a distributed representation, which is quite different from the apparent meaning of the words themselves.
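To make the skip-gram idea concrete, here is a minimal sketch (not Spark's implementation) of how skip-gram extracts (center, context) training pairs from a sentence; the function name and window size are illustrative assumptions:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs, skip-gram style:
    for each center word, emit every word within `window` positions."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:  # the center word is not its own context
                pairs.append((center, tokens[j]))
    return pairs

sentence = "the quick brown fox".split()
print(skipgram_pairs(sentence, window=1))
```

CBOW would invert each pair: the context words jointly predict the center word.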
If we use a distributed vector representation for words, it is natural for similar words to fall close together in the vector space. This is a desirable property for generalization, pattern abstraction, and manipulation (that is, we reduce the problem to vector arithmetic).
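The "reduce the problem to vector arithmetic" point can be sketched with toy embeddings; the vectors below are hypothetical values chosen for illustration, not trained Word2Vec output:

```python
import math

# Hypothetical 3-d word vectors (illustration only, not trained values).
vecs = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.8, 0.1, 0.6],
    "man":   [0.2, 0.7, 0.1],
    "woman": [0.2, 0.2, 0.6],
    "apple": [0.9, 0.05, 0.05],
}

def cosine(a, b):
    """Cosine similarity: the standard closeness measure in embedding space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Classic analogy query expressed as vector arithmetic: king - man + woman
target = [k - m + w for k, m, w in zip(vecs["king"], vecs["man"], vecs["woman"])]
best = max((w for w in vecs if w not in ("king", "man", "woman")),
           key=lambda w: cosine(target, vecs[w]))
print(best)  # → queen
```

With real trained embeddings the same nearest-neighbor query over the arithmetic result is what produces the well-known king − man + woman ≈ queen behavior.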
What we want to do for a given set of words {Word1, Word2, .... ...