This recipe works much like creating embeddings with Skip-Gram. The main differences are how we generate the training data and how we combine the embeddings: Skip-Gram uses the target word to predict each of its surrounding words, while CBOW combines the embeddings of the surrounding words into a single vector and uses that to predict the target word.
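To make the combination step concrete, here is a minimal NumPy sketch. The vocabulary size, embedding dimension, context indices, and the choice of averaging (summing is another common option) are illustrative assumptions, not the recipe's exact code:

```python
import numpy as np

# Hypothetical sizes for illustration only.
vocab_size, embed_dim = 10_000, 128

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(vocab_size, embed_dim))  # embedding matrix

# Indices of the four context words around one target word (toy values).
context_ids = np.array([14, 92, 305, 871])

# Skip-Gram feeds one word embedding at a time through the model;
# CBOW instead combines the context embeddings into a single vector,
# here by averaging them.
context_vecs = embeddings[context_ids]   # shape: (4, embed_dim)
combined = context_vecs.mean(axis=0)     # shape: (embed_dim,)
print(combined.shape)
```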
For this recipe, we loaded the data, normalized the text, created a vocabulary dictionary, used that dictionary to look up the embedding of each context word, combined those embeddings into a single vector, and trained a neural network to predict the target word.
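As a rough illustration of the data-generation step, the sketch below builds (context, target) pairs from a sentence that has already been mapped through the vocabulary dictionary. The function name `generate_cbow_pairs`, the window size, and the toy word indices are assumptions made for the example:

```python
def generate_cbow_pairs(token_ids, window=2):
    """Yield (context_ids, target_id) pairs for CBOW training.

    Unlike Skip-Gram, each training example holds the full window of
    surrounding words as the input and the center word as the label.
    """
    for i in range(window, len(token_ids) - window):
        context = token_ids[i - window:i] + token_ids[i + 1:i + window + 1]
        yield context, token_ids[i]

# Example: a sentence already converted to vocabulary indices (toy values).
sentence = [4, 17, 9, 233, 5, 61, 8]
for context, target in generate_cbow_pairs(sentence, window=2):
    print(context, "->", target)
```

Each yielded context list is what gets looked up in the embedding matrix and combined, and the target index is what the network is trained to predict.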