Input and embedding layer architecture

In the last chapter, we trained an LSTM on a set of lags from a time series. Here, our lags are really the words in a sequence, and we will use those words to predict the sentiment of the reviewer. To get from a sequence of words to an input vector that captures the semantic value of those words, we can use an embedding layer.
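Before the embedding layer can do its work, each review has to be converted into a fixed-length sequence of integer word indices. As a minimal sketch of that preprocessing step, assuming Keras's Tokenizer and pad_sequences utilities (the review strings, vocabulary cap, and padded length below are hypothetical placeholders):

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

reviews = ["the movie was great", "the plot was a mess"]  # hypothetical reviews
tokenizer = Tokenizer(num_words=10000)   # hypothetical cap on vocabulary size
tokenizer.fit_on_texts(reviews)          # build the word -> index mapping
sequences = tokenizer.texts_to_sequences(reviews)  # words -> integer indices
padded = pad_sequences(sequences, maxlen=100)      # pad/truncate to a fixed length

Each padded row is now a sequence of integers that the input layer can accept.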

When we use the Keras functional API, the embedding layer is always the second layer in the network, coming right after the input layer. Let's look at how these two layers fit together:

from keras.layers import Input, Embedding

input = Input(shape=(sequence_length,), name="Input")
embedding = Embedding(input_dim=vocab_size, output_dim=embedding_dim,
                      input_length=sequence_length, name="embedding")(input)

Our input layer needs to know the shape of the sequences the network will receive; since each review has been padded to sequence_length words, we pass shape=(sequence_length,).
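To make the shapes concrete, here is a sketch that wires these two layers into a Model and inspects the embedding's output; the hyperparameter values are hypothetical placeholders, not values taken from the book:

from keras.models import Model
from keras.layers import Input, Embedding

vocab_size = 10000       # hypothetical: number of distinct words we keep
sequence_length = 100    # hypothetical: fixed length of each padded review
embedding_dim = 100      # hypothetical: size of each word vector

input = Input(shape=(sequence_length,), name="Input")
embedding = Embedding(input_dim=vocab_size, output_dim=embedding_dim,
                      input_length=sequence_length, name="embedding")(input)
model = Model(inputs=input, outputs=embedding)
print(model.output_shape)  # (None, 100, 100): batch, sequence_length, embedding_dim

Notice that the embedding layer turns each integer index into a dense vector of embedding_dim values, so a batch of sequences comes out with one extra dimension.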
