Sequence-to-sequence model architecture

The key to understanding the sequence-to-sequence architecture is that it is built to allow the input sequence and the output sequence to differ in length. The entire input sequence can then be used to predict an output sequence of whatever length the task requires.

To do that, the network is divided into two separate parts, each consisting of one or more LSTM layers responsible for half of the task. We discussed LSTMs back in Chapter 9, Training an RNN from scratch, if you'd like a refresher on their operation. We will learn about each of these two parts in the following sections; the sketch below previews how they fit together.
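As a preview, here is a minimal sketch of the two-part structure in Keras. The vocabulary sizes and state dimension are placeholder assumptions for illustration, not values from this book; the chapter builds its own model later.

```python
# A minimal encoder-decoder sketch (hypothetical dimensions).
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

num_encoder_tokens = 71   # assumed input vocabulary size
num_decoder_tokens = 93   # assumed output vocabulary size
latent_dim = 256          # assumed size of the LSTM state

# Part one (the encoder): reads the variable-length input sequence
# and keeps only its final hidden and cell states as a fixed-size
# summary of the entire input.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Part two (the decoder): generates the output sequence one step at a
# time, initialized with the encoder's final states so the whole input
# informs every prediction. Its length is independent of the input's.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_outputs = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = Dense(num_decoder_tokens,
                        activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
```

Note that the encoder's LSTM discards its per-step outputs and passes along only its final states, which is precisely what lets the input and output sequences vary in length independently.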
