Attention

Attention is another helpful technique that can be implemented in sequence-to-sequence models. Rather than forcing the decoder to rely on a single context vector, attention lets the decoder see the encoder's hidden state at every step of the input sequence. This lets the network focus on (or pay attention to) specific inputs, which speeds training and can provide some lift in model accuracy. Attention is typically a good thing; however, at the time of writing, Keras doesn't have attention built in. Keras does currently have a pull request pending for a custom attention layer, though, and I suspect that support for attention will be built into Keras very soon.
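
Until then, attention can be written as a custom layer. The sketch below is a minimal additive (Bahdanau-style) attention layer built on the Keras backend API; the class name AdditiveAttention and its weight shapes are my own illustration, not the implementation in the pending pull request. It scores each encoder hidden state, turns the scores into attention weights with a softmax, and returns a context vector as the weighted sum of those states.

```python
from keras import backend as K
from keras.layers import Layer


class AdditiveAttention(Layer):
    """Minimal Bahdanau-style attention over encoder hidden states.

    Input:  encoder outputs of shape (batch, timesteps, units),
            i.e. an RNN layer with return_sequences=True.
    Output: a context vector of shape (batch, units).
    """

    def build(self, input_shape):
        units = input_shape[-1]
        # Weights for the scoring function e_t = v^T tanh(W h_t + b)
        self.W = self.add_weight(name='att_W', shape=(units, units),
                                 initializer='glorot_uniform', trainable=True)
        self.b = self.add_weight(name='att_b', shape=(units,),
                                 initializer='zeros', trainable=True)
        self.v = self.add_weight(name='att_v', shape=(units, 1),
                                 initializer='glorot_uniform', trainable=True)
        super(AdditiveAttention, self).build(input_shape)

    def call(self, encoder_outputs):
        # Score each encoder time step: (batch, timesteps, 1)
        scores = K.dot(K.tanh(K.dot(encoder_outputs, self.W) + self.b), self.v)
        scores = K.squeeze(scores, axis=-1)      # (batch, timesteps)
        weights = K.softmax(scores)              # attention weights sum to 1
        # Context vector: weighted sum of the encoder hidden states
        context = K.batch_dot(weights, encoder_outputs, axes=(1, 1))
        return context

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[-1])
```

In a sequence-to-sequence model, you would feed this layer the encoder's full output sequence (an LSTM or GRU with return_sequences=True) and then combine the returned context vector with the decoder, for example by concatenating it with the decoder's input or initial state.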
