Introduction to long short-term memory networks

The vanishing gradient problem has emerged as the biggest obstacle to training recurrent neural networks.

Just as the slope of a straight line expresses how y changes for a small change in x, the gradient expresses how each weight changes with respect to the change in error. If we cannot compute the gradient, we cannot adjust the weights in a direction that reduces the loss, and the neural network ceases to learn.
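As a minimal sketch of this idea (the toy model y = w * x and the learning rate here are illustrative, not from the book), TensorFlow's GradientTape can compute the gradient of the error with respect to a weight, and the weight can then be nudged against that gradient:

```python
import tensorflow as tf

# A single weight in a toy model y = w * x (hypothetical example)
w = tf.Variable(2.0)
x, target = 3.0, 9.0                      # one training example

with tf.GradientTape() as tape:
    prediction = w * x
    loss = (prediction - target) ** 2     # squared error

grad = tape.gradient(loss, w)             # d(loss)/d(w)
w.assign_sub(0.01 * grad)                 # step w in the direction that lowers the loss
```

If the gradient vanishes (becomes numerically close to zero), this update step barely moves the weight, which is exactly why long sequences are hard for plain recurrent networks.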

Long short-term memory (LSTM) networks are designed to overcome the vanishing gradient problem; retaining information over long periods of time is effectively their default behavior.
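As a quick, illustrative sketch (the batch size, sequence length, and unit count below are arbitrary), an LSTM layer in Keras consumes a batch of sequences and returns its final hidden state; its internal gating is what lets it carry information across many time steps:

```python
import tensorflow as tf

inputs = tf.random.normal([32, 10, 8])    # (batch, time steps, features)
lstm = tf.keras.layers.LSTM(units=16)     # 16 hidden units
outputs = lstm(inputs)                    # final hidden state, shape (32, 16)
print(outputs.shape)
```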

In standard RNNs, the repeating cell has an elementary structure, such as a single tanh layer:
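A minimal sketch of such a cell is shown below, with h_t = tanh(x_t W_x + h_{t-1} W_h + b); the dimensions and variable names are illustrative:

```python
import tensorflow as tf

input_dim, hidden_dim = 8, 16
W_x = tf.Variable(tf.random.normal([input_dim, hidden_dim]))   # input-to-hidden weights
W_h = tf.Variable(tf.random.normal([hidden_dim, hidden_dim]))  # hidden-to-hidden weights
b   = tf.Variable(tf.zeros([hidden_dim]))

def rnn_cell(x_t, h_prev):
    # Single tanh layer combining the current input with the previous hidden state
    return tf.tanh(tf.matmul(x_t, W_x) + tf.matmul(h_prev, W_h) + b)

x_t = tf.random.normal([1, input_dim])    # one time step of input
h_prev = tf.zeros([1, hidden_dim])        # initial hidden state
h_t = rnn_cell(x_t, h_prev)               # new hidden state, shape (1, 16)
```

Because the same cell (and the same weights) is applied at every time step, gradients flowing back through many such tanh layers shrink repeatedly, which is the source of the vanishing gradient problem that LSTMs address.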
