Backpropagation through time

Training an RNN requires a slightly different implementation of backpropagation, known as backpropagation through time (BPTT).

As with standard backpropagation, the goal of BPTT is to adjust each neuron/unit's weights in proportion to its contribution to the overall network error, as measured by the gradient. The overall goal is the same.

When using BPTT, however, our definition of error changes slightly. As we just saw, a recurrent neuron can be unrolled through several time steps. We care about prediction quality at all of those time steps, not just the terminal one, because the goal of an RNN is to predict a sequence correctly. Accordingly, a unit's error is defined as the sum of the errors across all of its unrolled time steps.
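To make this concrete, here is a minimal sketch of BPTT for a scalar linear recurrent unit (the function and variable names are illustrative, not from the book). The forward pass unrolls the recurrence `h_t = w_h * h_{t-1} + w_x * x_t` and sums the squared error over every time step; the backward pass then walks the steps in reverse, combining each step's local error with the error flowing back through the recurrent weight:

```python
def bptt(xs, ys, w_h, w_x, h0=0.0):
    """Loss and gradients for a scalar RNN h_t = w_h*h_{t-1} + w_x*x_t,
    with per-step loss 0.5 * (h_t - y_t)**2 summed over all time steps."""
    # Forward pass: unroll through every time step, storing each state.
    hs = [h0]
    for x in xs:
        hs.append(w_h * hs[-1] + w_x * x)
    loss = sum(0.5 * (h - y) ** 2 for h, y in zip(hs[1:], ys))

    # Backward pass: the error at step t includes both the local
    # prediction error and the error flowing back from later steps
    # through the recurrent connection.
    dw_h = dw_x = dh_next = 0.0
    for t in reversed(range(len(xs))):
        dh = (hs[t + 1] - ys[t]) + w_h * dh_next  # total dE/dh_t
        dw_h += dh * hs[t]   # contribution through the recurrent weight
        dw_x += dh * xs[t]   # contribution through the input weight
        dh_next = dh
    return loss, dw_h, dw_x
```

Because the loss is a sum over time steps, each weight's gradient accumulates a contribution from every step of the unrolled sequence; real frameworks apply the same idea to full weight matrices and nonlinear activations.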
