Pattern Recognition by Matthias Nagel, Matthias Richter, Jürgen Beyerer

Fig. 7.8. Pre-training with stacked autoencoders decomposes training a deep network layer by layer. Each layer is trained as an autoencoder of its input, where the input is the output of the previous layer.

One particular approach to pre-training is the use of stacked autoencoders, where each layer of the deep network is treated as an autoencoder of its input (Bengio et al. [2007], see Figure 7.8).

The layers are trained one after the other: the first layer is trained to reconstruct the input of the network with minimal error. When the training of this layer is complete, its weights are fixed, and the outputs of the neurons of the first layer are treated ...
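The greedy layer-by-layer procedure can be sketched in plain NumPy. This is a minimal illustration, not the book's implementation: each layer is trained as a one-hidden-layer autoencoder of its input with sigmoid units and a mean-squared reconstruction error, its encoder weights are then frozen, and its output becomes the training input of the next layer. All function names (`train_autoencoder_layer`, `pretrain_stack`) and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder_layer(X, n_hidden, epochs=200, lr=0.5, seed=0):
    """Train one layer as an autoencoder of its input X (n_samples x n_in).

    Illustrative sketch: sigmoid encoder/decoder, MSE loss, batch gradient
    descent. Returns only the encoder parameters (W, b); the decoder is
    discarded after pre-training.
    """
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))   # encoder weights
    b = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))  # decoder weights
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        Xr = sigmoid(H @ W2 + b2)     # decode: reconstruction of X
        err = Xr - X
        # Backpropagate the reconstruction error through decoder and encoder.
        d2 = err * Xr * (1.0 - Xr)
        d1 = (d2 @ W2.T) * H * (1.0 - H)
        W2 -= lr * H.T @ d2 / len(X)
        b2 -= lr * d2.mean(axis=0)
        W -= lr * X.T @ d1 / len(X)
        b -= lr * d1.mean(axis=0)
    return W, b

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise pre-training of a stack of autoencoders.

    Each layer is trained to reconstruct the (fixed) output of the previous
    layer; its weights are then frozen and its activations feed the next layer.
    """
    params = []
    H = X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder_layer(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)  # frozen layer's output is the next input
    return params
```

The returned `params` would initialize the corresponding layers of the deep network before supervised fine-tuning of the whole stack.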
