Stacked training

The DCGAN framework is trained using minibatches, the same way I have previously trained networks in this book. However, when we build the code later, you'll notice that we write a training loop that explicitly controls what happens on each update batch, rather than just calling the model's fit() method and relying on Keras to handle everything for us. I'm doing this because GAN training requires several models to update their weights over the same batch, so it's slightly more complicated than the single parameter update we were doing previously.

Training a DCGAN happens in two steps for each batch.
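As a rough sketch of what such an explicit loop looks like, here is one update batch for a stacked GAN in Keras. The layer sizes and model names here are illustrative toy stand-ins, not the book's actual DCGAN architecture (which uses convolutional layers), and the two-step structure shown, first updating the discriminator, then updating the generator through the frozen-discriminator stack, is the conventional GAN recipe rather than a quote from this chapter:

```python
# A minimal sketch of one GAN update batch, assuming a Keras-style setup.
# Dense toy models stand in for the real DCGAN's convolutional networks.
import numpy as np
from tensorflow import keras

latent_dim = 8
data_dim = 4
batch_size = 4

generator = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(latent_dim,)),
    keras.layers.Dense(data_dim),
])
discriminator = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(data_dim,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model: freeze the discriminator so that only the generator's
# weights change when we train generator and discriminator end to end.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real_batch = np.random.normal(size=(batch_size, data_dim)).astype("float32")
noise = np.random.normal(size=(batch_size, latent_dim)).astype("float32")

# Step 1: update the discriminator on real samples (label 1)
# and on generated samples (label 0).
fake_batch = generator.predict(noise, verbose=0)
d_loss_real = discriminator.train_on_batch(
    real_batch, np.ones((batch_size, 1)))
d_loss_fake = discriminator.train_on_batch(
    fake_batch, np.zeros((batch_size, 1)))

# Step 2: update the generator through the stacked model, labeling its
# fakes as "real" so the generator learns to fool the discriminator.
g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
```

Each pass through this block is one minibatch update; a full training loop simply repeats it over batches and epochs, which is the explicit control that calling fit() alone would not give us.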
