Step 2 – train the stack

After the discriminator has updated its weights, we will train the discriminator and generator together as a single stacked model. When doing so, we will make the discriminator's weights non-trainable, freezing them in place while still allowing the discriminator to backpropagate gradients to the generator so that the generator can update its own weights.
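Here is a minimal Keras sketch of that stacking step, assuming a `generator` that maps a 100-dimensional noise vector to an image and a `discriminator` that was already compiled on its own in step 1; the function name `build_gan` and the optimizer settings are illustrative choices, not the book's exact code:

```python
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam


def build_gan(generator, discriminator, latent_dim=100):
    # Freeze the discriminator so that only the generator's weights
    # are updated when the stacked model is trained. The separately
    # compiled discriminator from step 1 keeps its own trainable copy.
    discriminator.trainable = False

    # Noise vector in, discriminator's real/fake prediction out.
    noise = Input(shape=(latent_dim,))
    fake_image = generator(noise)
    validity = discriminator(fake_image)

    gan = Model(noise, validity)
    gan.compile(loss='binary_crossentropy',
                optimizer=Adam(1e-4, beta_1=0.5))
    return gan
```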

For this step in the training process, the input is a noise vector, which the generator uses to produce an image. That image is then shown to the discriminator, which is asked to predict whether the image is real or fake. The following diagram illustrates this process:

The discriminator will come up with some prediction, which we can call ŷ. The loss function then compares that prediction against a label of 1 (real); because the discriminator's weights are frozen, the resulting gradient updates only the generator, pushing it to produce images that the discriminator scores as real.
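A sketch of this generator update, assuming the `gan` model built above; the helper name `train_generator_step` and the batch size are illustrative:

```python
import numpy as np


def train_generator_step(gan, batch_size=32, latent_dim=100):
    # Sample a batch of noise vectors to feed the stacked model.
    noise = np.random.normal(0, 1, size=(batch_size, latent_dim))

    # Label every generated image as "real" (1). With the discriminator
    # frozen, the binary cross-entropy gradient flows back through it
    # and updates only the generator's weights.
    real_labels = np.ones((batch_size, 1))

    g_loss = gan.train_on_batch(noise, real_labels)
    return g_loss
```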
