Safe choices for GANs

I've previously mentioned Soumith Chintala's GAN hacks repository on GitHub (https://github.com/soumith/ganhacks), which is an excellent place to start when you're trying to make your GAN stable. Now that we've talked about how difficult it can be to train a stable GAN, let's talk about some of the safe choices you can find there that are likely to help you succeed. While there are quite a few hacks in the repository, here are my top recommendations that haven't already been covered in this chapter:

  • Batch norm: When using batch normalization, construct separate minibatches for real data and fake data, and make the updates on each minibatch separately, so that the batch statistics are never computed over a mix of the two (see the first sketch following this list).
  • Leaky ReLU: Leaky ReLU is a variation of the ReLU activation function. Recall that the ReLU function is f(x) = max(0, x). Leaky ReLU instead allows a small, non-zero output for negative inputs: f(x) = x for x > 0 and f(x) = αx otherwise, where α is a small constant such as 0.2. Because it avoids the sparse gradients that ReLU produces, it tends to be a safer choice in both the generator and the discriminator (see the second sketch following this list).
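
The following is a minimal sketch of what the first recommendation looks like in practice, assuming a compiled Keras discriminator and generator already exist; the function name train_discriminator_step and the names x_train, batch_size, and latent_dim are placeholders for illustration. The important detail is that train_on_batch is called once on an all-real minibatch and once on an all-fake minibatch, so batch normalization statistics are computed for each minibatch type separately.

```python
import numpy as np

def train_discriminator_step(discriminator, generator, x_train,
                             batch_size=64, latent_dim=100):
    # Real minibatch: sample real images and update on them alone
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_images = x_train[idx]
    real_labels = np.ones((batch_size, 1))
    d_loss_real = discriminator.train_on_batch(real_images, real_labels)

    # Fake minibatch: generate images and update on them alone
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise)
    fake_labels = np.zeros((batch_size, 1))
    d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)

    # Report the average of the two separate updates
    return 0.5 * np.add(d_loss_real, d_loss_fake)
```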

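And here is a short sketch of the second recommendation, using Keras's built-in LeakyReLU layer in a small discriminator-style stack. The layer sizes, the input_dim of 784 (a flattened 28 x 28 image), and the alpha of 0.2 are illustrative choices, not requirements. Note that LeakyReLU is added as its own layer rather than passed as the activation argument of Dense.

```python
from keras.layers import Dense, LeakyReLU
from keras.models import Sequential

# Discriminator-style stack using Leaky ReLU instead of ReLU
model = Sequential()
model.add(Dense(512, input_dim=784))
model.add(LeakyReLU(alpha=0.2))  # small slope for negative inputs
model.add(Dense(256))
model.add(LeakyReLU(alpha=0.2))
model.add(Dense(1, activation='sigmoid'))  # real/fake probability
```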