Conclusion

In this chapter, we've presented various improvements to the original GAN algorithm, which was first introduced in the previous chapter. WGAN proposed an algorithm to improve the stability of training by using the Earth Mover's Distance (EMD), or Wasserstein 1, loss. LSGAN argued that the original cross-entropy loss of GANs is prone to vanishing gradients, unlike the least squares loss, and proposed an algorithm to achieve stable training and quality outputs. ACGAN convincingly improved the quality of the conditional generation of MNIST digits by requiring the discriminator to perform a classification task on top of determining whether the input image is fake or real.
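
Since the book's examples are written against tf.keras, the following is a minimal sketch of how these three losses can be expressed in code. The function names and the label convention for the WGAN critic are illustrative assumptions, not the chapter's actual listings:

    import tensorflow as tf
    from tensorflow.keras import backend as K

    def wasserstein_loss(y_true, y_pred):
        # WGAN critic loss approximating the EMD / Wasserstein 1 distance.
        # Assumed convention: real samples labeled +1, fake samples -1, so
        # minimizing the negative mean widens the critic's score gap.
        return -K.mean(y_true * y_pred)

    # LSGAN swaps binary cross-entropy for least squares (MSE), whose
    # gradients do not vanish for samples far from the decision boundary.
    lsgan_loss = tf.keras.losses.MeanSquaredError()

    # ACGAN's discriminator has two output heads, so it is compiled with
    # two losses: real/fake discrimination plus auxiliary class prediction.
    acgan_losses = ['binary_crossentropy', 'categorical_crossentropy']

In an ACGAN, the two-element loss list would be passed to the discriminator's compile step together with two sets of labels, one for real/fake and one for the class.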

In the next chapter, we'll study how to control the attributes of generator outputs. Whilst CGAN and ACGAN ...
