Conclusion

In this chapter, we discussed CycleGAN, an algorithm that can be used for image translation. In CycleGAN, the source and target data do not need to be aligned. We demonstrated two examples, grayscale→color and MNIST→SVHN, though there are many other image translations that CycleGAN can perform.
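A key idea that lets CycleGAN work with unaligned data is the cycle-consistency loss: an image translated to the target domain and back should match the original. As a minimal NumPy sketch (the book's actual implementation uses Keras; the function name, shapes, and noise here are illustrative assumptions):

```python
import numpy as np

def cycle_consistency_loss(x_source, x_reconstructed):
    """Mean absolute error (L1) between a source image and its
    round-trip (source -> target -> source) reconstruction."""
    return np.mean(np.abs(x_source - x_reconstructed))

# Hypothetical round trip: a batch of one 32x32 RGB "source" image
rng = np.random.default_rng(0)
x = rng.random((1, 32, 32, 3))
# Stand-in for an imperfect reconstruction by the two generators
x_hat = x + 0.01 * rng.standard_normal(x.shape)
loss = cycle_consistency_loss(x, x_hat)
```

A perfect round trip drives this loss to zero, which is what constrains the generators even without paired source and target images.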

In the next chapter, we'll explore another type of generative model, the Variational AutoEncoder (VAE). VAEs have a similar objective of learning how to generate new data, but they focus on learning a latent vector modeled as a Gaussian distribution. We'll also demonstrate parallels with the problems addressed by GANs, in the form of conditional VAEs and the disentangling of latent representations in VAEs.
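To preview the VAE idea, sampling a latent vector from a learned Gaussian is typically done with the reparameterization trick, z = μ + σ·ε with ε ~ N(0, I), so gradients can flow through μ and σ. A minimal NumPy sketch (shapes and names are illustrative assumptions; the next chapter builds this in Keras):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps,
    where sigma = exp(log_var / 2) and eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical batch of 4 latent vectors with dimension 16
rng = np.random.default_rng(42)
mu = np.zeros((4, 16))       # encoder-predicted means
log_var = np.zeros((4, 16))  # log variance 0 -> unit variance
z = reparameterize(mu, log_var, rng)
```

As the predicted variance shrinks toward zero, the sampled z collapses to μ, which is how the encoder trades off stochasticity against reconstruction accuracy.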
