Conclusion

In this chapter, we've covered the principles of variational autoencoders (VAEs). As we learned, VAEs resemble GANs in that both generate synthetic outputs by sampling from a latent space. However, the VAE networks are much simpler and easier to train than GANs. It's also becoming clear how the conditional VAE (CVAE) and the β-VAE are similar in concept to the conditional GAN and the disentangled-representation GAN, respectively.
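To make the generation parallel concrete, sampling from a trained VAE only requires decoding a latent vector drawn from the prior. The following is a minimal sketch; the toy `decoder` and the latent dimension of 2 are hypothetical stand-ins, not the chapter's trained model:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical stand-in for a trained decoder:
# it maps a 2D latent vector to a 28x28 image.
latent_dim = 2
decoder = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28)),
])

# Like a GAN generator, the VAE decoder turns latent samples into
# synthetic outputs: draw z from the N(0, I) prior and decode it.
z = np.random.normal(size=(1, latent_dim))
x_generated = decoder.predict(z)
print(x_generated.shape)  # (1, 28, 28)
```

A CVAE generates in the same way, except that the decoder also receives the condition (for example, a one-hot class label) alongside z.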

VAEs have an intrinsic mechanism to disentangle the latent vectors: the KL loss term already pushes the latent dimensions toward an independent, factorized prior. Therefore, building a β-VAE is straightforward; it amounts to weighting the KL loss by a constant β > 1, as the sketch below shows.
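In code, the only change from a plain VAE is a β factor on the KL term. The sketch below is an illustration under assumptions (flattened `inputs` and `outputs` tensors plus the encoder outputs `z_mean` and `z_log_var`); it is not the chapter's exact loss function:

```python
import tensorflow as tf
from tensorflow import keras

def beta_vae_loss(inputs, outputs, z_mean, z_log_var, beta=4.0):
    """VAE loss with the KL term weighted by beta (beta=1.0 is a plain VAE)."""
    # Reconstruction loss: binary cross-entropy, averaged over pixels by
    # Keras, then rescaled by the input dimension so it sums over pixels.
    original_dim = tf.cast(tf.shape(inputs)[-1], tf.float32)
    reconstruction_loss = keras.losses.binary_crossentropy(inputs, outputs)
    reconstruction_loss *= original_dim
    # KL divergence between the approximate posterior N(z_mean, exp(z_log_var))
    # and the standard normal prior N(0, I), in closed form.
    kl_loss = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
    # beta > 1 puts extra pressure on matching the factorized prior,
    # which is what encourages disentangled latent codes.
    return tf.reduce_mean(reconstruction_loss + beta * kl_loss)
```

Setting beta=1.0 recovers the standard VAE loss; larger values trade some reconstruction quality for more disentangled latent codes.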

We should note, however, that interpretable and disentangled ...
