Cost of a sparse autoencoder based on KL Divergence

Compared with the earlier autoencoders in this chapter, the cost function is redefined with two new components: a weighting coefficient, sparse_reg, and a sparsity penalty computed by the kl_divergence method:

self.cost = 0.5 * tf.reduce_sum(
    tf.pow(tf.subtract(self.reconstruction, self.x), 2.0)
) + self.sparse_reg * self.kl_divergence(self.sparsity_level, self.hidden_layer)
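The kl_divergence implementation itself is not shown here. The standard sparsity penalty it refers to is KL(ρ ‖ ρ̂) = ρ log(ρ/ρ̂) + (1 − ρ) log((1 − ρ)/(1 − ρ̂)), summed over the hidden units, where ρ is the target sparsity level and ρ̂ the mean activation of each hidden unit. A minimal NumPy sketch of that penalty might look as follows (the clipping epsilon and function signature are assumptions for illustration, not the book's code):

```python
import numpy as np

def kl_divergence(sparsity_level, hidden_activations):
    """Sparsity penalty: KL divergence between the target sparsity
    level rho and the observed mean activations rho_hat, summed
    over all hidden units."""
    rho = sparsity_level
    rho_hat = np.asarray(hidden_activations, dtype=np.float64)
    # Clip to avoid log(0) when an activation saturates at 0 or 1.
    eps = 1e-8
    rho_hat = np.clip(rho_hat, eps, 1.0 - eps)
    return np.sum(
        rho * np.log(rho / rho_hat)
        + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat))
    )
```

The penalty is zero when every hidden unit's mean activation equals the target sparsity level, and grows as activations drift away from it, which is what pushes the hidden layer toward sparse codes during training.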
