Optimizers

In the previous section, we explored various activation functions and noticed that the ReLU activation function gives better results when run for a large number of epochs.

In this section, we will look at the impact of varying the optimizer, while keeping the activation function as ReLU, on the scaled dataset.
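As an illustration, here is a minimal sketch of how such a comparison might be set up; it is not the book's exact code, and it assumes a Keras Sequential model with a single ReLU hidden layer trained on a scaled dataset such as MNIST:

import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten

# Load and scale the dataset (assumption: MNIST scaled to the [0, 1] range)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_model():
    # Same ReLU-based architecture is reused for every optimizer
    return tf.keras.Sequential([
        Flatten(input_shape=(28, 28)),
        Dense(512, activation='relu'),
        Dense(10, activation='softmax'),
    ])

# Train the same architecture with each optimizer and compare test accuracy
for optimizer in ['sgd', 'rmsprop', 'adam']:
    model = build_model()
    model.compile(optimizer=optimizer,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=10, verbose=0)
    _, accuracy = model.evaluate(x_test, y_test, verbose=0)
    print(f'{optimizer}: test accuracy = {accuracy:.4f}')

The hidden-layer size and training details are assumptions; the point of the sketch is that only the optimizer argument changes between runs.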

The various optimizers and their corresponding accuracies on the test dataset when run for 10 epochs are as follows:

Optimizer    Test dataset accuracy
SGD          88%
RMSprop      98.44%
Adam         98.4%
Now that we have seen that the RMSprop and Adam optimizers perform better than the stochastic gradient descent optimizer, let's look at the other parameter within an optimizer that can be modified to improve the accuracy ...
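As one hypothetical illustration (the specific parameter discussed next is not shown in this excerpt), Keras optimizers can be constructed as objects with explicit hyperparameters, such as the learning rate, rather than being passed to compile() by name:

import tensorflow as tf

# Hypothetical example: constructing optimizers with an explicit learning rate
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=0.001)
adam = tf.keras.optimizers.Adam(learning_rate=0.001)

# The optimizer object is then passed to compile() in place of the string name:
# model.compile(optimizer=rmsprop,
#               loss='sparse_categorical_crossentropy',
#               metrics=['accuracy'])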
