Gradient descent pseudocode

The gradient descent algorithm proceeds as follows:

  1. Let W be some initial value, which can be chosen randomly.
  2. Compute the gradient ∂J/∂W.
  3. If |∂J/∂W| < t, where t is some predefined threshold value, EXIT. We have found the weight vector that minimizes the error in the predicted output.
  4. Update W: W = W - s (∂J/∂W), then return to step 2 [s is called the learning rate. It must be chosen carefully: if it is too large, the update will overshoot and we will miss the minimum; if it is too small, convergence will take too many iterations]. A minimal code sketch of these steps follows this list.
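
As a concrete illustration, here is a minimal NumPy sketch of these four steps. The function names, default values, and the quadratic example objective are illustrative assumptions, not code from this book:

  import numpy as np

  def gradient_descent(grad_J, W, s=0.01, t=1e-6, max_iters=10000):
      """Minimize J by following its gradient (a sketch, not the book's code).

      grad_J    -- function returning the gradient dJ/dW at a given W
      W         -- initial weight vector (chosen randomly, step 1)
      s         -- learning rate
      t         -- predefined threshold on the gradient norm (step 3)
      max_iters -- safety cap on the number of iterations (an assumption)
      """
      for _ in range(max_iters):
          g = grad_J(W)              # step 2: compute the gradient dJ/dW
          if np.linalg.norm(g) < t:  # step 3: gradient small enough, EXIT
              break
          W = W - s * g              # step 4: update W, then repeat from step 2
      return W

  # Example: J(W) = ||W||^2 has gradient 2W and its minimum at W = 0.
  W_min = gradient_descent(lambda W: 2 * W, W=np.random.randn(3))
  print(W_min)  # close to [0, 0, 0]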

So far, we have traversed the ANN in one direction; this is termed forward propagation. The ultimate goal in training the ANN is to derive the weights on each of the connections between the nodes ...
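
To make forward propagation concrete, here is a small sketch of one fully connected layer; the layer sizes, the sigmoid activation, and the variable names are assumptions for illustration, not the network used in this book:

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  # Illustrative layer: 4 inputs feeding 3 neurons.
  rng = np.random.default_rng(0)
  W = rng.standard_normal((3, 4))  # weights on the connections between nodes
  b = rng.standard_normal(3)       # bias for each neuron
  x = rng.standard_normal(4)       # input vector

  # Forward propagation: weighted sum of the inputs, then the activation.
  a = sigmoid(W @ x + b)
  print(a)  # the layer's output, passed forward to the next layer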
