Gradient descent

We will now make a full implementation of the training method for our NN in the form of batch-stochastic gradient descent (BSGD). Let's think about what this means, word by word. Batch means that the algorithm operates on a small subset of the training samples at a time, rather than on the entire dataset at once, while stochastic indicates that each batch is chosen at random. Gradient means that we will be using the gradient from calculus, which here is the collection of partial derivatives of the loss function with respect to every weight and bias. Finally, descent means that we are trying to reduce the loss function; we do this by iteratively making small adjustments to the weights and biases, subtracting the gradient (typically scaled by a small learning rate) at each step.
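To make the terminology concrete before we turn to the GPU, here is a minimal CPU-side sketch of the same idea in plain NumPy. This is not the book's CUDA implementation: the function name, hyperparameters, and the finite-difference approximation of the gradient are illustrative assumptions, chosen only so the batch, stochastic, gradient, and descent steps are each visible in code.

    import numpy as np

    def batch_sgd(weights, loss_fn, x_train, y_train,
                  batch_size=32, learning_rate=0.01,
                  num_iterations=100, delta=1e-5, rng=None):
        """Train a flat array of weights with batch-stochastic gradient descent.

        loss_fn(weights, x_batch, y_batch) must return a scalar loss. The
        gradient is approximated with central differences so the sketch works
        for any loss function, at the cost of speed.
        """
        rng = np.random.default_rng() if rng is None else rng
        weights = np.asarray(weights, dtype=np.float64).copy()

        for _ in range(num_iterations):
            # stochastic: draw a random batch of training samples
            idx = rng.choice(len(x_train), size=batch_size, replace=False)
            x_batch, y_batch = x_train[idx], y_train[idx]

            # gradient: partial derivative of the loss w.r.t. each weight
            grad = np.zeros_like(weights)
            for i in range(weights.size):
                w_plus, w_minus = weights.copy(), weights.copy()
                w_plus.flat[i] += delta
                w_minus.flat[i] -= delta
                grad.flat[i] = (loss_fn(w_plus, x_batch, y_batch)
                                - loss_fn(w_minus, x_batch, y_batch)) / (2 * delta)

            # descent: step against the gradient to reduce the loss
            weights -= learning_rate * grad

        return weights

Evaluating the loss once per weight like this is expensive, which is exactly the kind of repetitive, data-parallel work we want to push onto the GPU; the NumPy version above is only meant to show the structure of the algorithm.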

Remember from ...
