Implementation of the cross-entropy loss

Now, let's implement what is known as the cross-entropy loss function. This is used to measure how accurate an NN is on a small subset of data points during the training process: the bigger the value output by our loss function, the more inaccurate our NN is at properly classifying the given data. We do this by averaging, over every output element, the standard binary cross-entropy between the expected output and the actual output of the NN, that is, the negative logarithm of the probability the NN assigns to the correct label. For numerical stability, we will cap each per-element entropy term at a maximum value of 1, since the logarithm of a prediction near 0 would otherwise diverge:
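Written out, and assuming the truncated listing below computes the standard element-wise binary cross-entropy, the averaged quantity is the following (with p_ij the NN's prediction, y_ij the expected output, and N the total number of elements; the cap at 1 is omitted here for readability):

\[
H(p, y) = -\frac{1}{N} \sum_{i,j} \Big[ \, y_{ij} \log p_{ij} + (1 - y_{ij}) \log (1 - p_{ij}) \, \Big]
\]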

MAX_ENTROPY = 1

def cross_entropy(predictions=None, ground_truth=None):
    if predictions is None or ground_truth is None:
        raise Exception("Error! Both predictions and ground truth must be float32 arrays")
    p = np.array(predictions).copy()
    ...
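The listing is truncated here. As a rough guide only, a minimal vectorized completion of the function, assuming the remaining body computes the clamped element-wise binary cross-entropy described above (the shape check and the use of np.nan_to_num and np.minimum are our assumptions, not necessarily the book's exact code):

import numpy as np

MAX_ENTROPY = 1

def cross_entropy(predictions=None, ground_truth=None):
    # Both arrays are required; fail loudly rather than guess.
    if predictions is None or ground_truth is None:
        raise Exception("Error! Both predictions and ground truth must be float32 arrays")

    p = np.array(predictions).copy()
    y = np.array(ground_truth).copy()

    # Assumed check: the two arrays must line up element for element.
    if p.shape != y.shape:
        raise Exception("Error! Predictions and ground truth must have the same shape")

    # Element-wise binary cross-entropy: -[y*log(p) + (1 - y)*log(1 - p)].
    # nan_to_num replaces the infinities/NaNs produced by log(0), and the
    # cap at MAX_ENTROPY keeps any single term from dominating the sum.
    entropy = np.nan_to_num(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
    entropy = np.minimum(entropy, MAX_ENTROPY)

    # Average over all elements so the loss is comparable across batch sizes.
    return np.sum(entropy) / p.size

A quick check on a toy batch: confident, correct predictions give a loss near 0, while confidently wrong ones are driven up to the cap:

preds = np.array([[0.9, 0.1], [0.1, 0.9]], dtype=np.float32)
truth = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)
print(cross_entropy(preds, truth))        # ~0.105
print(cross_entropy(1.0 - preds, truth))  # 1.0, clamped at MAX_ENTROPY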
