How it works...

To recap, both examples followed the same five steps:

  1. Created the data. Both examples fed their data into the graph through placeholders.
  2. Initialized placeholders and variables. The placeholders for the data were nearly identical in both examples, as were the variables: each model used a multiplicative matrix, A, but the classification algorithm added a bias term to find the split in the data.
  3. Created a loss function: the L2 loss for regression and the cross-entropy loss for classification.
  4. Defined an optimization algorithm. Both algorithms used gradient descent.
  5. Iterated over random batches of the data to update our variables, as sketched in the example after this list.

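Putting the five steps together, the following is a minimal sketch of the regression variant, assuming the TensorFlow 1.x graph API used throughout these recipes; the synthetic data, the learning rate of 0.02, and the batch size of 25 are illustrative choices rather than the recipe's exact values.

```python
import numpy as np
import tensorflow as tf  # TF 1.x graph API; in TF 2.x use tf.compat.v1 with eager execution disabled

# 1. Create the data (illustrative: points roughly following y = 3x plus noise).
x_vals = np.random.normal(1.0, 0.1, 100).astype(np.float32)
y_vals = 3.0 * x_vals + np.random.normal(0.0, 0.05, 100).astype(np.float32)

# 2. Initialize placeholders (data feeds) and variables (model parameters).
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
A = tf.Variable(tf.random_normal(shape=[1, 1]))  # multiplicative matrix
model_output = tf.matmul(x_data, A)              # regression model: no bias term

# 3. Create the loss function: L2 loss for regression.
#    (The classification recipe would instead add a bias term and apply
#    tf.nn.sigmoid_cross_entropy_with_logits to the model output.)
loss = tf.reduce_mean(tf.square(model_output - y_target))

# 4. Define the optimization algorithm: gradient descent.
opt = tf.train.GradientDescentOptimizer(learning_rate=0.02)
train_step = opt.minimize(loss)

# 5. Iterate over random data samples, updating the variables each step.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        rand_index = np.random.choice(len(x_vals), size=25)
        rand_x = x_vals[rand_index].reshape(-1, 1)
        rand_y = y_vals[rand_index].reshape(-1, 1)
        sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    print('Estimated A:', sess.run(A))
```

Swapping the loss function and adding a bias variable are the only structural changes needed to turn this regression sketch into the classification version; the placeholder, optimizer, and training-loop code stay the same.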