To recap, in both examples we did the following:
- Created the data. Both examples needed to load data through a placeholder.
- Initialized placeholders and variables. Both examples used very similar placeholders for the data, and both had a multiplicative matrix variable, A; the classification algorithm additionally had a bias term to find the split in the data.
- Created a loss function. We used the L2 loss for regression and the cross-entropy loss for classification.
- Defined an optimization algorithm. Both examples used gradient descent.
- Sampled random batches of the data and iteratively updated our variables.
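The steps above can be sketched end to end. Since the original examples fed data through TensorFlow placeholders inside a session, this is only a hedged NumPy re-creation of the same loop for the regression case; the data-generating slope of 10, the batch size, and the learning rate here are illustrative assumptions, not values taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Create the data: y = 10 * x + noise (slope of 10 is an assumed example)
x_data = rng.uniform(0.0, 1.0, size=100)
y_data = 10.0 * x_data + rng.normal(0.0, 0.1, size=100)

# 2. Initialize the model variable A (the regression example had no bias term)
A = rng.normal()

# 3-5. L2 loss, optimized by gradient descent over random batches
learning_rate = 0.05
for step in range(200):
    # Draw a random batch each iteration, as in the training loop above
    idx = rng.integers(0, 100, size=25)
    x_batch, y_batch = x_data[idx], y_data[idx]
    pred = A * x_batch
    # Gradient of the L2 loss: d/dA mean((A*x - y)^2) = 2 * mean((A*x - y) * x)
    grad = 2.0 * np.mean((pred - y_batch) * x_batch)
    A -= learning_rate * grad

# After training, A should be close to the true slope of 10
```

In TensorFlow these same steps would be expressed as a placeholder-fed graph with a `GradientDescentOptimizer` minimizing the loss, but the structure of the loop is identical.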