See also

Stochastic Gradient Descent: There are multiple variations of Gradient Descent (GD), with Stochastic Gradient Descent (SGD) being the most widely discussed. Apache Spark supports the SGD variant, in which the parameters are updated using only a subset of the training data, which can be tricky because all parameters must still be updated simultaneously. There are two main differences between SGD and GD. First, SGD is an online learning/optimization technique, whereas GD is an offline learning/optimization technique. Second, SGD converges faster because it does not need to examine the entire dataset before each parameter update. This difference is depicted ...
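To make the per-example update concrete, here is a minimal sketch of SGD in plain Python (not Spark's MLlib implementation), fitting a one-parameter least-squares model y ≈ w · x on a toy dataset; the dataset, learning rate, and epoch count are all illustrative assumptions:

```python
import random

# Minimal SGD sketch (plain Python, not Spark): fit y ≈ w * x by
# least squares. Unlike batch GD, which sums gradients over the whole
# dataset before one update, SGD updates w after EVERY example.
random.seed(0)
data = [(x, 3.0 * x) for x in range(1, 11)]  # toy data; true weight is 3.0

w, lr = 0.0, 0.001          # initial weight and learning rate (assumed values)
for epoch in range(50):
    random.shuffle(data)    # visit examples in random order each epoch
    for x, y in data:
        grad = 2 * (w * x - y) * x   # gradient of (w*x - y)^2 w.r.t. w
        w -= lr * grad               # update immediately, per example

print(round(w, 3))  # converges near 3.0
```

Because every update touches only one example, SGD can start improving the model before a full pass over the data, which is the source of its faster convergence in practice.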
