How it works...

Again, we used the housing data so that we can compare this method with ridge regression and show that lasso not only shrinks the parameters, as ridge regression does, but goes further and drives the parameters that do not contribute significantly all the way to zero.

The signature for this method's constructor is as follows:

new LassoWithSGD()

Defaults for the parameters:

  • stepSize = 1.0
  • numIterations = 100
  • regParam = 0.01
  • miniBatchFraction = 1.0
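The following is a minimal sketch (not the book's full recipe) of how these defaults map onto the constructor form; `trainingData` is an assumed RDD[LabeledPoint] built from the housing data, and the explicit setter calls simply restate the defaults listed above:

```scala
import org.apache.spark.mllib.regression.{LabeledPoint, LassoModel, LassoWithSGD}
import org.apache.spark.rdd.RDD

// trainingData is assumed to be prepared elsewhere from the housing data
def trainLasso(trainingData: RDD[LabeledPoint]): LassoModel = {
  val lasso = new LassoWithSGD()       // constructed with the defaults listed above
  lasso.optimizer
    .setStepSize(1.0)                  // default step size
    .setNumIterations(100)             // default number of SGD iterations
    .setRegParam(0.01)                 // default L1 regularization parameter
    .setMiniBatchFraction(1.0)         // default: use the full batch each iteration
  lasso.run(trainingData)              // returns a LassoModel
}
```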

As a reminder, ridge regression reduces the parameters' weights but does not eliminate them. When dealing with a huge number of parameters without a deep learning system in data mining/machine learning, lasso is usually preferred for reducing the number of inputs in the early stages of an ML pipeline, at ...
