Weight and bias initialization

How the weights and biases of the hidden layers are initialized is an important hyperparameter choice:

  • Do not do all-zero initialization: Setting all the initial weights to zero may sound reasonable, but it does not work in practice. If every neuron starts with identical weights, every neuron computes the same output and receives the same gradient update, so there is no source of asymmetry between neurons and they can never learn different features.
  • Small random numbers: Instead, initialize the weights to small random values that are close to, but not identically, zero. These are commonly drawn from a zero-mean Gaussian or from a uniform distribution over a small interval.
  • Initializing the biases: It is possible, and common, to initialize the biases to ...
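The small-random-numbers scheme above can be sketched in plain Scala. This is an illustrative snippet, not code from the book: the helper names `smallRandomWeights` and `zeroBiases` and the 0.01 scale factor are assumptions chosen for the example.

```scala
import scala.util.Random

object WeightInit {
  // Weights: small zero-mean Gaussian values, scaled down so no neuron
  // starts with the same parameters as another (breaking the symmetry
  // that all-zero initialization would create). The 0.01 scale is a
  // common illustrative choice, not prescribed by the text.
  def smallRandomWeights(rows: Int, cols: Int,
                         scale: Double = 0.01,
                         seed: Long = 42L): Array[Array[Double]] = {
    val rng = new Random(seed)
    Array.fill(rows, cols)(rng.nextGaussian() * scale)
  }

  // Biases: starting them all at zero is fine, because the random
  // weights already provide the asymmetry between neurons.
  def zeroBiases(n: Int): Array[Double] = Array.fill(n)(0.0)

  def main(args: Array[String]): Unit = {
    val w = smallRandomWeights(3, 4)
    val b = zeroBiases(4)
    println(w.map(_.map(x => f"$x%.4f").mkString(" ")).mkString("\n"))
    println(b.mkString(" "))
  }
}
```

Note that biases, unlike weights, may safely share the same starting value: symmetry only has to be broken in the weights.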
