
That is the “forest” half. The other half, the “random,” says that during training you don’t give each tree all of the training data; you randomly hold back some rows or some columns. This makes each individual tree a bit dumber than if it had seen all the data. But when their results are averaged together, the whole is more intelligent than any one part.
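A minimal from-scratch sketch of that idea, for illustration only: each tree is trained on a random subset of rows and columns, and the forest prediction is the average of the trees. This is not H2O's implementation (the book's examples use H2O's distributed random forest); it assumes numpy and scikit-learn are available, and the function names and fraction parameters are made up for the example.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_tiny_forest(X, y, n_trees=25, row_frac=0.7, col_frac=0.7, seed=0):
        # Hypothetical helper: builds the "forest" by training many trees,
        # each on randomly held-back rows and columns.
        rng = np.random.default_rng(seed)
        n_rows, n_cols = X.shape
        forest = []
        for _ in range(n_trees):
            # "Randomly hold back some rows": each tree trains on a row subsample.
            rows = rng.choice(n_rows, size=int(row_frac * n_rows), replace=True)
            # "...or hold back some columns": each tree sees only some features.
            cols = rng.choice(n_cols, size=max(1, int(col_frac * n_cols)), replace=False)
            tree = DecisionTreeRegressor().fit(X[np.ix_(rows, cols)], y[rows])
            forest.append((tree, cols))
        return forest

    def predict_tiny_forest(forest, X):
        # Averaging the individually "dumber" trees gives a smarter whole.
        return np.mean([tree.predict(X[:, cols]) for tree, cols in forest], axis=0)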

From Practical Machine Learning with H2O

Note: Random Forest definition