Evaluation

Evaluation is the next important task once the model has been developed. It lets you decide whether the model performs well on the given dataset and whether it will generalize to data it has never seen. The evaluation framework mostly provides the following features:

  • Error estimation: Errors are estimated with the holdout or interleaved test-then-train (prequential) methods; k-fold cross-validation is also used.
  • Performance measures: The Kappa statistic is used, as it accounts for the class distribution and is therefore better suited to evaluating streaming classifiers.
  • Statistical validation: When comparing classifiers, we must distinguish differences that arise by chance from real ones. McNemar's test is the most popular test in streaming, used to assess ...
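The ideas above can be sketched in plain Java. This is a minimal, self-contained illustration, not a real framework API: the `SimpleMajorityLearner`, the synthetic skewed stream, and all method names are hypothetical stand-ins. It runs an interleaved test-then-train loop (each instance is tested on first, then used for training), accumulates a confusion matrix, and computes the Kappa statistic; a small McNemar helper with continuity correction is included for comparing two classifiers' disagreement counts.

```java
import java.util.Random;

public class PrequentialEval {

    // A trivial baseline learner that predicts the majority class seen so far.
    static class SimpleMajorityLearner {
        int[] counts = new int[2];
        int predict() {
            return counts[1] > counts[0] ? 1 : 0;  // ties go to class 0
        }
        void train(int label) {
            counts[label]++;
        }
    }

    // Kappa statistic from a 2x2 confusion matrix m[actual][predicted]:
    // kappa = (p0 - pc) / (1 - pc), where p0 is the observed accuracy and
    // pc is the accuracy of a chance classifier with the same marginals.
    static double kappa(int[][] m) {
        double n = m[0][0] + m[0][1] + m[1][0] + m[1][1];
        double p0 = (m[0][0] + m[1][1]) / n;
        double pc = ((m[0][0] + m[0][1]) * (m[0][0] + m[1][0])
                   + (m[1][0] + m[1][1]) * (m[0][1] + m[1][1])) / (n * n);
        return (p0 - pc) / (1 - pc);
    }

    // McNemar chi-square statistic (with continuity correction) from the
    // counts b and c of instances on which exactly one of two classifiers errs.
    static double mcnemar(int b, int c) {
        double d = Math.abs(b - c) - 1;
        return d * d / (b + c);
    }

    public static void main(String[] args) {
        SimpleMajorityLearner learner = new SimpleMajorityLearner();
        int[][] confusion = new int[2][2];   // [actual][predicted]
        Random rnd = new Random(42);

        // Interleaved test-then-train: every instance is first used for
        // testing, then for training, so no separate holdout set is needed.
        for (int i = 0; i < 10_000; i++) {
            int label = rnd.nextDouble() < 0.7 ? 1 : 0;  // skewed toy stream
            int predicted = learner.predict();            // test first
            confusion[label][predicted]++;
            learner.train(label);                         // then train
        }

        double n = 10_000;
        double accuracy = (confusion[0][0] + confusion[1][1]) / n;
        System.out.printf("accuracy = %.3f, kappa = %.3f%n",
                accuracy, kappa(confusion));
    }
}
```

On this skewed stream the majority-class learner reaches roughly 70% accuracy, yet its Kappa is close to zero, which is exactly why Kappa is preferred over raw accuracy for streaming data with imbalanced classes.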
