Overfitting detection – cross-validation

Cross-validation is a model evaluation technique used to assess how well a machine learning algorithm predicts on new data that it has not been trained on. It is not advisable to compare the predictive accuracy of a set of models using the same observations that were used for model estimation; to evaluate the predictive performance of the models, we must therefore use an independent set of data.
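
As a minimal sketch of this idea (not taken from the book's own example), the following base R code fits a linear model on a randomly chosen training subset of the built-in mtcars data and measures its error on the held-out observations; the dataset, the formula, and the 70/30 split are illustrative assumptions.

# Hold-out validation sketch: dataset, formula, and split ratio are
# illustrative placeholders, not the book's example
set.seed(1)
n <- nrow(mtcars)
train_idx <- sample(n, size = round(0.7 * n))   # 70% of rows for training
train_set <- mtcars[train_idx, ]
test_set  <- mtcars[-train_idx, ]
model <- lm(mpg ~ wt + hp, data = train_set)    # fit on the training data only
pred  <- predict(model, newdata = test_set)     # predict on unseen observations
sqrt(mean((test_set$mpg - pred)^2))             # out-of-sample RMSE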

In the cross-validation procedure, the dataset is partitioned: one subset of the data is used to train the algorithm, and the remaining data is used for testing. The subdivision is usually performed at random to ensure that the two parts have the same distribution. Because cross-validation does ...
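
Continuing the same illustrative setup, a k-fold version of the procedure can be sketched in base R as follows; k = 5, the mtcars data, and the lm() formula are again assumptions chosen only for demonstration.

# k-fold cross-validation sketch (k = 5); data and model are placeholders
set.seed(1)
k <- 5
n <- nrow(mtcars)
folds <- sample(rep(1:k, length.out = n))       # random fold assignment
cv_rmse <- sapply(1:k, function(i) {
  train_set <- mtcars[folds != i, ]             # k - 1 folds for training
  test_set  <- mtcars[folds == i, ]             # held-out fold for testing
  fit  <- lm(mpg ~ wt + hp, data = train_set)
  pred <- predict(fit, newdata = test_set)
  sqrt(mean((test_set$mpg - pred)^2))           # RMSE on the held-out fold
})
mean(cv_rmse)                                   # average error across folds

Averaging the error over the folds gives a single estimate of predictive performance in which every observation is used for both training and testing, but never in the same fold.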
