Evaluating the held-out dataset

In Chapter 5, Model Creation, we evaluated the performance of our different models on a slice of the training datasource. We obtained an AUC score for each model and selected the model with the best score. We relied on Amazon ML to create the validation set by splitting the training dataset into two parts: 70% of the data for training and 30% for validation. We could have done that split ourselves, created the validation datasource, and specified which datasource to use for the evaluation of the model, as sketched below.
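Here is a minimal sketch of doing that split ourselves with boto3's Amazon ML client, using the DataRearrangement parameter to select a percentage range of rows. The datasource IDs, names, and S3 paths are hypothetical placeholders, not values from the book's project:

```python
import boto3

ml = boto3.client("machinelearning")

# Common data specification; the S3 locations are hypothetical.
common_spec = {
    "DataLocationS3": "s3://my-bucket/training-data.csv",
    "DataSchemaLocationS3": "s3://my-bucket/training-data.csv.schema",
}

# First 70% of the rows for training.
ml.create_data_source_from_s3(
    DataSourceId="ds-training-70",
    DataSourceName="Training split (70%)",
    DataSpec=dict(
        common_spec,
        DataRearrangement='{"splitting":{"percentBegin":0,"percentEnd":70}}',
    ),
    ComputeStatistics=True,  # required for a datasource used in training
)

# Remaining 30% held back for validation.
ml.create_data_source_from_s3(
    DataSourceId="ds-validation-30",
    DataSourceName="Validation split (30%)",
    DataSpec=dict(
        common_spec,
        DataRearrangement='{"splitting":{"percentBegin":70,"percentEnd":100}}',
    ),
    ComputeStatistics=False,  # not needed for an evaluation datasource
)
```

The validation datasource created this way can then be passed explicitly when creating the model's evaluation, rather than letting Amazon ML perform the split internally.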

In fact, nothing prevents us from running a model evaluation on the held-out dataset. If you go to the model summary page, you will notice a Perform another Evaluation button in the Evaluation section.
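The same operation can be done programmatically. The following is a minimal sketch, assuming a datasource for the held-out dataset already exists; the model and datasource IDs are hypothetical placeholders:

```python
import boto3

ml = boto3.client("machinelearning")

# Equivalent of clicking Perform another Evaluation, pointing the
# evaluation at the held-out datasource instead of the validation split.
ml.create_evaluation(
    EvaluationId="ev-held-out",
    EvaluationName="Evaluation on held-out data",
    MLModelId="ml-my-model",              # hypothetical model ID
    EvaluationDataSourceId="ds-held-out", # hypothetical held-out datasource ID
)

# Evaluations run asynchronously; once the evaluation has completed,
# the AUC for a binary model is reported under BinaryAUC.
evaluation = ml.get_evaluation(EvaluationId="ev-held-out")
print(evaluation["PerformanceMetrics"]["Properties"].get("BinaryAUC"))
```

Because the held-out data was never seen during training or model selection, the AUC obtained here gives a more honest estimate of how the model will perform on new data.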
