Multiplying trials

The evaluation scores of the various models and dataset versions depend, to a certain extent, on the particular samples that land in the evaluation sets. If we run the following experiment several times on each of the three datasets, we observe some variation in the scores:

  • Shuffle the dataset and split it into three parts -- training, evaluation, and held-out -- then create the respective datasources
  • Train a model on the training datasource, keeping the default Amazon ML settings (mild L2 regularization)
  • Evaluate the model on the evaluation and held-out datasources (a local sketch of this trial loop follows the list)
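
To make the score variation concrete, here is a minimal local sketch of one such trial loop, using scikit-learn as a stand-in for the Amazon ML service. The synthetic dataset, the 60/20/20 split ratios, and the logistic regression with a weak L2 penalty are all assumptions chosen to mirror the protocol above, not the book's actual data or the Amazon ML API:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the book's dataset (hypothetical placeholder).
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

    def run_trial(seed):
        # Shuffle and split: 60% training, 20% evaluation, 20% held-out.
        X_train, X_rest, y_train, y_rest = train_test_split(
            X, y, test_size=0.4, shuffle=True, random_state=seed)
        X_eval, X_held, y_eval, y_held = train_test_split(
            X_rest, y_rest, test_size=0.5, shuffle=True, random_state=seed)

        # Mild L2 regularization (a large C means a weak penalty), echoing
        # the default Amazon ML learner settings described above.
        model = LogisticRegression(penalty="l2", C=100.0, max_iter=1000)
        model.fit(X_train, y_train)

        # AUC on both the evaluation and the held-out sets.
        auc_eval = roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])
        auc_held = roc_auc_score(y_held, model.predict_proba(X_held)[:, 1])
        return auc_eval, auc_held

    # Repeating the trial with different shuffles exposes the score variation.
    scores = np.array([run_trial(seed) for seed in range(10)])
    print("evaluation AUC: %.4f +/- %.4f" % (scores[:, 0].mean(), scores[:, 0].std()))
    print("held-out AUC:   %.4f +/- %.4f" % (scores[:, 1].mean(), scores[:, 1].std()))

Each call to run_trial reshuffles the data with a different seed before splitting, so the reported mean and standard deviation of the AUC show how much the scores move from one split to the next.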

The following plot shows the respective performances of the three models over several trials, with the average AUC written on the graph. We see that on average, the extended dataset performs ...
