Making real-time predictions

With batch predictions, you submit all the samples you want the model to predict at once to Amazon ML by creating a datasource. With real-time predictions, also called streaming or online predictions, you send one sample at a time to an API endpoint (a URL) via HTTP queries and receive back a prediction and related information for each sample.

Setting up real-time predictions for a model consists of knowing the prediction API endpoint URL and writing a script that reads your data, sends each new sample to that URL, and retrieves the predicted class or value. We will present a Python-based example in the following section.
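
As a quick preview, here is a minimal sketch of such a call using the boto3 machinelearning client. The model ID, endpoint URL, and feature names are placeholders for illustration only; substitute the values shown for your own model in the Amazon ML console.

```python
# Minimal sketch of a real-time prediction call against Amazon ML.
# The model ID, endpoint URL, and feature names are placeholders.
import boto3

client = boto3.client('machinelearning', region_name='us-east-1')

# The model's real-time endpoint must be enabled beforehand, e.g.:
# client.create_realtime_endpoint(MLModelId='ml-XXXXXXXXXXX')

response = client.predict(
    MLModelId='ml-XXXXXXXXXXX',          # placeholder model ID
    Record={                             # one sample; all values passed as strings
        'feature_1': '42',
        'feature_2': 'some text',
    },
    PredictEndpoint='https://realtime.machinelearning.us-east-1.amazonaws.com'
)

# For binary or multiclass models the response carries a predicted label and
# per-class scores; for regression models it carries a predicted value.
prediction = response['Prediction']
print(prediction.get('predictedLabel'), prediction.get('predictedScores'))
```

In practice the script would loop over incoming samples, issuing one such call per record and collecting the returned predictions.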

Amazon ML also offers a way to make predictions on data you create on ...
