Chapter 7. Making It Better

The two-part design for the basic recommender we’ve been discussing is a full-scale system capable of producing high-quality recommendations. As with any machine-learning system, success depends in part on repeated cycles of testing, evaluation, and tuning to achieve the desired results. Evaluation is important not only for deciding when a recommender is ready to be deployed, but also as an ongoing effort in production. By its nature, the model will change over time as it’s exposed to new user histories—in other words, the system learns. A recommender should be evaluated not only on present performance but also on how well it is set up to perform in the future.
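One common way to make such evaluation concrete—offline, against held-out interaction data—is a ranking metric like precision@k. The sketch below is illustrative only; the function and variable names are our own, not a specific method from this book:

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended items the user actually
    interacted with in a held-out test set."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Toy example: of the top 5 recommendations, 3 appear in the
# user's held-out interactions, so precision@5 is 0.6.
recs = ["a", "b", "c", "d", "e"]
held_out = {"a", "c", "e", "z"}
print(precision_at_k(recs, held_out, k=5))  # 0.6
```

Tracking a metric like this over successive retraining cycles is one simple way to watch how the system performs as it continues to learn.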

As we pointed out in Chapter 2, as the developer or project director, you must also decide how good is good enough—or, more specifically, which criteria define success in your situation. There isn’t just one yardstick of quality: trade-offs are individualized, and goals must be set appropriately for the project. For example, the balance between highly accurate or relevant predictions and the need for fast response or a realistic level of development effort may be quite different for a big e-commerce site than for a personalized medicine project. Machine learning is an automated technology, but human insight is required to determine which results are desired and acceptable, and thus what constitutes success.

In practical recommendation, it’s also important to put your effort where it pays off the most. In ...
