Evaluating a model

Because statistical topic modeling is unsupervised in nature, model selection is difficult. For some applications there may be an extrinsic task at hand, such as information retrieval or document classification, whose performance can be measured directly. In general, however, we want to estimate the model's ability to generalize to unseen documents, regardless of any particular task.

In 2009, Wallach et al. introduced an approach that measures the quality of a model by computing the log probability of held-out documents under that model ("Evaluation Methods for Topic Models," ICML 2009). The likelihood of unseen documents can then be used to compare models: the higher the likelihood, the better the model.
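Formally (the notation below is mine, not part of the original text), the held-out score is the sum of the log marginal probabilities of the test documents' words under the trained model:

    \mathcal{L}(D_{\text{test}}) = \sum_{d \in D_{\text{test}}} \log p(\mathbf{w}_d \mid \Phi, \alpha)

Here \Phi denotes the learned topic-word distributions and \alpha the document-topic prior. Computing this quantity exactly requires summing over all possible topic assignments for each document, which is intractable, so Wallach et al. propose sampling-based estimators such as the left-to-right algorithm used in the sketch further below.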

We will evaluate the model using the following steps:

  1. Let's split the documents into training and test sets, as in the sketch below.
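
The steps above map directly onto MALLET's topic modeling API. What follows is a minimal sketch, assuming MALLET is on the classpath and that the corpus has already been imported into an InstanceList; the topic count, hyperparameters, iteration count, and 90/10 split are illustrative choices rather than values prescribed here:

    import java.util.Random;

    import cc.mallet.topics.MarginalProbEstimator;
    import cc.mallet.topics.ParallelTopicModel;
    import cc.mallet.types.InstanceList;

    public class TopicModelEvaluation {

        public static double heldOutLogLikelihood(InstanceList documents)
                throws Exception {
            // Step 1: split the corpus, 90% for training and 10% held out
            InstanceList[] sets = documents.split(new Random(42),
                    new double[] { 0.9, 0.1 });
            InstanceList training = sets[0];
            InstanceList testing = sets[1];

            // Step 2: train an LDA model on the training documents only
            ParallelTopicModel model = new ParallelTopicModel(100, 50.0, 0.01);
            model.addInstances(training);
            model.setNumIterations(1000);
            model.estimate();

            // Step 3: estimate the marginal log probability of the held-out
            // documents with Wallach et al.'s left-to-right algorithm
            MarginalProbEstimator estimator = model.getProbEstimator();
            double logLikelihood = estimator.evaluateLeftToRight(
                    testing,
                    10,    // number of particles per document
                    false, // do not resample topic assignments
                    null); // no per-document output stream
            return logLikelihood; // higher (less negative) is better
        }
    }

Note that evaluateLeftToRight returns the total log likelihood of the held-out set; dividing it by the number of test tokens gives a per-token score that stays comparable across test sets of different sizes.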
