Because statistical topic modeling is unsupervised, model selection is difficult. For some applications there may be an extrinsic task at hand, such as information retrieval or document classification, whose performance can be evaluated. In general, however, we want to estimate the model's ability to generalize topics regardless of the task.
In 2009, Wallach et al. introduced methods that measure the quality of a topic model by computing the log probability of held-out documents under the model. The likelihood of unseen documents can then be used to compare models: higher held-out likelihood implies a better model.
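To make the held-out likelihood concrete, here is a minimal NumPy sketch with toy parameters. The names `phi`, `theta`, and `heldout_log_likelihood` are illustrative, and the per-document topic proportions `theta` are simply given here; in practice they must be estimated for the held-out documents (e.g. via the sampling-based approximations Wallach et al. compare).

```python
import numpy as np

# Hypothetical toy model: 2 topics over a 4-word vocabulary.
# phi[k, v] = p(word v | topic k); each row sums to 1.
phi = np.array([
    [0.50, 0.40, 0.05, 0.05],  # topic 0 favors words 0 and 1
    [0.05, 0.05, 0.50, 0.40],  # topic 1 favors words 2 and 3
])

def heldout_log_likelihood(docs, phi, theta):
    """Sum of log p(w | model) over all tokens in the held-out docs,
    where theta[d, k] = p(topic k | doc d) is assumed known."""
    total = 0.0
    for d, doc in enumerate(docs):
        word_probs = theta[d] @ phi          # p(word v | doc d), shape (V,)
        total += np.sum(np.log(word_probs[doc]))
    return total

# Two held-out documents as lists of word ids, with assumed topic mixtures.
docs = [[0, 1, 0], [2, 3, 3]]
theta = np.array([[0.9, 0.1], [0.2, 0.8]])

print(heldout_log_likelihood(docs, phi, theta))
```

The result is always negative (a sum of log probabilities), and models can be compared by this value on the same test set; dividing by the total token count and exponentiating the negation gives the familiar per-word perplexity.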
We will evaluate the model using the following steps:
- Let's split the documents into training and test ...