Summary

In this chapter, we took a closer look at modeling sequences of observations with hidden (or latent) states, using two commonly used algorithms:

  • The generative hidden Markov model to maximize p(X,Y)
  • The discriminative conditional random field to maximize log p(Y|X)

The HMM is a special form of Bayesian network. It requires the observations to be conditionally independent given the hidden states. Although restrictive, these conditional independence prerequisites make the HMM fairly easy to understand and validate, which is not the case for a CRF.
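Those conditional independence assumptions can be summarized by the standard factorization of the joint probability of a hidden state sequence Y = (y1, ..., yT) and an observation sequence X = (x1, ..., xT):

\[
p(X, Y) \;=\; p(y_1)\, p(x_1 \mid y_1) \prod_{t=2}^{T} p(y_t \mid y_{t-1})\, p(x_t \mid y_t)
\]

Each state depends only on its predecessor (the Markov property), and each observation depends only on the state that emitted it.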

You learned how to implement three dynamic programming techniques in Scala: the Viterbi, Baum-Welch, and alpha/beta (forward/backward) algorithms. These algorithms are used to solve diverse types of optimization problems. They should be an essential component of ...
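As a reminder of how these dynamic programming recursions look in practice, here is a minimal, self-contained sketch of the Viterbi decoder in log space. The two-state model at the bottom (state and observation names, probabilities) is purely illustrative and not taken from the chapter:

```scala
object ViterbiSketch {
  // Most likely hidden state sequence for a sequence of observation indices.
  // pi(i): initial probability of state i
  // a(i)(j): transition probability p(state j | state i)
  // b(i)(o): emission probability p(observation o | state i)
  def viterbi(
      obs: Seq[Int],
      pi: Array[Double],
      a: Array[Array[Double]],
      b: Array[Array[Double]]
  ): Seq[Int] = {
    val n = pi.length
    // delta(t)(i): log-probability of the best path ending in state i at time t
    val delta = Array.ofDim[Double](obs.length, n)
    // psi(t)(j): predecessor state on the best path reaching state j at time t
    val psi = Array.ofDim[Int](obs.length, n)

    for (i <- 0 until n)
      delta(0)(i) = math.log(pi(i)) + math.log(b(i)(obs(0)))

    for (t <- 1 until obs.length; j <- 0 until n) {
      val (best, arg) = (0 until n)
        .map(i => (delta(t - 1)(i) + math.log(a(i)(j)), i))
        .maxBy(_._1)
      delta(t)(j) = best + math.log(b(j)(obs(t)))
      psi(t)(j) = arg
    }

    // Backtrack from the most likely final state
    val path = Array.ofDim[Int](obs.length)
    path(obs.length - 1) = delta(obs.length - 1).zipWithIndex.maxBy(_._1)._2
    for (t <- obs.length - 2 to 0 by -1)
      path(t) = psi(t + 1)(path(t + 1))
    path.toSeq
  }

  // Illustrative two-state HMM (states: 0 = Rainy, 1 = Sunny;
  // observations: 0 = umbrella, 1 = no umbrella) -- made-up numbers.
  val pi = Array(0.6, 0.4)
  val a  = Array(Array(0.7, 0.3), Array(0.3, 0.7))
  val b  = Array(Array(0.9, 0.1), Array(0.2, 0.8))
}
```

The forward (alpha) and backward (beta) passes follow the same recursive shape, with the `max` over predecessor states replaced by a sum over probabilities.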
