Summary

In the previous chapters, we discussed learning the parameters, as well as the structure, of a Bayesian model using just the data samples. In this chapter, we discussed the same problems in the context of a Markov model. First, we discussed a well-known parameter estimation technique, maximum likelihood estimation. We saw that in Markov models, computing the maximum likelihood estimate can be computationally expensive even for a simple model, and in some cases it can be intractable. This motivated us to look for alternatives, such as using approximate inference algorithms to compute the gradient, or optimizing a different likelihood. We also showed that learning with belief propagation can be reformulated as optimizing inference ...
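
To make the computational bottleneck concrete, here is a minimal sketch (illustrative code, not from the chapter) of the log-likelihood gradient for a tiny log-linear Markov network. The gradient of the average log-likelihood with respect to a weight theta_k is the difference between the empirical and model feature expectations, E_data[f_k] - E_theta[f_k]. Computing the model expectation requires summing over every joint state, and this is exactly the step that becomes intractable as the network grows. All names here (edges, features, log_likelihood_grad) are assumptions made for the example.

```python
import itertools
import numpy as np

# A minimal sketch: maximum likelihood gradient for a chain-structured
# Markov network over three binary variables, with one indicator
# feature per (edge, joint assignment). Everything here is illustrative.

edges = [(0, 1), (1, 2)]  # chain-structured Markov network

def features(x):
    # Indicator features: f has len(edges) * 4 entries.
    f = []
    for (i, j) in edges:
        for (a, b) in itertools.product([0, 1], repeat=2):
            f.append(float(x[i] == a and x[j] == b))
    return np.array(f)

def log_likelihood_grad(theta, data):
    # Enumerate all 2**3 joint states -- the step that blows up in
    # larger networks and motivates approximate inference.
    states = [np.array(s) for s in itertools.product([0, 1], repeat=3)]
    feats = np.array([features(s) for s in states])
    # Unnormalized log-probabilities; normalizing implicitly computes
    # the partition function Z(theta).
    logits = feats @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Gradient = empirical expectation - model expectation of features.
    empirical = np.mean([features(x) for x in data], axis=0)
    model = probs @ feats
    return empirical - model

theta = np.zeros(len(edges) * 4)
data = [np.array([0, 0, 0]), np.array([1, 1, 0]), np.array([1, 1, 1])]
for _ in range(200):  # plain gradient ascent on the average log-likelihood
    theta += 0.5 * log_likelihood_grad(theta, data)
```

In practice, the exact enumeration inside `log_likelihood_grad` is what the alternatives mentioned above replace: approximate inference methods estimate the model expectation instead of enumerating it, and alternative objectives such as the pseudolikelihood avoid the partition function altogether.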
