Chapter 10
Classification
10.1 Introduction
Classification is the problem of predicting the class of a given instance on the basis of some of its attributes (features). In the Bayesian framework, a classifier is learned from data by updating a prior density, which represents the beliefs held before analyzing the data and is usually assumed uniform, with the likelihood, which models the evidence coming from the data; this yields a posterior joint probability over classes and features. Once the classifier has been learned from data, it can classify novel instances; under 0-1 loss, it returns the most probable class after conditioning on the feature values of the instance to be classified.
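The classification rule above can be sketched in a few lines. This is a minimal illustration with hypothetical classes, feature values, and probabilities (none of which come from the chapter): the posterior over classes is proportional to prior times likelihood, and under 0-1 loss the predicted class is the posterior's argmax.

```python
def classify(prior, likelihood, x):
    """Return the most probable class for feature value x, plus the posterior.

    posterior(c | x) is proportional to prior(c) * likelihood(x | c).
    """
    scores = {c: prior[c] * likelihood[c][x] for c in prior}
    total = sum(scores.values())
    posterior = {c: s / total for c, s in scores.items()}
    # Under 0-1 loss, the Bayes-optimal prediction is the most probable class.
    return max(posterior, key=posterior.get), posterior

# Hypothetical two-class example with a uniform prior.
prior = {"a": 0.5, "b": 0.5}
likelihood = {"a": {"x1": 0.7, "x2": 0.3},
              "b": {"x1": 0.4, "x2": 0.6}}
pred, post = classify(prior, likelihood, "x1")
# pred == "a": posterior("a" | "x1") = 0.35 / 0.55, about 0.64
```

Here the feature is a single discrete value for simplicity; real classifiers combine several features, but the update and the argmax step are the same.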
Yet Bayesian classifiers can issue prior-dependent classifications, namely classifications in which the most probable class changes under different priors. This might be acceptable if the prior has been carefully elicited and represents domain knowledge; in general, however, the uniform prior is taken as a default, without further investigation.
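Prior-dependence is easy to exhibit numerically. In this sketch (hypothetical numbers, not from the chapter), the same likelihoods produce different most-probable classes under a uniform prior and under a skewed one, so the instance's classification is prior-dependent:

```python
# Likelihood P(x | c) of the observed features x under each class.
likelihood = {"a": 0.7, "b": 0.4}

def argmax_class(prior):
    """Most probable class: argmax over c of prior(c) * likelihood(x | c)."""
    scores = {c: prior[c] * likelihood[c] for c in prior}
    return max(scores, key=scores.get)

uniform = {"a": 0.5, "b": 0.5}
skewed = {"a": 0.3, "b": 0.7}

argmax_class(uniform)  # -> "a" (scores 0.35 vs 0.20)
argmax_class(skewed)   # -> "b" (scores 0.21 vs 0.28)
```

A credal classifier would flag such an instance rather than commit to either class, which is precisely the motivation developed next.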
This consideration has led to the development of credal classifiers, which extend Bayesian classifiers to imprecise probabilities. The term 'credal classifier' was first used in [715], ...