Our final task is spelling correction: given a typed word, w, determine what word c was most likely intended. For example, if w is "acomodation", c should be "accommodation". (If w is "the", then c too should be "the".)
Following the standard methodology, we want to choose the c that maximizes P(c | w). But defining this probability is not straightforward. Consider w = "thew". One candidate c is "the"—it's the most common word, and we can imagine the typist's finger slipping off the "e" key and hitting the "w". Another candidate is "thaw"—a fairly common word (although 30,000 times less frequent than "the"), and it is common to substitute one vowel for another. Other candidates include "thew" itself (an obscure term for muscle or sinew), "threw", and "Thwe", a family name. Which should we choose? It seems that we are conflating two factors: how probable is c on its own, and how likely is it that w could be a typo for c, or a mispronunciation, or some other kind of misspelling. One might think we will have to combine these factors in some ad hoc fashion, but it turns out that there is a mathematical formula, Bayes's theorem, that tells us precisely how to combine them to find the best candidate:
argmax_c P(c | w) = argmax_c P(w | c) P(c)
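This argmax can be sketched in a few lines of Python. Everything here is a toy assumption for illustration: the word counts are made up, and the error_prob function is a crude stand-in for a real learned model of typos.

```python
from collections import Counter

# Toy word counts standing in for a real corpus (illustrative numbers only).
COUNTS = Counter({"the": 3_000_000, "thaw": 100, "thew": 1, "threw": 5_000})
TOTAL = sum(COUNTS.values())

def P(c):
    """P(c): probability of word c under the (toy) language model."""
    return COUNTS[c] / TOTAL

def error_prob(w, c):
    """P(w | c): probability of typing w when c was intended.
    A crude stand-in: probability falls with the number of differing
    characters; a real error model would be estimated from data."""
    if len(w) != len(c):
        return 1e-6
    diffs = sum(a != b for a, b in zip(w, c))
    return 0.9 if diffs == 0 else 0.01 ** diffs

def correction(w, candidates):
    """Choose the c that maximizes P(w | c) * P(c)."""
    return max(candidates, key=lambda c: error_prob(w, c) * P(c))
```

With these toy numbers, `correction("thew", ["the", "thaw", "thew", "threw"])` picks "the": its enormous prior outweighs the length-mismatch penalty in the error model. Different (and better-estimated) probabilities could easily tip the choice toward "thaw" or "thew" instead, which is exactly the trade-off the formula makes explicit.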
Here P(c), the probability that c is the intended word, is called the language model, and P(w | c), the probability that an author would type w when c is intended, is called the error model or noisy channel model. (The idea is that ...