Anyone who has explored the history, technology, or theory of computers has likely encountered the concept of the Turing Machine. The Turing Machine is an imaginary — not quite even hypothetical — computer invented in 1936 by English mathematician Alan Turing (1912–1954) to help solve a question in mathematical logic. As a by-product, Turing founded a new field of research known as computation theory or computability, which is the study of the abilities and limitations of digital computers.
Although the Turing Machine is a rather implausible computer, it benefits from being extremely simple. The basic Turing Machine performs just a few simple operations. If the machine did anything less than what it does, it wouldn't do anything at all. Yet, through combinations of these simple operations, the Turing Machine can perform any computation that can be done on modern digital computers.
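To make the simplicity concrete, here is a minimal sketch of a Turing Machine simulator, assuming the conventional formalization: a set of states, a tape of symbols, and a transition table mapping (state, symbol) to (symbol to write, head move, next state). The example program, which increments a binary number, is illustrative and not drawn from Turing's original notation.

```python
def run(tape, transitions, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example program: binary increment. The head starts at the leftmost digit,
# scans right to the end of the number, then propagates the carry leftward.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run("1011", increment))  # prints "1100" (binary 11 + 1 = 12)
```

Everything the machine does reduces to reading one symbol, writing one symbol, moving one cell, and changing state — yet tables like `increment` can, in principle, be composed to carry out any computation a modern digital computer performs.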
By stripping the computer to the raw basics, we can understand better the abilities of digital computers — and just as importantly, their inabilities. Years before anyone had demonstrated what a computer could do, Turing proved what computers could never do.
The Turing Machine remains a popular topic for exposition and discussion. (Try the term “Turing Machine” in your favorite Internet search engine.) Yet, I suspect that Alan Turing's original paper describing his invention is rarely read. Perhaps this neglect has something to do with the title: “On Computable Numbers, with an Application ...