Thursday, August 21, 2008

Theory of Computation: The Halting Problem

Certain questions about algorithms and effective procedures cannot be answered in general. For example, let's say we wrote an algorithm together.

First we give that algorithm some input, maybe a finite number of integers:

x1, x2, x3, ..., xn

Now our algorithm is going to do something with these integers in a finite number of steps and give us a result. Maybe it will sort them from largest to smallest and print them out in that order. The key here is that the algorithm handles the inputs in a specified order, following specified instructions for each situation it encounters. For example, the algorithm probably needs to compare two integers and see which one of them is larger, that is, evaluate the expression:

a > b

and then do something with it. A common notation for that is:

if ( a > b )
{
    do stuff;
    ...
}



where "do stuff" stands for a set of instructions. The algorithm is also allowed to do things on a repetitive basis using logic such as

while ( a > b )
{
    do stuff;
    ...
}
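
Putting an if inside a while is already enough to do real work. Here is one small sketch of the sorting job described above, a complete C program (the numbers are made up for illustration) that orders a handful of integers from largest to smallest using nothing but comparisons and while loops:

#include <stdio.h>

int main(void)
{
    /* a handful of made-up integers to sort from largest to smallest */
    int x[] = { 3, 41, 7, 19, 2 };
    int n = 5;

    /* bubble sort: repeatedly swap adjacent pairs that are out of order */
    int i = 0;
    while ( i < n - 1 )
    {
        int j = 0;
        while ( j < n - 1 - i )
        {
            if ( x[j] < x[j + 1] )   /* compare two integers: a > b in disguise */
            {
                int tmp = x[j];      /* swap so the larger one comes first */
                x[j] = x[j + 1];
                x[j + 1] = tmp;
            }
            j = j + 1;
        }
        i = i + 1;
    }

    /* print them out in that order */
    i = 0;
    while ( i < n )
    {
        printf("%d ", x[i]);
        i = i + 1;
    }
    printf("\n");

    return 0;
}

Running it prints 41 19 7 3 2, and it clearly halts, since each pass shrinks the amount of work left. The interesting question is whether halting can be decided for programs in general.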



So we have conditional branches (if statements) and conditional loops (while statements). There are a few more possibilities but at this point we have enough expressiveness in our language to warrant the following conjecture:

There is no general algorithm that, given an arbitrary algorithm A(x1, x2, x3, ..., xn) together with its finitely many finite inputs, can determine whether or not A will ever halt on those inputs.
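
This is Alan Turing's famous result from 1936, and the classic argument is short enough to sketch right here in the same C-style notation. Suppose, for the sake of contradiction, that someone hands us the general decider; call it halts(program, input), taking both arguments as source text. The function is purely hypothetical, which is the whole point:

/* HYPOTHETICAL: assume halts() returns 1 if running `program` on
   `input` eventually stops, and 0 if it runs forever.  This is
   exactly the general decider claimed not to exist. */
int halts(const char *program, const char *input);

/* A troublemaker built on top of it: a program that asks the decider
   about itself and then does the opposite. */
void paradox(const char *its_own_source)
{
    if ( halts(its_own_source, its_own_source) )
    {
        while ( 1 ) { }   /* decider said "halts", so loop forever */
    }
    /* decider said "runs forever", so halt immediately */
}

Now run paradox on its own source text and ask what halts should have answered. If it answered "halts", paradox loops forever; if it answered "runs forever", paradox stops right away. Either way the decider is wrong about at least one program, so no such halts can ever be written.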

That is, there is no single, general procedure that can examine an arbitrary algorithm together with its input and tell us whether it will ever finish the job we gave it, and so we call the problem undecidable. It turns out that a lot of interesting problems are provably undecidable, which has stunned computer scientists and mathematicians ever since the 1930s. There was a time when mathematicians -- first and foremost among them David Hilbert -- thought that the idea of an undecidable problem was fictitious. Gödel then proved that any formal system that can express simple arithmetic (i.e. add and multiply natural numbers) contains undecidable propositions: statements that may well have a definite truth value, but that the system cannot prove or disprove in any finite number of steps. That result is now called Gödel's first incompleteness theorem, and it has forever changed human ideas in mathematics. A more precise statement goes like this (from Wikipedia):

Any effectively generated theory capable of expressing elementary arithmetic cannot be both *consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.

*Consistency means that no statement can be both proved and disproved within the theory; the theory never derives a contradiction.


While Gödel's incompleteness theorem sounds scary, it is by no means the end of computability theory. In fact it was only the start: the techniques invented to prove the incompleteness theorem led to a cascade of results that is now modern computability theory. The Church-Turing thesis and the lambda calculus are a good next stop.
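
To get a concrete feel for how slippery the halting question is, here is the famous Collatz iteration, again in C. Whether this loop terminates for every positive starting value is a long-standing open problem (the Collatz conjecture), even though the program is only a few lines long:

#include <stdio.h>

int main(void)
{
    unsigned long n = 27;   /* try any positive starting value */

    /* halve n when it is even, send it to 3n + 1 when it is odd,
       and stop if it ever reaches 1 */
    while ( n != 1 )
    {
        if ( n % 2 == 0 )
        {
            n = n / 2;
        }
        else
        {
            n = 3 * n + 1;
        }
        printf("%lu\n", n);
    }

    return 0;
}

Starting from 27 the loop does eventually reach 1, but nobody has proved that every starting value gets there -- a vivid reminder of how hard the question "does this program halt?" can be even for tiny programs.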

Beer is Nighttime Water

Am I right?
 