FOLLOW ALONG: www.cis.ksu.edu/~schmidt/halting.html
Computers calculate on numbers (represented as 1s and 0s), on text, sounds, images, voltages, etc. (all represented as numbers). Are there some calculation activities that are beyond the reach of computers, that they cannot do? The answer is yes.
So does Western Thought. A sentence can be an imperative (e.g., "Do your homework!"), a question ("Are we there yet?"), or a declaration ("The sky is blue.")
Here, we care about declarations.
If we abuse English and write a declaration that is both a fact and a falsehood at the same moment, it is a paradox and must be discarded from the English language --- it is nonsensical; it cannot be. A paradox makes True = False, which violates the starting law of the universe.
The oldest known example of a paradox is the sentence, "I am lying." --- the declaration is a fact exactly when it is a falsehood (since "lying" means "stating a falsehood"). This is the Liar's Paradox. The Liar's Paradox is nonsensical; it cannot be.
Here is a little story called the Barber's Paradox, due to Bertrand Russell; it previews the centerpiece of the next section: In a village, the barber shaves exactly those people who do not shave themselves. So, who shaves the barber? If the barber shaves himself, then he is a person who shaves himself, whom the barber must not shave; if he does not shave himself, then he is a person whom the barber must shave. The barber shaves himself exactly when he does not --- a paradox.
Bertrand Russell was a 20th-century philosopher, mathematician, logician, and a Nobel-prize winner in literature. He found a famous paradox in mathematics, one that spread into computing.
Declarations in mathematics are called predicates; here are examples: 2 + 1 = 3 (a fact), 5 < 2 (a falsehood), and n > 0 (a fact exactly when the value of n is positive).
Mathematicians have always worked with numbers, but in the nineteenth century, mathematicians started using predicates to define sets. (Recall that a "set" is a collection --- a "grocery sack" --- of elements. Examples: {4,5,6} is a set of three numbers. {} is a set of no numbers.)
Here are examples of sets of numbers, defined with predicates: {n | n > 0} is the set of all numbers greater than zero, and {n | n < 0 and n > 0} is {}, since no number satisfies the predicate. (Read the first one as, ''the set of elements, n, such that n > 0 is a fact.'')
The whole point of defining a set is to ask what's in it. The predicate, e ∈ S, is a fact exactly when element e is contained in set S. (Read as "e is in S".) Here are examples: 4 ∈ {4,5,6} is a fact; 7 ∈ {4,5,6} is a falsehood; 4 ∈ {} is a falsehood.
Mathematicians use sets to hold sets, too. Here are examples: { {4,5,6}, {} } is a set holding two sets, and { {} } is a set holding one set, the empty set.
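Predicate-defined sets and membership translate directly into Python, sketched below; the finite universe 0..9 and all names are my own choices, and a set comprehension plays the role of { n | ... }:

```python
# Predicate-defined sets and the membership test, in Python.
universe = range(10)
positives = {n for n in universe if n > 0}   # plays the role of { n | n > 0 }

print(4 in positives)   # -> True   ("4 is in S" is a fact)
print(0 in positives)   # -> False

# Sets can hold sets, too (Python requires the inner sets to be frozen):
family = {frozenset({4, 5, 6}), frozenset()}
print(frozenset() in family)   # -> True
```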
RUSSELL'S PARADOX: Let R be {S | not(S ∈ S)}.

Then R ∈ R is a fact exactly when not(R ∈ R) is a fact, that is, exactly when R ∈ R is a falsehood. So, R ∈ R is a paradox!

Russell's Paradox is generated by the combination of S ∈ S (self-inspection) and not ("flipping" the answer) --- a real disaster!
We have uncovered something that mathematics cannot do --- define a set of sets that don't hold themselves. Modern versions of set theory disallow self-inspection to remove this form of paradox.
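The disaster can even be felt in a programming language: if we model a "set" as a predicate (a function), then Russell's set becomes a function that asks whether a function fails to hold of itself, and the paradoxical question never settles. A minimal sketch in Python (the encoding is my own):

```python
# Model Russell's set R = {S | not(S in S)} as a predicate on predicates:
def russell(f):
    # "f is in R" exactly when f does NOT hold of itself
    return not f(f)

# Asking whether russell is in russell recurses forever --- the
# paradoxical question never produces an answer:
try:
    russell(russell)
except RecursionError:
    print("paradox: the question never settles")
```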
A key feature of computer programs, one that leads us into artificial-intelligence-like areas, is that they can inspect other programs, including themselves. Perhaps you have seen MS Word display the contents of its own executable: a human loaded and started WIN-WORD.EXE on the computer's processor, and the human told the running program to load as the input, WIN-WORD.EXE.

Editor programs, like MS Word, can inspect all programs, including themselves. Compiler/interpreter/IDE programs can inspect/compile/run almost all programs, including themselves. Error-finding, performance-analyzing, and correctness-checking programs can inspect/analyze a wide range of programs, including themselves.

How far can these inspections go? We will see that they can generate paradoxes!
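As a tiny taste of self-inspection, here is a Python sketch (the function names are my own) of an "analyzer" that reports a program's name and the size of its compiled code --- and can be pointed at itself:

```python
def describe(func):
    # A toy program-analyzer: report the program's name and how many
    # bytes of compiled (byte)code it contains.
    return (func.__name__, len(func.__code__.co_code))

def double(x):
    return 2 * x

print(describe(double))     # the analyzer inspects another program
print(describe(describe))   # ...and inspects itself
```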
Alan Turing was an English mathematician, employed by the British government to break the codes the Nazi armies used for communication. Turing and his team succeeded, and the Allies used the decoded Nazi communications to win World War II.
Turing was interested in machines for calculating mathematics. His imaginary Turing machine was a controller with a read-write head that traversed the squares of an unbounded roll of toilet paper, where each square holds a 0, a 1, or is blank:
Turing realized that the controller could be wired as a "processor" that understood these six instructions:
  * move Left/Right one square
  * erase the square (make it blank)
  * write 0/1 on the square
  * if square is blank/0/1, then goto Instruction #n
  * goto Instruction #n
  * stop

Here is a sample program that scans a binary number from left to right and flips all 0s to 1s and all 1s to 0s:
#1: if square is 0, then goto #4
#2: if square is 1, then goto #6
#3: if square is blank, then goto #9
#4: write 1
#5: goto #7
#6: write 0
#7: move Right
#8: goto #1
#9: stop

For example, the tape ...0110... becomes ...1001...

The nine instructions have number codings, and the program can be copied onto a tape as one huge (Base-2) number to be read by the controller. The Universal Turing Machine was configured so that both the coded program and its input data sat on the tape, and the controller read the coded program to learn what to do to the input.
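The six-instruction set is simple enough to interpret in a few lines. Here is a sketch of such an interpreter in Python (encoding programs as dictionaries of numbered instructions is my own choice), running the flip program above:

```python
# A small interpreter for the six-instruction machine sketched above.
def run(program, tape):
    tape = dict(enumerate(tape))     # square index -> '0'/'1'; missing = blank
    pos, pc = 0, 1                   # head position, current instruction number
    while True:
        op = program[pc]
        sq = tape.get(pos, ' ')
        if op == ("stop",):
            break
        kind = op[0]
        if kind == "move":
            pos += 1 if op[1] == "R" else -1
        elif kind == "write":
            tape[pos] = op[1]
        elif kind == "erase":
            tape.pop(pos, None)
        elif kind == "if":           # ("if", symbol, target-instruction)
            pc = op[2] if sq == op[1] else pc + 1
            continue
        elif kind == "goto":
            pc = op[1]
            continue
        pc += 1
    lo, hi = min(tape, default=0), max(tape, default=0)
    return "".join(tape.get(i, ' ') for i in range(lo, hi + 1)).strip()

flip = {
    1: ("if", "0", 4),
    2: ("if", "1", 6),
    3: ("if", " ", 9),
    4: ("write", "1"),
    5: ("goto", 7),
    6: ("write", "0"),
    7: ("move", "R"),
    8: ("goto", 1),
    9: ("stop",),
}

print(run(flip, "0110"))   # -> 1001
```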
The Universal Turing Machine is the origin of the stored-program computer: a program is just data --- a number --- that can be written into memory, read, and executed.
Turing wrote programs for the Universal Turing machine, and he believed that all facts/knowledge/problems that can be calculated/deduced/solved by a machine can be done by a Universal Turing Machine. This is now known as the Church-Turing thesis.
(The "Church" part refers to Alonzo Church, a Princeton mathematician who developed a symbol system, the lambda-calculus, which has the same computational powers as Turing's machines.)
Turing asked: Are there some problems whose solutions cannot be calculated by his machine? In particular, he wondered,
Is there a program that can read the coding of any other program and correctly predict whether that program will halt when it runs?
This question is called the Halting Problem. Its answer is no. (If the answer were yes, then the program would let us build Russell's paradox in the computer!)
The computer processor does the calculations stated in the program. If all goes well, the program converges (halts, terminates) and emits an output, a number. If something goes wrong and no output is emitted, the program diverges (loops, freezes, goes lost, runs forever, hangs).
Here are two examples of requirements:
INPUT: a number, n
OUTPUT: a number
ALGORITHM:
  1. read n
  2. calculate answer = n * n
  3. write answer

Say that the program we write from the algorithm is named square. Then, running square on the input 3 halts with the output 9; indeed, square halts on every input. Not every requirement behaves so well --- here is a second one, computing an exact square root:
n => ↓m  iff  m * m = n
  => ↑   iff  there is no m such that m * m = n

Read iff as "exactly when" and ↑ as ``diverges.''
This requirement might be implemented by a program that reads n and guesses possible answers (0,1,2, etc.) until an output, m is found, if ever. (If not, the program runs forever.)
Say that sqrt is the name of the program we build. Here are two uses of it: running sqrt on 16 converges to the output 4, but running sqrt on 5 diverges (↑), since 5 has no exact square root to find.
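The guess-and-check strategy is easy to sketch in Python (a minimal sketch; the function name is my own). Notice that the program converges exactly when n is a perfect square and diverges otherwise --- divergence is built into the loop:

```python
def sqrt(n):
    # Guess possible answers 0, 1, 2, ... until one squares to n.
    m = 0
    while True:          # runs forever when no exact root exists
        if m * m == n:
            return m
        m += 1

print(sqrt(49))   # -> 7
# sqrt(5) would diverge: no integer squares to 5, so the loop never exits
```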
Because a program is coded as a number, a requirement can take a program as its input. (Read run(p)(0) as: run the program coded by p on the input datum 0.) Here is an example:

p => ↓(a + 1)  iff  run(p)(0)↓a
  => ↑         iff  run(p)(0)↑

The requirement demands we run input number/program p with datum 0 and add one to its answer, if there is one. There is an easy algorithm:
INPUT: a program, p, coded as an integer
OUTPUT: a number or ↑
ALGORITHM:
  1. do run(p)(0) and wait....
  2. if run(p)(0)↓a, then produce output a+1
     (if run(p)(0)↑, then we are still waiting, and there is nothing we can do)
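This requirement can be sketched in Python by coding a "program" as its source text (my own encoding choice; the names run and add_one are mine too):

```python
def run(program_text, datum):
    # Load the program text, then run its main function on the datum.
    env = {}
    exec(program_text, env)
    return env["main"](datum)

def add_one(p):
    return run(p, 0) + 1   # waits forever if run(p, 0) diverges

halting_prog = "def main(x): return x + 41"
print(add_one(halting_prog))   # -> 42

looping_prog = "def main(x):\n    while True: pass"
# add_one(looping_prog) would wait forever --- there is nothing we can do
```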
Not all such requirements can be implemented. In particular, Turing restated Russell's paradox as this requirement:
p => ↓1  iff  run(p)(p)↑
  => ↑   iff  run(p)(p)↓answer

If some program, named Russell, implemented this requirement, then

run(Russell)(Russell)↓answer  iff  run(Russell)(Russell)↑

which cannot be --- Russell would be a "paradox program"!

The requirement uses self-inspection (run(p)(p)) and then "flips" the answer --- in particular, ↑ is flipped to ↓1, which goes against intuition. (How can a computer program detect that another computer program "has gone lost"?)
It is not surprising that the above requirement can never become a program, but its consequences are many. In particular, there is an incredibly useful program that Turing proved is impossible to implement:
TURING'S HALTING PROBLEM: Can we implement a program that analyzes another program and its input and predicts in advance whether the program-and-input will halt (converge to an answer)?

p, d => ↓1  iff  run(p)(d)↓answer
     => ↓0  iff  run(p)(d)↑

The answer is no: if some program, named Turing, implemented the above requirement, then we could build the Russell program, like this:
INPUT: a program, p, coded as an integer
OUTPUT: ↓1 or ↑
ALGORITHM:
  1. run(Turing)(p,p) and obtain the output answer, which must be 0 or 1
  2. if answer = 1, then diverge: ↑
     if answer = 0, then produce output 1

This generates an implementation of Russell's paradox, which cannot be.

From the Halting Problem, researchers showed that many other important requirements are also impossible to implement, for example, deciding whether two programs compute the same outputs on all inputs, or whether a program's execution ever reaches a chosen instruction.
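Turing's diagonal argument can be sketched executably in Python (all names are my own): given ANY proposed halting-checker, we build a program that defeats it, so no checker can be correct on every program.

```python
def make_russell(halts):
    # Build a program that does the opposite of what the checker predicts.
    def russell():
        if halts(russell):     # checker claims: russell halts...
            while True:        # ...so russell diverges instead
                pass
        return 1               # checker claims: russell diverges, so it halts
    return russell

# A (wrong) checker that claims every program diverges:
claims_diverges = lambda p: False
r = make_russell(claims_diverges)
print(r())   # -> 1, so the checker was wrong about r

# A checker claiming every program halts is wrong too: make_russell of it
# produces a program that loops forever, contradicting the claim.
```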
In practice, there are still text-editor, analyzer, and error-finding programs that can examine other programs, even themselves. But they cannot answer all the interesting questions about programs that we might like them to answer --- the best one can do is build a program that converges to a useful answer some of the time (like the sqrt program seen earlier does) and diverges lots of other times (because the program cannot discover an answer and "goes lost".)
Computing is constructive mathematics, and since mathematics has its limitations, so does computing.
For better or worse, there are many critical requirements that have poor algorithms, and there is no known way to improve them. Such requirements are ``practically impossible'' or intractable. Here are examples of intractable problems: finding the shortest route that visits a collection of cities (the travelling-salesman problem) and deciding whether a logical formula can be made true (satisfiability) --- for both, the only known algorithms essentially try all the combinations.
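The brute-force pattern behind many intractable problems is "try every subset," which doubles the work with each new element. Here is a toy sketch in Python for the (genuinely intractable) subset-sum problem; the example numbers and function name are my own:

```python
from itertools import combinations

def subset_sums_to_zero(nums):
    # Exhaustive search: examines up to 2**len(nums) candidate subsets.
    for k in range(1, len(nums) + 1):
        for combo in combinations(nums, k):
            if sum(combo) == 0:
                return combo
    return None   # no nonempty subset sums to zero

print(subset_sums_to_zero([3, -1, 4, -3]))   # -> (3, -3)
```

With 4 numbers there are only 15 nonempty subsets; with 100 numbers there are more than 10^30, which is why exponential-time search is called intractable.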
Computational complexity theory is the study of ``how fast'' algorithms are. Many requirements have fast (log-time and linear-time) algorithms, others have slow-but-tolerable (polynomial-time) algorithms, and others have intractable (exponential-time and worse) algorithms. Learning the details of computational complexity will help you become a computer scientist.