At the end of the previous chapter, we saw that a function that calls (``restarts'') itself reuses its own pre- and post-conditions to deduce the knowledge gained from the recursive call. A loop is a command that restarts itself over and over; in this sense, it behaves just like such a function.
If we think of a loop as a function that restarts itself each iteration, then we must ask, ``What are the pre- and post-conditions?'' Since the loop iterations follow one another, it must be that the postcondition from the end of the previous iteration is exactly the precondition for starting the next iteration --- the loop's ``pre'' and ``post'' conditions are one and the same. The loop's pre-post-condition is called the loop's invariant. We can understand this with an example.
n! == 1 * 2 * 3 * ...up to... * n

and it is also traditional to define 0! = 1. It is easy to write a program with a loop that computes the repeated product; here is one version:
===================================================
n = readInt("Type a nonnegative int: ")
assert n >= 0
i = 0
fac = 1
while i != n :
    i = i + 1
    fac = fac * i
print n, fac
===================================================
The loop is constructed so that it adjoins the multiplications, *1, *2, *3, etc., to the running total, fac, until the loop reaches i = n. Consider some execution cases:
the loop repeats 0 times:   it computes fac = 1 = 0!
the loop repeats 1 time:    it computes fac = 1 * 1 = 1!
the loop repeats 2 times:   it computes fac = (1 * 1) * 2 = 2!
the loop repeats 3 times:   it computes fac = (1 * 1 * 2) * 3 = 3!
the loop repeats 4 times:   it computes fac = (1 * 1 * 2 * 3) * 4 = 4!
. . .
the loop repeats k+1 times: it computes fac = (k!) * (k+1) = (k+1)!

The examples indicate that the reason why the loop computes, for input 4, the correct answer of 4! in 4 iterations is because the loop computes 3! correctly in 3 iterations --- the fourth iteration builds upon the work of the previous three.
This is the standard use of a loop: each loop iteration builds on previous work and moves us one step closer to the final goal.
For the factorial example, the subgoals have the form, ``after i iterations, variable fac has the value, i!.'' When we execute the loop body once more, we generate the assertion, ``after i+1 iterations, variable fac has the value, (i+1)!.'' After n iterations, the loop stops and the goal is achieved.
We deduce this fact from the loop's body like this:
===================================================
"""{ fac == i! }""" (we have conducted i iterations so far)
i = i + 1
"""{ 1. i == iold + 1 premise
2. fac == (iold)! premise
3. fac == (i-1)! algebra 1 2
}"""
fac = fac * i
"""{ 1. fac == facold * i premise
2. facold == (i-1)! premise
3. fac == (i-1)! * i subst 2 1
4. fac == i! definition of i!
}"""
===================================================
That is, the assertion that results from one more iteration is the same
as the assertion we had when we started the iteration!
This is because the assignment, i = i + 1, ``hides'' the fact that
we have completed one more iteration.
But this is correct --- we read the assertion, fac == i!, as saying, ``after i iterations, fac has value i!.'' This property holds true no matter how many times the loop repeats --- it is invariant.
This little deduction shows us that the precondition for entering the loop must be that fac == i!. The postcondition produced by the loop's body is exactly the same --- fac == i!. Since the loop's body is meant to repeat, it is critical that the loop body's postcondition matches its precondition.
There is a solid precedent for this concept:
If we think of a program as an electronic circuit, where
knowledge flows along the wires instead of voltage, then
we see that a loop program is a feedback circuit, where a
voltage, I, is forced backwards into the circuit's entry:
              I|
               v
         +->while B :--+
         |            I|
         |             v
         |             C
         |  I|         |
         +----<--------+
The voltage (knowledge) level, I, must be stable along the back arc
of the circuit, else the circuit will oscillate and misbehave.
A loop works the same way --- for the loop's iterations to
progress towards a goal, there must be a stable level of knowledge
along the back arc. This stable level of knowledge is the loop's
invariant property.
In the previous chapter, we saw that an invariant is required for maintaining a global variable --- we don't want the logical structure of the global variable to change just because it is shared by multiple functions and commands that might be repeatedly executed. We also saw that a function that calls itself relies on pre- and postconditions that are invariant. In the same way, the iterations of a loop build towards a goal because the knowledge its body generates is invariant, so that the iterations ``connect together.''
The invariant property is actually quite natural. Consider
this sequence of commands, which computes 3!:
===================================================
n = 3
i = 0
fac = 1
"""{ i == 0 ^ fac == i! }"""
i = i + 1
fac = fac * i
"""{ i == 1 ^ fac == i! }"""
i = i + 1
fac = fac * i
"""{ i == 2 ^ fac == i! }"""
i = i + 1
fac = fac * i
"""{ i == 3 ^ fac == i! }"""
print n, fac
===================================================
This little example is just the loop repeated three times.
The invariant, fac == i!, is critical to the success of the computation.
Notice how the knowledge generated by completing one iteration ``feeds into''
the next iteration. And, after each iteration, that knowledge is that
fac == i!.
When the loop for factorial quits, what do we know?
===================================================
n = readInt("Type a nonnegative int: ")
assert n >= 0
"""{ n >= 0 }"""
i = 0
fac = 1
while i != n :
"""{ fac == i! }"""
i = i + 1
fac = fac * i
"""{ fac == i! }"""
"""{ ??? }"""
print n, fac
===================================================
For certain, when the loop quits after i iterations,
we know that fac == i! (i remembers the number
of loop iterations). But we also know that the loop's test has
gone false, that is, ~(i != n), that is, i == n:
===================================================
n = readInt("Type a nonnegative int: ")
assert n >= 0
i = 0
fac = 1
"""{ i == 0 ^ fac == i! }"""
while i != n :
"""{ fac == i! }"""
i = i + 1
fac = fac * i
"""{ fac == i! }"""
"""{ 1. i == n premise (the loop's test has gone False)
2. fac == i! premise (the invariant holds at the end of each interation)
3. fac = n! subst 1 2 (the loop accomplished its goal)
}"""
print n, fac
===================================================
Since the loop terminated its iterations at the correct time,
fac holds the correct answer.
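We can watch the invariant at work by instrumenting the loop with a runtime check. The sketch below is my own (the name fact_checked is hypothetical); math.factorial stands in for the mathematical i!, and the asserts test the invariant on entry to the body and at exit:

```python
import math

def fact_checked(n):
    # computes n! while dynamically checking the invariant: fac == i!
    assert n >= 0
    i = 0
    fac = 1
    while i != n:
        assert fac == math.factorial(i)   # after i iterations, fac == i!
        i = i + 1
        fac = fac * i
    assert i == n and fac == math.factorial(n)  # loop quit: fac == n!
    return fac
```

Of course, an assert that succeeds on a few test runs is evidence, not a proof; the deduction above is what guarantees the invariant for every run.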
"""{ ... I }""" (we must prove this True before the loop is entered) while B : """{ invariant I modifies VARLIST (those variables updated in C) }""" """{ 1. B premise 2. I premise ... }""" (the premises for the loop's body) C """{ ... I }""" (we must prove I at the end of the body) """{ 1. ~B premise 2. I premise ... }""" (both ~B and I hold true when the loop terminates)That is, to deduce the knowledge produced by a while-loop (when we do not know in advance how many times the loop will iterate), we must deduce an invariant I that
Because the loop will update some variables in its body, we must list these variables' names, so that any premises entering the loop body that mention these variables, other than the loop invariant itself, are cancelled. We see this below in an example:
Here is the factorial example, reassembled to use the while-law:
===================================================
"""{ def 0! == 1 # these are the recurrences that define n!
def k! == (k-1)! * k
}"""
n = readInt("Type a nonnegative int: ")
assert n >= 0
"""{ 1. n >= 0 premise }"""
i = 0
fac = 1
"""{ 1. i == 0 premise
2. fac == 1 premise
3. 0! == 1 def (of 0!)
4. fac == i! algebra 1 2 3
}"""
while i != n :
"""{ invariant fac == i!
modifies i, fac
}"""
# Here, the invariant is a legal premise, but
# i == 0, fac == 1, and fac == 0! ARE NOT,
# because i and fac are modified by the loop's body
i = i + 1
"""{ 1. i == iold + 1 premise
2. iold == i - 1 algebra 1
3. fac == (iold)! premise # from the invariant
4. fac == (i-1)! subst 2 3
}"""
fac = fac * i
"""{ 1. fac == facold * i premise
2. facold == (i-1)! premise
3. fac == (i-1)! * i subst 2 1
4. i! == (i-1)! * i def (of i!)
5. fac == i! algebra 3 4
}"""
# the loop ends here
"""{ 1. not(i != n) premise
2. i == n algebra 1
3. fac == i! premise
4. fac == n! subst 2 3
}"""
print n, fac
===================================================
After working a few examples, we see the challenge lies in discovering the appropriate invariant for deducing that the loop achieves its goal. This is an art form; there cannot exist a finite, mechanical algorithm to do this. (This is a key result of computability theory, the study of what problems are mechanically solvable.)
In a formal, technical sense, it is possible to define a forwards law for computing the postcondition of a while-loop that does not use an invariant, I. But the postcondition assertion is an infinite disjunction of the form, Q0 v Q1 v Q2 v ... v Qi v ..., where assertion Qi asserts that the loop starts from an assertion P and repeats itself exactly i times and then stops. It is difficult to manipulate infinite-length assertions, and we will not try here. In a similar way, there is a backwards law for computing the precondition of a while-loop that does not require an invariant. But the assertion is an infinite conjunction of the form, R0 ^ R1 ^ R2 ^ ..., where each Ri lists the subgoal needed so that if the loop would repeat i times and stop, then the final goal G is achieved.
IMPORTANT: Saying what ``the loop is doing'' is different from saying what the loop ``will do'' before it starts or what the loop ``has done'' after it has finished. Another way of saying it is, ``Say that the loop has been running for a while --- what has it accomplished so far?'' The answer to this question gives insight into the inner structure of the loop.
Following are some examples of invariant discovery.
===================================================
x = readInt("Type an int: ")
y = readInt("Type another: ")
z = 0
count = 0
while count != x :
    print "(a) x =", x, " y =", y, " count =", count, " z =", z
    """{ invariant ???
         modifies z, count }"""
    z = z + y
    count = count + 1
print "(b) x =", x, " y =", y, " count =", count, " z =", z
===================================================
To better understand, we execute a test case and watch what is printed:
===================================================
Type an int: 3
Type another: 4
(a) x = 3  y = 4  count = 0  z = 0
(a) x = 3  y = 4  count = 1  z = 4
(a) x = 3  y = 4  count = 2  z = 8
(b) x = 3  y = 4  count = 3  z = 12
===================================================
The trace information shows this pattern between the values of the variables:
y * count == z

This is what the loop is doing --- what relationship it maintains between the variables it sees and modifies. Because the loop stops when count == x, we conclude that z == y * x. (This is what the loop has done.)
Now we must apply the necessary laws to prove our conjecture
that count * y == z is invariant for the loop's body:
===================================================
z = 0
count = 0
"""{ 1. z == 0 premise
2. count == 0 premise
3. count * y == z algebra 1 2
}"""
while count != x :
"""{ invariant count * y == z
modifies z count }"""
z = z + y
"""{ 1. z == zold + y premise
2. count * y == zold premise
3. (count + 1) * y == z algebra 2 1
}"""
count = count + 1
"""{ 1. count == countold + 1 premise
2. (countold + 1) * y == z premise
3. count * y == z subst 1 2
}"""
"""{ 1. ~(count != x) premise
2. count == x algebra 1
3. count * y == z premise
4. x * y == z subst 2 3
}"""
===================================================
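As with factorial, the conjectured invariant can also be monitored at runtime. This sketch is my own (the name mult is hypothetical); like the original loop, it assumes x is a nonnegative int, since otherwise the loop never quits:

```python
def mult(x, y):
    # computes x * y by repeated addition; assumes x >= 0
    z = 0
    count = 0
    while count != x:
        assert count * y == z   # the invariant holds on entry to the body
        z = z + y
        count = count + 1
    assert count == x and count * y == z   # hence z == x * y
    return z
```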
Here is a program that computes division by repeated subtraction, the way division is meant to be done:
===================================================
n = readInt("Type an nonegative int: ")
d = readInt("Type a positive int: ")
assert n >= 0 and d > 0
q = 0
r = n
while r >= d :
print "(a) n =", n, " d =", d, " q =", q, " r =", r
"""{ invariant: ??? modifies q r }"""
q = q + 1
r = r - d
print "(b) n =", n, " d =", d, " q =", q, " r =", r
print n, "divided by", d, "is", q, "with remainder", r
===================================================
Here is a sample execution with trace information printed:
===================================================
Type a nonnegative int: 14
Type a positive int: 3
(a) n = 14 d = 3 q = 0 r = 14
(a) n = 14 d = 3 q = 1 r = 11
(a) n = 14 d = 3 q = 2 r = 8
(a) n = 14 d = 3 q = 3 r = 5
(b) n = 14 d = 3 q = 4 r = 2
14 divided by 3 is 4 with remainder 2
===================================================
This is a ``numbers game,'' where the underlying pattern (invariant)
at point (a) is
"""{ invariant (d * q) + r == n }"""
When the loop quits, that is, when there is no longer enough
value in r to allow yet one more subtraction of d, then
the result is exactly the quotient and remainder that result from
dividing n by d.
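Here is a sketch of the division loop instrumented with the invariant (d * q) + r == n; the extra conjunct r >= 0 records that r never goes negative (the name divide is my own):

```python
def divide(n, d):
    # quotient q and remainder r of n / d, by repeated subtraction
    assert n >= 0 and d > 0
    q = 0
    r = n
    while r >= d:
        assert (d * q) + r == n and r >= 0   # the loop's invariant
        q = q + 1
        r = r - d
    assert (d * q) + r == n and 0 <= r < d   # exactly quotient & remainder
    return q, r
```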
===================================================
count = 0   # how many scores read so far
total = 0   # the points of all the scores read so far
processing = True
"""{ total == 0 }"""
while processing :
    """{ invariant total == score1 + score2 + ...up to... + scorecount
         modifies processing, total, count
    }"""
    score = readInt("Type next score (-1 to quit): ")
    if score < 0 :
        processing = False
        """{ total == score1 + score2 + ...up to... + scorecount }"""
    else :
        total = total + score
        """{ total == score1 + score2 + ...up to... + scorecount+1 }"""
        count = count + 1
        """{ total == score1 + score2 + ...up to... + scorecount }"""
"""{ total == score1 + score2 + ...up to... + scorecount }"""
print "The average is", (float(total) / count)   # compute a fractional average
===================================================
total holds the sum of the scores read so far: when count is 1, total holds the sum of one score; when count equals 2, total holds the sum of two scores, and so on. We write the pattern like this:
total = score1 + score2 + ...up to... + scorecount

Here, scorei denotes the input score that was read at the loop's i-th iteration.
When the loop starts, there are no scores saved in total. In a technical, default sense, the sum of total = score1 + ...up to... + score0 is an ``empty sum'', which is treated by mathematicians as 0. (You can't count from 1 ''up to'' 0 --- it is an empty range of values.) This means the invariant holds for zero iterations of the loop, which allows us to enter the loop and then prove that the invariant is preserved as the loop repeats and then quits.
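We can make the empty-sum reasoning concrete by rewriting the loop so the scores arrive in a list (the name average and the helper variable pos are my own additions, standing in for readInt); Python's sum([]) is exactly the empty sum, 0, so the assert holds even before the first iteration:

```python
def average(scores):
    # scores: nonnegative ints, ended by a negative sentinel, e.g. [8, 6, 10, -1]
    count = 0
    total = 0
    processing = True
    pos = 0
    while processing:
        assert total == sum(scores[:count])   # invariant; sum([]) == 0
        score = scores[pos]
        pos = pos + 1
        if score < 0:
            processing = False
        else:
            total = total + score
            count = count + 1
    assert total == sum(scores[:count])
    return float(total) / count
```

Like the original, this sketch fails if no scores precede the sentinel (division by zero).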
===================================================
def reverse(w) :
    """reverse reverses the letters in w and returns the answer"""
    """{ pre  w is a string
         post ans == w[len(w)-1] w[len(w)-2] ...down to... w[1] w[0]
         return ans
    }"""
===================================================
How to do it? Think of it as a game: count through the letters of w and copy each letter to the front of variable string ans. Round number i of the game would go
ans = w[i] + ans
i = i + 1

When we apply this repeatedly, we get the desired behavior --- we win the game:
===================================================
after 0 iterations:  w = "abcd"  i = 0  ans = ""
after 1 iteration:   w = "abcd"  i = 1  ans = "a"
after 2 iterations:  w = "abcd"  i = 2  ans = "ba"
after 3 iterations:  w = "abcd"  i = 3  ans = "cba"
after 4 iterations:  w = "abcd"  i = 4  ans = "dcba"
===================================================
The invariant is that the first i letters of w are found in reverse order within ans:
ans = w[i-1] w[i-2] ...down to... w[0]

You should check that this is indeed an invariant property of the proposed loop body above. This completes the function:
===================================================
def reverse(w) :
    """reverse reverses the letters in w and returns the answer"""
    """{ pre  w is a string
         post ans == w[len(w)-1] w[len(w)-2] ...down to... w[1] w[0]
         return ans
    }"""
    ans = ""
    i = 0
    """{ 1. ans == ""                                premise
         2. ans == w[i-1] w[i-2] ...down to... w[0]  algebra 1
            # ``-1 down-to 0'' is an empty range of characters from w
    }"""
    while i != len(w) :
        """{ invariant ans == w[i-1] w[i-2] ...down to... w[0]
             modifies ans, i
        }"""
        ans = w[i] + ans
        """{ 1. ans == w[i] + ansold                            premise
             2. ansold == w[i-1] w[i-2] ...down to... w[0]      premise
             3. ans == w[i] + w[i-1] w[i-2] ...down to... w[0]  algebra 1 2
        }"""
        i = i + 1
        """{ 1. i == iold + 1                                       premise
             2. ans == w[iold] + w[iold-1] ...down to... w[0]       premise
             3. iold == i - 1                                       algebra 1
             4. ans == w[i-1] w[i-2] ...down to... w[0]             subst 3 2
        }"""
    """{ 1. i == len(w)                                         premise
         2. ans == w[i-1] w[i-2] ...down to... w[0]             premise
         3. ans == w[len(w)-1] w[len(w)-2] ...down to... w[0]   subst 1 2
    }"""
    # the function's postcondition is proved
    return ans
===================================================
The ellipsis (...down to...) in the invariant is a bit imprecise; we can construct the proof more precisely with the assistance of these two recurrences:
rev(w, ans, 0)    if  ans == ""
rev(w, ans, k+1)  if  rev(w, a', k)  and  ans == w[k] + a'

That is, rev(w, a, i) says that the first i letters of string w are saved in reverse order in string a, that is,
rev(w, a, i)  exactly when  a == w[i-1] + w[i-2] + ...down to... + w[0]

Now, we need not depend on the ellipsis in our deduction; we use the recurrence to prove that the loop's invariant is rev(w, ans, i):
=================================================== """{ def rev(w, ans, 0) if ans == "" def rev(w, ans, k+1) if rev(w, a', k) and ans == w[k] + a' }""" def reverse(w) : """reverse reverses the letters in w and returns the answer""" """{ pre w is a string post rev(w, ans, len(w)) return ans }""" ans = "" i = 0 """{ 1. ans == "" premise 2. rev(w, ans, 0) if ans == "" def 3. rev(w, ans, 0) ife 2 1 # ``if elimination'' 4. i == 0 premise 5. rev(w, ans, i) subst 4 3 }""" while i != len(w) : """{ invariant rev(w, ans, i) modifies ans, i }""" ans = w[i] + ans """{ 1. ans == w[i] + ansold premise 2. rev(w, ansold, i) premise 3. rev(w, ansold, i) and ans == w[i] + ansold andi 2 1 4. rev(w, ans, i+1) if rev(w, a', i) and ans == w[i] + a' def 5. rev(w, ans, i+1) ife 3 4 }""" i = i + 1 """{ 1. i == iold + 1 premise 2. rev(w, ans, iold+1) premise 3. rev(w, ans, i) subst 1 2 }""" """{ 1. not(i != len(w)) premise 2. rev(w, ans, i) premise 3. i == len(w) algebra 1 4. rev(w, ans, len(w)) subst 3 2 }""" # the function's postcondition is proved return ans ===================================================Recurrences are the standard way of eliminating ellipses in mathematical statements. We will study them again later in the chapter.
===================================================
a = [5, 10, 15, 20, ... ]
i = 0
while i != len(a) :
    """{ invariant: ??? }"""
    a[i] = a[i] * a[i]
    i = i + 1
print a
===================================================
In words, the loop's invariant is that ``while the loop is running, all of a's elements, from 0 up to i, are squared.'' It is a little difficult to state this precisely with algebra notation. Our first attempt might read like this:
Let ain be the starting value of a. The invariant is

a[0] = ain[0] * ain[0]  ^  a[1] = ain[1] * ain[1]  ^ ... ^  a[i-1] = ain[i-1] * ain[i-1]
^
a[i] = ain[i]  ^  a[i+1] = ain[i+1]  ^ ... ^  a[len(a)-1] = ain[len(a)-1]

This is overly wordy.
There is a better way to represent the assertion (even better
than using a recurrence), using the
logical operator, ``forall''. (We will see that the for-all
operator is written FORALL, but for now we use the words, ``for all''.)
The invariant is
forall 0 <= j < i, a[j] = ain[j] * ain[j]
^
forall i <= j < len(a), a[j] = ain[j]
That is, for all j in the range from
0 to i-1, a[j] = ain[j] * ain[j], and
for all j in the range, i to len(a)-1, a[j] = ain[j].
This indicates clearly that array a is split into the segment
from a[0] up to a[i-1], whose elements are squared, and the segment,
a[i] to the end, whose elements are not yet altered.
Notice that when the loop quits, it is because i == len(a).
In this situation, the range from i to len(a)-1 is empty, and the
segment of unaltered elements is also empty.
An even more precise version uses the implication operator, -->, which we will
study shortly. It looks like this:
for all j,
( (0 <= j < i) --> (a[j] = ain[j] * ain[j]) )
^
( (i <= j < len(a)) --> (a[j] = ain[j]) )
(The notation, 0 <= j < i, should be read as 0 <= j ^ j < i.)
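The two forall clauses translate directly into Python's built-in all, which lets us check the conjectured invariant at runtime. This sketch is my own (the name square_all is hypothetical); the copy ain records the starting value of a:

```python
def square_all(a):
    ain = list(a)   # remember the starting value of a
    i = 0
    while i != len(a):
        # forall 0 <= j < i:       a[j] == ain[j] * ain[j]  (already squared)
        assert all(a[j] == ain[j] * ain[j] for j in range(0, i))
        # forall i <= j < len(a):  a[j] == ain[j]           (not yet altered)
        assert all(a[j] == ain[j] for j in range(i, len(a)))
        a[i] = a[i] * a[i]
        i = i + 1
    # when i == len(a), the unaltered segment is empty: every element is squared
    assert all(a[j] == ain[j] * ain[j] for j in range(len(a)))
    return a
```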
===================================================
def find(c, s) :
    """find locates an occurrence of c in s and returns its index.
       If c is not present in s, then -1 is returned.
       parameters: c - a letter;  s - a string
    """
    index = 0      # the position of the letter in s we are examining
    found = False  # did we find c in s yet?
    while index != len(s) and not found :
        # so far, c is not any of s[0], s[1], ..., s[index-1]
        if s[index] == c :
            found = True
        else :
            index = index + 1
    # We have exited the loop.  This happened for one of two reasons:
    #   (1) found = True, meaning that s[index] == c
    #   (2) index = len(s), meaning all of s was examined and c was not found
    if not found :
        index = -1
    else :
        pass
    return index
===================================================
Examples of execution are find("d", "abcded") returns 3, and find("d", "abc") returns -1.
The loop's invariant can be stated in terms of the value of boolean variable, found:
invariant: (~found --> (c != s[0] ^ c != s[1] ^ ...upto... ^ c != s[index-1]))
           ^
           (found --> c == s[index])

So, if ~found holds, we know c has not been located so far, and if found holds, we know c has just now been located. This states exactly the purpose of the boolean variable, found.
Another, logically equivalent way (as we will see in Chapter 5)
of stating the invariant uses an v operator:
(~found ^ (c != s[0] ^ c !=s[1] ^ ...upto... ^ c!=s[index-1]))
v
(found ^ c == s[index])
which indicates that the value of found is set to match the situation
with c and s.
(As an exercise, remove the ellipsis by using forall.)
We will use this latter invariant to analyze the loop's body. (After we study the logical properties of --> in the next chapter, we will be equipped to use both forms of the invariant.) The loop's body contains the key if-command; both of its arms are interesting:
if s[index] == c :   # have we found letter c at position index ?
    """{ s[index] == c ^ ~found ^ invariant }"""

First, the invariant and ~found carry through into the then-arm. If indeed, s[index] = c, we enact the assignment,
found = True

and this establishes the assertion, found ^ c == s[index]. Surely then, we have the weaker claim that
"""{ (~found ^ (c != s[0] ^ c !=s[1] ^ ...upto... ^ c!=s[index-1])) v (found ^ c == s[index]) }"""which is the invariant. (The ``weaker claim'' is justified by ``or-introduction'', which we study in the next chapter.)
else :
    """{ s[index] != c ^ ~found ^ invariant }"""

Since we have s[index] != c, we add this to the invariant. We do so with the assignment,
index = index + 1

This lets us assert
c != s[0] ^ c != s[1] ^ ...upto... ^ c != s[index-1]

We can correctly assert ~found ^ c != s[0] ^ c != s[1] ^ ...upto... ^ c != s[index-1]. This again can be weakened to give us the invariant. We also carry the stale fact, found --> c = s[indexold]. But this is not exactly what we want. We want found --> c = s[index]. Now, do we even know yet what is the value of s[index]? Well, no --- we will check it during the next, upcoming iteration. Fortunately, we have found = False. So, we can argue, in a technical sense, that the claim is ``vacuously true'': whenever found goes to True, then c = s[index] follows. This is still technically true (remember, found = False at the moment!), and using the laws of symbolic logic (which are carefully developed in the next chapter), we deduce
found --> c = s[index]

So, the invariant is established by the else-arm as well.
Let's summarize our analysis:
===================================================
invariant:
(~found ^ (c != s[0] ^ c !=s[1] ^ ...upto... ^ c!=s[index-1]))
v
(found ^ c == s[index])
===================================================
===================================================
def find(c, s) :
"""find locates an occurrence of c in s and returns its index.
If c is not present in s, then -1 is returned.
parameters: c - a letter; s - a string
"""
"""{ pre c is a character and s is a string
post s[index] == c v
((index = -1) ^ s[0]!=c ^ s[1]!=c ^ ...upto... s[len(s)-1]!=c)
return index
}"""
index = 0 # the position of the letter in s we are examining
found = False # did we find c in s yet?
"""{ invariant }""" # because found==False and there are no comparisons
# in the range from s[0]==c ...upto... s[0-1]==c)
while index != len(s) and not found :
"""{ invariant
(~found ^
(c != s[0] ^ c !=s[1] ^ ...upto... ^ c!=s[index-1]))
v
(found ^ c == s[index])
modifies found, index
}"""
if s[index] == c :
found = True
"""{ found ^ c == s[index] }"""
else :
index = index + 1
"""{ ~found ^
c != s[0] ^ c !=s[1] ^ ...upto... ^ c!=s[index-1]
}"""
"""{ 1. (found ^ c == s[index]) v
(~found ^ c != s[0] ^ ...upto... ^ c!=s[index-1]) premise }"""
# exited the loop:
"""{ 1. invariant premise
2. ~(index != len(s) and not found) premise
3. index == len(s) v found algebra 2
}"""
if not found :
"""{ 1. ~found premise
2. invariant premise
3. c != s[0] ^ c !=s[1] ^ ...upto... ^ c!=s[len(s)-1] ve 2 1
}"""
index = -1
"""{ 1. index == -1 premise
2. c != s[0] ^ c !=s[1] ^ ...upto... ^ c!=s[len(s)-1] premise
3. return 1 2 # collects the two lines into one
}"""
else :
"""{ 1. found premise
2. invariant premise
3. c == s[index] ve 2 1
}"""
pass
"""{ 1. c == s[index] premise }"""
"""{ 1. (index == -1 ^ c != s[0] ...upto... ^ c!=s[len(s)-1] )
v (c == s[index]) premise }"""
return index
===================================================
The if-command after the loop uses the value of found plus the invariant
to return the correct answer.
This deduces the function's postcondition.
This example used subtle deduction steps involving ve, and we are exceeding our limits of knowledge about how to deduce logical facts with the logical operators. For this reason, we must pause our study of programming logic for a study of symbolic logic, that is, the algebra of logical assertions.
c != s[0] ^ c != s[1] ^ ...upto... ^ c != s[index-1]

There is a better notation, known as the universal quantifier, ``for all'': FORALL. The phrase reads better like this:
FORALL 0 <= i < index: c != s[i]

Read the line as ``for all i such that i is >= 0 and < index, c != s[i] holds.'' This is the shortened form of stating,
c != s[0] ^ c != s[1] ^ ...upto... ^ c != s[index-1]

and there are standard laws for using FORALL in assertions.
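In Python itself, a FORALL phrase corresponds to the built-in all applied to a generator over the stated range; a sketch (the name not_found_so_far is my own):

```python
# FORALL 0 <= i < index: c != s[i]   corresponds to:
def not_found_so_far(c, s, index):
    return all(c != s[i] for i in range(0, index))
```

Note that when index is 0 the range is empty and all returns True, matching the ``empty conjunction'' reading.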
The specification of the previous example now looks like this:
===================================================
def find(c, s) :
"""find locates an occurrence of c in s and returns its index.
If c is not present in s, then -1 is returned.
parameters: c - a letter; s - a string
"""
"""{ pre c is a character and s is a string
post s[index] == c v
((index = -1) ^ FORALL 0 <= i < len(s): s[i] != c)
return index
}"""
===================================================
There is a dual operator to FORALL that asserts existence of a value. To understand how it might be used, consider this variation of the previous example.
(Before we get started, please remember that Python allows you to compute a ``slice'' (substring) of a string s, like this:
x = "abcde" y = x[:3] # y = "abc" z = x[:0] # z = ""
x = "abcde" v = x[2:] # v = "cde" w = x[5:] # w = ""
Here is the example:
===================================================
def delete(c, s) :
"""delete locates an occurrence of c in s and
removes it and returns the resulting string.
If c is not in s, a copy of s is returned, unchanged.
parameters: c - a letter; s - a string
returns: answer, a new string that looks like s with c removed
"""
"""{ pre c is a letter and s is a string
post (s[index] == c ^ answer == s[:index] + s[index+1:])
v
(answer == s ^ (FORALL 0 <= i < len(s): s[i] != c))
return answer
}"""
index = 0
found = False
answer = ""
while index != len(s) and not found :
if s[index] == c :
found = True
else :
answer = answer + s[index]
index = index + 1
if found :
answer = answer + s[index+1:]
else :
pass
return answer
===================================================
Examples:
delete("d", "abcded") returns "abced", and
delete("d", "abc") returns "abc".
You are welcome to deduce that the program meets its postcondition. (Reuse the loop invariant from find in the previous exercise.)
But there is a technical problem --- variable index is a variable local to the function's body and is not part of the precondition nor is it part of the answer returned by delete --- it makes no sense to include it in the postcondition. Worse yet, its presence can lead to false deductions at the point where the function is called!
(An example:
index = 2
t = "abcd"
u = delete("a", t)
"""{ at this point, we certainly cannot assert that t[2] = "a"! }"""
)
We must hide the name, index, from the function's postcondition.
We do it like this:
EXIST 0 <= i < len(s): s[i] = c ^ answer = s[:i] + s[i+1:]
Read EXIST as ``there exists''; it is called an
existential quantifier.
So, the assertion reads,
``there exists some value i, such that i is >=0 and < len(s)
such that s[i] = c ^ answer = s[:i] + s[i+1:].''
The existential quantifier hides the local variable name inside function delete so that it is not incorrectly used by the command that calls the function. We will study both FORALL and EXIST in Chapter 6.
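Just as FORALL corresponds to Python's all, EXIST corresponds to Python's any. For illustration, delete's postcondition can be transcribed into a hypothetical checker of my own devising (the name delete_post is not part of the program):

```python
def delete_post(c, s, answer):
    # EXIST 0 <= i < len(s): s[i] == c ^ answer == s[:i] + s[i+1:]
    found_case = any(s[i] == c and answer == s[:i] + s[i+1:]
                     for i in range(len(s)))
    # ... or c never occurs in s and answer == s
    absent_case = (answer == s) and all(s[i] != c for i in range(len(s)))
    return found_case or absent_case
```

The bound variable i is local to the any expression, just as the quantified i is hidden inside the postcondition.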
===================================================
def fact(n) :
    """{ pre  n >= 0
         post answer == n!
         return answer
    }"""
    i = 0
    answer = 1
    """{ answer == i! }"""
    while i != n :
        """{ invariant answer == i!
             modifies answer, i
        }"""
        answer = answer * (i+1)
        """{ answer == i! * (i+1) }"""
        i = i + 1
        """{ answer == i! }"""
    """{ 1. i == n
         2. answer == i!
         3. answer == n!
    }"""
    return answer
===================================================
What happens for fact(-1)? No answer is returned, because the precondition is violated and the loop is unable to terminate. Yet if we ignored the function's precondition, the proof of the loop would remain the same?!
The deduction law for loops guarantees, if the loop terminates,
then the postcondition must hold true.
There can be silly applications of the loop law. Consider this
faulty program:
===================================================
def f(n) :
"""{ pre n is an int }"""
"""{ post answer == n! }"""
i = 0
answer = 1
"""{ answer == i! }"""
while i != n :
"""{ invariant answer == i! }"""
pass
"""{ answer == i! }""" # but no variables are modified!
"""{ 1. i == n
2. answer = i!
3. answer = n!
}"""
return answer
===================================================
The proof of f's postcondition is correct.
But the loop body preserves the invariant only because its body, pass,
is too timid to make any progress at all towards the goal.
So, the loop never terminates. Now, if the loop would terminate,
then the proof shows we will achieve the goal. But, for every argument
but 0, the loop will not terminate.
Because of this limitation of the loop law,
it is called a partial correctness law.
To ensure total correctness, that is, to prove that the loop must terminate
and satisfy its goal, we must use additional reasoning.
The reasoning is usually based on a numerical, ``count down'' measure,
which counts the number of iterations the loop needs to do before
it quits and achieves its goal. Here is the first example again, annotated with a termination measure:
===================================================
def fact(n) :
    """{ pre  n >= 0
         post answer == n!
         return answer
    }"""
    i = 0
    answer = 1
    """{ answer == i! }"""
    while i != n :
        """{ invariant answer == i!
             modifies answer, i
             termination measure n - i   # must compute to a nonnegative int!
        }"""
        answer = answer * (i+1)
        i = i + 1
        """{ answer == i! }"""
        # at this point, the termination measure has decreased
    # at this point, the termination measure equals 0
    """{ i == n ^ answer == i! }"""
    return answer
===================================================
The numerical expression, n - i, measures an upper bound of
how many iterations
are needed to cause the loop to quit. The value of the measure
must always compute to a nonnegative integer, and after each iteration of the
loop, the measure must decrease by at least 1. This means, after
some finite number of iterations, the measure hits 0 and the loop stops.
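Both termination conditions can themselves be checked at run time. In this hypothetical instrumented version of the factorial loop, we assert on every iteration that the measure, n - i, is nonnegative and strictly smaller than it was on the previous iteration:

```python
def fact_measured(n):
    """Factorial loop with runtime checks on the termination measure n - i."""
    assert n >= 0
    i = 0
    answer = 1
    prev_measure = None
    while i != n:
        measure = n - i
        assert measure >= 0                  # measure is a nonnegative int
        if prev_measure is not None:
            assert measure < prev_measure    # measure strictly decreases
        prev_measure = measure
        answer = answer * (i + 1)
        i = i + 1
    return answer

print(fact_measured(6))   # 720
```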
We won't develop this further....
When we apply the loop law and prove a loop property with an invariant, we make the claim, ``no matter how many times the loop iterates, if/when it finally quits, the invariant property holds true.''
Why do we know this?
We can support our claim with this explanation.
Say that we proved these assertions:
"""{ I }""" (prove this)
while B :
"""{ B ^ I }""" (assume this)
C
"""{ I }""" (prove this)
That is, we showed that property I is true the first time
we reach the loop, and we showed that the loop's body, C
preserves I.
Now, say that the loop iterates some finite number of times before it quits; how do we know I still holds at that point?
If we can argue the correctness of an assertion or a behavior in terms of some count that must be a nonnegative integer, then we can write the proof in terms of two cases --- the ``0-count case'' and the ``k+1-count case''. The 0-count case must be written so that it is a self-contained proof (like the above example), and k+1-count case must be written so that it appeals to the previous case (k) to finish its proof (like the above example).
Then, for every nonnegative integer, we can assemble a proof that the assertion or behavior is true, just like in the above example. A proof written in this style is called a proof by mathematical induction.
By using the above argument by mathematical induction,
what we have proved is this:
If we have proved,
"""{ I }"""
while B :
"""{ B ^ I }"""
C
"""{ I }"""
then,
"""{ forall i >= 0:
(the while-loop terminates in i iterations)
-->
(on termination, ~B ^ I holds true)
}"""
This is the justification for the while-loop law ---
all possible finite executions must preserve I.
Say that we have a question to answer, or a problem to solve, or a program to test, and there are an infinite number of possibilities/cases/tests to consider. We cannot consider all the possibilities one by one. How do we cover them all? Proof by mathematical induction shows us how:
For the question/problem/program, say that we can arrange its possibilities/cases/tests so that they are ordered as 0, 1, 2, .... To cover all the cases and prove our goal, we write two proofs:
Basis: a self-contained proof that case 0 is correct.
Induction: a proof that case k+1 is correct, which may appeal to the correctness of the previous case, k (the induction hypothesis).
This is because every nonnegative int, n, has a position in the infinite chain, 0, 1, 2, ..., n-1, n, ... and we can therefore use the Basis and Induction proofs to argue that all of 0, then 1, then 2, etc., all the way up to n-1 and then n are correct.
In this way, we intellectually cover all the possibilities with just two cleverly formulated proof schemes.
===================================================
n = readInt("Type a nonnegative int: ")
assert n >= 0
i = 0
sum = 0
while i != n :
    i = i + 1
    sum = sum + i
print sum
===================================================
You might test this program with some sample input integers to see if it behaves properly. Which test cases should you try? How many test cases should you try? Are you ever certain that you have tested the program sufficiently? (These questions are famous ones and do not have clear-cut answers. There are even mathematical proofs that show it is impossible to have a guaranteed-sufficient testing strategy for an arbitrary program.)
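One way to make these testing questions concrete is a small harness. The sketch below (Python 3; `total` replaces the name `sum`, which shadows a Python builtin) wraps the summing loop as a function and checks it against an independently computed total for inputs 0 through 50:

```python
def sum_loop(n):
    """The summing loop above, wrapped as a function."""
    assert n >= 0
    i = 0
    total = 0
    while i != n:
        i = i + 1
        total = total + i
    return total

# "systematic testing": try n = 0, 1, 2, ..., 50, comparing each result
# against Python's built-in sum of range(n+1)
for n in range(51):
    assert sum_loop(n) == sum(range(n + 1))
print(sum_loop(6))   # 21
```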
For our example, we might do this ``systematic testing'':
The test cases are connected. Rather than test 5, 6, ..., 6000, ... one by one, we realize that there are really only two distinct test cases: the one for 0, which makes the loop stop immediately, and the one for a positive int, call it k+1, where if the test case for k worked correctly, then we can argue that running the loop body one more time will make the k+1 test work correctly. When we make an argument in this style, we are using mathematical induction as a kind of ``systematic testing,'' covering all possible test cases.
Here is our mathematical induction proof for the above example:
CLAIM: For every input n in the set, {0,1,2,...}, the program will compute sum == 0 + ...up to... + n in n iterations.
The proof is made by mathematical induction:
Basis: for input 0, the loop stops immediately and sum == 0, as claimed.
Induction: assume the claim holds for input k, that is, k iterations compute sum == 0 + ...up to... + k. For input k+1, the loop runs those same k iterations and then one more, which adds k+1 to sum, computing 0 + ...up to... + (k+1) in k+1 iterations.
Programs that use ``counting loops'' can often be argued correct using this technique. Indeed, if you reread the previous section (``Loop invariants and mathematical induction''), you will realize the point of finding a loop invariant is so that a mathematical induction argument can be made with the invariant --- when the loop quits, the invariant must hold true.
Summing a nonnegative int up to n can be defined as
0 + 1 + ...up to... + n. But in algebra, it is defined like this:
SIGMA0 = 0
SIGMA(k + 1) = SIGMAk + (k + 1)
These two definitions ``look'' like they compute the same totals, and we can prove that they do:
Proposition: For all nonnegative ints, n >= 0, SIGMAn == 0 + 1 + ...up to... + n.
The proof is done by mathematical induction:
Basis: SIGMA0 = 0, which equals the sum for n == 0.
Induction: assume SIGMAk == 0 + 1 + ...up to... + k. Then SIGMA(k + 1) = SIGMAk + (k + 1) = (0 + 1 + ...up to... + k) + (k + 1), as required.
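The algebraic definition of SIGMA translates directly into a recursive function, and we can spot-check (for finitely many cases; only the induction proof covers them all) that it agrees with the repeated sum. A sketch in Python 3:

```python
def sigma(n):
    """SIGMA from the algebraic definition:
       SIGMA 0 = 0 ;  SIGMA(k+1) = SIGMA k + (k+1)"""
    if n == 0:
        return 0
    return sigma(n - 1) + n

# check agreement with the repeated sum 0 + 1 + ... + n for n = 0 .. 99
for n in range(100):
    assert sigma(n) == sum(range(n + 1))
print(sigma(6))   # 21
```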
Here is a traditional use of mathematical induction --- proving a fact about algebra and numbers.
It might appear a bit surprising, but the repeated sum,
0 + 1 + 2 + ... + n,
always totals up to
((n*n)+n)/2, no matter what value nonnegative integer, n, might be.
Try some examples, and you will see this is true, e.g.,
0 + 1 + 2 + 3 + 4 + 5 + 6 = 21 equals ((6*6)+6)/2, that is, 42/2 = 21
How do we know this formula, ((n*n)+n)/2, works for all possible
nonnegative integers? Should we try them all? We can use
the mathematical induction technique to prove, once and for all,
that the formula works correctly for all nonnegatives.
Here is what we want to prove:
                                                 (n*n) + n
For all integers, n >= 0,  0 + 1 + 2 + ... + n = ---------
                                                     2
(If you wish, you can think of n as the ``input,''
and think of
0 + 1 + 2 + ... + n as a ``loop program,'' and think of
the formula, ((n*n)+n)/2, as the program's ``correctness property.'')
We make the proof by mathematical induction, meaning there are two cases to analyze:
Basis: for n == 0, the sum is just 0, and the formula gives ((0*0)+0)/2 = 0, so the two agree.
Induction: assume the claim holds for k, that is,
    0 + 1 + 2 + ... + k = ((k*k)+k)/2
We must show that 0 + 1 + 2 + ... + k + (k+1) equals (((k+1)*(k+1)) + (k+1)) / 2. We do algebra on the second expression, like this:
(k+1)*(k+1) + (k+1)   (k*k) + 2k + 1 + (k+1)   (k*k) + k + 2k + 2
------------------- = ---------------------- = ------------------
         2                      2                       2

and we can split this fraction into two pieces:
(k*k) + k + 2k + 2   (k*k) + k   2k + 2   (k*k) + k
------------------ = --------- + ------ = --------- + (k + 1)
        2                2          2          2

But we recall the induction hypothesis, which we use to substitute:
(k*k) + k
--------- + (k + 1) = (0 + 1 + 2 + ... + k) + (k + 1)
    2

This proves the desired result.
The result is proved, with the basis step and the induction step, by mathematical induction. These two cases cover all the nonnegative ints, starting from zero and counting upwards as often as needed for arbitrarily large positive integers. (Example: we now know that 0+1+...+500 equals ((500*500)+500)/2, because we can use the basis step to start and apply the induction step 500 times to follow and to obtain the result for 500.)
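The formula is also easy to spot-check mechanically. This short sketch confirms the equality for n = 0 through 999; of course, only the induction proof covers all nonnegative integers:

```python
# spot-check: 0 + 1 + ... + n == ((n*n) + n) / 2 for n = 0 .. 999
for n in range(1000):
    repeated_sum = sum(range(n + 1))      # 0 + 1 + 2 + ... + n
    formula = ((n * n) + n) // 2          # n*n + n is always even, so // is exact
    assert repeated_sum == formula
print("checked n = 0 .. 999")
```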
To finish, we repeat the mathematical-induction proof law: when you are asked to prove a property of the form, ``for all nonnegative integers, i, Pi holds true'', you can do so in two steps:
Basis: prove that P0 holds true.
Induction: assume that Pk holds true (the induction hypothesis), and use it to prove that P(k+1) holds true.
This is the argument that justifies the while-loop law, which we also repeat:
"""{ ... I }""" while B : """{ invariant I modifies VARLIST (those variables updated in C) }""" """{ 1. B premise 2. I premise ... }""" C """{ ... I }""" """{ 1. ~B premise 2. I premise ... }"""