Copyright © 2008 David Schmidt

Chapter 0:
Circuits and truth tables


0.1 Base 2 arithmetic
0.2 Gates and truth tables
0.3 Hardware description language
0.4 Programming an adder
0.5 Knowledge travels along the wires



0.1 Base 2 arithmetic

Electronic computers do arithmetic in base 2, where numbers are expressed with 1s and 0s. (E.g., 13 is expressed as 00001101 in base 2.) A base 2 number is stored in a register or computer word, where each position (bit) in the word is wired so that it can hold a low-voltage charge (0) or a high-voltage charge (1). The wiring of each register/word lets the computer sense the voltage level at each bit and change the voltage level.

A computer does arithmetic by sensing the voltage levels at the bits and computing new voltage levels, which are deposited into another word. For example, the addition of two words is done by sensing the bits of the words, right to left, and computing new bits in the answer. Consider the addition of 21 to 30, which we write like this:

  0 0 0 1 0 1 0 1
+ 0 0 0 1 1 1 1 0
------------------
                1
              1
            0      (with a ``carry'' of 1)
          0        (because carry 1 + 0 + 1 is 0 with carry of 1)
        1          (because carry 1 + 1 + 1 is 1 with carry of 1)
      1            (because carry 1 + 0 + 0 is 1)      
    0
  0

So,
  0 0 1 1 0 0 1 1
(that is, 51) is deposited into a new register/word.
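
We can check the worked example with ordinary Python, whose base-2 literals and formatting are a convenience we assume here, separate from the circuit story:

p = 0b00010101                  # 21
q = 0b00011110                  # 30
print(p + q)                    # 51
print(format(p + q, '08b'))     # 00110011
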
The computation steps for each bit-addition are wired together with electronic sensors, called gates. There are three basic gates that suffice for doing all base-2 computations.


0.2 Gates and truth tables

Here are the three basic gates:
===================================================

AND:  [drawing: an AND gate with two input wires, P and Q, and one output wire]

OR:   [drawing: an OR gate with two input wires, P and Q, and one output wire]

NOT:  [drawing: a NOT gate with one input wire, P, and one output wire]

===================================================
In the above drawings, the input wires are labelled with the names P and Q. The output that is computed is emitted from the rightmost wire that exits the gate. The AND gate emits a high voltage (1) exactly when high voltages are sensed at input wires P and Q; otherwise low voltage (0) is emitted. The gate's physical behavior is summarized by a table called a truth table:
AND: P Q | 
---------------
     1 1 | 1
     1 0 | 0
     0 1 | 0
     0 0 | 0
For the remainder of this course, we will use t (read ``true'') for 1 and f (read ``false'') for 0. This is because we will examine applications that go far beyond circuit theory and base 2 arithmetic. Here are the truth tables for the AND, OR, and NOT gates:
AND: P Q |       OR: P Q |        NOT: P |
-------------    ------------     -----------
     t t | t         t t | t           t | f
     t f | f         t f | t           f | t
     f t | f         f t | t
     f f | f         f f | f
Note that OR is sometimes called ``inclusive or'', because as long as one of its inputs is true, then its output is true.
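
As a preview of the programming notation used later in this chapter, here is a minimal sketch of the three gates as one-bit Python functions, where Python's True and False play the roles of t and f and its built-in and, or, not do the gates' work:
===================================================

def AND(P, Q):
    return P and Q

def OR(P, Q):
    return P or Q

def NOT(P):
    return not P

# Reproduce the AND truth table row by row:
for P in (True, False):
    for Q in (True, False):
        print(P, Q, '|', AND(P, Q))

===================================================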

Each gate defines a one-bit arithmetic operation. (Later, we will see how to make a multi-bit operation, e.g., an 8-bit adder, as was illustrated in the example.)

It is standard to write each gate in a linear notation, that is, instead of drawing the AND gate with inputs P and Q, we write P ^ Q. (The tradition of writing linear notations to represent two-dimensional structures goes back centuries in physics and math.) The notations are

AND  is  ^
OR   is  v
NOT  is  ~

We can also compose the gates to define new operations. For example, the circuit that feeds the output of an AND gate on P and Q into a NOT gate, written ~(P ^ Q), defines this computation of outputs:
     P Q |
---------------
     t t | f
     t f | t
     f t | t
     f f | t
which we can work out in stages, like this:
     P Q | ~ (P ^ Q)
    ------------------------- 
     t t | f     t
     t f | t     f
     f t | t     f
     f f | t     f
(Notice that the circuit's output is written in the column underneath the NOT symbol, ~, which is the outermost (last) operator/gate in the circuit.)
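
Tables like this one can be generated mechanically. Here is a minimal sketch in ordinary Python (itertools.product enumerates the rows in the same order as above; the name circuit is our own choice):
===================================================

from itertools import product

def circuit(P, Q):
    A = P and Q          # the AND gate
    return not A         # the NOT gate, so the circuit is ~(P ^ Q)

for P, Q in product((True, False), repeat=2):
    print(P, Q, '|', circuit(P, Q))

===================================================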

We can make circuits that take three or more inputs, e.g., (~(P ^ Q)) v R computes

     P Q R |  (~ (P ^ Q) )  v  R
    ------------------------------------------
     t t t |   f    t       t
     t t f |   f    t       f
     t f t |   t    f       t
     t f f |   t    f       t
     f f t |   t    f       t
     f t f |   t    f       t
     f t t |   t    f       t
     f f f |   t    f       t
(Here, the rightmost column, underneath OR, defines the output.) We see this circuit emits false only when P and Q are both true and R is false.


0.3 Hardware description language

The examples show that circuit theory is an arithmetic built from true, false, and AND, OR, NOT gates. Indeed, hardware description languages for circuit building are little more than arithmetic expressions written to look like assignment commands. Here's how we do this: when we write

A = P ^ Q
we mean that the inputs to an AND gate are wires named P and Q and the output from the gate is a wire named A. Using this assignment notation, we can write the earlier example, ~(P ^ Q), like this:
A = P ^ Q
OUT = ~ A
where P and Q are the inputs to the compound circuit and OUT is the name of the final output from the circuit. In fact, we might even embed the assignment equations that define the circuit into a function, like this:
===================================================

def MyGate(P, Q):
    A = P ^ Q
    OUT = ~ A
    return OUT

===================================================
(We will use Python notation for our programming examples.)

Then, every time we must manufacture a MyGate for a chip, we use the name, MyGate. Here is an example:

# say that the inputs to this circuit are  X, Y, and Z,  and the output is  OUT:
M = MyGate(X, Y)
N = MyGate(M, Z)
OUT = ~N
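
If you want to execute these definitions rather than fabricate them, here is a hedged Python translation, with and, not standing in for ^, ~ and sample values chosen for X, Y, Z:
===================================================

def MyGate(P, Q):
    A = P and Q
    OUT = not A
    return OUT

X, Y, Z = True, False, True   # sample inputs for the composed circuit
M = MyGate(X, Y)
N = MyGate(M, Z)
OUT = not N
print(M, N, OUT)              # True False True for these sample inputs

===================================================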


0.4 Programming an adder

Our programming notation is a hardware description language, and it will be useful when we build an 8-bit adder. To do this, we must implement these two tables, which define the two computations for one-bit addition. The first table defines how to add the bits from two registers P and Q (plus the incoming carry bit, C); the second defines how to compute the carry bit, which will be supplied to the next addition step:

ADDONE:  C P Q |             CARRY:  C P Q |
--------------------         -------------------
         t t t |  t                  t t t |  t
         t t f |  f                  t t f |  t
         t f t |  f                  t f t |  t
         t f f |  t                  t f f |  f
         f f t |  t                  f f t |  f
         f t f |  t                  f t f |  f
         f t t |  f                  f t t |  t
         f f f |  f                  f f f |  f
For example, ADDONE(t,t,t) = t, because a carry of 1 (true) plus the two bits 1 and 1 (t and t) yields a sum bit of 1 (t). CARRY(t,t,t) = t because the outgoing carry bit is 1 (t) as well.
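
One way to double-check both tables is to compare them with ordinary integer addition of the three bits. This is our own checking code, not part of the hardware language; its rows come out in the standard ordering, which differs slightly from the tables above:
===================================================

from itertools import product

for C, P, Q in product((True, False), repeat=3):
    total = int(C) + int(P) + int(Q)      # 0, 1, 2, or 3
    addone = (total % 2 == 1)             # the sum bit that ADDONE must produce
    carry = (total >= 2)                  # the carry bit that CARRY must produce
    print(C, P, Q, '|', addone, carry)

===================================================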

Here is a coding of CARRY:

def CARRY(C, P, Q) :
    A = P ^ Q
    B = P ^ C
    D = Q ^ C
    OUT = (A v B) v D
    return OUT
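
For readers who want to execute CARRY, here is a hedged translation into ordinary Python, with and, or in place of ^, v:
===================================================

def CARRY(C, P, Q):
    A = P and Q
    B = P and C
    D = Q and C
    OUT = (A or B) or D
    return OUT

print(CARRY(True, True, True))     # True, matching the first row of the CARRY table
print(CARRY(False, True, False))   # False, matching the  f t f  row

===================================================
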
You are left the exercise of writing the function that computes ADDONE from its three inputs. (Hint: define a helper function, XOR(P,Q), that defines (P v Q) ^ ~(P ^ Q), and call this function twice in your definition of ADDONE.)

Given these two functions, we can program a 4-bit adder like this: Let register P be an array (list) of 4 t-f values; let register Q be the same. (That is, P[0] is the first (left) bit of P, P[1] is the next bit of P, etc.)

Say that we have designed (coded) ADDONE and CARRY. We can define the adder, ADD4, which deposits its answer, an array (register) of four t-f values, into a third register, R:

===================================================

def ADD4(P, Q, R):
   """ADD4  reads the t-f values in arrays/registers P and Q and deposits
      its answer into a register, R, that holds the sum of P and Q."""

   R[3] = ADDONE(False, P[3], Q[3])   # no carry when we start
   C3 = CARRY(False, P[3], Q[3])

   R[2] = ADDONE(C3, P[2], Q[2])
   C2 = CARRY(C3, P[2], Q[2])

   R[1] = ADDONE(C2, P[1], Q[1])
   C1 = CARRY(C2, P[1], Q[1])

   R[0] = ADDONE(C1, P[0], Q[0])  # and we lose (forget) the carry bit

===================================================
We can try it:
RegP = [0,0,1,0]
RegQ = [0,0,1,1]
RegR = [0,0,0,0]
ADD4(RegP, RegQ, RegR)
The example adds 2 to 3 and deposits 5 into RegR (that is, [0,1,0,1]).
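
To actually run this example, you need working definitions of ADDONE and CARRY. Here is a minimal, self-contained sketch that uses arithmetic stand-ins for them (our own shortcut, not the gate-level definitions the exercise above asks for), together with ADD4 copied from the text:
===================================================

def ADDONE(C, P, Q):     # arithmetic stand-in: the sum bit of C + P + Q
    return (int(C) + int(P) + int(Q)) % 2 == 1

def CARRY(C, P, Q):      # arithmetic stand-in: the carry-out of C + P + Q
    return int(C) + int(P) + int(Q) >= 2

def ADD4(P, Q, R):       # exactly as defined in the text
    R[3] = ADDONE(False, P[3], Q[3])
    C3 = CARRY(False, P[3], Q[3])
    R[2] = ADDONE(C3, P[2], Q[2])
    C2 = CARRY(C3, P[2], Q[2])
    R[1] = ADDONE(C2, P[1], Q[1])
    C1 = CARRY(C2, P[1], Q[1])
    R[0] = ADDONE(C1, P[0], Q[0])

RegP = [0, 0, 1, 0]      # 2
RegQ = [0, 0, 1, 1]      # 3
RegR = [0, 0, 0, 0]
ADD4(RegP, RegQ, RegR)
print(RegR)              # [False, True, False, True], that is, 0101 = 5

===================================================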

You can read the coding of ADD4 as a computer program, but it is also a sequence of instructions for wiring a circuit that connects to three registers (that is, three arrays of four cells each) and sends voltage levels along the connection points to alter the values in the registers' cells.

A true hardware description language would let us use loop code to define the adder's wiring. Here is a coding for an 8-bit adder that uses a loop:

===================================================

size = 8   # how many bits are held by a register

def ADD(P, Q, R):
   """ADD  reads the t-f values in registers P and Q and deposits the
      their sum into register R."""
   num = size - 1
   carry = False
   while num >= 0 :
       R[num] = ADDONE(carry, P[num], Q[num]) 
       carry = CARRY(carry, P[num], Q[num])
       num = num - 1

===================================================
A hardware fabrication machine converts code like the above into microcode that is burned into a chip's ROM (read-only memory for its controller), or it might even generate a wiring layout on a chip template itself. (In the latter case, the loop is unfolded into size-many copies, which are translated into wirings.) Notice that the leftmost carry bit is lost, which is standard --- when that lost carry is 1, the result has overflowed (arithmetic overflow).
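
The effect of the lost carry can also be seen with ordinary Python integers: because the leftmost carry is dropped, 8-bit addition behaves like addition modulo 2**8. (This is just an illustration with Python's built-in arithmetic, not part of the hardware description language.)

print(format((0b11111111 + 0b00000001) % 256, '08b'))   # 00000000: 255 + 1 wraps around
print(format((200 + 100) % 256, '08b'))                 # 00101100: 300 overflows to 44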


0.5 Knowledge travels along the wires

So far, we have pretended that low- and high-voltage levels travel along the wiring of a circuit. But we can also argue that it is knowledge that travels along the wires. This can be seen with some pictures.

Here is a circuit:

[drawing: wires P and Q enter an AND gate whose output wire is R; wires R and Q enter an OR gate whose output wire is S; wire S enters a NOT gate whose output wire is T]

and here is its coding in the hardware description language:
R = P ^ Q
S = R v Q
T = ~ S
Let's redraw the circuit vertically and lay it side by side with the assignment equations:
===================================================

[drawing: the circuit redrawn vertically, with a star marking the program point after each of the three gates, laid alongside the three assignment equations]

===================================================
Now, at each of the program points, marked by stars, what information is kept in the wires? First, we might use the circuit with some inputs to see ``what happens.'' Say that we supply t for P and f for Q:
===================================================

[drawing: the same circuit annotated with the values on its wires: P = t, Q = f, R = f after the AND gate, S = f after the OR gate, and T = t at the output]

===================================================
The wiring diagram shows the values on the wires labelled P, Q, R and S as they travel through the circuit. But this is just tracking the values of the variables in the assignment program we wrote!
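
In fact, the three assignments can be run directly as Python, with and, or, not standing in for ^, v, ~; here is a minimal sketch of the trace just described:

P, Q = True, False   # supply t for P and f for Q
R = P and Q          # the AND gate:  R is False
S = R or Q           # the OR gate:   S is False
T = not S            # the NOT gate:  T is True
print(P, Q, R, S, T)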

What if we don't know what Q will be? Say that P is t. What can we calculate about the circuit anyway? It's this:

===================================================

[drawing: the circuit annotated with the knowledge deduced at each program point: P = t at the input, R = Q after the AND gate, S = Q after the OR gate, and T = ~Q at the output]

===================================================
In the diagram, we see that R = Q is stated after the AND gate. How do we know this?

First, we do know that R = P ^ Q. But P = t. We substitute t for P and obtain R = t ^ Q. Next, we do a cases analysis and consider Q's possible values: if Q is t, then t ^ Q is t ^ t, which simplifies to t, that is, to Q's value. Similarly, when Q is f, then t ^ Q is t ^ f, which simplifies to f, again Q's value. Hence, in both cases, t ^ Q equals Q.

The above reasoning is a deduction --- we deduced from the facts P = t and R = P ^ Q that R = Q. The whole point of this course is to learn how to make such deductions.

The other deductions in the example are calculated with similar uses of substitution, simplification, and cases analysis.

Next, say that we don't know anything about P and Q as inputs. What can we calculate about the circuit? It's this:

===================================================

[drawing: the circuit annotated with the knowledge deduced at each program point: R = P ^ Q after the AND gate, S = Q after the OR gate, and T = ~Q at the output]

===================================================
Notice that in the definition of S, we can deduce that S = Q from the information that R = P ^ Q and S = R v Q. The deduction can be made with substitution, simplification, and cases analysis. Or, we can build the truth tables for (P ^ Q) v Q and Q and see that they are the same. For humans, applying deduction steps is quicker than writing huge truth tables. (For computers, it is often --- not always --- the opposite.)
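
Here is that truth-table comparison written as a small Python enumeration (our own checking code, with and, or standing in for ^, v): in every row, (P ^ Q) v Q agrees with Q.

from itertools import product

for P, Q in product((True, False), repeat=2):
    lhs = (P and Q) or Q                      # the circuit (P ^ Q) v Q
    print(P, Q, '|', lhs, '|', lhs == Q)      # the last column is True in every row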

Even though we knew nothing about the inputs to the circuit, we deduced that the circuit's output will be just ~ Q. The example shows that we can reason about what the circuit will do in terms of the design we followed. This idea generalizes nicely to computer programs that are ``circuits'' on numbers (and not just on true and false)!