It is important to understand binary addition from module 0283 first. This is because subtraction has many of the same properties, except for one minor and subtle difference.
Recall how the result and carry are defined in addition. In subtraction, we have a result and a borrow instead of a carry.
It always helps to first take a look at base-10 subtraction. \(9-2=7\), meaning the result of \(9-2\) is \(7\), but there is also an implicit borrow of \(0\). What about \(2-9\)? Normally, we say that \(2-9=-7\), which is correct. However, this view is not useful for multi-digit subtraction. As a result, we view the subtraction more like \(2-9=3-1\cdot 10\). The \(3\) is the result, but the \(1\cdot 10\) is a borrow from the next digit. In other words, \(2-9\) has a result of \(3\) and a borrow of \(1\).
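As a quick multi-digit illustration, consider \(42-19\), worked digit by digit from the right:

\begin{align*} 2-9 &= 3-1\cdot 10 \\ 4-1-1 &= 2 \end{align*}

The ones digit produces a result of \(3\) and a borrow of \(1\); the tens digit then computes \(4-1-1=2\) with no further borrow, so \(42-19=23\).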
Unlike addition, subtraction is not commutative. As a result, the subtraction table that we had to memorize has no symmetry, and there are more entries to remember. What about binary subtraction? Using the same symbols as for addition, let us take a look at the result and borrow functions; the only renaming is that the carry function \(c\) becomes the borrow function \(b\). For single binary digits, \(r(0,0)=0\) with \(b(0,0)=0\), \(r(0,1)=1\) with \(b(0,1)=1\), \(r(1,0)=1\) with \(b(1,0)=0\), and \(r(1,1)=0\) with \(b(1,1)=0\).
That’s it! You can see how binary subtraction is easier to learn than base-10 subtraction!
You may recognize that \(r\) is the same for both addition and subtraction. Indeed, we don’t have to change \(r\) at all for subtraction. \(b\) is different from \(c\) in a subtle way. With carry, the only time there is a carry of 1 is when both binary digits being added are 1.
In the case of borrow, the only time there is a borrow of 1 is when the minuend (the first number) is a 0 and the subtrahend (the second number) is a 1. There is no single common boolean operator to do this, but we can define borrow as follows:
\(b(x,y)=!xy\)
You may want to verify the validity of this function using a truth table. Do this as an exercise.
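If you prefer to check by brute force, the short Python sketch below enumerates the same truth table (Python and the name "borrow" are not part of this module; they are used here only for illustration):

```python
# Enumerate the truth table of the borrow function b(x, y) = !x y.

def borrow(x, y):
    return (1 - x) & y          # (NOT x) AND y

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y} borrow={borrow(x, y)}")
```

The output shows a borrow of \(1\) only when \(x=0\) and \(y=1\), matching the definition above.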
The bottom line, however, is that we can define the borrow function using logical operators. The term “logical operator” simply means a symbol that stands for a boolean operation in mathematics and programming. Each logical operator has a corresponding logic gate in circuit design.
We can define half subtractors and full subtractors much like the half adders and full adders in module 0283. Implementing a 3-bit subtractor using this method in Logisim is a good exercise.
As a hint, a subtractor implemented this way shares much of the structure of a matching adder, including the main disadvantage of a carry-propagated adder: speed. In order for a subtraction to conclude, the borrow bits must be propagated from the least significant digit to the most significant digit. This leads to a computation time that is linear in (proportional to) the width of the number.
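As a reference point (not a replacement for the Logisim exercise), here is a minimal Python sketch of such a ripple-borrow subtractor; the function names and the little-endian bit-list representation are choices made for this sketch only:

```python
# Ripple-borrow subtractor sketch: each bit uses the result and borrow functions,
# and the borrow must ripple from bit 0 upward before the final difference is known.

def full_subtractor(x, y, t_in):
    """One bit of x - y with an incoming borrow t_in.
    Returns (difference bit, outgoing borrow)."""
    q = x ^ y                                   # r(x, y): same as the addition result
    d = q ^ t_in                                # subtract the incoming borrow as well
    t_out = ((1 - x) & y) | ((1 - q) & t_in)    # borrow out: !x y  OR  !q t_in
    return d, t_out

def ripple_subtract(x_bits, y_bits):
    """Subtract two little-endian bit lists of equal width."""
    t = 0
    diff = []
    for x, y in zip(x_bits, y_bits):
        d, t = full_subtractor(x, y, t)
        diff.append(d)
    return diff, t                              # a final borrow of 1 means the result is negative

# Example: 6 - 3 = 3 in 3 bits (bit 0 first).
print(ripple_subtract([0, 1, 1], [1, 1, 0]))    # ([1, 1, 0], 0), i.e. 011 = 3
```

Notice that difference bit \(i\) cannot be produced until the borrow from bit \(i-1\) is known; this is exactly the linear-time behavior described above.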
Obviously, a better method must exist!
The main difference between an adder and a subtractor is that the carry function is replaced by the borrow function. As a result, the combined borrow from the next digit is also different.
Recall that in an adder, the carry to bit \(i+1\) is \(k_{i+1}=c(x_i,y_i)+c(q_i,k_i)\). The “addition” symbol is really disjunction because we are replacing all the arithmetic operators with logical operators. We can, similarly, define a combined borrow (using the letter “t” for “take”) as follows:
\(t_{i+1}=b(x_i,y_i)+b(q_i,t_i)\)
This is because there are two chances where a borrow is needed. The first is when we perform the simple subtraction of \(y_i\) from \(x_i\), the subtrahend and minuend, respectively. The second is when we subtract the borrow of bit \(i\), \(t_i\), from \(q_i\), the intermediate result of \(x_i-y_i\). At most, only one of these two situations can make a borrow necessary.
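The claim that at most one of these two borrows can be \(1\) is easy to verify exhaustively. The Python sketch below (the helper simply mirrors the borrow function) checks all eight combinations of \(x_i\), \(y_i\), and \(t_i\):

```python
# Check that b(x, y) and b(q, t) are never both 1, so at most one borrow
# is ever needed at a given bit position.

def b(x, y):
    return (1 - x) & y          # borrow is 1 only when x = 0 and y = 1

for x in (0, 1):
    for y in (0, 1):
        for t in (0, 1):
            q = x ^ y           # the intermediate result of x - y
            assert b(x, y) + b(q, t) <= 1
print("at most one borrow in every case")
```

The assertion never fails because \(b(x_i,y_i)=1\) forces \(q_i=1\), which in turn forces \(b(q_i,t_i)=0\).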
Let’s see what happens when we expand \(t_{i+1}\):
\begin{align*}
t_{i+1} & = b(x_i,y_i) + b(q_i,t_i) \\
& = !x_iy_i + !q_it_i \\
& = !x_iy_i + !(x_i!y_i+!x_iy_i)t_i \\
& = !x_iy_i + (!(x_i!y_i)!(!x_iy_i))t_i \\
& = !x_iy_i + (!x_i+y_i)(x_i+!y_i)t_i \\
& = !x_iy_i + (!x_ix_i+!x_i!y_i+y_ix_i+y_i!y_i)t_i \\
& = !x_iy_i + (0+!x_i!y_i+y_ix_i+0)t_i \\
& = !x_iy_i + (!x_i!y_i+y_ix_i)t_i \\
& = !x_iy_i(1) + (!x_i!y_i+y_ix_i)t_i \\
& = !x_iy_i(1+t_i+t_i) + (!x_i!y_i+y_ix_i)t_i \\
& = !x_iy_i+!x_iy_it_i+!x_iy_it_i + !x_i!y_it_i+y_ix_it_i \\
& = !x_iy_i+!x_iy_it_i+!x_i!y_it_i+!x_iy_it_i + y_ix_it_i \\
& = !x_iy_i+(!x_iy_i+!x_i!y_i)t_i+(!x_iy_i + y_ix_i)t_i \\
& = !x_iy_i+(y_i+!y_i)!x_it_i+(!x_i + x_i)y_it_i \\
& = !x_iy_i+(1)!x_it_i+(1)y_it_i \\
& = !x_iy_i+!x_it_i+y_it_i \\
& = !x_iy_i+(!x_i+y_i)t_i
\end{align*}
Now we can redefine the \(g\) and \(p\) terms as follows. Let \(g_i=!x_iy_i\) and \(p_i=!x_i+y_i\). With these definitions, we can see that
\begin {align*} t_{i+1} = g_i + p_it_i \end {align*}
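Before comparing this with the adder, we can confirm the simplification by brute force. The Python sketch below (variable names chosen to mirror the symbols in the derivation) checks all eight combinations:

```python
# Verify that g_i + p_i t_i agrees with the original combined borrow
# b(x_i, y_i) + b(q_i, t_i) for every combination of x_i, y_i, and t_i.

def b(x, y):
    return (1 - x) & y                     # b(x, y) = !x y

for x in (0, 1):
    for y in (0, 1):
        for t in (0, 1):
            q = x ^ y                      # q_i = r(x_i, y_i)
            original = b(x, y) | b(q, t)   # t_{i+1} as first defined
            g = (1 - x) & y                # g_i = !x_i y_i
            p = (1 - x) | y                # p_i = !x_i + y_i
            assert original == (g | (p & t))
print("the simplified borrow matches the definition")
```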
But wait, haven’t we seen the equation \(t_{i+1}=g_i+p_it_i\) before? Yes, it has exactly the same form as the equation for \(k_{i+1}\)! One may point out that the \(g\) and \(p\) terms are defined differently, and that is correct. However, the recurrence relating \(t_{i+1}\) to \(t_i\) has the same form as the one relating \(k_{i+1}\) to \(k_i\). This means the overall function to compute \(t_{i+1}\) is the same as the one for \(k_{i+1}\). We now have the following general expression:
\begin {align*} t_{n+1} &= \bigvee _{i=0}^{n} g_i(\bigwedge _{j=i+1}^{n}p_j) + t_0\bigwedge _{i=0}^{n} p_i \end {align*}
How handy! This means that once we have the redefined \(g\) and \(p\) terms, we can reuse the same overall structure to compute the individual lookahead borrow bits. With the borrow bits computed, the actual difference bit is easy to compute.
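To tie everything together, here is a Python sketch of a borrow-lookahead subtractor that applies the general expression directly; the function name and the little-endian bit-list representation are choices made for this sketch only:

```python
# Borrow-lookahead subtractor sketch: the borrow into each bit is computed
# directly from the g and p terms, instead of waiting for a ripple.

def lookahead_subtract(x_bits, y_bits, t0=0):
    """Subtract little-endian bit lists of equal width.
    Returns (difference bits, final borrow out)."""
    n = len(x_bits)
    g = [(1 - x) & y for x, y in zip(x_bits, y_bits)]   # g_i = !x_i y_i
    p = [(1 - x) | y for x, y in zip(x_bits, y_bits)]   # p_i = !x_i + y_i

    # t[i] is the borrow *into* bit i; t[n] is the borrow out of the whole number.
    t = [t0]
    for i in range(n):
        borrow = 0
        for j in range(i + 1):              # OR of g_j AND p_{j+1} ... p_i
            term = g[j]
            for k in range(j + 1, i + 1):
                term &= p[k]
            borrow |= term
        tail = t0                           # the t_0 p_0 ... p_i term
        for k in range(i + 1):
            tail &= p[k]
        t.append(borrow | tail)

    # Each difference bit is r(q_i, t_i), i.e. x_i XOR y_i XOR t_i.
    diff = [x ^ y ^ t[i] for i, (x, y) in enumerate(zip(x_bits, y_bits))]
    return diff, t[n]

# Example: 5 - 6 in 3 bits gives 111 with a final borrow of 1 (the result is negative).
print(lookahead_subtract([1, 0, 1], [0, 1, 1]))   # ([1, 1, 1], 1)
```

In hardware, all of these AND/OR terms are computed in parallel by dedicated gates, which is what removes the ripple delay; the nested loops here only spell out which terms are involved.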