
Page 528
Because our scheme does not allow for a sign for the exponent, we shall change it slightly. The sign that we have will be the sign of the exponent, and a sign can be added to the far left to represent the sign of the number itself (see Figure 10-3).
All the numbers between 9999 * 10^-9 and 9999 * 10^9 can now be represented accurately to four digits. Adding negative exponents to our scheme has allowed representation of fractional numbers.
Figure 10-4 shows how we would encode some floating point numbers. Note that our precision is still only four digits. The numbers 0.1032, -5.406, and 1,000,000 can be represented exactly. The number 476.0321, however, with seven significant digits, is represented as 476.0; the 321 cannot be represented. (We should point out that some computers perform rounding rather than simple truncation when excess digits are discarded. Using our assumption of four significant digits, such a machine would store 476.0321 as 476.0 but would store 476.0823 as 476.1. We continue our discussion assuming simple truncation rather than rounding.)
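The four-significant-digit storage described above can be sketched in Python with the `decimal` module. This is a minimal illustration, not the machine's actual circuitry; `store4` is a hypothetical helper name, and the choice between truncation and rounding is passed in as a rounding mode.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

def store4(x, mode=ROUND_DOWN):
    """Keep four significant digits of x, truncating by default
    (ROUND_DOWN) or rounding (ROUND_HALF_UP)."""
    d = Decimal(str(x))
    # Exponent of the fourth significant digit, counting from the leading digit.
    exp = d.adjusted() - 3
    return d.quantize(Decimal(1).scaleb(exp), rounding=mode)

print(store4(476.0321))                  # 476.0  (the 321 is discarded)
print(store4(476.0823, ROUND_HALF_UP))   # 476.1  (a rounding machine rounds up)
print(store4(0.1032), store4(-5.406))    # both fit in four digits, stored exactly
```

Note that 0.1032 and -5.406 come back unchanged: four significant digits are enough to hold them, just as Figure 10-4 indicates.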
Arithmetic with Floating Point Numbers
When we use integer arithmetic, our results are exact. Floating point arithmetic, however, is seldom exact. We can illustrate this by adding three floating point numbers x, y, and z, using our coding scheme.
First, we add x to y and then we add z to the result. Next, we perform the operations in a different order, adding y to z, and then adding x to that result. The associative law of arithmetic says that the two answers should be the same, but are they? Let's use the following values for x, y, and z:
[Image 0528-01.gif: the values chosen for x, y, and z]
On the next page is the result of adding z to the sum of x and y.
[Image 0528-02.gif: the result of adding z to the sum of x and y]
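The book's actual values for x, y, and z appear in the image above, but the failure of the associative law can be reproduced with any values whose intermediate sum needs more than four digits. The sketch below uses illustrative values of our own choosing, with `store4` as a hypothetical helper that truncates to four significant digits after every operation, as our scheme requires.

```python
from decimal import Decimal, ROUND_DOWN

def store4(d):
    """Truncate a Decimal to four significant digits (our coding scheme)."""
    return d.quantize(Decimal(1).scaleb(d.adjusted() - 3), rounding=ROUND_DOWN)

# Illustrative values (not the book's): the sum 1000.6 needs five digits.
x, y, z = Decimal('1000'), Decimal('0.6'), Decimal('0.6')

left  = store4(store4(x + y) + z)   # (x + y) + z: 1000.6 truncates to 1000 twice
right = store4(x + store4(y + z))   # x + (y + z): 0.6 + 0.6 = 1.2 survives intact
print(left, right)                  # 1000 and 1001: the two orders disagree
```

Truncating after each addition loses the 0.6 both times in the first ordering, while the second ordering accumulates the small values before they can be swallowed, so the two results differ by a full unit.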
Figure 10-3
Coding Using Positive and Negative Exponents

 