
Example--the logistic map

This is the function f: [0,1] -> [0,1] defined by

f(x)=ax(1-x),

for some constant a. According to Devaney [18], this was first considered as a model of population growth by the Belgian mathematician Pierre Verhulst in 1845. For example, a value 0.45 may represent 45% of the maximum population of fish in a given lake. Our task is, given x0, to compute the orbit
x0, f(x0), f^2(x0), f^3(x0), ...,
which collects the population values of successive generations. We wish to compute an initial segment of the orbit for a given initial population x0. We choose a=4, as at this value of the constant the logistic map becomes chaotic in a precise mathematical sense--see e.g. Devaney [18]. For the purposes of our discussion, it suffices to say that the value of f^n(x0) is highly sensitive to small variations in x0.
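
In fact, for a=4 the sensitivity can be made explicit by a well-known closed form (see e.g. Devaney [18]): writing x0 = sin^2(t), the identity

4 sin^2(t) (1 - sin^2(t)) = (2 sin(t) cos(t))^2 = sin^2(2t)

gives, by induction, f^n(x0) = sin^2(2^n t). Each iteration doubles the angle t, so a small error in x0 is magnified by a factor of roughly 2^n after n iterations.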

Let's compute the orbit of the same initial value in single and double precision. Here is a C program.

  #include <stdio.h>

  int main(void)
  {
    float const  a = 4.0;       /* chaotic regime of the logistic map */
    float const x0 = 0.671875;  /* this is machine representable */
    float        x = x0;        /* single-precision orbit */
    double       y = x0;        /* double-precision orbit */
    int i;

    for (i = 0; i <= 60; i++) {
         printf("%d\t%f\t%f\n",i,x,y);

         x = a * x * (1.0-x); 
         y = a * y * (1.0-y); 
        }
    return 0;
  }
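
(The program can be compiled and run with any C compiler, e.g. cc logistic.c && ./a.out, where logistic.c is our name for the file.)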


The last entry of the table produced by the program is

60      0.934518        0.928604
So f^60(x0) seems to be approximately 0.93. But is it? Let's see. Here is the full table.


i       single          double
0       0.671875        0.671875
1       0.881836        0.881836
2       0.416805        0.416805
3       0.972315        0.972315
4       0.107676        0.107676
5       0.384327        0.384327
6       0.946479        0.946479
7       0.202625        0.202625
8       0.646273        0.646274
9       0.914417        0.914416
10      0.313034        0.313037
11      0.860174        0.860179
12      0.481098        0.481084
13      0.998571        0.998569
14      0.005708        0.005717
15      0.022702        0.022736
16      0.088748        0.088876
17      0.323486        0.323907
18      0.875371        0.875965
19      0.436387        0.434601
20      0.983813        0.982892
21      0.063698        0.067261
22      0.238564        0.250949
23      0.726604        0.751894
24      0.794603        0.746197 <-- the two columns agree roughly up to here
25      0.652837        0.757549
26      0.906564        0.734675
27      0.338824        0.779711
28      0.896089        0.687047
29      0.372453        0.860054
30      0.934927        0.481445
31      0.243355        0.998623
32      0.736534        0.005501
33      0.776207        0.021884
34      0.694838        0.085621
35      0.848153        0.313159
36      0.515158        0.860362
37      0.999081        0.480558
38      0.003673        0.998488
39      0.014638        0.006039
40      0.057696        0.024009
41      0.217467        0.093730
42      0.680701        0.339779
43      0.869388        0.897317
44      0.454209        0.368558
45      0.991613        0.930892
46      0.033268        0.257328
47      0.128646        0.764442
48      0.448383        0.720282
49      0.989343        0.805903
50      0.042174        0.625693
51      0.161581        0.936805
52      0.541891        0.236806
53      0.992980        0.722916
54      0.027881        0.801234
55      0.108415        0.637033
56      0.386646        0.924888
57      0.948604        0.277882
58      0.195019        0.802655
59      0.627947        0.633600
60      0.934518        0.928604

From the table, we can't conclude anything about the value of f^60(x0). Indeed, by the closed form above, each iteration roughly doubles any initial error; single precision carries about 24 significant bits and double precision about 53, which is consistent with the two columns parting company at around iteration 24, and means that by iteration 60 neither column can be expected to retain any accuracy. Thus, machine-number computation can be

1. ineffective (the answer may not be correct, so we don't get a solution),
2. unreliable (we don't know whether the answer is correct).
This is of course well known. Numerical analysis tries to predict these problems, and to avoid them whenever possible. For example, we learn from numerical analysis that if a square matrix is ill-conditioned, then floating-point inversion is unlikely to produce a result close to mathematical inversion. Some proposed solutions to this problem include the following.
1. Interval arithmetic [40] (a minimal sketch in C appears after this list):
(a) sometimes effective (the intervals may grow very large),
(b) always reliable.
2. Stochastic arithmetic [58,17]. In this approach, results are computed a number of times in floating-point arithmetic, usually two or three, with the intermediate results randomly perturbed; probability theory is then used to estimate the number of correct digits of the result (a toy illustration also appears after the list). This is
(a) sometimes effective,
(b) probabilistically reliable.
3. Multiple-precision arithmetic (libraries, Mathematica, Maple):
(a) more effective, but still ineffective,
(b) more reliable, but still unreliable,
(c) inefficient.
4. Exact arithmetic, which is the subject of this paper:
(a) reliable,
(b) often effective,
(c) inefficient, though perhaps not as inefficient as multiple precision,
(d) sometimes requires different programming techniques.
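
To make option 1 concrete, here is a minimal interval-arithmetic sketch for the logistic-map experiment. It is our own illustration, not code from [40], and it cheats slightly: instead of the directed rounding modes a real interval package would use, it widens each computed interval by two ulps on each side.

  #include <stdio.h>
  #include <math.h>

  typedef struct { double lo, hi; } interval;

  /* Image of [lo,hi] under f(x) = 4x(1-x), for [lo,hi] inside [0,1].
     f increases on [0,1/2] and decreases on [1/2,1], so the image is
     determined by the endpoint values, together with the maximum 1
     when the interval contains 1/2.  The result is then widened by
     two ulps on each side as a crude substitute for outward rounding. */
  interval f_interval(interval v)
  {
    double flo = 4.0 * v.lo * (1.0 - v.lo);
    double fhi = 4.0 * v.hi * (1.0 - v.hi);
    interval r;
    r.lo = fmin(flo, fhi);
    r.hi = (v.lo <= 0.5 && 0.5 <= v.hi) ? 1.0 : fmax(flo, fhi);
    r.lo = nextafter(nextafter(r.lo, -1.0), -1.0);
    r.hi = nextafter(nextafter(r.hi, 2.0), 2.0);
    if (r.lo < 0.0) r.lo = 0.0;  /* the true orbit stays in [0,1],  */
    if (r.hi > 1.0) r.hi = 1.0;  /* so clipping preserves soundness */
    return r;
  }

  int main(void)
  {
    interval x = { 0.671875, 0.671875 };  /* machine representable */
    int i;

    for (i = 0; i <= 60; i++) {
         printf("%d\t[%.6f, %.6f]\twidth %g\n", i, x.lo, x.hi, x.hi - x.lo);
         x = f_interval(x);
        }
    return 0;
  }

The printed interval is guaranteed to contain the true value of f^i(x0), but its width roughly doubles at each iteration, so that well before i reaches 60 it has grown to essentially all of [0,1]: reliable, but, for this many iterations, ineffective.

The idea behind option 2 can be illustrated with an equally small toy program (again ours, not the implementations cited in [58,17]): run the single-precision computation a few times, randomly nudging each intermediate result by one unit in the last place, and trust only the digits on which the runs agree.

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  /* Randomly move a value one ulp up or down. */
  float perturb(float v)
  {
    return nextafterf(v, (rand() & 1) ? INFINITY : -INFINITY);
  }

  int main(void)
  {
    int run, i;

    srand(12345);  /* arbitrary seed */
    for (run = 0; run < 3; run++) {
         float x = 0.671875f;
         for (i = 0; i < 60; i++)
              x = perturb(4.0f * x * (1.0f - x));
         printf("run %d: f^60(x0) = %f\n", run, x);
        }
    /* The runs typically disagree even in the first digit, correctly
       suggesting that no digit of the single-precision value of
       f^60(x0) can be trusted. */
    return 0;
  }

(Both programs link against the maths library, e.g. cc prog.c -lm.)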
In principle, multiple-precision arithmetic is as good as exact arithmetic, because for any problem that can be solved using exact arithmetic there is a precision at which multiple-precision computation produces accurate results. There are two problems with this, however. Firstly, it may be difficult, or effectively impossible, to determine the necessary precision in advance. Secondly, the necessary precision may be excessively large, so that all intermediate results are computed at that precision even though, in practice, only a few of them actually need it to guarantee an accurate overall result.
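
For comparison, here is what the multiple-precision option looks like in C. This sketch assumes the GNU MPFR library; the working precisions of 100 and 200 bits are our arbitrary choices, not values dictated by any theory.

  #include <stdio.h>
  #include <mpfr.h>

  /* Compute f^60(x0) at the given working precision. */
  void orbit(mpfr_prec_t prec, mpfr_t result)
  {
    mpfr_t x, t;
    int i;

    mpfr_init2(x, prec);
    mpfr_init2(t, prec);
    mpfr_set_d(x, 0.671875, MPFR_RNDN);   /* machine representable */
    for (i = 0; i < 60; i++) {
         mpfr_ui_sub(t, 1, x, MPFR_RNDN); /* t = 1 - x   */
         mpfr_mul(t, t, x, MPFR_RNDN);    /* t = x(1-x)  */
         mpfr_mul_ui(x, t, 4, MPFR_RNDN); /* x = 4x(1-x) */
        }
    mpfr_set(result, x, MPFR_RNDN);
    mpfr_clear(x);
    mpfr_clear(t);
  }

  int main(void)
  {
    mpfr_t r100, r200;

    mpfr_init2(r100, 200);
    mpfr_init2(r200, 200);
    orbit(100, r100);
    orbit(200, r200);
    mpfr_printf("100 bits: %.20Rf\n200 bits: %.20Rf\n", r100, r200);
    mpfr_clear(r100);
    mpfr_clear(r200);
    return 0;
  }

Since each iteration roughly doubles the error, 60 iterations consume about 60 bits of precision, so the 100-bit run should retain roughly 40 correct bits (a dozen decimal digits), and the digits common to the two printouts can be trusted with good practical confidence--but notice that this is exactly the predicament described above: we had to guess, in advance, that 100 bits would be enough.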


Martin Escardo
2000-10-02