This is inefficient compared to floating-point computation, but it works. All sorts of improvements, justified empirically and/or theoretically, have been proposed, with good gains in performance, sometimes of more than an order of magnitude in both time and space. But, despite these and foreseeable advances, one is likely to continue using floating-point approximations, and to live with round-off errors, whenever possible.
The point, however, as the logistic map illustrates, is that floating-point computations sometimes fail to produce correct answers. In such cases, exact numerical computation, however inefficient it may be, is of help. Of course, one can sometimes use symbolic methods instead, and this is the whole point of computer algebra. But it is often the case that computer algebra produces a symbolic solution to a problem, which then has to be evaluated numerically at particular values of its variables. Even if the symbolic solution is mathematically correct, one then tends to have little confidence in the numerical answer. It is likely that computer algebra systems will include engines for the exact numerical evaluation of symbolic expressions in the near future.
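The failure of floating point on the logistic map can be seen in a few lines of code. The following sketch is only an illustration, not part of any of the packages discussed here: it uses Python's standard `fractions` module as a stand-in for exact (rational) arithmetic, and the parameter r = 4 and starting point 1/3 are choices made for this example.

```python
from fractions import Fraction

def step(x):
    # One iteration of the logistic map x -> r*x*(1 - x), with r = 4
    # (the classical chaotic case).
    return 4 * x * (1 - x)

exact = Fraction(1, 3)   # exact rational iterate
approx = 1 / 3           # double-precision iterate

for n in range(21):
    if n % 5 == 0:
        # The two iterates agree at first, then drift apart as
        # round-off errors are amplified by the chaotic dynamics.
        print(f"n={n:2d}  exact={float(exact):.15f}  float={approx:.15f}")
    exact = step(exact)
    approx = step(approx)
```

The cost of exactness is visible in the rational iterates: their denominators roughly square at each step, so twenty iterations are cheap, but not many more are feasible. This is precisely the kind of inefficiency discussed above, and it is what the more sophisticated representations of exact real number computation are designed to mitigate.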
As we mentioned in the introduction, many (non-commercial) packages for exact real number computation have been implemented in a variety of programming languages. In particular, David Plume [46] implemented a framework for exact real number computation as his BSc honours project, in the functional programming language Haskell. His project report, which is available from my home page, is a well-written introductory account of some technical aspects of the subject.