Exact real arithmetic using Centred Dyadic Approximations
- LTS Haskell 20.23: 0.1.0.4
- Stackage Nightly 2023-05-31: 0.1.0.4
- Latest on Hackage: 0.1.0.4
Implementation of computable real numbers, which are also often referred to as Exact Real Arithmetic.
Please see the section at the bottom for how this branch (mBound) differs from the master branch.
Computable Real Numbers
Computable real numbers form a countable subset of the real numbers: the subset of those real numbers that can be described by a finite program. It includes all numbers that are commonly used in mathematics, including π and e. The computable reals are also closed under all field operations and all common transcendental functions, such as the exponential and trigonometric functions.
Interval Arithmetic and Computable Real numbers
This implementation uses interval arithmetic internally, but that does not make it Interval Arithmetic. Computable real numbers are exact, so we work under the assumption that arbitrarily good approximations of input values can be found. This is obviously different from interval arithmetic, where one of the main starting points is that input values may not be exact.
The implementation is based on Centred Dyadic Approximations as described in Blanck 2006.
It is also heavily inspired by the iRRAM implementation by Norbert Müller. In particular, it uses nested intervals rather than Cauchy sequences with a computable modulus function.
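To make the idea concrete, here is a minimal sketch of a centred dyadic approximation in the style of Blanck 2006: an interval [(m−e)·2^s, (m+e)·2^s] with integer centre m, integer radius e, and exponent s. The names `Approx`, `approxAdd`, and `bounds` are illustrative, not the package's actual API (in particular, this sketch omits the mBound component discussed later).

```haskell
-- A simplified, illustrative centred dyadic approximation:
-- the interval [(m - e) * 2^s, (m + e) * 2^s].
data Approx = Approx
  { centre :: Integer  -- m
  , radius :: Integer  -- e
  , expo   :: Int      -- s
  } deriving Show

-- Align two approximations to the smaller exponent, then add componentwise.
-- Centres add; radii add, so the resulting interval contains the true sum.
approxAdd :: Approx -> Approx -> Approx
approxAdd (Approx m1 e1 s1) (Approx m2 e2 s2) =
  let s      = min s1 s2
      shift1 = s1 - s
      shift2 = s2 - s
  in Approx (m1 * 2 ^ shift1 + m2 * 2 ^ shift2)
            (e1 * 2 ^ shift1 + e2 * 2 ^ shift2)
            s

-- Lower and upper endpoints as rationals, for inspection.
bounds :: Approx -> (Rational, Rational)
bounds (Approx m e s) =
  let scale = if s >= 0
                then fromInteger (2 ^ s)
                else 1 / fromInteger (2 ^ negate s)
  in (fromInteger (m - e) * scale, fromInteger (m + e) * scale)
```

For example, adding 3 ± 1 (that is, `Approx 3 1 0`) and 5/2 ± 1/2 (`Approx 5 1 (-1)`) yields the interval [4, 7], which contains every possible sum.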
This implementation should have efficiency comparable to implementations using Cauchy sequences for shallow expressions. However, for deeply nested expressions, such as iterated functions, it should be significantly faster.
Each computable real can be viewed as an infinite list of rapidly shrinking intervals. Thus, laziness of Haskell is used to keep the computations finite.
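The "infinite list of shrinking intervals" view can be sketched in a few lines of Haskell. This toy version uses rational endpoints rather than the package's dyadic approximations, and all names (`Interval`, `CReal`, `sqrt2`, `approxTo`) are illustrative:

```haskell
type Interval = (Rational, Rational)

-- A computable real as an infinite list of nested, shrinking intervals.
type CReal = [Interval]

-- sqrt 2 by interval bisection: an infinite, lazily produced list.
sqrt2 :: CReal
sqrt2 = iterate halve (1, 2)
  where
    halve (lo, hi) =
      let mid = (lo + hi) / 2
      in if mid * mid <= 2 then (mid, hi) else (lo, mid)

-- Laziness keeps this finite: force only as many refinements
-- as the requested interval width demands.
approxTo :: Rational -> CReal -> Interval
approxTo eps = head . dropWhile (\(lo, hi) -> hi - lo > eps)
```

Evaluating `approxTo (1/1000) sqrt2` inspects only the first handful of intervals; the rest of the infinite list is never computed.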
Other related packages on Hackage
MPFR Arbitrary-precision floating-point numbers with specified rounding modes. While the precision is arbitrary, it is still fixed for a given computation, so this is still floating-point arithmetic, just with larger precision.
AERN2 Computable real numbers and continuous functions using Cauchy sequences. AERN2 uses the CDAR type Approx as safely-rounded multi-precision floating-point numbers and builds interval arithmetic on top of them.
ireal Computable real numbers using Cauchy sequences with a fixed modulus.
constrible From the description this appears to be the real closure of the field of rational numbers. This allows for decidable equality, but excludes transcendental functions.
exact-real Computable real numbers using Cauchy sequences with a fixed modulus.
ERA (Can’t find a link at the moment) Computable real numbers using Cauchy sequences with a computable modulus.
Comparison with ireal and AERN2
Although the term Exact Real Arithmetic promises the ability to compute results to arbitrarily high precision, this is not the real strength of this implementation. It is certainly possible to compute 10000 digits of π, and it doesn’t take very long. But even more useful is the fact that if you ask for 10 digits from your resulting computable real number, you will get 10 correct digits, even if the computation has to compute intermediate values with much higher precision.
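The "ask for n digits, the computation refines itself" behaviour can be illustrated with a toy Cauchy-style representation: a real as a function from a requested error bound to a rational within that bound. The names `CR` and `crE` are hypothetical, not the package's API:

```haskell
-- A real number as a function: crE eps is within eps of the true value.
type CR = Rational -> Rational

-- e = sum of 1/k!, summed until the series tail is provably below eps.
-- The caller's requested accuracy drives how much work is done.
crE :: CR
crE eps = go (0 :: Integer) 1 0
  where
    go k term acc
      -- tail from term 1/k! onwards is at most 2 * (1/k!)
      | 2 * term < eps = acc
      | otherwise      = go (k + 1) (term / fromInteger (k + 1)) (acc + term)
```

Asking for six decimals, `crE (1 / 10^6)`, sums only a dozen or so terms; asking for a thousand decimals makes the same definition do correspondingly more work, with no change to the calling code.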
Main usage cases:
- To provide exact results for important computations.
- To provide numerical computations without having to understand the problems of floating point computations. See Goldberg 1991.
- To provide numerical analysis without having to do error analysis.
A word of warning though: Some operations are, by necessity, partial. In particular, comparisons are partial, and so is 1/x near 0.
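Why comparison is partial can be seen directly in the nested-interval view: an answer is available only once the two interval sequences separate, and for equal numbers they never do. A sketch, again with illustrative names and lazy lists of rational intervals:

```haskell
type Interval = (Rational, Rational)
type CReal    = [Interval]

-- A constant real as an infinite list of shrinking intervals around r.
constCR :: Rational -> CReal
constCR r = [ (r - w, r + w) | w <- iterate (/ 2) 1 ]

-- Partial comparison: answers once the intervals separate.
-- If the two reals are EQUAL, the intervals overlap forever
-- and this recursion never terminates.
cmpCR :: CReal -> CReal -> Ordering
cmpCR ((xlo, xhi) : xs) ((ylo, yhi) : ys)
  | xhi < ylo = LT
  | yhi < xlo = GT
  | otherwise = cmpCR xs ys   -- intervals overlap: refine and retry
cmpCR _ _ = error "ran out of approximations"
```

`cmpCR (constCR 1) (constCR 2)` terminates quickly with `LT`, but `cmpCR (constCR 1) (constCR 1)` diverges; this is the unavoidable partiality, not a bug.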
Variation mBound vs master
In the mBound branch, each Approx has an additional integer component, called mBound, which bounds the number of bits in the integer component m. Thus there is a limit on the bit size of the results of arithmetic operations. In the original variation, the size of m could grow arbitrarily for large numbers. mBound plays a similar role for Approx numbers as mantissa size does for floating-point numbers.
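The effect of such a bound can be sketched as a rounding step: when the centre m exceeds its allowed bit size, drop low-order bits and absorb the truncation error into the radius, so the new interval still contains the old one. This is an illustration of the idea, not the package's actual code; `limitBits` and `bitLength` are hypothetical names, and the radius adjustment is a deliberate overestimate.

```haskell
data Approx = Approx
  { mb :: Int      -- mBound: maximum number of bits allowed in m
  , m  :: Integer  -- centre
  , e  :: Integer  -- radius
  , s  :: Int      -- exponent: the interval is (m ± e) * 2^s
  } deriving Show

-- Number of bits in |n| (0 for n == 0).
bitLength :: Integer -> Int
bitLength n = length (takeWhile (> 0) (iterate (`div` 2) (abs n)))

-- If the centre exceeds its bit bound, shift out low-order bits and
-- widen the radius enough to cover both the dropped radius bits and
-- the truncation of the centre.
limitBits :: Approx -> Approx
limitBits a@(Approx b m' e' s')
  | k <= 0    = a
  | otherwise = Approx b (m' `quot` 2 ^ k) (e' `div` 2 ^ k + 2) (s' + k)
  where
    k = bitLength m' - b
```

For instance, with a 4-bit bound, 1000 ± 1 (10 bits of centre) becomes 15 ± 2 at exponent 6, i.e. the interval [832, 1088], which safely contains [999, 1001]; the centre stays small at the cost of a wider radius, exactly the trade-off a floating-point mantissa size makes.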