[/
/ Copyright (c) 2003-2014 Gennadiy Rozental
/ Copyright (c) 2013-2014 Raffi Enficiaud
/
/ Distributed under the Boost Software License, Version 1.0. (See accompanying
/ file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
/]
[section:testing_floating_points Floating-point testing]
In most cases it is unreasonable to use `operator==(...)` to check floating-point values for equality.
The simple solution, based on comparing the absolute difference of two floating-point values `u`
and `v` against a tolerance `epsilon`:
[#equ1]
``
abs(u - v) <= epsilon; // (1)
``
does not produce the expected results in many circumstances, specifically for very small or very big values (see
[link Squassabia] for examples). The __UTF__ implements a floating-point comparison algorithm that is
based on the more reliable solution first presented in [link KnuthII Knuth]:
[#equ2]
``
abs(u - v) <= epsilon * abs(u)
&& abs(u - v) <= epsilon * abs(v); // (2)
``
defines a ['very close with tolerance `epsilon`] relationship between `u` and `v`, while
[#equ3]
``
abs(u - v) <= epsilon * abs(u)
|| abs(u - v) <= epsilon * abs(v); // (3)
``
defines a ['close enough with tolerance `epsilon`] relationship between `u` and `v`.
Both relationships are commutative but not transitive. The relationship defined by inequalities
[link equ2 (2)] is stronger than the relationship defined by inequalities [link equ3 (3)], since [link equ2 (2)] necessarily implies [link equ3 (3)].
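For illustration, the two relationships can be written as plain C++ predicates. This is a sketch
only; `is_close_strong` and `is_close_weak` are hypothetical names, not part of the __UTF__
interface:
``
#include <cmath>

template<typename FPT>
bool is_close_strong( FPT u, FPT v, FPT epsilon )
{
    // "very close with tolerance epsilon": inequalities (2)
    return std::abs( u - v ) <= epsilon * std::abs( u )
        && std::abs( u - v ) <= epsilon * std::abs( v );
}

template<typename FPT>
bool is_close_weak( FPT u, FPT v, FPT epsilon )
{
    // "close enough with tolerance epsilon": inequalities (3)
    return std::abs( u - v ) <= epsilon * std::abs( u )
        || std::abs( u - v ) <= epsilon * std::abs( v );
}
``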
The multiplication on the right side of the inequalities may cause an unwanted underflow condition. To prevent this,
the implementation uses a modified version of
inequalities [link equ2 (2)] and [link equ3 (3)], in which all underflow and overflow conditions can be guarded safely:
[#equ4]
``
abs(u - v)/abs(u) <= epsilon
&& abs(u - v)/abs(v) <= epsilon; // (4)
``
[#equ5]
``
abs(u - v)/abs(u) <= epsilon
|| abs(u - v)/abs(v) <= epsilon; // (5)
``
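One way to guard these divisions is to saturate the quotient at the representable limits before
comparing it with the tolerance. The sketch below models that idea; `safe_div` is a hypothetical
helper, not necessarily how the __UTF__ sources implement it:
``
#include <cmath>
#include <limits>

// return f1/f2, clamped so that neither overflow nor underflow occurs
template<typename FPT>
FPT safe_div( FPT f1, FPT f2 )
{
    // f1/f2 would overflow
    if( f2 < FPT(1) && f1 > f2 * std::numeric_limits<FPT>::max() )
        return std::numeric_limits<FPT>::max();
    // f1/f2 would underflow
    if( f1 == FPT(0) || ( f2 > FPT(1) && f1 < f2 * std::numeric_limits<FPT>::min() ) )
        return FPT(0);
    return f1 / f2;
}

template<typename FPT>
bool is_close_strong( FPT u, FPT v, FPT epsilon )
{
    FPT diff = std::abs( u - v );
    return safe_div( diff, std::abs( u ) ) <= epsilon   // inequalities (4)
        && safe_div( diff, std::abs( v ) ) <= epsilon;
}
``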
Checks based on inequalities [link equ4 (4)] and [link equ5 (5)] are implemented by two predicates with
alternative interfaces:
# the binary predicate `close_at_tolerance` [footnote the check type and tolerance value are fixed at predicate construction time]
# and the four-argument predicate `check_is_close` [footnote the check type and tolerance value are arguments of the predicate].
While inequalities [link equ4 (4)] and [link equ5 (5)] are in general preferable to
equation [link equ1 (1)] for floating-point comparison, they are
unusable for testing closeness to zero. The latter check may still be useful in some cases, so the __UTF__
implements an algorithm based on equation [link equ1 (1)] in the
binary predicate `check_is_small` [footnote `v` is zero].
On top of these generic, flexible predicates the __UTF__ implements a family of macro-based tools,
__BOOST_CHECK_CLOSE__ and __BOOST_CHECK_SMALL__. These tools limit the check
flexibility to strong-only checks, but automate the reporting of the failed check's arguments.
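For example, a minimal self-contained test (the module name, test-case name and values are
illustrative):
``
#define BOOST_TEST_MODULE fp_example
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE( fp_comparisons )
{
    double x = 10.000001;
    double y = 10.000002;

    // strong check: the relative difference must not exceed 0.001 percent
    BOOST_CHECK_CLOSE( x, y, 0.001 );

    // closeness-to-zero check based on equation (1)
    BOOST_CHECK_SMALL( x - y, 1e-4 );
}
``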
[h3 Tolerance selection considerations]
In the absence of domain-specific requirements, the tolerance can be chosen as the sum of the predicted
upper limits of the "relative rounding errors" of the compared values. "Rounding" is the operation by which a real
value `x` is represented in a floating-point format with `p` binary digits (bits) as the floating-point value [*X].
The "relative rounding error" is the difference between the real and the floating-point value in relation to the real
value: `abs(x-X)/abs(x)`. The discrepancy between the real and the floating-point value may have several causes:
* Type promotion
* Arithmetic operations
* Conversion from a decimal presentation to a binary presentation
* Non-arithmetic operations
The first two operations are proven to have a relative rounding error that does not exceed
`half_epsilon`, i.e. half of the "machine epsilon value",
for the appropriate floating-point type `FPT` [footnote the [*machine epsilon value] is represented by `std::numeric_limits<FPT>::epsilon()`].
Conversion to a binary presentation, sadly, comes with no such guarantee. So we cannot assume that `float(1.1)` is close
to the real number `1.1` with tolerance `half_epsilon` for `float` (though for `11./10` we can). Non-arithmetic operations likewise have no
predicted upper limit on their relative rounding errors.
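The conversion effect is easy to observe. An illustrative snippet, assuming `float` and `double`
are the usual IEEE 754 types:
``
#include <iostream>

int main()
{
    // neither 1.1f nor 1.1 represents the real number 1.1 exactly;
    // they are two different binary approximations, so == fails
    std::cout << std::boolalpha << ( 1.1f == 1.1 ) << '\n';  // false

    // 0.5 is exactly representable in both types
    std::cout << ( 0.5f == 0.5 ) << '\n';                    // true
}
``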
[note Both arithmetic and non-arithmetic operations may also
produce other "non-rounding" errors, such as underflow/overflow, division-by-zero or "operation errors".]
All theorems about the upper limit of a rounding error, including that of `half_epsilon`, refer only to
the 'rounding' operation, nothing more. This means that the 'operation error', that is, the error incurred by the
operation itself, beyond rounding, is not considered. For numerical software to be able to actually
predict error bounds, the __IEEE754__ standard requires arithmetic operations to be 'correctly or exactly rounded'.
That is, the internal computation of a given operation must be such that the floating-point result
is the exact result rounded to the number of working bits. In other words, the computation used
by the operation itself must not introduce any additional errors. The __IEEE754__ standard does not require the same behaviour
of most non-arithmetic operations. Underflow/overflow and division-by-zero errors may cause rounding errors
with unpredictable upper limits.
Finally, be aware that the `half_epsilon` rules are not transitive. In other words, a combination of two
arithmetic operations may produce a rounding error that significantly exceeds `2*half_epsilon`. All
in all, there are no generic rules on how to select the tolerance, and users need to apply common sense and domain/
problem-specific knowledge to decide on a tolerance value.
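A small illustration of such accumulation (the exact figures are platform-dependent):
``
#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    // one hundred additions of 0.1: each step is correctly rounded,
    // yet the accumulated relative error is typically many times half_epsilon
    double sum = 0.0;
    for( int i = 0; i < 100; ++i )
        sum += 0.1;

    double rel_err      = std::abs( sum - 10.0 ) / 10.0;
    double half_epsilon = std::numeric_limits<double>::epsilon() / 2;
    std::cout << "relative error = " << rel_err
              << ", half_epsilon = " << half_epsilon << '\n';
}
``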
To simplify things in most usage cases, the latest version of the algorithm opted to use percentage values for
the tolerance specification (instead of fractions of the related values). In other words, you now use it to check that the
difference between two values does not exceed x percent.
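For instance, under the percentage interpretation, the two checks below express the same bound;
the second uses the fraction-based counterpart macro `BOOST_CHECK_CLOSE_FRACTION`, and the values
are illustrative:
``
BOOST_AUTO_TEST_CASE( percent_tolerance )
{
    double u = 100.0;
    double v = 100.5;

    // the tolerance argument is a percentage: accept up to a 1% relative difference
    BOOST_CHECK_CLOSE( u, v, 1.0 );

    // the same bound expressed as a fraction of the compared values
    BOOST_CHECK_CLOSE_FRACTION( u, v, 0.01 );
}
``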
For further reading about floating-point comparison, see the references below.
[h4 Bibliographic references]
[variablelist Books
[
[[#KnuthII]The Art of Computer Programming (vol. II)]
[Donald E. Knuth, 1998, Addison-Wesley Professional, 3rd edition, ISBN 0-201-89684-2]
]
[
[Rounding near zero, in [@http://www.amazon.com/Advanced-Arithmetic-Digital-Computer-Kulisch/dp/3211838708 Advanced Arithmetic for the Digital Computer]]
[Ulrich W. Kulisch, 2002, Springer, 1st edition, ISBN 3-211-83870-8]
]
]
[variablelist Periodicals
[
[[#Squassabia][@http://www.adtmag.com/joop/carticle.aspx?ID=396
Comparing Floats: How To Determine if Floating Quantities Are Close Enough Once a Tolerance Has Been Reached]]
[Alberto Squassabia, in C++ Report (March 2000)]
]
[
[The Journeyman's Shop: Trap Handlers, Sticky Bits, and Floating-Point Comparisons]
[Pete Becker, in C/C++ Users Journal (December 2000)]
]
]
[variablelist Publications
[
[[@http://dl.acm.org/citation.cfm?id=103163
What Every Computer Scientist Should Know About Floating-Point Arithmetic]]
[David Goldberg, pages 150-230, in Computing Surveys (March 1991), Association for Computing Machinery, Inc.]
]
[
[[@http://hal.archives-ouvertes.fr/docs/00/07/26/81/PDF/RR-3967.pdf From Rounding Error Estimation to Automatic Correction with Automatic Differentiation]]
[Philippe Langlois, Technical report, INRIA]
]
[
[[@http://www.cs.berkeley.edu/~wkahan/
William Kahan home page]]
[Lots of information on floating-point arithmetic.]
]
]
[endsect] [/ floating points]