Fix mistakes (#729)

* Update Jamfile.v2

* Update introduction.qbk

* Update tutorial.qbk

* Update tutorial_cpp_int.qbk

* Update tutorial_gmp_int.qbk

* Update tutorial_tommath.qbk

* Update integer_examples.cpp

* Update tutorial_cpp_bin_float.qbk

* Update tutorial_cpp_dec_float.qbk

* Update tutorial_gmp_float.qbk

* Update tutorial_mpfr_float.qbk

* Update tutorial_float128.qbk

* Update tutorial_float_builtin_ctor.qbk

* Update big_seventh.cpp

* Update tutorial_float_eg.qbk

* Update floating_point_examples.cpp

* Update mpfr_precision.cpp

* Update gauss_laguerre_quadrature.cpp

* Update tutorial_interval_mpfi.qbk

* Update tutorial_cpp_complex.qbk

* Update tutorial_mpc_complex.qbk

* Update tutorial_float128_complex.qbk

* Update tutorial_complex_adaptor.qbk

* Update tutorial_rational.qbk

* Update tutorial_tommath_rational.qbk

* Update tutorial_logged_adaptor.qbk

* Update tutorial_debug_adaptor.qbk

* Update tutorial_visualizers.qbk

* Update tutorial_fwd.qbk

* Update tutorial_conversions.qbk

* Update tutorial_random.qbk

* Update random_snips.cpp

* Update tutorial_constexpr.qbk

* Update tutorial_import_export.qbk

* Update cpp_int_import_export.cpp

* Update tutorial_mixed_precision.qbk

* Update tutorial_variable_precision.qbk

* Update scoped_precision_example.cpp

* Update tutorial_numeric_limits.qbk

* Update tutorial_numeric_limits.qbk

* Update numeric_limits_snips.cpp

* Update numeric_limits_snips.cpp

* Update tutorial_numeric_limits.qbk

* Update numeric_limits_snips.cpp

* Update numeric_limits_snips.cpp

* Update tutorial_io.qbk

* Update reference_number.qbk

* Update reference_cpp_bin_float.qbk

* Update reference_cpp_double_fp_backend.qbk

* Update reference_internal_support.qbk

* Update reference_backend_requirements.qbk

* Update performance.qbk

* Update performance_overhead.qbk

* Update performance_real_world.qbk

* Update performance_integer_real_world.qbk

* Update performance_rational_real_world.qbk

* Update reference_number.qbk

* Update tutorial_numeric_limits.qbk

* Update reference_backend_requirements.qbk
ivanpanch
2025-08-18 13:14:39 +02:00
committed by GitHub
parent 3d32b38f57
commit 55bf069621
51 changed files with 246 additions and 246 deletions


@@ -1,7 +1,7 @@
# copyright John Maddock 2008
# Distributed under the Boost Software License, Version 1.0.
# (See accompanying file LICENSE_1_0.txt or copy at
# http://www.boost.org/LICENSE_1_0.txt.
# http://www.boost.org/LICENSE_1_0.txt)
import-search /boost/config/checks ;


@@ -60,7 +60,7 @@ but a header-only Boost license version is always available (if somewhat slower)
Should you just wish to 'cut to the chase' to get bigger integers and/or bigger and more precise reals as simply and portably as possible,
close to 'drop-in' replacements for the __fundamental_type analogs,
then use a fully Boost-licensed number type, and skip to one of more of :
then use a fully Boost-licensed number type, and skip to one or more of:
* __cpp_int for multiprecision integers,
* __cpp_rational for rational types,
@@ -133,8 +133,8 @@ Conversions are also allowed:
However conversions that are inherently lossy are either declared explicit or else forbidden altogether:
d = 3.14; // Error implicit conversion from double not allowed.
d = static_cast<mp::int512_t>(3.14); // OK explicit construction is allowed
d = 3.14; // Error, implicit conversion from double not allowed.
d = static_cast<mp::int512_t>(3.14); // OK, explicit construction is allowed
Mixed arithmetic will fail if the conversion is either ambiguous or explicit:
@@ -195,9 +195,9 @@ of references to the arguments of the function, plus some compile-time informati
is.
The great advantage of this method is the ['elimination of temporaries]: for example, the "naive" implementation
of `operator*` above, requires one temporary for computing the result, and at least another one to return it. It's true
of `operator*` above requires one temporary for computing the result, and at least another one to return it. It's true
that sometimes this overhead can be reduced by using move-semantics, but it can't be eliminated completely. For example,
lets suppose we're evaluating a polynomial via Horner's method, something like this:
let's suppose we're evaluating a polynomial via Horner's method, something like this:
T a[7] = { /* some values */ };
//....
@@ -206,7 +206,7 @@ lets suppose we're evaluating a polynomial via Horner's method, something like t
If type `T` is a `number`, then this expression is evaluated ['without creating a single temporary value]. In contrast,
if we were using the [mpfr_class] C++ wrapper for [mpfr] - then this expression would result in no less than 11
temporaries (this is true even though [mpfr_class] does use expression templates to reduce the number of temporaries somewhat). Had
we used an even simpler wrapper around [mpfr] like [mpreal] things would have been even worse and no less that 24 temporaries
we used an even simpler wrapper around [mpfr] like [mpreal] things would have been even worse and no less than 24 temporaries
are created for this simple expression (note - we actually measure the number of memory allocations performed rather than
the number of temporaries directly, note also that the [mpf_class] wrapper supplied with GMP-5.1 or later reduces the number of
temporaries to pretty much zero). Note that if we compile with expression templates disabled and rvalue-reference support
@@ -247,7 +247,7 @@ is created in this case.
Given the comments above, you might be forgiven for thinking that expression-templates are some kind of universal-panacea:
sadly though, all tricks like this have their downsides. For one thing, expression template libraries
like this one, tend to be slower to compile than their simpler cousins, they're also harder to debug
like this one tend to be slower to compile than their simpler cousins, they're also harder to debug
(should you actually want to step through our code!), and rely on compiler optimizations being turned
on to give really good performance. Also, since the return type from expressions involving `number`s
is an "unmentionable implementation detail", you have to be careful to cast the result of an expression
@@ -256,23 +256,23 @@ to the actual number type when passing an expression to a template function. Fo
template <class T>
void my_proc(const T&);
Then calling:
Then calling
my_proc(a+b);
Will very likely result in obscure error messages inside the body of `my_proc` - since we've passed it
will very likely result in obscure error messages inside the body of `my_proc` - since we've passed it
an expression template type, and not a number type. Instead we probably need:
my_proc(my_number_type(a+b));
Having said that, these situations don't occur that often - or indeed not at all for non-template functions.
In addition, all the functions in the Boost.Math library will automatically convert expression-template arguments
to the underlying number type without you having to do anything, so:
to the underlying number type without you having to do anything, so
mpfr_float_100 a(20), delta(0.125);
boost::math::gamma_p(a, a + delta);
Will work just fine, with the `a + delta` expression template argument getting converted to an `mpfr_float_100`
will work just fine, with the `a + delta` expression template argument getting converted to an `mpfr_float_100`
internally by the Boost.Math library.
[caution In C++11 you should never store an expression template using:
@@ -299,7 +299,7 @@ dramatic as the reduction in number of temporaries would suggest. For example,
we see the following typical results for polynomial execution:
[table Evaluation of Order 6 Polynomial.
[[Library] [Relative Time] [Relative number of memory allocations]]
[[Library] [Relative Time] [Relative Number of Memory Allocations]]
[[number] [1.0 (0.00957s)] [1.0 (2996 total)]]
[[[mpfr_class]] [1.1 (0.0102s)] [4.3 (12976 total)]]
[[[mpreal]] [1.6 (0.0151s)] [9.3 (27947 total)]]
@@ -311,13 +311,13 @@ a number of reasons for this:
* The cost of extended-precision multiplication and division is so great, that the times taken for these tend to
swamp everything else.
* The cost of an in-place multiplication (using `operator*=`) tends to be more than an out-of-place
`operator*` (typically `operator *=` has to create a temporary workspace to carry out the multiplication, where
as `operator*` can use the target variable as workspace). Since the expression templates carry out their
`operator*` (typically `operator *=` has to create a temporary workspace to carry out the multiplication,
whereas `operator*` can use the target variable as workspace). Since the expression templates carry out their
magic by converting out-of-place operators to in-place ones, we necessarily take this hit. Even so the
transformation is more efficient than creating the extra temporary variable, just not by as much as
one would hope.
Finally, note that `number` takes a second template argument, which, when set to `et_off` disables all
Finally, note that `number` takes a second template argument, which, when set to `et_off`, disables all
the expression template machinery. The result is much faster to compile, but slower at runtime.
We'll conclude this section by providing some more performance comparisons between these three libraries,


@@ -5,7 +5,7 @@
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
http://www.boost.org/LICENSE_1_0.txt.)
]
[section:perf Performance Comparison]
@@ -30,7 +30,7 @@ share similar relative performances.
The backends which effectively wrap GMP/MPIR and MPFR
retain the superior performance of the low-level big-number engines.
When these are used (in association with at least some level of optmization)
When these are used (in association with at least some level of optimization)
they achieve and retain the expected low-level performances.
At low digit counts, however, it is noted that the performances of __cpp_int,
@@ -47,7 +47,7 @@ the performances of the Boost-licensed, self-written backends.
At around a few hundred to several thousands of digits,
factors of about two through five are observed,
whereby GMP/MPIR-based calculations are (performance-wise)
supreior ones.
superior ones.
At a few thousand decimal digits, the upper end of
the Boost backends is reached. At the moment,


@@ -18,16 +18,16 @@ type which was custom written for this specific task:
[table
[[Integer Type][Relative Performance (Actual time in parentheses)]]
[[checked_int1024_t][1.53714(0.0415328s)]]
[[checked_int256_t][1.20715(0.0326167s)]]
[[checked_int512_t][1.2587(0.0340095s)]]
[[cpp_int][1.80575(0.0487904s)]]
[[extended_int][1.35652(0.0366527s)]]
[[int1024_t][1.36237(0.0368107s)]]
[[int256_t][1(0.0270196s)]]
[[int512_t][1.0779(0.0291243s)]]
[[mpz_int][3.83495(0.103619s)]]
[[tom_int][41.6378(1.12504s)]]
[[checked_int1024_t][1.53714 (0.0415328s)]]
[[checked_int256_t][1.20715 (0.0326167s)]]
[[checked_int512_t][1.2587 (0.0340095s)]]
[[cpp_int][1.80575 (0.0487904s)]]
[[extended_int][1.35652 (0.0366527s)]]
[[int1024_t][1.36237 (0.0368107s)]]
[[int256_t][1 (0.0270196s)]]
[[int512_t][1.0779 (0.0291243s)]]
[[mpz_int][3.83495 (0.103619s)]]
[[tom_int][41.6378 (1.12504s)]]
]
Note how for this use case, any dynamic allocation is a performance killer.
@@ -38,18 +38,18 @@ since that is the rate limiting step:
[table
[[Integer Type][Relative Performance (Actual time in parentheses)]]
[[checked_uint1024_t][9.52301(0.0422246s)]]
[[cpp_int][11.2194(0.0497465s)]]
[[cpp_int (1024-bit cache)][10.7941(0.0478607s)]]
[[cpp_int (128-bit cache)][11.0637(0.0490558s)]]
[[cpp_int (256-bit cache)][11.5069(0.0510209s)]]
[[cpp_int (512-bit cache)][10.3303(0.0458041s)]]
[[cpp_int (no Expression templates)][16.1792(0.0717379s)]]
[[mpz_int][1.05106(0.00466034s)]]
[[mpz_int (no Expression templates)][1(0.00443395s)]]
[[tom_int][5.10595(0.0226395s)]]
[[tom_int (no Expression templates)][61.9684(0.274765s)]]
[[uint1024_t][9.32113(0.0413295s)]]
[[checked_uint1024_t][9.52301 (0.0422246s)]]
[[cpp_int][11.2194 (0.0497465s)]]
[[cpp_int (1024-bit cache)][10.7941 (0.0478607s)]]
[[cpp_int (128-bit cache)][11.0637 (0.0490558s)]]
[[cpp_int (256-bit cache)][11.5069 (0.0510209s)]]
[[cpp_int (512-bit cache)][10.3303 (0.0458041s)]]
[[cpp_int (no Expression templates)][16.1792 (0.0717379s)]]
[[mpz_int][1.05106 (0.00466034s)]]
[[mpz_int (no Expression templates)][1 (0.00443395s)]]
[[tom_int][5.10595 (0.0226395s)]]
[[tom_int (no Expression templates)][61.9684 (0.274765s)]]
[[uint1024_t][9.32113 (0.0413295s)]]
]
It's interesting to note that expression templates have little effect here - perhaps because the actual expressions involved


@@ -34,7 +34,7 @@ for the [@../../performance/voronoi_performance.cpp voronoi-diagram builder test
[table
[[Type][Relative time]]
[[`int64_t`][[*1.0](0.0128646s)]]
[[`int64_t`][[*1.0] (0.0128646s)]]
[[`number<arithmetic_backend<int64_t>, et_off>`][1.005 (0.0129255s)]]
]


@@ -55,7 +55,7 @@ calculate the n'th Bernoulli number via mixed rational/integer arithmetic to giv
[[198][167.594528][0.8947434429][1.326461858]]
]
In this use case, most the of the rational numbers are fairly small and so the times taken are dominated by
In this use case, most of the rational numbers are fairly small and so the times taken are dominated by
the number of allocations performed. The following table illustrates how well each type performs on suppressing
allocations:


@@ -23,12 +23,12 @@ increases contention inside new/delete.
[[cpp_dec_float_50 (3 concurrent threads)][5.66114 (0.524077s)][424]]
[[mpf_float_50][1.03648 (0.0959515s)][640961]]
[[mpf_float_50 (3 concurrent threads)][1.50439 (0.139268s)][2563517]]
[[mpf_float_50 (no expression templates][1 (0.0925745s)][1019039]]
[[mpf_float_50 (no expression templates (3 concurrent threads)][1.52451 (0.141131s)][4075842]]
[[mpf_float_50 (no expression templates)][1 (0.0925745s)][1019039]]
[[mpf_float_50 (no expression templates) (3 concurrent threads)][1.52451 (0.141131s)][4075842]]
[[mpfr_float_50][1.2513 (0.115838s)][583054]]
[[mpfr_float_50 (3 concurrent threads)][1.61301 (0.149324s)][2330876]]
[[mpfr_float_50 (no expression templates][1.42667 (0.132073s)][999594]]
[[mpfr_float_50 (no expression templates (3 concurrent threads)][2.00203 (0.185337s)][4000039]]
[[mpfr_float_50 (no expression templates)][1.42667 (0.132073s)][999594]]
[[mpfr_float_50 (no expression templates) (3 concurrent threads)][2.00203 (0.185337s)][4000039]]
[[static_mpfr_float_50][1.18358 (0.10957s)][22930]]
[[static_mpfr_float_50 (3 concurrent threads)][1.38802 (0.128496s)][93140]]
[[static_mpfr_float_50 (no expression templates)][1.14598 (0.106089s)][46861]]
@@ -41,9 +41,9 @@ increases contention inside new/delete.
[[cpp_bin_float_50 (3 concurrent threads)][3.50535 (56.6s)][28]]
[[cpp_dec_float_50][4.82763 (77.9505s)][0]]
[[mpf_float_50][1.06817 (17.2475s)][123749688]]
[[mpf_float_50 (no expression templates][1 (16.1468s)][152610085]]
[[mpf_float_50 (no expression templates)][1 (16.1468s)][152610085]]
[[mpfr_float_50][1.18754 (19.1749s)][118401290]]
[[mpfr_float_50 (no expression templates][1.36782 (22.0858s)][152816346]]
[[mpfr_float_50 (no expression templates)][1.36782 (22.0858s)][152816346]]
[[static_mpfr_float_50][1.04471 (16.8686s)][113395]]
]


@@ -20,7 +20,7 @@ Optional requirements have default implementations that are called if the backen
its own. Typically the backend will implement these to improve performance.
In the following tables, type B is the `Backend` template argument to `number`, `b` and `b2` are
a variables of type B, `pb` is a variable of type B*, `cb`, `cb2` and `cb3` are constant variables of type `const B`,
variables of type B, `pb` is a variable of type B*, `cb`, `cb2` and `cb3` are constant variables of type `const B`,
`rb` is a variable of type `B&&`, `a` and `a2` are variables of Arithmetic type,
`s` is a variable of type `const char*`, `ui` is a variable of type `unsigned`, `bb` is a variable of type `bool`,
`pa` is a variable of type pointer-to-arithmetic-type, `exp` is a variable of type `B::exp_type`,
@@ -40,7 +40,7 @@ and `ff` is a variable of type `std::ios_base::fmtflags`.
[[`B()`][ ][Default constructor.][[space]]]
[[`B(cb)`][ ][Copy Constructor.][[space]]]
[[`b = b`][`B&`][Assignment operator.][[space]]]
[[`b = a`][`B&`][Assignment from an Arithmetic type. The type of `a` shall be listed in one of the type lists
[[`b = a`][`B&`][Assignment from an Arithmetic type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.][[space]]]
[[`b = s`][`B&`][Assignment from a string.][Throws a `std::runtime_error` if the string could not be interpreted as a valid number.]]
[[`b.swap(b)`][`void`][Swaps the contents of its arguments.][`noexcept`]]
@@ -50,7 +50,7 @@ and `ff` is a variable of type `std::ios_base::fmtflags`.
[[`cb.compare(cb2)`][`int`][Compares `cb` and `cb2`, returns a value less than zero if `cb < cb2`, a value greater than zero if `cb > cb2` and zero
if `cb == cb2`.][`noexcept`]]
[[`cb.compare(a)`][`int`][Compares `cb` and `a`, returns a value less than zero if `cb < a`, a value greater than zero if `cb > a` and zero
if `cb == a`. The type of `a` shall be listed in one of the type lists
if `cb == a`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.][[space]]]
[[`eval_add(b, cb)`][`void`][Adds `cb` to `b`.][[space]]]
[[`eval_subtract(b, cb)`][`void`][Subtracts `cb` from `b`.][[space]]]
@@ -65,7 +65,7 @@ and `ff` is a variable of type `std::ios_base::fmtflags`.
[[`eval_complement(b, cb)`][`void`][Computes the ones-complement of `cb` and stores the result in `b`, only required when `B` is an integer type.][[space]]]
[[`eval_left_shift(b, ui)`][`void`][Computes `b <<= ui`, only required when `B` is an integer type.][[space]]]
[[`eval_right_shift(b, ui)`][`void`][Computes `b >>= ui`, only required when `B` is an integer type.][[space]]]
[[`eval_convert_to(pa, cb)`][`void`][Converts `cb` to the type of `*pa` and store the result in `*pa`. Type `B` shall support
[[`eval_convert_to(pa, cb)`][`void`][Converts `cb` to the type of `*pa` and stores the result in `*pa`. Type `B` shall support
conversion to at least types `std::intmax_t`, `std::uintmax_t` and `long long`.
Conversion to other arithmetic types can then be synthesised using other operations.
Conversions to other types are entirely optional.][[space]]]
@@ -77,7 +77,7 @@ and `ff` is a variable of type `std::ios_base::fmtflags`.
[[`eval_floor(b, cb)`][`void`][Stores the floor of `cb` in `b`, only required when `B` is a floating-point type.][[space]]]
[[`eval_ceil(b, cb)`][`void`][Stores the ceiling of `cb` in `b`, only required when `B` is a floating-point type.][[space]]]
[[`eval_sqrt(b, cb)`][`void`][Stores the square root of `cb` in `b`, only required when `B` is a floating-point type.][[space]]]
[[`boost::multiprecision::number_category<B>::type`][`std::integral_constant<int, N>`][`N` is one of the values `number_kind_integer`, `number_kind_floating_point`, `number_kind_complex`, `number_kind_rational` or `number_kind_fixed_point`.
[[`boost::multiprecision::number_category<B>::type`][`std::integral_constant<int, N>`][`N` is the value `number_kind_integer`, `number_kind_floating_point`, `number_kind_complex`, `number_kind_rational` or `number_kind_fixed_point`.
Defaults to `number_kind_floating_point`.][[space]]]
[[`eval_conj(b, cb)`][`void`][Sets `b` to the complex conjugate of `cb`. Required for complex types only - other types have a sensible default.][[space]]]
[[`eval_proj(b, cb)`][`void`][Sets `b` to the Riemann projection of `cb`. Required for complex types only - other types have a sensible default.][[space]]]
@@ -95,7 +95,7 @@ and `ff` is a variable of type `std::ios_base::fmtflags`.
Only destruction and assignment to the moved-from variable `rb` need be supported after the operation.][`noexcept`]]
[[`b = rb`][`B&`][Move-assign. Afterwards variable `rb` shall be in a sane state, albeit with unspecified value.
Only destruction and assignment to the moved-from variable `rb` need be supported after the operation.][`noexcept`]]
[[`B(a)`][`B`][Direct construction from an arithmetic type. The type of `a` shall be listed in one of the type lists
[[`B(a)`][`B`][Direct construction from an arithmetic type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, this operation is simulated using default-construction followed by assignment.][[space]]]
[[`B(b2)`][`B`][Copy constructor from a different back-end type. When not provided, a generic interconversion routine is used.
@@ -112,31 +112,31 @@ and `ff` is a variable of type `std::ios_base::fmtflags`.
[[`eval_eq(cb, cb2)`][`bool`][Returns `true` if `cb` and `cb2` are equal in value.
When not provided, the default implementation returns `cb.compare(cb2) == 0`.][`noexcept`]]
[[`eval_eq(cb, a)`][`bool`][Returns `true` if `cb` and `a` are equal in value.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, return the equivalent of `eval_eq(cb, B(a))`.][[space]]]
[[`eval_eq(a, cb)`][`bool`][Returns `true` if `cb` and `a` are equal in value.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version returns `eval_eq(cb, a)`.][[space]]]
[[`eval_lt(cb, cb2)`][`bool`][Returns `true` if `cb` is less than `cb2` in value.
When not provided, the default implementation returns `cb.compare(cb2) < 0`.][`noexcept`]]
[[`eval_lt(cb, a)`][`bool`][Returns `true` if `cb` is less than `a` in value.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default implementation returns `eval_lt(cb, B(a))`.][[space]]]
[[`eval_lt(a, cb)`][`bool`][Returns `true` if `a` is less than `cb` in value.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default implementation returns `eval_gt(cb, a)`.][[space]]]
[[`eval_gt(cb, cb2)`][`bool`][Returns `true` if `cb` is greater than `cb2` in value.
When not provided, the default implementation returns `cb.compare(cb2) > 0`.][`noexcept`]]
[[`eval_gt(cb, a)`][`bool`][Returns `true` if `cb` is greater than `a` in value.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default implementation returns `eval_gt(cb, B(a))`.][[space]]]
[[`eval_gt(a, cb)`][`bool`][Returns `true` if `a` is greater than `cb` in value.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default implementation returns `eval_lt(cb, a)`.][[space]]]
[[`eval_is_zero(cb)`][`bool`][Returns `true` if `cb` is zero, otherwise `false`. The default version of this function
@@ -148,87 +148,87 @@ and `ff` is a variable of type `std::ios_base::fmtflags`.
`typename std::tuple_element<0, typename B::unsigned_types>::type`.][[space]]]
[[['Basic arithmetic:]]]
[[`eval_add(b, a)`][`void`][Adds `a` to `b`. The type of `a` shall be listed in one of the type lists
[[`eval_add(b, a)`][`void`][Adds `a` to `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_add(b, B(a))`][[space]]]
[[`eval_add(b, cb, cb2)`][`void`][Add `cb` to `cb2` and stores the result in `b`.
[[`eval_add(b, cb, cb2)`][`void`][Adds `cb` to `cb2` and stores the result in `b`.
When not provided, does the equivalent of `b = cb; eval_add(b, cb2)`.][[space]]]
[[`eval_add(b, cb, a)`][`void`][Add `cb` to `a` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_add(b, cb, a)`][`void`][Adds `cb` to `a` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_add(b, cb, B(a))`.][[space]]]
[[`eval_add(b, a, cb)`][`void`][Add `a` to `cb` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_add(b, a, cb)`][`void`][Adds `a` to `cb` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_add(b, cb, a)`.][[space]]]
[[`eval_subtract(b, a)`][`void`][Subtracts `a` from `b`. The type of `a` shall be listed in one of the type lists
[[`eval_subtract(b, a)`][`void`][Subtracts `a` from `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_subtract(b, B(a))`][[space]]]
[[`eval_subtract(b, cb, cb2)`][`void`][Subtracts `cb2` from `cb` and stores the result in `b`.
When not provided, does the equivalent of `b = cb; eval_subtract(b, cb2)`.][[space]]]
[[`eval_subtract(b, cb, a)`][`void`][Subtracts `a` from `cb` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_subtract(b, cb, a)`][`void`][Subtracts `a` from `cb` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_subtract(b, cb, B(a))`.][[space]]]
[[`eval_subtract(b, a, cb)`][`void`][Subtracts `cb` from `a` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_subtract(b, a, cb)`][`void`][Subtracts `cb` from `a` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_subtract(b, cb, a); b.negate();`.][[space]]]
[[`eval_multiply(b, a)`][`void`][Multiplies `b` by `a`. The type of `a` shall be listed in one of the type lists
[[`eval_multiply(b, a)`][`void`][Multiplies `b` by `a`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_multiply(b, B(a))`][[space]]]
[[`eval_multiply(b, cb, cb2)`][`void`][Multiplies `cb` by `cb2` and stores the result in `b`.
When not provided, does the equivalent of `b = cb; eval_multiply(b, cb2)`.][[space]]]
[[`eval_multiply(b, cb, a)`][`void`][Multiplies `cb` by `a` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_multiply(b, cb, a)`][`void`][Multiplies `cb` by `a` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_multiply(b, cb, B(a))`.][[space]]]
[[`eval_multiply(b, a, cb)`][`void`][Multiplies `a` by `cb` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_multiply(b, a, cb)`][`void`][Multiplies `a` by `cb` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_multiply(b, cb, a)`.][[space]]]
[[`eval_multiply_add(b, cb, cb2)`][`void`][Multiplies `cb` by `cb2` and adds the result to `b`.
When not provided does the equivalent of creating a temporary `B t` and `eval_multiply(t, cb, cb2)` followed by
`eval_add(b, t)`.][[space]]]
[[`eval_multiply_add(b, cb, a)`][`void`][Multiplies `a` by `cb` and adds the result to `b`.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided does the equivalent of creating a temporary `B t` and `eval_multiply(t, cb, a)` followed by
`eval_add(b, t)`.][[space]]]
[[`eval_multiply_add(b, a, cb)`][`void`][Multiplies `a` by `cb` and adds the result to `b`.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided does the equivalent of `eval_multiply_add(b, cb, a)`.][[space]]]
[[`eval_multiply_subtract(b, cb, cb2)`][`void`][Multiplies `cb` by `cb2` and subtracts the result from `b`.
When not provided does the equivalent of creating a temporary `B t` and `eval_multiply(t, cb, cb2)` followed by
`eval_subtract(b, t)`.][[space]]]
[[`eval_multiply_subtract(b, cb, a)`][`void`][Multiplies `a` by `cb` and subtracts the result from `b`.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided does the equivalent of creating a temporary `B t` and `eval_multiply(t, cb, a)` followed by
`eval_subtract(b, t)`.][[space]]]
[[`eval_multiply_subtract(b, a, cb)`][`void`][Multiplies `a` by `cb` and subtracts the result from `b`.
The type of `a` shall be listed in one of the type lists
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided does the equivalent of `eval_multiply_subtract(b, cb, a)`.][[space]]]
[[`eval_multiply_add(b, cb, cb2, cb3)`][`void`][Multiplies `cb` by `cb2` and adds the result to `cb3`, storing the result in `b`.
When not provided does the equivalent of `eval_multiply(b, cb, cb2)` followed by
`eval_add(b, cb3)`.
For brevity, only a version showing all arguments of type `B` is shown here, but you can replace up to any 2 of
`cb`, `cb2` and `cb3` with any type listed in one of the type lists
`cb`, `cb2` and `cb3` with any type listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.][[space]]]
[[`eval_multiply_subtract(b, cb, cb2, cb3)`][`void`][Multiplies `cb` by `cb2` and subtracts `cb3` from the result, storing the result in `b`.
When not provided does the equivalent of `eval_multiply(b, cb, cb2)` followed by
`eval_subtract(b, cb3)`.
For brevity, only a version showing all arguments of type `B` is shown here, but you can replace up to any 2 of
`cb`, `cb2` and `cb3` with any type listed in one of the type lists
`cb`, `cb2` and `cb3` with any type listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.][[space]]]
[[`eval_divide(b, a)`][`void`][Divides `b` by `a`. The type of `a` shall be listed in one of the type lists
[[`eval_divide(b, a)`][`void`][Divides `b` by `a`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_divide(b, B(a))`]
[`std::overflow_error` if `a` has the value zero, and `std::numeric_limits<number<B> >::has_infinity == false`]]
[[`eval_divide(b, cb, cb2)`][`void`][Divides `cb` by `cb2` and stores the result in `b`.
When not provided, does the equivalent of `b = cb; eval_divide(b, cb2)`.]
[`std::overflow_error` if `cb2` has the value zero, and `std::numeric_limits<number<B> >::has_infinity == false`]]
[[`eval_divide(b, cb, a)`][`void`][Divides `cb` by `a` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_divide(b, cb, a)`][`void`][Divides `cb` by `a` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_divide(b, cb, B(a))`.]
[`std::overflow_error` if `a` has the value zero, and `std::numeric_limits<number<B> >::has_infinity == false`]]
[[`eval_divide(b, a, cb)`][`void`][Divides `a` by `cb` and stores the result in `b`. The type of `a` shall be listed in one of the type lists
[[`eval_divide(b, a, cb)`][`void`][Divides `a` by `cb` and stores the result in `b`. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_divide(b, B(a), cb)`.]
[`std::overflow_error` if `cb` has the value zero, and `std::numeric_limits<number<B> >::has_infinity == false`]]
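The mixed-argument default (promote `a` to the backend type, then divide) and the division-by-zero behaviour for a type with no infinity can be sketched like so; `toy_backend` is a hypothetical integer-only stand-in, not a real library backend:

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical integer-only backend used purely to illustrate the
// default-dispatch pattern; real backends are far more involved.
struct toy_backend
{
   long long v = 0;
   toy_backend() = default;
   explicit toy_backend(int a) : v(a) {}  // construction from a listed arithmetic type
};

void eval_divide(toy_backend& b, const toy_backend& a)
{
   // An integer backend has no infinity, so division by zero must throw:
   if (a.v == 0)
      throw std::overflow_error("division by zero");
   b.v /= a.v;
}

// Default version for mixed arithmetic: promote `a` to the backend type.
void eval_divide(toy_backend& b, int a)
{
   eval_divide(b, toy_backend(a));
}
```

Backends typically override the mixed overloads only when operating directly on the arithmetic type is cheaper than constructing a temporary `B`.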
Where `ui_type` is `typename std::tuple_element<0, typename B::unsigned_types>::type`.][[space]]]
[[['Integer specific operations:]]]
[[`eval_modulus(b, a)`][`void`][Computes `b %= a`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_modulus(b, B(a))`]
[`std::overflow_error` if `a` has the value zero.]]
[[`eval_modulus(b, cb, cb2)`][`void`][Computes `cb % cb2` and stores the result in `b`, only required when `B` is an integer type.
When not provided, does the equivalent of `b = cb; eval_modulus(b, cb2)`.]
[`std::overflow_error` if `cb2` has the value zero.]]
[[`eval_modulus(b, cb, a)`][`void`][Computes `cb % a` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_modulus(b, cb, B(a))`.]
[`std::overflow_error` if `a` has the value zero.]]
[[`eval_modulus(b, a, cb)`][`void`][Computes `a % cb` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_modulus(b, B(a), cb)`.]
[`std::overflow_error` if `cb` has the value zero.]]
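The three-argument defaults above all follow the same copy-then-apply pattern, sketched here with a hypothetical toy backend (illustration only):

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical toy backend; illustrates the copy-then-apply default for the
// three-argument form of an integer-only operation.
struct toy_backend { long long v = 0; };

void eval_modulus(toy_backend& b, const toy_backend& cb2)
{
   if (cb2.v == 0)
      throw std::overflow_error("modulus by zero");
   b.v %= cb2.v;
}

// Synthesised default: `b = cb; eval_modulus(b, cb2);`
void eval_modulus(toy_backend& b, const toy_backend& cb, const toy_backend& cb2)
{
   b = cb;
   eval_modulus(b, cb2);
}
```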
[[`eval_bitwise_and(b, a)`][`void`][Computes `b &= a`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_bitwise_and(b, B(a))`][[space]]]
[[`eval_bitwise_and(b, cb, cb2)`][`void`][Computes `cb & cb2` and stores the result in `b`, only required when `B` is an integer type.
When not provided, does the equivalent of `b = cb; eval_bitwise_and(b, cb2)`.][[space]]]
[[`eval_bitwise_and(b, cb, a)`][`void`][Computes `cb & a` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_bitwise_and(b, cb, B(a))`.][[space]]]
[[`eval_bitwise_and(b, a, cb)`][`void`][Computes `cb & a` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_bitwise_and(b, cb, a)`.][[space]]]
[[`eval_bitwise_or(b, a)`][`void`][Computes `b |= a`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_bitwise_or(b, B(a))`][[space]]]
[[`eval_bitwise_or(b, cb, cb2)`][`void`][Computes `cb | cb2` and stores the result in `b`, only required when `B` is an integer type.
When not provided, does the equivalent of `b = cb; eval_bitwise_or(b, cb2)`.][[space]]]
[[`eval_bitwise_or(b, cb, a)`][`void`][Computes `cb | a` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_bitwise_or(b, cb, B(a))`.][[space]]]
[[`eval_bitwise_or(b, a, cb)`][`void`][Computes `cb | a` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_bitwise_or(b, cb, a)`.][[space]]]
[[`eval_bitwise_xor(b, a)`][`void`][Computes `b ^= a`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, the default version calls `eval_bitwise_xor(b, B(a))`][[space]]]
[[`eval_bitwise_xor(b, cb, cb2)`][`void`][Computes `cb ^ cb2` and stores the result in `b`, only required when `B` is an integer type.
When not provided, does the equivalent of `b = cb; eval_bitwise_xor(b, cb2)`.][[space]]]
[[`eval_bitwise_xor(b, cb, a)`][`void`][Computes `cb ^ a` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_bitwise_xor(b, cb, B(a))`.][[space]]]
[[`eval_bitwise_xor(b, a, cb)`][`void`][Computes `a ^ cb` and stores the result in `b`, only required when `B` is an integer type. The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
When not provided, does the equivalent of `eval_bitwise_xor(b, cb, a)`.][[space]]]
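Note how the `(b, a, cb)` defaults for the bitwise operations simply forward to the `(b, cb, a)` overload, since `&`, `|` and `^` commute. A minimal sketch with a hypothetical toy backend:

```cpp
#include <cassert>

// Hypothetical toy backend; shows how the mixed-argument defaults lean on
// commutativity: (a & cb) is computed as (cb & a).
struct toy_backend
{
   unsigned v = 0;
   toy_backend() = default;
   explicit toy_backend(unsigned a) : v(a) {}
};

void eval_bitwise_and(toy_backend& b, const toy_backend& cb, const toy_backend& cb2)
{ b.v = cb.v & cb2.v; }

void eval_bitwise_and(toy_backend& b, const toy_backend& cb, unsigned a)
{ eval_bitwise_and(b, cb, toy_backend(a)); }

// (b, a, cb) simply forwards to (b, cb, a) because & commutes:
void eval_bitwise_and(toy_backend& b, unsigned a, const toy_backend& cb)
{ eval_bitwise_and(b, cb, a); }
```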
[[`eval_left_shift(b, cb, ui)`][`void`][Computes `cb << ui` and stores the result in `b`, only required when `B` is an integer type.
[[`eval_lcm(b, cb, cb2)`][`void`][Sets `b` to the least common multiple of `cb` and `cb2`. Only required when `B` is an integer type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_gcd(b, cb, a)`][`void`][Sets `b` to the greatest common divisor of `cb` and `a`. Only required when `B` is an integer type.
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
The default version of this function calls `eval_gcd(b, cb, B(a))`.][[space]]]
[[`eval_lcm(b, cb, a)`][`void`][Sets `b` to the least common multiple of `cb` and `a`. Only required when `B` is an integer type.
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
The default version of this function calls `eval_lcm(b, cb, B(a))`.][[space]]]
[[`eval_gcd(b, a, cb)`][`void`][Sets `b` to the greatest common divisor of `cb` and `a`. Only required when `B` is an integer type.
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
The default version of this function calls `eval_gcd(b, cb, a)`.][[space]]]
[[`eval_lcm(b, a, cb)`][`void`][Sets `b` to the least common multiple of `cb` and `a`. Only required when `B` is an integer type.
The type of `a` shall be listed in
`B::signed_types`, `B::unsigned_types` or `B::float_types`.
The default version of this function calls `eval_lcm(b, cb, a)`.][[space]]]
[[`eval_powm(b, cb, cb2, cb3)`][`void`][Sets `b` to the result of ['(cb^cb2)%cb3].
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_powm(b, cb, cb2, a)`][`void`][Sets `b` to the result of ['(cb^cb2)%a].
The type of `a` shall be listed in
`B::signed_types` or `B::unsigned_types`.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_powm(b, cb, a, cb2)`][`void`][Sets `b` to the result of ['(cb^a)%cb2].
The type of `a` shall be listed in
`B::signed_types` or `B::unsigned_types`.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_powm(b, cb, a, a2)`][`void`][Sets `b` to the result of ['(cb^a)%a2].
The type of `a` shall be listed in
`B::signed_types` or `B::unsigned_types`.
The default version of this function is synthesised from other operations above.][[space]]]
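The classic technique a synthesised `eval_powm` can follow is square-and-multiply, keeping every intermediate reduced modulo the divisor. Shown here for plain 64-bit integers as an illustration only, not the library's implementation:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only: the classic square-and-multiply scheme that a synthesised
// eval_powm could follow, written here for plain 64-bit integers.
std::uint64_t powm(std::uint64_t base, std::uint64_t exp, std::uint64_t mod)
{
   std::uint64_t result = 1 % mod;
   base %= mod;
   while (exp)
   {
      if (exp & 1)
         result = (result * base) % mod;  // assumes no overflow for small operands
      base = (base * base) % mod;
      exp >>= 1;
   }
   return result;
}
```

An extended-precision backend avoids the overflow caveat noted in the comment, since intermediates can grow as needed before reduction.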
[[`eval_integer_sqrt(b, cb, b2)`][`void`][Sets `b` to the largest integer which when squared is less than `cb`, also
sets `b2` to the remainder, i.e. to ['cb - b[super 2]].
The default version of this function is synthesised from other operations above.][[space]]]
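What `eval_integer_sqrt` computes can be illustrated with the standard bit-by-bit integer square root, here for 64-bit values (a sketch, not the library's algorithm):

```cpp
#include <cassert>
#include <cstdint>

// A sketch of what eval_integer_sqrt computes: the largest b with b*b <= cb,
// plus the remainder cb - b*b. A simple bit-by-bit method, illustration only.
void integer_sqrt(std::uint64_t cb, std::uint64_t& b, std::uint64_t& b2)
{
   b = 0;
   std::uint64_t bit = std::uint64_t(1) << 62; // highest even power of two
   while (bit > cb)
      bit >>= 2;
   while (bit)
   {
      if (cb >= b + bit)
      {
         cb -= b + bit;
         b = (b >> 1) + bit;
      }
      else
         b >>= 1;
      bit >>= 2;
   }
   b2 = cb; // what remains of cb is the remainder
}
```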
[[['Sign manipulation:]]]
`eval_get_sign(cb) < 0`.][[space]]]
[[['floating-point functions:]]]
[[`eval_fpclassify(cb)`][`int`][Returns one of the same values returned by `std::fpclassify`. Only required when `B` is a floating-point type.
The default version of this function will only test for zero `cb`.][[space]]]
[[`eval_trunc(b, cb)`][`void`][Performs the equivalent operation to `std::trunc` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_round(b, cb)`][`void`][Performs the equivalent operation to `std::round` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_exp(b, cb)`][`void`][Performs the equivalent operation to `std::exp` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_exp2(b, cb)`][`void`][Performs the equivalent operation to `std::exp2` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is implemented in terms of `eval_pow`.][[space]]]
[[`eval_log(b, cb)`][`void`][Performs the equivalent operation to `std::log` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_log10(b, cb)`][`void`][Performs the equivalent operation to `std::log10` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_sin(b, cb)`][`void`][Performs the equivalent operation to `std::sin` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_cos(b, cb)`][`void`][Performs the equivalent operation to `std::cos` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_tan(b, cb)`][`void`][Performs the equivalent operation to `std::tan` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_asin(b, cb)`][`void`][Performs the equivalent operation to `std::asin` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_acos(b, cb)`][`void`][Performs the equivalent operation to `std::acos` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_atan(b, cb)`][`void`][Performs the equivalent operation to `std::atan` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_sinh(b, cb)`][`void`][Performs the equivalent operation to `std::sinh` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_cosh(b, cb)`][`void`][Performs the equivalent operation to `std::cosh` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_tanh(b, cb)`][`void`][Performs the equivalent operation to `std::tanh` on argument `cb` and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_fmod(b, cb, cb2)`][`void`][Performs the equivalent operation to `std::fmod` on arguments `cb` and `cb2`, and stores the result in `b`. Only required when `B` is a floating-point type.
The default version of this function is synthesised from other operations above.][[space]]]
[[`eval_modf(b, cb, pb)`][`void`][Performs the equivalent operation to `std::modf` on argument `cb`, and stores the integer result in `*pb` and the fractional part in `b`.
Only required when `B` is a floating-point type.


Square root uses integer square root in a manner analogous to division.
Decimal string to binary conversion proceeds as follows: first parse the digits to
produce an integer multiplied by a decimal exponent. Note that we stop parsing digits
once we have parsed as many as can possibly affect the result - this stops the integer
part growing too large when a very large number of input digits is provided.
At this stage if the decimal exponent is positive then the result is an integer and we
can in principle simply multiply by 10^N to get an exact integer result. In practice
As before we use limited precision arithmetic to calculate the result and up the
precision as necessary until the result is unambiguously correctly rounded. In addition
our initial calculation of the decimal exponent may be out by 1, so we have to correct
that and loop as well in that case.
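The first stage of the decimal-to-binary conversion described above (digits in, integer significand times a power of ten out) can be sketched as follows. This is an illustration only: the digit-count limiting and the correctly-rounded binary stage are omitted, and the function name is hypothetical.

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <string>

// Illustrative first stage of the conversion described above: split "123.45e-2"
// into an integer significand and a decimal exponent (significand * 10^exponent).
void parse_decimal(const std::string& s, std::uint64_t& significand, int& exponent)
{
   significand = 0;
   exponent    = 0;
   std::size_t i = 0;
   bool after_point = false;
   for (; i < s.size() && (std::isdigit((unsigned char)s[i]) || s[i] == '.'); ++i)
   {
      if (s[i] == '.') { after_point = true; continue; }
      significand = significand * 10 + (s[i] - '0');
      if (after_point)
         --exponent;   // each fractional digit divides by 10
   }
   if (i < s.size() && (s[i] == 'e' || s[i] == 'E'))
      exponent += std::stoi(s.substr(i + 1)); // explicit exponent adds on
}
```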
[endsect]


The class takes one template parameter:
[variablelist
[[FloatingPointType][The constituent IEEE floating-point value of the two limbs of the composite type.
This can be `float`, `double`, `long double` or, when available, `boost::float128_type`.]]
]


Member `value` is true if the conversion from `From` to `To` would result in a loss of precision, and `false` otherwise.
The default version of this trait simply checks whether the ['kind] of conversion (for example from a floating-point to an integer type)
is inherently lossy. Note that if either of the types `From` and `To` is of an unknown number category (because `number_category` is not
specialised for that type) then this trait will be `true`.
template<typename From, typename To>


Returns an ['unmentionable-type] that is usable in Boolean contexts (this allows `number` to be used in any
Boolean context - if statements, conditional statements, or as an argument to a logical operator - without
type `number` being convertible to type `bool`).
This operator also enables the use of `number` with any of the following operators:
`!`, `||`, `&&` and `?:`.
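The C++11 equivalent of this idiom is an `explicit` conversion to `bool`, which is considered only in contextual Boolean conversions. A minimal sketch (the `num` type here is hypothetical; the library itself uses a pre-C++11 ['unmentionable-type] for the same effect):

```cpp
#include <cassert>

// A modern sketch of the idea: an explicit conversion keeps `num` usable in
// Boolean contexts without making it implicitly convertible to bool.
struct num
{
   int v = 0;
   explicit operator bool() const { return v != 0; }
};

bool is_truthy(const num& n)
{
   if (n)                      // OK: contextual conversion in an if statement
      return true;
   return !n ? false : true;   // OK: `!` and `?:` also apply contextual conversion
}
```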
bool isunordered(const ``['number-or-expression-template-type]``&, const ``['number-or-expression-template-type]``&);
These functions all behave exactly as their standard library C++11 counterparts do: their argument is either an instance of `number` or
an expression template derived from it; if the argument is of type `number<Backend, et_off>` then that is also the return type,
otherwise the return type is an expression template unless otherwise stated.
The integer type arguments to `ldexp`, `frexp`, `scalbn` and `ilogb` may be either type `int`, or the actual
Complex number types support the following functions:
``['number]`` proj (const ``['number-or-expression-template-type]``&);
``['number]`` polar (const ``['number-or-expression-template-type]``&, const ``['number-or-expression-template-type]``&);
In addition the functions `real`, `imag`, `arg`, `norm`, `conj` and `proj` are overloaded for scalar (i.e. non-complex) types in the same
manner as `<complex>` and treat the argument as a value whose imaginary part is zero.
There are also some functions implemented for compatibility with the Boost.Math functions of the same name:
to be a compile time constant - this means for example that the [gmp] MPF Backend will not work with these functions when that type is
used at variable precision.
Also note that with the exception of `abs` that these functions can only be used with floating-point Backend types (if any other types
Also note that with the exception of `abs` these functions can only be used with floating-point Backend types (if any other types
such as fixed precision or complex types are added to the library later, then these functions may be extended to support those number types).
The precision of these functions is generally determined by the backend implementation. For example the precision
here are exact (tested on Win32 with VC++10, MPFR-3.0.0, MPIR-2.1.1):
[[acos][0eps][0eps][0eps]]
[[asin][0eps][0eps][0eps]]
[[atan][1eps][0eps][0eps]]
[[cosh][1045eps[footnote It's likely that inherent errors in the input values to our test cases are to blame here.]][0eps][0eps]]
[[sinh][2eps][0eps][0eps]]
[[tanh][1eps][0eps][0eps]]
[[pow][0eps][4eps][3eps]]


In order to use this library you need to make two choices:
* What kind of number do I want ([link boost_multiprecision.tut.ints integer],
[link boost_multiprecision.tut.floats floating-point], [link boost_multiprecision.tut.rational rational], or [link boost_multiprecision.tut.complex complex])?
* Which back-end do I want to perform the actual arithmetic (Boost-supplied, GMP, MPFR, MPC, Tommath etc.)?
A practical, comprehensive, instructive, clear and very helpful video regarding the use of Multiprecision
can be found [@https://www.youtube.com/watch?v=mK4WjpvLj4c here].


}}
Class template `complex_adaptor` is designed to sit in between class `number` and an actual floating point backend,
in order to create a new complex number type.
It is the means by which we implement __cpp_complex and __complex128.


Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt.)
]
[section:lits Literal Types and `constexpr` Support]
There are two kinds of `constexpr` support in this library:
* The more basic version requires only C++11 and allows the construction of some number types as literals.
* The more advanced support permits constexpr arithmetic and requires at least C++14
constexpr support, and for many operations C++2a support.
[h4 Declaring numeric literals]
There are two backend types which are literals:
* __float128__ (which requires GCC), and
* instantiations of `cpp_int_backend` where the Allocator parameter is type `void`.
In addition, prior to C++14 the Checked parameter must be `boost::multiprecision::unchecked`.
For example:
Compilers other than GCC and without `std::is_constant_evaluated()` will support a very limited set of operations:
expect to hit roadblocks rather easily.
See __compiler_support for __is_constant_evaluated.
For example, given
[constexpr_circle]
we can now calculate areas and circumferences, using all compile-time `constexpr` arithmetic:
[constexpr_circle_usage]


In particular:
* Any number type can be constructed (or assigned) from any __fundamental arithmetic type, as long
as the conversion isn't lossy (for example float to int conversion):
cpp_dec_float_50 df(0.5); // OK, construction from double
cpp_int i(450); // OK, constructs from signed int
cpp_int j = 3.14; // Error, lossy conversion.
* A number can be explicitly constructed from an arithmetic type, even when the conversion is lossy:
cpp_int i(3.14); // OK, explicit conversion
i = static_cast<cpp_int>(3.14); // OK, explicit conversion
i.assign(3.14); // OK, explicit assign and avoid a temporary from the cast above
i = 3.14; // Error, no implicit assignment operator for lossy conversion.
cpp_int j = 3.14; // Error, no implicit constructor for lossy conversion.
mpz_int z(2);
int i = z; // Error, implicit conversion not allowed.
int j = static_cast<int>(z); // OK, explicit conversion.
* Any number type can be ['explicitly] constructed (or assigned) from a `const char*` or a `std::string`:
loss of precision is involved, and explicit if it is:
int128_t i128 = 0;
int256_t i256 = i128; // OK, implicit widening conversion
i128 = i256; // Error, no assignment operator found, narrowing conversion is explicit.
i128 = static_cast<int128_t>(i256); // OK, explicit narrowing conversion.
More information on what additional types a backend supports conversions from is given in the tutorial for each backend.
The converting constructor will be implicit if the backend's converting constructor is also implicit, and explicit if the
backend's converting constructor is also explicit.
[endsect] [/section:conversions Constructing and Interconverting Between Number Types]


to declare a `cpp_bin_float` with exactly the same precision as `double` one would use
`number<cpp_bin_float<53, digit_base_2> >`. The typedefs `cpp_bin_float_single`, `cpp_bin_float_double`,
`cpp_bin_float_quad`, `cpp_bin_float_oct` and `cpp_bin_float_double_extended` provide
software analogues of the IEEE single, double, quad and octuple float data types, plus the Intel-extended-double type respectively.
Note that while these types are functionally equivalent to the native IEEE types, they do not have the same size
or bit-layout as true IEEE compatible types.
Normally `cpp_bin_float` allocates no memory: all of the space required for its digits is allocated
directly within the class. As a result, care should be taken not to use the class with too high a digit count
as stack space requirements can grow out of control. If that represents a problem then providing an allocator
as a template parameter causes `cpp_bin_float` to dynamically allocate the memory it needs: this
Things you should know when using this type:
* The type supports both infinities and NaNs. An infinity is generated whenever the result would overflow,
and a NaN is generated for any mathematically undefined operation.
* There is a `std::numeric_limits` specialisation for this type.
* Any `number` instantiated on this type is convertible to any other `number` instantiated on this type -
for example you can convert from `number<cpp_bin_float<50> >` to `number<cpp_bin_float<SomeOtherValue> >`.
Narrowing conversions round to nearest and are `explicit`.
* Conversion from a string results in a `std::runtime_error` being thrown if the string can not be interpreted
as a valid floating-point number.
* All arithmetic operations are correctly rounded to nearest. String conversions and the `sqrt` function
are also correctly rounded, but transcendental functions (sin, cos, pow, exp etc.) are not.
[h5 cpp_bin_float example:]


The next template parameters determine the type and range of the exponent: parameter `Exponent` can be
any signed integer type, but note that `MinExponent` and `MaxExponent` can not go right up to the limits
of the `Exponent` type as there has to be a little extra headroom for internal calculations. You will
get a compile time error if this is the case. In addition if MinExponent or MaxExponent are zero, then
get a compile time error if this is the case. In addition if MinExponent or MaxExponent is zero, then
the library will choose suitable values that are as large as possible given the constraints of the type
and need for extra headroom for internal calculations.
@@ -75,13 +75,13 @@ There is full standard library support available for this type, comparable with
Things you should know when using this type:
* Default constructed `cpp_complex`s have a value of zero.
* Default constructed `cpp_complex`es have a value of zero.
* The radix of this type is 2, even when the precision is specified as decimal digits.
* The type supports both infinities and NaNs. An infinity is generated whenever the result would overflow,
and a NaN is generated for any mathematically undefined operation.
* There is no `std::numeric_limits` specialisation for this type: this is the same behaviour as `std::complex`. If you need
`std::numeric_limits` support you need to look at `std::numeric_limits<my_complex_number_type::value_type>`.
* Any `number` instantiated on this type, is convertible to any other `number` instantiated on this type -
* Any `number` instantiated on this type is convertible to any other `number` instantiated on this type -
for example you can convert from `number<cpp_complex<50> >` to `number<cpp_complex<SomeOtherValue> >`.
Narrowing conversions round to nearest and are `explicit`.
* Conversion from a string results in a `std::runtime_error` being thrown if the string can not be interpreted


@@ -52,7 +52,7 @@ Normally these should not be visible to the user.
* The type supports both infinities and NaNs. An infinity is generated whenever the result would overflow,
and a NaN is generated for any mathematically undefined operation.
* There is a `std::numeric_limits` specialisation for this type.
* Any `number` instantiated on this type, is convertible to any other `number` instantiated on this type -
* Any `number` instantiated on this type is convertible to any other `number` instantiated on this type -
for example you can convert from `number<cpp_dec_float<50> >` to `number<cpp_dec_float<SomeOtherValue> >`.
Narrowing conversions are truncating and `explicit`.
* Conversion from a string results in a `std::runtime_error` being thrown if the string can not be interpreted


@@ -97,13 +97,13 @@ and unchecked integers have the following properties:
[table
[[Condition][Checked-Integer][Unchecked-Integer]]
[[Numeric overflow in fixed precision arithmetic][Throws a `std::overflow_error`.][Performs arithmetic modulo 2[super MaxBits]]]
[[Numeric overflow in fixed precision arithmetic][Throws a `std::overflow_error`.][Performs arithmetic modulo 2[super MaxBits].]]
[[Constructing an integer from a value that can not be represented in the target type][Throws a `std::range_error`.]
[Converts the value modulo 2[super MaxBits], signed to unsigned conversions extract the last MaxBits bits of the
2's complement representation of the input value.]]
[[Unsigned subtraction yielding a negative value.][Throws a `std::range_error`.][Yields the value that would
[[Unsigned subtraction yielding a negative value][Throws a `std::range_error`.][Yields the value that would
result from treating the unsigned type as a 2's complement signed type.]]
[[Attempting a bitwise operation on a negative value.][Throws a `std::range_error`][Yields the value, but not the bit pattern,
[[Attempting a bitwise operation on a negative value][Throws a `std::range_error`.][Yields the value, but not the bit pattern,
that would result from performing the operation on a 2's complement integer type.]]
]
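The two behaviours in the table can be sketched portably with 8-bit values (`unchecked_add` and `checked_add` are hypothetical helpers, not the library's API): unchecked arithmetic wraps modulo 2[super MaxBits], while a checked operation detects the overflow and throws.

```cpp
#include <cassert>
#include <cstdint>
#include <limits>
#include <stdexcept>

// Unchecked: wraps modulo 2^8, like cpp_int's unchecked fixed precision.
inline std::uint8_t unchecked_add(std::uint8_t a, std::uint8_t b) {
    return static_cast<std::uint8_t>(a + b);
}

// Checked: detects the overflow up front and throws, like the checked variant.
inline std::uint8_t checked_add(std::uint8_t a, std::uint8_t b) {
    if (b > std::numeric_limits<std::uint8_t>::max() - a)
        throw std::overflow_error("checked addition overflowed");
    return static_cast<std::uint8_t>(a + b);
}
```

Here `unchecked_add(200, 100)` yields 44 (that is, 300 mod 256), while `checked_add(200, 100)` throws.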
@@ -150,7 +150,7 @@ this includes the "checked" variants. Small signed types will always have an ex
* When used at fixed precision and MaxBits is smaller than the number of bits in the largest native integer type, then
internally `cpp_int_backend` switches to a "trivial" implementation where it is just a thin wrapper around a single
integer. Note that it will still be slightly slower than a bare native integer, as it emulates a
signed-magnitude representation rather than simply using the platforms native sign representation: this ensures
signed-magnitude representation rather than simply using the platform's native sign representation: this ensures
there is no step change in behavior as a cpp_int grows in size.
* Fixed precision `cpp_int`'s have some support for `constexpr` values and user-defined literals, see
[link boost_multiprecision.tut.lits here] for the full description. For example `0xfffff_cppi1024`


@@ -40,7 +40,7 @@ It works for all the backend types equally too, here it is inspecting a `number<
[$../debugger3.png]
The template alias `debug_adaptor_t` is used as a shortcut for converting some other number type to it's debugged equivalent,
The template alias `debug_adaptor_t` is used as a shortcut for converting some other number type to its debugged equivalent,
for example:
using mpfr_float_debug = debug_adaptor_t<mpfr_float>;


@@ -21,7 +21,7 @@
}} // namespaces
The `float128` number type is a very thin wrapper around GCC's `__float128` or Intel's `_Quad` data types
and provides an real-number type that is a drop-in replacement for the native C++ floating-point types, but with
and provides a real-number type that is a drop-in replacement for the native C++ floating-point types, but with
a 113-bit mantissa, and compatible with FORTRAN's 128-bit QUAD real.
All the usual standard library and `std::numeric_limits` support is available; performance should be equivalent
@@ -37,7 +37,7 @@ function of `float128_backend`.
Things you should know when using this type:
* Default constructed `float128`s have the value zero.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move aware.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move-aware.
* This type is fully `constexpr` aware - basic constexpr arithmetic is supported from C++14 and onwards, comparisons,
plus the functions `fabs`, `abs`, `fpclassify`, `isnormal`, `isfinite`, `isinf` and `isnan` are also supported if either
the compiler implements C++20's `std::is_constant_evaluated()`, or if the compiler is GCC.


@@ -33,7 +33,7 @@ copy constructible and assignable from GCC's `__float128` and Intel's `_Quad` da
Things you should know when using this type:
* Default constructed `complex128`s have the value zero.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move aware.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move-aware.
* It is not possible to round-trip objects of this type to and from a string and get back
exactly the same value when compiled with Intel's C++ compiler and using `_Quad` as the underlying type: this is a current limitation of
our code. Round tripping when using `__float128` as the underlying type is possible (both for GCC and Intel).


@@ -12,7 +12,7 @@
Construction of multiprecision types from built-in floating-point types
can lead to potentially unexpected, yet correct, results.
Consider, for instance constructing an instance of `cpp_dec_float_50`
Consider, for instance, constructing an instance of `cpp_dec_float_50`
from the literal built-in floating-point `double` value 11.1.
#include <iomanip>
@@ -35,7 +35,7 @@ from the literal built-in floating-point `double` value 11.1.
<< std::endl;
}
In this example, the system has a 64-bit built in `double` representation.
In this example, the system has a 64-bit built-in `double` representation.
The variable `f11` is initialized with the literal
`double` value 11.1. Recall that built-in floating-point representations
are based on successive binary fractional approximations.
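The effect is easy to see with `double` alone: printing more significant digits than the literal supplies reveals the nearest binary approximation actually stored (`show` is a hypothetical helper for illustration).

```cpp
#include <cassert>
#include <iomanip>
#include <sstream>
#include <string>

// Render a double with the requested number of significant digits,
// enough digits exposing the underlying binary approximation.
inline std::string show(double d, int digits) {
    std::ostringstream os;
    os << std::setprecision(digits) << d;
    return os.str();
}
```

On a typical binary64 system, `show(11.1, 20)` prints something like `11.099999999999999645` rather than `11.1`, while `show(11.1, 6)` prints `11.1`.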


@@ -38,11 +38,11 @@ Typical output is:
[endsect] [/section:caveats Caveats]
[section:jel Defining a Special Function.]
[section:jel Defining a Special Function]
[JEL]
[endsect] [/section:jel Defining a Special Function.]
[endsect] [/section:jel Defining a Special Function]
[section:nd Calculating a Derivative]


@@ -11,7 +11,7 @@
[section:fwd Forward Declarations]
The header `<boost/multiprecision/fwd.hpp>` contains forward declarations for class `number` plus all of the
available backends in the this library:
available backends in this library:
namespace boost {
namespace multiprecision {


@@ -25,8 +25,8 @@
}} // namespaces
The `gmp_float` back-end is used in conjunction with `number` : it acts as a thin wrapper around the [gmp] `mpf_t`
to provide an real-number type that is a drop-in replacement for the native C++ floating-point types, but with
The `gmp_float` back-end is used in conjunction with `number`: it acts as a thin wrapper around the [gmp] `mpf_t`
to provide a real-number type that is a drop-in replacement for the native C++ floating-point types, but with
much greater precision.
Type `gmp_float` can be used at fixed precision by specifying a non-zero `Digits10` template parameter, or
@@ -50,7 +50,7 @@ Things you should know when using this type:
* Default constructed `gmp_float`s have the value zero (this is the [gmp] library's default behavior).
* No changes are made to the [gmp] library's global settings, so this type can be safely mixed with
existing [gmp] code.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move aware.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move-aware.
* It is not possible to round-trip objects of this type to and from a string and get back
exactly the same value. This appears to be a limitation of [gmp].
* Since the underlying [gmp] types have no notion of infinities or NaNs, care should be taken


@@ -37,7 +37,7 @@ existing code that uses [gmp].
* Default constructed `gmp_int`s have the value zero (this is GMP's default behavior).
* Formatted IO for this type does not support octal or hexadecimal notation for negative values,
as a result performing formatted output on this type when the argument is negative and either of the flags
`std::ios_base::oct` or `std::ios_base::hex` are set, will result in a `std::runtime_error` will be thrown.
`std::ios_base::oct` or `std::ios_base::hex` is set will result in a `std::runtime_error` being thrown.
* Conversion from a string results in a `std::runtime_error` being thrown if the string can not be interpreted
as a valid integer.
* Division by zero results in a `std::overflow_error` being thrown.


@@ -77,7 +77,7 @@ that presents it in native order (see [@http://www.boost.org/doc/libs/release/li
[note
Note that this function is optimized for the case where the data can be `memcpy`ed from the source to the integer - in this case both
iterators much be pointers, and everything must be little-endian.]
iterators must be pointers, and everything must be little-endian.]
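The fast path the note describes — little-endian data behind contiguous pointers — amounts to the following portable sketch (`import_le` is a hypothetical helper, not the library's `import_bits` itself).

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Assemble a little-endian byte sequence into an unsigned integer. When the
// source really is contiguous little-endian storage, this whole loop can be
// replaced by a single memcpy into the integer's limbs - the optimized case.
inline std::uint64_t import_le(const std::uint8_t* first, std::size_t n) {
    std::uint64_t v = 0;
    for (std::size_t i = 0; i < n; ++i)
        v |= static_cast<std::uint64_t>(first[i]) << (8 * i);
    return v;
}
```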
[h4 Examples]


@@ -10,7 +10,7 @@
[section:interval Interval Number Types]
There is one currently only one interval number type supported - [mpfi].
There is currently only one interval number type supported - [mpfi].
[section:mpfi mpfi_float]
@@ -30,13 +30,13 @@ There is one currently only one interval number type supported - [mpfi].
}} // namespaces
The `mpfi_float_backend` type is used in conjunction with `number`: It acts as a thin wrapper around the [mpfi] `mpfi_t`
to provide an real-number type that is a drop-in replacement for the native C++ floating-point types, but with
to provide a real-number type that is a drop-in replacement for the native C++ floating-point types, but with
much greater precision and implementing interval arithmetic.
Type `mpfi_float_backend` can be used at fixed precision by specifying a non-zero `Digits10` template parameter, or
at variable precision by setting the template argument to zero. The `typedef`s `mpfi_float_50`, `mpfi_float_100`,
`mpfi_float_500`, `mpfi_float_1000` provide arithmetic types at 50, 100, 500 and 1000 decimal digits precision
respectively. The `typedef mpfi_float` provides a variable precision type whose precision can be controlled via theF
respectively. The `typedef mpfi_float` provides a variable precision type whose precision can be controlled via the
`number`'s member functions.
[note This type only provides `numeric_limits` support when the precision is fixed at compile time.]


@@ -50,7 +50,7 @@ so a reasonable test strategy is to use a large number of random values.
The test at
[@../../test/test_cpp_bin_float_io.cpp test_cpp_bin_float_io.cpp]
allows any floating-point type to be ['round_tripped] using a wide range of fairly random values.
It also includes tests compared a collection of
It also includes tests comparing a collection of
[@../../test/string_data.ipp stringdata] test cases in a file.
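A minimal portable sketch of the property being tested, using `double` and `max_digits10` (the helper name is mine, not the test's):

```cpp
#include <cassert>
#include <iomanip>
#include <limits>
#include <sstream>

// Serialise a double with max_digits10 significant digits and read it back:
// the recovered value must compare exactly equal to the original.
inline bool round_trips(double x) {
    std::ostringstream os;
    os << std::setprecision(std::numeric_limits<double>::max_digits10) << x;
    double y = 0;
    std::istringstream(os.str()) >> y;
    return x == y;
}
```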
[h4 Comparing with output using __fundamental types]
@@ -102,7 +102,7 @@ So to conform to the C99 standard (incorporated by C++)
Confusingly, Microsoft (and MinGW) do not conform to this standard and provide
[*at least three digits], for example `1e+001`.
So if you want the output to match that from
__fundamental floating-point types on compilers that use Microsofts runtime then use:
__fundamental floating-point types on compilers that use Microsoft's runtime then use:
#define BOOST_MP_MIN_EXPONENT_DIGITS 3


@@ -49,7 +49,7 @@ for the particular backend being observed.
This type provides `numeric_limits` support whenever the template argument Backend does so.
Template alias `logged_adaptor_t` can be used as a shortcut for converting some instantiation of `number<>` to it's logged euqivalent.
Template alias `logged_adaptor_t` can be used as a shortcut for converting some instantiation of `number<>` to its logged equivalent.
This type is particularly useful when combined with an interval number type - in this case we can use `log_postfix_event`
to monitor the error accumulated after each operation. We could either set some kind of trap whenever the accumulated error


@@ -97,7 +97,7 @@ There are three backends that define this trait by default:
* __mpfr_float_backend's provided they are of the same precision.
In addition, while this feature can be used with expression templates turned off, this feature minimises temporaries
and hence memory allocations when expression template are turned on.
and hence memory allocations when expression templates are turned on.
By way of an example, consider the dot product of two vectors of __cpp_int's; our first, fairly trivial
implementation might look like this:
@@ -120,7 +120,7 @@ value vectors for random data with various bit counts:
[table
[[Bit Count][Allocations Count Version 1][Allocations Count Version 2][Allocations Count Version 3]]
[[32][1[footnote Here everything fits within __cpp_int's default internal cache, so no allocation are required.]][0][0]]
[[32][1[footnote Here everything fits within __cpp_int's default internal cache, so no allocation is required.]][0][0]]
[[64][1001][1[footnote A single allocation for the return value.]][1]]
[[128][1002][1][2]]
[[256][1002][1][3[footnote Here the input data is such that more than one allocation is required for the temporary.]]]
@@ -141,13 +141,13 @@ Timings for the three methods are as follows (MSVC-16.8.0, x64):
]
As you can see, there is a sweet spot for middling-sized integers where we gain: if the values are small, then
__cpp_int's own internal cache is large enough anyway, and no allocation occur. Conversely, if the values are
__cpp_int's own internal cache is large enough anyway, and no allocation occurs. Conversely, if the values are
sufficiently large, then the cost of the actual arithmetic dwarfs the memory allocation time. In this particular
case, carefully writing the code (version 3) is clearly at least as good as using a separate type with a larger cache.
However, there may be times when it's not practical to re-write existing code, purely to optimise it for the
multiprecision use case.
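The version-3 pattern — hoist the temporary out of the loop so its storage is reused — can be sketched generically; here plain assignment stands in for the library's eval-style `multiply(t, a, b)` interface, and `long long` stands in for __cpp_int in the test.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Version-3 style accumulation: t lives outside the loop, so an allocating
// Int type reuses t's buffer on each iteration instead of constructing a
// fresh temporary per product. Assumes a.size() == b.size().
template <class Int>
Int dot(const std::vector<Int>& a, const std::vector<Int>& b) {
    Int result{0};
    Int t{0};
    for (std::size_t i = 0; i < a.size(); ++i) {
        t = a[i] * b[i];   // assignment into existing storage
        result += t;
    }
    return result;
}
```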
A typical example where we can't rewrite our code to avoid unnecessary allocations, occurs when we're calling an
A typical example where we can't rewrite our code to avoid unnecessary allocations occurs when we're calling an
external routine. For example the arc length of an ellipse with radii ['a] and ['b] is given by:
[pre L(a, b) = 4aE(k)]
@@ -187,7 +187,7 @@ non-allocating types so much, it does depend very much on the special function b
The following backends have at least some direct support for mixed-precision arithmetic,
and therefore avoid creating unnecessary temporaries when using the interfaces above.
Therefore when using these types it's more efficient to use mixed-precision arithmetic,
Therefore when using these types it's more efficient to use mixed-precision arithmetic
than it is to explicitly cast the operands to the result type:
__mpfr_float_backend, __mpf_float, __cpp_int.


@@ -26,7 +26,7 @@
}} // namespaces
The `mpc_complex_backend` type is used in conjunction with `number`: It acts as a thin wrapper around the [mpc] `mpc_t`
to provide an real-number type that is a drop-in replacement for `std::complex`, but with
to provide a complex-number type that is a drop-in replacement for `std::complex`, but with
much greater precision.
Type `mpc_complex_backend` can be used at fixed precision by specifying a non-zero `Digits10` template parameter, or


@@ -80,7 +80,7 @@ Things you should know when using this type:
* No changes are made to [gmp] or [mpfr] global settings, so this type can coexist with existing
[mpfr] or [gmp] code.
* The code can equally use [mpir] in place of [gmp] - indeed that is the preferred option on Win32.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move aware.
* This backend supports rvalue-references and is move-aware, making instantiations of `number` on this backend move-aware.
* Conversion from a string results in a `std::runtime_error` being thrown if the string can not be interpreted
as a valid floating-point number.
* Division by zero results in an infinity.


@@ -34,7 +34,7 @@ The chosen backend often determines how completely `std::numeric_limits` is avai
Compiler options, processor type, and definition of macros or assembler instructions to control denormal numbers will alter
the values in the tables given below.
[warning GMP's extendable floatin-point `mpf_t` does not have a concept of overflow:
[warning GMP's extendable floating-point `mpf_t` does not have a concept of overflow:
operations that lead to overflow eventually run out of resources
and terminate with stack overflow (often after several seconds).]
@@ -118,7 +118,7 @@ store a [*fixed precision], so half cents or pennies (or less) cannot be stored.
The results of computations are rounded up or down,
just like the result of integer division stored as an integer result.
There are number of proposals to
There are various proposals to
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3407.html
add Decimal floating-point Support to C++].
@@ -158,7 +158,7 @@ Overflow of signed integers can be especially unexpected,
possibly causing change of sign.
Boost.Multiprecision integer type `cpp_int` is not modulo
because as an __arbitrary_precision types,
because as an __arbitrary_precision type,
it expands to hold any value that the machine resources permit.
However, fixed precision __cpp_int's may be modulo if they are unchecked
@@ -177,14 +177,14 @@ or 10 (for decimal types).
[h4 digits]
The number of `radix` digits that be represented without change:
The number of `radix` digits that can be represented without change:
* for integer types, the number of [*non-sign bits] in the significand.
* for floating types, the number of [*radix digits] in the significand.
The values include any implicit bit, so for example, for the ubiquious
The values include any implicit bit, so for example, for the ubiquitous
`double` using 64 bits
([@http://en.wikipedia.org/wiki/Double_precision_floating-point_format IEEE binary64 ]),
([@http://en.wikipedia.org/wiki/Double_precision_floating-point_format IEEE binary64]),
`digits` == 53, even though there are only 52 actual bits of the significand stored in the representation.
The value of `digits` reflects the fact that there is one implicit bit which is always set to 1.
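A quick sanity check of that claim for binary64 (portable, assuming IEEE doubles):

```cpp
#include <cassert>
#include <limits>

static_assert(std::numeric_limits<double>::digits == 53,
              "binary64: 52 stored significand bits plus one implicit 1-bit");

// The spacing between adjacent doubles at 2^53 is 2, so 2^53 is the last
// point up to which every integer is still exactly representable.
constexpr double two53 = 9007199254740992.0;  // 2^53
```

Adding 1 to `two53` rounds back to `two53` (ties-to-even), while subtracting 1 still gives an exactly representable, distinct value.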
@@ -327,9 +327,9 @@ and references therein
and
[@https://arxiv.org/pdf/1310.8121.pdf Easy Accurate Reading and Writing of Floating-Point Numbers, Aubrey Jaffer (August 2018)].
Microsoft VS2017 and other recent compilers, now use the
Microsoft VS2017 and other recent compilers now use the
[@https://doi.org/10.1145/3192366.3192369 Ryu fast float-to-string conversion by Ulf Adams]
algorithm, claimed to be both exact and fast for 32 and 64-bit floating-point numbers.
algorithm, claimed to be both exact and fast for 32- and 64-bit floating-point numbers.
] [/note]
[h4 round_style]
@@ -382,7 +382,7 @@ gradual underflow, so that, if type T is `double`.
A type may have any of the following `enum float_denorm_style` values:
* `std::denorm_absent`, if it does not allow denormalized values.
(Always used for all integer and exact types).
(Always used for all integer and exact types.)
* `std::denorm_present`, if the floating-point type allows denormalized values.
* `std::denorm_indeterminate`, if indeterminate at compile time.
@@ -391,9 +391,9 @@ A type may have any of the following `enum float_denorm_style` values:
`bool std::numeric_limits<T>::tinyness_before`
`true` if a type can determine that a value is too small
to be represent as a normalized value before rounding it.
to be represented as a normalized value before rounding it.
Generally true for `is_iec559` floating-point __fundamantal types,
Generally true for `is_iec559` floating-point __fundamental types,
but false for integer types.
Standard-compliant IEEE 754 floating-point implementations may detect the floating-point underflow at three predefined moments:
@@ -591,7 +591,7 @@ used thus:
BOOST_CHECK_CLOSE_FRACTION(expected, calculated, tolerance);
(There is also a version BOOST_CHECK_CLOSE using tolerance as a [*percentage] rather than a fraction;
usually the fraction version is simpler to use).
usually the fraction version is simpler to use.)
[tolerance_2]
@@ -607,7 +607,7 @@ For IEEE754 system (for which `std::numeric_limits<T>::is_iec559 == true`)
[@http://en.wikipedia.org/wiki/IEEE_754-1985#Positive_and_negative_infinity positive and negative infinity]
are assigned bit patterns for all defined floating-point types.
Confusingly, the string resulting from outputting this representation, is also
Confusingly, the string resulting from outputting this representation is also
implementation-defined. And the string that can be input to generate the representation is also implementation-defined.
For example, the output is `1.#INF` on Microsoft systems, but `inf` on most *nix platforms.
@@ -615,7 +615,7 @@ For example, the output is `1.#INF` on Microsoft systems, but `inf` on most *nix
This implementation-defined-ness has hampered use of infinity (and NaNs)
but __Boost_Math and __Boost_Multiprecision work hard to provide a sensible representation
for [*all] floating-point types, not just the __fundamental_types,
which with the use of suitable facets to define the input and output strings, makes it possible
which, with the use of suitable facets to define the input and output strings, makes it possible
to use these useful features portably and including __Boost_Serialization.
[h4 Not-A-Number NaN]
@@ -631,11 +631,11 @@ provides an implementation-defined representation for NaN.
result of an assignment or computation is meaningless.
A typical example is `0/0` but there are many others.
NaNs may also be used, to represent missing values: for example,
NaNs may also be used to represent missing values: for example,
these could, by convention, be ignored in calculations of statistics like means.
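The missing-value convention mentioned here can be sketched portably (`mean_ignoring_nan` is a hypothetical helper):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Mean of the non-NaN entries: NaNs mark missing observations and are,
// by convention, simply skipped.
inline double mean_ignoring_nan(const std::vector<double>& v) {
    double sum = 0.0;
    std::size_t n = 0;
    for (double x : v) {
        if (!std::isnan(x)) { sum += x; ++n; }
    }
    return n ? sum / n : std::numeric_limits<double>::quiet_NaN();
}
```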
Many of the problems with a representation for
[@http://en.wikipedia.org/wiki/NaN Not-A-Number] has hampered portable use,
[@http://en.wikipedia.org/wiki/NaN Not-A-Number] have hampered portable use,
similar to those with infinity.
[nan_1]


@@ -14,7 +14,7 @@ Random numbers are generated in conjunction with Boost.Random.
There is a single generator that supports generating random integers with large bit counts:
[@http://www.boost.org/doc/html/boost/random/independent_bits_engine.html `independent_bits_engine`].
This type can be used with either ['unbounded] integer types, or with ['bounded] (ie fixed precision) unsigned integers:
This type can be used with either ['unbounded] integer types, or with ['bounded] (i.e. fixed precision) unsigned integers:
[random_eg1]
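For bit counts up to 64 the same engine exists in the standard library, which makes the idea easy to try without Boost.Random (Boost's version additionally accepts multiprecision result types and arbitrary bit counts):

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// Glue together bits drawn from a base generator until the requested width
// is filled - here 64 bits assembled from mt19937's 32-bit outputs.
using wide_engine =
    std::independent_bits_engine<std::mt19937, 64, std::uint64_t>;
```

Usage is the same as any engine: `wide_engine gen(42); std::uint64_t v = gen();` — and two engines seeded identically produce the same sequence.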
@@ -41,7 +41,7 @@ with a multiprecision generator such as [@http://www.boost.org/doc/html/boost/ra
Or to use [@http://www.boost.org/doc/html/boost/random/uniform_smallint.html `uniform_smallint`] or
[@http://www.boost.org/doc/html/boost/random/random_number_generator.html `random_number_generator`] with multiprecision types.
floating-point values in \[0,1) are most easily generated using [@http://www.boost.org/doc/html/boost/random/generate_canonical.html `generate_canonical`],
Floating-point values in \[0,1) are most easily generated using [@http://www.boost.org/doc/html/boost/random/generate_canonical.html `generate_canonical`],
note that `generate_canonical` will call the generator multiple times to produce the requested number of bits, for example we can use
it with a regular generator like so:
@@ -50,7 +50,7 @@ it with a regular generator like so:
[random_eg3_out]
Note however, the distributions do not invoke the generator multiple times to fill up the mantissa of a multiprecision floating-point type
with random bits. For these therefore, we should probably use a multiprecision generator (ie `independent_bits_engine`) in combination
with random bits. For these, therefore, we should probably use a multiprecision generator (i.e. `independent_bits_engine`) in combination
with the distribution:
[random_eg4]
@@ -59,7 +59,7 @@ with the distribution:
And finally, it is possible to use the floating-point generators [@http://www.boost.org/doc/html/boost/random/lagged_fibonacci_01_engine.html `lagged_fibonacci_01_engine`]
and [@http://www.boost.org/doc/html/boost/random/subtract_with_idp144360752.html `subtract_with_carry_01_engine`] directly with multiprecision floating-point types.
It's worth noting however, that there is a distinct lack of literature on generating high bit-count random numbers, and therefore a lack of "known good" parameters to
It's worth noting, however, that there is a distinct lack of literature on generating high bit-count random numbers, and therefore a lack of "known good" parameters to
use with these generators in this situation. For this reason, these should probably be used for research purposes only:
[random_eg5]


@@ -18,7 +18,7 @@ The following back-ends provide rational number arithmetic:
[[`gmp_rational`][boost/multiprecision/gmp.hpp][2][[gmp]][Very fast and efficient back-end.][Dependency on GNU licensed [gmp] library.]]
[[`tommath_rational`][boost/multiprecision/tommath.hpp][2][[tommath]][All C/C++ implementation that's Boost Software Licence compatible.][Slower than [gmp].]]
[[`rational_adaptor`][boost/multiprecision/rational_adaptor.hpp][N/A][none][All C++ adaptor that allows any integer back-end type to be used as a rational type.][Requires an underlying integer back-end type.]]
[[`boost::rational`][boost/rational.hpp][N/A][None][A C++ rational number type that can used with any `number` integer type.][The expression templates used by `number` end up being "hidden" inside `boost::rational`: performance may well suffer as a result.]]
[[`boost::rational`][boost/rational.hpp][N/A][None][A C++ rational number type that can be used with any `number` integer type.][The expression templates used by `number` end up being "hidden" inside `boost::rational`: performance may well suffer as a result.]]
]
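What every entry in the table has in common is maintaining a normalised numerator/denominator pair; a toy sketch of that invariant (a hypothetical type, not any of the listed back-ends):

```cpp
#include <cassert>
#include <numeric>

// Keep the fraction in lowest terms with a positive denominator - the
// invariant a rational back-end re-establishes after every operation.
struct toy_rational {
    long long num, den;
    toy_rational(long long n, long long d) : num(n), den(d) {
        const long long g = std::gcd(num, den);
        if (g) { num /= g; den /= g; }
        if (den < 0) { num = -num; den = -den; }
    }
};

inline toy_rational operator+(const toy_rational& a, const toy_rational& b) {
    return toy_rational(a.num * b.den + b.num * a.den, a.den * b.den);
}
```

For example, `toy_rational(1, 2) + toy_rational(1, 3)` normalises to 5/6, and `toy_rational(2, -4)` normalises to -1/2.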
[include tutorial_cpp_rational.qbk]


@@ -32,7 +32,7 @@ rather strange beast as it's a signed type that is not a 2's complement type. A
operator`~` is deliberately not implemented for this type.
* Formatted IO for this type does not support octal or hexadecimal notation for negative values,
as a result performing formatted output on this type when the argument is negative and either of the flags
`std::ios_base::oct` or `std::ios_base::hex` are set, will result in a `std::runtime_error` will be thrown.
`std::ios_base::oct` or `std::ios_base::hex` is set will result in a `std::runtime_error` being thrown.
* Conversion from a string results in a `std::runtime_error` being thrown if the string can not be interpreted
as a valid integer.
* Division by zero results in a `std::overflow_error` being thrown.


@@ -23,7 +23,7 @@ The `tommath_rational` back-end is used via the typedef `boost::multiprecision::
`boost::rational<tom_int>`
to provide a rational number type that is a drop-in replacement for the native C++ number types, but with unlimited precision.
The advantage of using this type rather than `boost::rational<tom_int>` directly, is that it is expression-template enabled,
The advantage of using this type rather than `boost::rational<tom_int>` directly is that it is expression-template enabled,
greatly reducing the number of temporaries created in complex expressions.
There are also non-member functions:
@@ -35,12 +35,12 @@ which return the numerator and denominator of the number.
Things you should know when using this type:
* Default constructed `tom_rational`s have the value zero (this the inherited Boost.Rational behavior).
* Default constructed `tom_rational`s have the value zero (this is the inherited Boost.Rational behavior).
* Division by zero results in a `std::overflow_error` being thrown.
* Conversion from a string results in a `std::runtime_error` being thrown if the string can not be
interpreted as a valid rational number.
* No changes are made to [tommath]'s global state, so this type can safely coexist with other [tommath] code.
* Performance of this type has been found to be pretty poor - this need further investigation - but it appears that Boost.Rational
* Performance of this type has been found to be pretty poor - this needs further investigation - but it appears that Boost.Rational
needs some improvement in this area.
[h5 Example:]


@@ -3,7 +3,7 @@
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
http://www.boost.org/LICENSE_1_0.txt.)
]
[section:variable Variable Precision Arithmetic]
@@ -67,23 +67,23 @@ The enumerated values have the following meanings, with `preserve_related_precis
[[preserve_target_precision][All expressions are evaluated at the precision of the highest precision variable within the expression, and then rounded to the precision
of the target variable upon assignment. The precision of other types (including related or component types - see preserve_component_precision/preserve_related_precision)
contained within the expression is ignored.
This option has the unfortunate side effect, that moves may become full deep copies.]]
This option has the unfortunate side effect that moves may become full deep copies.]]
[[preserve_source_precision][All expressions are evaluated at the precision of the highest precision variable within the expression, and that precision is preserved upon assignment.
The precision of other types (including related or component types - see preserve_component_precision/preserve_related_precision) contained within the expression is ignored.
Moves are true moves, not copies.]]
[[preserve_component_precision][All expressions are evaluated at the precision of the highest precision variable within the expression, and that precision is preserved upon assignment.
If the expression contains component types then these are also considered when calculating the precision of the expression. Component types are the types which make up the two
components of the number when dealing with interval or complex numbers. They are the same type as `Num::value_type`.
Moves are true moves, not copies.]]
[[preserve_related_precision][All expressions are evaluated at the precision of the highest precision variable within the expression, and that precision is preserved upon assignment.
If the expression contains component types then these are also considered when calculating the precision of the expression. In addition to component types,
all related types are considered when evaluating the precision of the expression.
Related types are considered to be instantiations of the same template, but with different parameters. So for example `mpfr_float_100` would be a related type to `mpfr_float`, and all expressions
containing an `mpfr_float_100` variable would have at least 100 decimal digits of precision when evaluated as an `mpfr_float` expression. Moves are true moves, not copies.]]
[[preserve_all_precision][All expressions are evaluated at the precision of the highest precision variable within the expression, and that precision is preserved upon assignment.
In addition to component and related types, all types are considered when evaluating the precision of the expression.
For example, if the expression contains an `mpz_int`, then the precision of the expression will be sufficient to store all of the digits in the integer unchanged.
This option should generally be used with extreme caution, as it can easily cause unintentional precision inflation. Moves are true moves, not copies.]]
]
Note how the values `preserve_source_precision`, `preserve_component_precision`,
@@ -93,7 +93,7 @@ the precision of an expression:
[table
[[Value][Considers types (lowest in hierarchy first, each builds on the one before)]]
[[preserve_source_precision][Considers types the same as the result in the expression only.]]
[[preserve_component_precision][Also considers component types, i.e. `Num::value_type`.]]
[[preserve_related_precision][Also considers all instantiations of the backend-template, not just the same type as the result.]]
[[preserve_all_precision][Considers everything, including completely unrelated types such as (possibly arbitrary precision) integers.]]
]


@@ -12,7 +12,7 @@
[important This section is seriously out of date compared to recent Visual C++ releases. A modernization of Multiprecision's visualizers is planned for Visual Studio 2022 (and beyond). The legacy description has been maintained and is provided below.]
Let's face it, debugging multiprecision numbers is challenging - simply because we can't easily inspect the value of the numbers.
Visual C++ provides a partial solution in the shape of "visualizers" which provide improved views of complex data structures.
Previously, there was preliminary support for visualizers within older versions of Visual Studio.


@@ -71,7 +71,7 @@ allows us to define variables with 50 decimal digit precision just like __fundam
0.14285714285714285714285714285714285714285714285714
We can also use __math_constants like [pi],
guaranteed to be initialized with the very last bit of precision for the floating-point type.
*/
std::cout << "pi = " << boost::math::constants::pi<cpp_bin_float_50>() << std::endl;
@@ -140,7 +140,7 @@ but if one is implementing a formula involving a fraction from integers,
including decimal fractions like 1/10, 1/100, then comparison with other computations like __WolframAlpha
will reveal differences whose cause may be perplexing.
To get as-precise-as-possible decimal fractions like 1.234, we can write
*/
const cpp_bin_float_50 f1(cpp_bin_float_50(1234) / 1000); // Construct from a fraction.


@@ -28,7 +28,7 @@ int main()
using boost::multiprecision::cpp_int;
// Create a cpp_int with just a couple of bits set:
cpp_int i;
bit_set(i, 5000); // set the 5000th bit
bit_set(i, 200);
bit_set(i, 50);
// export into 8-bit unsigned values, most significant bit first:


@@ -121,8 +121,8 @@ Which outputs:
[pre 9.82266396479604757017335009796882833995903762577173e-01]
Now that we've seen some non-template examples, let's repeat the code again, but this time as a template
that can be called either with a builtin type (`float`, `double` etc.), or with a multiprecision type:
*/


@@ -18,7 +18,7 @@
// of this example) could be used for smaller arguments.
// This example has been tested with decimal digits counts
// ranging from 21 to 301, by adjusting the parameter
// local::my_digits10 at compile time.
// References:


@@ -52,12 +52,12 @@ void print_factorials()
factorial *= i;
}
//
// Let's see how many digits the largest factorial was:
unsigned digits = results.back().str().size();
//
// Now print them out, using right justification; while we're at it
// we'll indicate the limit of each integer type, so begin by defining
// the limits for 16, 32, 64 etc. bit integers:
cpp_int limits[] = {
(cpp_int(1) << 16) - 1,
(cpp_int(1) << 32) - 1,


@@ -12,12 +12,12 @@ beta distribution to an absurdly high precision and compare the
accuracy and times taken for various methods. That is, we want
to calculate the value of `x` for which ['I[sub x](a, b) = 0.5].
Ultimately we'll use Newton's method and set the precision of
mpfr_float to have just enough digits at each iteration.
The full source of this program is in [@../../example/mpfr_precision.cpp].
We'll skip over the #includes and using declarations, and go straight to
some support code, first off a simple stopwatch for performance measurement:
*/
@@ -106,11 +106,11 @@ We'll begin with a reference method that simply calls the Boost.Math function `i
full working precision of the arguments throughout. Our reference function takes 3 arguments:
* The 2 parameters `a` and `b` of the beta distribution, and
* the number of decimal digits precision to achieve in the result.
We begin by setting the default working precision to that requested, and then, since we don't know where
our arguments `a` and `b` have been or what precision they have, we make a copy of them - note that since
copying also copies the precision as well as the value, we have to set the precision explicitly with a
second argument to the copy. Then we can simply return the result of `ibeta_inv`:
*/
mpfr_float beta_distribution_median_method_1(mpfr_float const& a_, mpfr_float const& b_, unsigned digits10)
@@ -120,8 +120,8 @@ mpfr_float beta_distribution_median_method_1(mpfr_float const& a_, mpfr_float co
return ibeta_inv(a, b, half);
}
/*`
You may be wondering why we needed to change the precision of our variables `a` and `b` as well as setting the
default - there are in fact two ways in which this can go wrong if we don't do that:
* The variables have too much precision - this will cause all arithmetic operations involving those types to be
promoted to the higher precision, wasting precious calculation time.


@@ -99,7 +99,7 @@ BOOST_AUTO_TEST_CASE(test_numeric_limits_snips)
double write = 2./3; // Any arbitrary value that cannot be represented exactly.
double read = 0;
std::stringstream s;
s.precision(std::numeric_limits<double>::digits10); // or `float64_t` for 64-bit IEEE 754 double.
s << write;
s >> read;
if(read != write)
@@ -123,7 +123,7 @@ BOOST_AUTO_TEST_CASE(test_numeric_limits_snips)
}
{
//[max_digits10_4
/*`And similarly for a much higher precision type:
*/
using namespace boost::multiprecision;
@@ -392,7 +392,7 @@ so the default expression template parameter has been replaced by `et_off`.]
cpp_bin_float_quad expected = NaN;
cpp_bin_float_quad calculated = 2 * NaN;
// Comparisons of NaNs always fail:
bool b = expected == calculated;
std::cout << b << std::endl;
BOOST_CHECK_NE(expected, expected);
@@ -445,7 +445,7 @@ Then we can equally well use a multiprecision type cpp_bin_float_quad:
infinity output was inf
infinity input was inf
``
Similarly we can do the same with NaN (except that we cannot use `assert` (because any comparisons with NaN always return false)).
*/
{
std::locale old_locale;


@@ -195,7 +195,7 @@ void t4()
using namespace boost::multiprecision;
using namespace boost::random;
//
// Generate some distributed values:
//
uniform_real_distribution<cpp_bin_float_50> ur(-20, 20);
gamma_distribution<cpp_bin_float_50> gd(20);
@@ -277,7 +277,7 @@ void t5()
// Generate some multiprecision values; note that the generator is so large
// that we have to allocate it on the heap, otherwise we may run out of
// stack space! We could avoid this by using a floating point type which
// allocates its internal storage on the heap - cpp_bin_float will do
// this with the correct template parameters, as will the GMP or MPFR
// based reals.
//


@@ -21,14 +21,14 @@ can be displayed in your debugger of choice:
using mp_t = boost::multiprecision::debug_adaptor_t<boost::multiprecision::mpfr_float>;
/*`
Our first example will investigate calculating the Bessel J function via its well known
series representation:
[$../bessel2.svg]
This simple series suffers from catastrophic cancellation error
near the roots of the function, so we'll investigate slowly increasing the precision of
the calculation until we get the result to N decimal places. We'll begin by defining
a function to calculate the series for Bessel J, the details of which we'll leave in the
source code:
*/
@@ -158,11 +158,11 @@ mp_t Bessel_J_to_precision(mp_t v, mp_t x, unsigned digits10)
}
//
// We now have an accurate result, but it may have too many digits,
// so let's round the result to the requested precision now:
//
result.precision(saved_digits10);
//
// To maintain uniform precision during function return, let's
// reset the default precision now:
//
scoped.reset(saved_digits10);
@@ -173,7 +173,7 @@ mp_t Bessel_J_to_precision(mp_t v, mp_t x, unsigned digits10)
So far, this is all well and good, but there is still a potential trap for the unwary here,
when the function returns, the variable [/result] may be copied/moved either once or twice
depending on whether the compiler implements the named-return-value optimisation. And since this
all happens outside the scope of this function, the precision of the returned value may get unexpectedly
changed - and potentially with different behaviour once optimisations are turned on!
To prevent these kinds of unintended consequences, a function returning a value with specified precision
@@ -328,7 +328,7 @@ mp_t reduce_n_pi(const mp_t& arg)
scope_2.reset(boost::multiprecision::variable_precision_options::preserve_target_precision);
mp_t result = reduced;
//
// As with previous examples, returning the result may create temporaries, so let's
// make sure that we preserve the precision of result. Note that this isn't strictly
// required if the calling context is always of uniform precision, but we can't be sure
// of our calling context: