@@ -33,7 +33,7 @@
types, float, double, long
double and a Boost.Multiprecision
type cpp_bin_float_50. In
- each case the target accuracy was set using our "recomended" accuracy
+ each case the target accuracy was set using our "recommended" accuracy
limits (or at least limits that make a good starting point - which is likely
to give close to full accuracy without resorting to unnecessary iterations).
diff --git a/doc/html/math_toolkit/special_tut/special_tut_test.html b/doc/html/math_toolkit/special_tut/special_tut_test.html
index c9b641309..0e06ce3f1 100644
--- a/doc/html/math_toolkit/special_tut/special_tut_test.html
+++ b/doc/html/math_toolkit/special_tut/special_tut_test.html
@@ -69,7 +69,7 @@
Mathematica or its online versions functions.wolfram.com
or www.wolframalpha.com
then it's a good idea to sanity check our implementation by having at least
- one independendly generated value for each code branch our implementation
+ one independently generated value for each code branch our implementation
may take. To slot these in nicely with our testing framework it's best to
tabulate these like this:
@@ -228,7 +228,7 @@
#include <boost/math/special_functions/my_special.hpp>
-
+
#include <boost/array.hpp>
#include "functor.hpp"
#include "handle_test_result.hpp"
diff --git a/doc/html/math_toolkit/stat_tut/overview/generic.html b/doc/html/math_toolkit/stat_tut/overview/generic.html
index 3f4de23a8..075409ffd 100644
--- a/doc/html/math_toolkit/stat_tut/overview/generic.html
+++ b/doc/html/math_toolkit/stat_tut/overview/generic.html
@@ -213,7 +213,7 @@
The quantile functions for these distributions are hard to specify in
a manner that will satisfy everyone all of the time. The default behaviour
is to return an integer result that has been rounded outwards:
- that is to say, lower quantiles - where the probablity is less than 0.5
+ that is to say, lower quantiles - where the probability is less than 0.5 -
are rounded down, while upper quantiles - where the probability is greater
than 0.5 - are rounded up. This behaviour ensures that if an X% quantile
is requested, then at least the requested coverage
@@ -225,7 +225,7 @@
differently, or return a real-valued result using Policies.
It is strongly recommended that you read the tutorial Understanding
Quantiles of Discrete Distributions before using the quantile
- function on a discrete distribtion. The reference
+ function on a discrete distribution. The reference
docs describe how to change the rounding policy for these distributions.
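
A minimal sketch of the outward rounding described above, using the binomial distribution (the distribution and probability values are illustrative, not taken from the page being patched):

    #include <boost/math/distributions/binomial.hpp>
    #include <iostream>

    int main()
    {
       using boost::math::binomial;
       binomial dist(20, 0.25); // 20 trials, success fraction 0.25.
       // Lower quantile (p < 0.5): integer result rounded *down*;
       // upper quantile (p > 0.5): integer result rounded *up*,
       // so a requested X% coverage is always met or exceeded.
       std::cout << quantile(dist, 0.05) << '\n';
       std::cout << quantile(dist, 0.95) << '\n';
    }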
diff --git a/doc/octonion/math-octonion.qbk b/doc/octonion/math-octonion.qbk
index 91fa423e0..2d8dcfa55 100644
--- a/doc/octonion/math-octonion.qbk
+++ b/doc/octonion/math-octonion.qbk
@@ -789,16 +789,16 @@ of the octonion.
template<typename T> T abs(octonion<T> const & o);
-This return the magnitude (Euclidian norm) of the octonion.
+This returns the magnitude (Euclidean norm) of the octonion.
[h4 norm]
template<typename T> T norm(octonion<T> const & o);
This returns the (Cayley) norm of the octonion. The term "norm" might
-be confusing, as most people associate it with the Euclidian norm
+be confusing, as most people associate it with the Euclidean norm
(and quadratic functionals). For this version of (the mathematical
-objects known as) octonions, the Euclidian norm (also known as
+objects known as) octonions, the Euclidean norm (also known as
magnitude) is the square root of the Cayley norm.
[endsect] [/section:oct_value_ops Octonion Value Operations]
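
The abs/norm relationship documented above is easy to check numerically; a minimal sketch (component values arbitrary):

    #include <boost/math/octonion.hpp>
    #include <cassert>
    #include <cmath>

    int main()
    {
       boost::math::octonion<double> o(1, 2, 3, 4, 5, 6, 7, 8);
       double cayley    = norm(o); // Cayley norm: sum of squared components.
       double magnitude = abs(o);  // Euclidean norm (magnitude).
       // The magnitude is the square root of the Cayley norm, up to rounding:
       assert(std::abs(magnitude - std::sqrt(cayley)) < 1e-12);
    }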
@@ -954,7 +954,7 @@ History section. Thank you to all who contributed to the discussion about this l
* 1.5.9 - 13/5/2013: Incorporated into Boost.Math.
* 1.5.8 - 17/12/2005: Converted documentation to Quickbook Format.
-* 1.5.7 - 25/02/2003: transitionned to the unit test framework; now included by the library header (rather than the test files), via .
+* 1.5.7 - 25/02/2003: transitioned to the unit test framework; now included by the library header (rather than the test files), via .
* 1.5.6 - 15/10/2002: Gcc2.95.x and stlport on linux compatibility by Alkis Evlogimenos (alkis@routescience.com).
* 1.5.5 - 27/09/2002: Microsoft VCPP 7 compatibility, by Michael Stevens (michael@acfr.usyd.edu.au); requires the /Za compiler option.
* 1.5.4 - 19/09/2002: fixed problem with multiple inclusion (in different translation units); attempt at an improved compatibility with Microsoft compilers, by Michael Stevens (michael@acfr.usyd.edu.au) and Fredrik Blomqvist; other compatibility fixes.
diff --git a/doc/overview/roadmap.qbk b/doc/overview/roadmap.qbk
index da7e22300..e09a61474 100644
--- a/doc/overview/roadmap.qbk
+++ b/doc/overview/roadmap.qbk
@@ -144,7 +144,7 @@ Patches:
see [@https://svn.boost.org/trac/boost/ticket/11557 11557].
* Fix comments in code for the hypergeometric to match what it actually does, also fixes the parameter access functions to return
the correct values. See [@https://svn.boost.org/trac/boost/ticket/11556 11556].
-* Stopped using hidden visiblity library build with the Oracle compiler as it leads to unresolved externals from the C++ standard library.
+* Stopped using hidden visibility library build with the Oracle compiler as it leads to unresolved externals from the C++ standard library.
See [@https://svn.boost.org/trac/boost/ticket/11547 11547].
* Fix unintended use of __declspec when building with Oracle C++. See [@https://svn.boost.org/trac/boost/ticket/11546 11546].
* Fix corner case bug in root bracketing code, see [@https://svn.boost.org/trac/boost/ticket/11532 11532].
@@ -249,7 +249,7 @@ Random variate can now be infinite.
* Fix bug in inverse incomplete beta that results in cancellation errors when the beta function is really an arcsine or Student's T distribution.
* Fix issue in Bessel I and K function continued fractions that causes spurious over/underflow.
* Add improvement to non-central chi squared distribution quantile due to Thomas Luu,
-[@http://discovery.ucl.ac.uk/1482128/ Fast and accurate parallel computation of quantile functions for random number generation, Doctorial Thesis 2016].
+[@http://discovery.ucl.ac.uk/1482128/ Fast and accurate parallel computation of quantile functions for random number generation, Doctoral Thesis 2016].
[@http://discovery.ucl.ac.uk/1463470/ Efficient and Accurate Parallel Inversion of the Gamma Distribution, Thomas Luu]
[h4 Boost-1.54]
diff --git a/doc/quadrature/double_exponential.qbk b/doc/quadrature/double_exponential.qbk
index 6f0d76ad8..b4fba3da3 100644
--- a/doc/quadrature/double_exponential.qbk
+++ b/doc/quadrature/double_exponential.qbk
@@ -481,7 +481,7 @@ Over (a, [infin]). As long as `a >= 0` both the tanh_sinh and the exp_sinh inte
a rather efficient method for this kind of integral. However, if we have `a < 0` then we are forced to adapt the range in a way that
produces abscissa values near zero that have an absolute error of [epsilon], and since all of the area of the integral is near zero
both integrators thrash around trying to reach the target accuracy, but never actually get there for `a << 0`. On the other hand, the
-simple expedient of breaking the integral into two domains: (a, 0) and (0, b) and integrating each seperately using the tanh-sinh
+simple expedient of breaking the integral into two domains: (a, 0) and (0, b) and integrating each separately using the tanh-sinh
integrator, works just fine.
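
A sketch of the two-domain split just described, with a made-up integrand whose area is concentrated near zero (the actual benchmark integrand is not shown in this hunk):

    #include <boost/math/quadrature/tanh_sinh.hpp>
    #include <cmath>

    double split_integral(double a, double b)
    {
       // Assumes a < 0 < b. Splitting at zero keeps the abscissas near the
       // origin fully accurate in each half, instead of suffering an absolute
       // error of epsilon from adapting the range across zero.
       boost::math::quadrature::tanh_sinh<double> integrator;
       auto f = [](double x) { return std::exp(-std::abs(x)); };
       return integrator.integrate(f, a, 0.0) + integrator.integrate(f, 0.0, b);
    }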
Finally, some endpoint singularities are too strong to be handled by `tanh_sinh` or equivalent methods, for example consider integrating
@@ -517,7 +517,7 @@ seems sensible:
auto Q = integrator.integrate(f, crossover, constants::half_pi<double>()) + pow(crossover, 1 - p) / (1 - p);
There is an alternative, more complex method, which is applicable when we are dealing with expressions which can be simplified
-by evaluating by logs. Let's suppose that as in this case, all the area under the graph is infinitely close to zero, now inagine
+by evaluating by logs. Let's suppose that as in this case, all the area under the graph is infinitely close to zero, now imagine
that we could expand that region out over a much larger range of abscissa values: that's exactly what happens if we perform
argument substitution, replacing `x` by `exp(-x)` (note that we must also multiply by the derivative of `exp(-x)`).
Now the singularity at zero is moved to +[infin], and the [pi]/2 bound to
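
A sketch of that substitution for a power-law integrand x^-p on (0, pi/2), as in the preceding example (the integrand and p are illustrative): with x = exp(-u) the integrand becomes exp(-u)^-p * exp(-u) = exp(-(1 - p)u) over u in (-log(pi/2), +infinity), and the exp_sinh integrator handles the half-infinite range directly:

    #include <boost/math/constants/constants.hpp>
    #include <boost/math/quadrature/exp_sinh.hpp>
    #include <cmath>
    #include <limits>

    double substituted_integral(double p) // assumes 0 < p < 1
    {
       // Integrand after the x = exp(-u) substitution (derivative of exp(-u)
       // already folded in); no singular endpoint remains.
       auto g = [p](double u) { return std::exp(-(1 - p) * u); };
       boost::math::quadrature::exp_sinh<double> integrator;
       double lower = -std::log(boost::math::constants::half_pi<double>());
       return integrator.integrate(g, lower, std::numeric_limits<double>::infinity());
    }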
diff --git a/doc/roots/root_comparison.qbk b/doc/roots/root_comparison.qbk
index bf660e53f..65a9ac0b1 100644
--- a/doc/roots/root_comparison.qbk
+++ b/doc/roots/root_comparison.qbk
@@ -34,7 +34,7 @@ more complex functions.
The program used was [@ ../../example/root_finding_algorithms.cpp root_finding_algorithms.cpp].
100000 evaluations of each floating-point type and algorithm were used and the CPU times were
judged from repeat runs to have an uncertainty of 10 %. Comparing MSVC for `double` and `long double`
-(which are identical on this patform) may give a guide to uncertainty of timing.
+(which are identical on this platform) may give a guide to uncertainty of timing.
The requested precision was set as follows:
@@ -72,7 +72,7 @@ search going wildly off-track. For a known precision, it may also be possible t
fix the number of iterations, allowing inlining and loop unrolling. It also
algebraically simplifies the Halley steps leading to a big reduction in the
number of floating point operations required compared to a "black box" implementation
-that calculates the derivatives seperately and then combines them in the Halley code.
+that calculates the derivatives separately and then combines them in the Halley code.
Typically, it was found that computation using type `double`
took a few times longer when using the various root-finding algorithms directly rather
than the hand coded/optimized `cbrt` routine.
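
For contrast with the hand-coded routine, a sketch of the "black box" route via halley_iterate, where the derivatives are evaluated separately and combined by the iteration code (the guess, bracket and digit count are rough illustrative choices, not the library's optimized cbrt logic):

    #include <boost/math/tools/roots.hpp>
    #include <boost/math/tools/tuple.hpp>
    #include <cmath>
    #include <cstdint>
    #include <limits>

    double cbrt_via_halley(double x) // assumes x > 0
    {
       // Functor returning f(t) = t^3 - x and its first two derivatives:
       auto f = [x](double t) {
          return boost::math::make_tuple(t * t * t - x, 3 * t * t, 6 * t);
       };
       int e;
       std::frexp(x, &e);                     // x = m * 2^e, m in [0.5, 1)
       double guess = std::ldexp(1.0, e / 3); // rough power-of-two first guess
       const int digits = std::numeric_limits<double>::digits - 2;
       std::uintmax_t max_iter = 20;
       return boost::math::tools::halley_iterate(
          f, guess, guess / 4, guess * 4, digits, max_iter);
    }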
@@ -87,7 +87,7 @@ test case as the cost of calling the functor is so cheap that the runtimes are l
dominated by the complexity of the iteration code.
* Compiling with optimisation halved computation times, and any differences between algorithms
-became nearly negligible. The optimisation speed-up of the __TOMS748 was especially noticable.
+became nearly negligible. The optimisation speed-up of the __TOMS748 was especially noticeable.
* Using a multiprecision type like `cpp_bin_float_50` for a precision of 50 decimal digits
took a lot longer, as expected because most computation
@@ -116,7 +116,7 @@ not in the way the algorithm works. This confirms previous observations using _
A second example compares four generalized nth-root finding algorithms for various n-th roots (5, 7 and 13)
of a single value 28.0, for four floating-point types, `float`, `double`,
`long double` and a __multiprecision type `cpp_bin_float_50`.
-In each case the target accuracy was set using our "recomended" accuracy limits
+In each case the target accuracy was set using our "recommended" accuracy limits
(or at least limits that make a good starting point - which is likely to give
close to full accuracy without resorting to unnecessary iterations).
@@ -199,7 +199,7 @@ And of course, compiler optimisation is crucial for speed.
[endsect] [/section:root_n_comparison Comparison of Nth-root Finding Algorithms]
-[section:elliptic_comparison Comparison of Elliptic Integral Root Finding Algoritghms]
+[section:elliptic_comparison Comparison of Elliptic Integral Root Finding Algorithms]
A third example compares four root finding algorithms for locating
the second radius of an ellipse with first radius 28 and arc length 300,
@@ -210,7 +210,7 @@ Which is to say we're solving:
[pre 4xE(sqrt(1 - 28[super 2] / x[super 2])) - 300 = 0]
-In each case the target accuracy was set using our "recomended" accuracy limits
+In each case the target accuracy was set using our "recommended" accuracy limits
(or at least limits that make a good starting point - which is likely to give
close to full accuracy without resorting to unnecessary iterations).
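
A sketch of one way to pose this problem, using ellint_2 with the TOMS 748 bracketing solver (the bracket endpoints and tolerance are illustrative; the benchmark itself compares several algorithms):

    #include <boost/math/special_functions/ellint_2.hpp>
    #include <boost/math/tools/roots.hpp>
    #include <cmath>
    #include <cstdint>
    #include <limits>

    double second_radius()
    {
       // f(x) = 4 x E(sqrt(1 - 28^2 / x^2)) - 300, x being the second radius.
       auto f = [](double x) {
          double k = std::sqrt(1 - (28.0 * 28.0) / (x * x)); // eccentricity
          return 4 * x * boost::math::ellint_2(k) - 300;
       };
       // f(28) = 56*pi - 300 < 0 and f(300) > 0, so the root is bracketed:
       boost::math::tools::eps_tolerance<double> tol(std::numeric_limits<double>::digits - 2);
       std::uintmax_t max_iter = 50;
       auto r = boost::math::tools::toms748_solve(f, 28.0, 300.0, tol, max_iter);
       return (r.first + r.second) / 2;
    }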
@@ -244,6 +244,6 @@ the second comes almost for free, consequently the third order methods (Halley)
guarantees of ['quadratic] convergence, while Halley relies on a smooth function with a single root to
give ['cubic] convergence. It's not entirely clear why Schroder iteration often does worse than Newton.
-[endsect] [/section:elliptic_comparison Comparison of Elliptic Integral Root Finding Algoritghms]
+[endsect] [/section:elliptic_comparison Comparison of Elliptic Integral Root Finding Algorithms]
[endsect] [/section:root_comparison Comparison of Root Finding Algorithms]
diff --git a/doc/sf/expint.qbk b/doc/sf/expint.qbk
index 00be73569..89554730d 100644
--- a/doc/sf/expint.qbk
+++ b/doc/sf/expint.qbk
@@ -171,7 +171,7 @@ For x < 0 this function just calls __expint_n(1, -x): which in turn is implement
in terms of rational approximations when the type of x has 113 or fewer bits of
precision.
-For x > 0 the generic version is implemented using the infinte series:
+For x > 0 the generic version is implemented using the infinite series:
[equation expint_i_2]
@@ -211,7 +211,7 @@ were found to be ill-conditioned: Cody and Thacher solved this issue by
converting the polynomials to their J-Fraction equivalent. However, as long
as the interval of evaluation was \[-1,1\] and the number of terms carefully chosen,
it was found that the polynomials /could/ be evaluated to suitable precision:
-error rates are typically 2 to 3 epsilon which is comparible to the error
+error rates are typically 2 to 3 epsilon which is comparable to the error
rate that Cody and Thacher achieved using J-Fractions, but marginally more
efficient given that fewer divisions are involved.
diff --git a/doc/sf/factorials.qbk b/doc/sf/factorials.qbk
index 118dd8816..8c2e37755 100644
--- a/doc/sf/factorials.qbk
+++ b/doc/sf/factorials.qbk
@@ -33,7 +33,7 @@ arguments passed to the function. Therefore if you write something like:
`boost::math::factorial(2);`
You will get a (perhaps perplexing) compiler error, usually indicating that there is no such function to be found.
-Instead you need to specify the return type explicity and write:
+Instead you need to specify the return type explicitly and write:
`boost::math::factorial<double>(2);`
@@ -144,8 +144,8 @@ arguments passed to the function. Therefore if you write something like:
`boost::math::double_factorial(2);`
-You will get a (possibly perplexing) compiler error, usually indicating that there is no such function to be found. Instead you need to specifiy
-the return type explicity and write:
+You will get a (possibly perplexing) compiler error, usually indicating that there is no such function to be found. Instead you need to specify
+the return type explicitly and write:
`boost::math::double_factorial<double>(2);`
@@ -324,8 +324,8 @@ arguments passed to the function. Therefore if you write something like:
`boost::math::binomial_coefficient(10, 2);`
-You will get a compiler error, usually indicating that there is no such function to be found. Instead you need to specifiy
-the return type explicity and write:
+You will get a compiler error, usually indicating that there is no such function to be found. Instead you need to specify
+the return type explicitly and write:
`boost::math::binomial_coefficient<double>(10, 2);`
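
All three functions follow the same pattern; a minimal sketch of the working calls with the return type given explicitly (double is just one possible choice):

    #include <boost/math/special_functions/binomial.hpp>
    #include <boost/math/special_functions/factorials.hpp>

    double examples()
    {
       // The template argument is the *return* type: it cannot be deduced
       // from the integer arguments, so it must be spelled out.
       double f  = boost::math::factorial<double>(2);
       double df = boost::math::double_factorial<double>(2);
       double bc = boost::math::binomial_coefficient<double>(10, 2);
       return f + df + bc;
    }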
diff --git a/example/policy_eg_7.cpp b/example/policy_eg_7.cpp
index 70a49504b..2865bea36 100644
--- a/example/policy_eg_7.cpp
+++ b/example/policy_eg_7.cpp
@@ -14,12 +14,12 @@ using std::cout; using std::endl;
//[policy_eg_7
#include <boost/math/distributions.hpp> // All distributions.
-// using boost::math::normal; // Would create an ambguity between
+// using boost::math::normal; // Would create an ambiguity between
// boost::math::normal_distribution boost::math::normal and
// 'anonymous-namespace'::normal'.
namespace
-{ // anonymous or unnnamed (rather than named as in policy_eg_6.cpp).
+{ // anonymous or unnamed (rather than named as in policy_eg_6.cpp).
using namespace boost::math::policies;
// using boost::math::policies::errno_on_error; // etc.
diff --git a/example/root_n_finding_algorithms.cpp b/example/root_n_finding_algorithms.cpp
index d926eda88..70f017923 100644
--- a/example/root_n_finding_algorithms.cpp
+++ b/example/root_n_finding_algorithms.cpp
@@ -137,7 +137,7 @@ struct root_info
// = digits * digits_accuracy
// Vector of values (4) for each algorithm, TOMS748, Newton, Halley & Schroder.
//std::vector< boost::int_least64_t> times; converted to int.
- std::vector<int> times; // arbirary units (ticks).
+ std::vector<int> times; // arbitrary units (ticks).
//boost::int_least64_t min_time = std::numeric_limits<boost::int_least64_t>::max(); // Used to normalize times (as int).
std::vector normed_times;
int min_time = (std::numeric_limits<int>::max)(); // Used to normalize times.
@@ -223,7 +223,7 @@ T nth_root_noderiv(T x)
template <int N, typename T>
struct nth_root_functor_1deriv
-{ // Functor also returning 1st derviative.
+{ // Functor also returning 1st derivative.
BOOST_STATIC_ASSERT_MSG(boost::is_integral<T>::value == false, "Only floating-point type types can be used!");
BOOST_STATIC_ASSERT_MSG((N > 0) == true, "root N must be > 0!");
@@ -569,7 +569,7 @@ int test_root(cpp_bin_float_100 big_value, cpp_bin_float_100 answer, const char*
return 4; // eval_count of how many algorithms used.
} // test_root
-/*! Fill array of times, interations, etc for Nth root for all 4 types,
+/*! Fill array of times, iterations, etc for Nth root for all 4 types,
and write a table of results in Quickbook format.
*/
template
diff --git a/include/boost/math/distributions/non_central_chi_squared.hpp b/include/boost/math/distributions/non_central_chi_squared.hpp
index 5ef83aec7..fe469864d 100644
--- a/include/boost/math/distributions/non_central_chi_squared.hpp
+++ b/include/boost/math/distributions/non_central_chi_squared.hpp
@@ -46,7 +46,7 @@ namespace boost
// Computing discrete mixtures of continuous
// distributions: noncentral chisquare, noncentral t
// and the distribution of the square of the sample
- // multiple correlation coeficient.
+ // multiple correlation coefficient.
// D. Benton, K. Krishnamoorthy.
// Computational Statistics & Data Analysis 43 (2003) 249 - 267
//
@@ -141,7 +141,7 @@ namespace boost
// This uses a stable forward iteration to sum the
// CDF, unfortunately this can not be used for large
// values of the non-centrality parameter because:
- // * The first term may underfow to zero.
+ // * The first term may underflow to zero.
// * We may need an extra-ordinary number of terms
// before we reach the first *significant* term.
//
@@ -191,7 +191,7 @@ namespace boost
// Computing discrete mixtures of continuous
// distributions: noncentral chisquare, noncentral t
// and the distribution of the square of the sample
- // multiple correlation coeficient.
+ // multiple correlation coefficient.
// D. Benton, K. Krishnamoorthy.
// Computational Statistics & Data Analysis 43 (2003) 249 - 267
//
diff --git a/include/boost/math/octonion.hpp b/include/boost/math/octonion.hpp
index cff0c9eb5..10fe1a2a4 100644
--- a/include/boost/math/octonion.hpp
+++ b/include/boost/math/octonion.hpp
@@ -273,7 +273,7 @@ namespace boost
// but unlike them there is no meaningful notion of "imaginary part".
// Instead there is an "unreal part" which itself is an octonion, and usually
// nothing simpler (as opposed to the complex number case).
- // However, for practicallity, there are accessors for the other components
+ // However, for practicality, there are accessors for the other components
// (these are necessary for the templated copy constructor, for instance).
BOOST_OCTONION_ACCESSOR_GENERATOR(T)
@@ -1280,7 +1280,7 @@ namespace boost
// but unlike them there is no meaningful notion of "imaginary part".
// Instead there is an "unreal part" which itself is an octonion, and usually
// nothing simpler (as opposed to the complex number case).
- // However, for practicallity, there are accessors for the other components
+ // However, for practicality, there are accessors for the other components
// (these are necessary for the templated copy constructor, for instance).
BOOST_OCTONION_ACCESSOR_GENERATOR(float)
@@ -1344,7 +1344,7 @@ namespace boost
// but unlike them there is no meaningful notion of "imaginary part".
// Instead there is an "unreal part" which itself is an octonion, and usually
// nothing simpler (as opposed to the complex number case).
- // However, for practicallity, there are accessors for the other components
+ // However, for practicality, there are accessors for the other components
// (these are necessary for the templated copy constructor, for instance).
BOOST_OCTONION_ACCESSOR_GENERATOR(double)
@@ -1408,7 +1408,7 @@ namespace boost
// but unlike them there is no meaningful notion of "imaginary part".
// Instead there is an "unreal part" which itself is an octonion, and usually
// nothing simpler (as opposed to the complex number case).
- // However, for practicallity, there are accessors for the other components
+ // However, for practicality, there are accessors for the other components
// (these are necessary for the templated copy constructor, for instance).
BOOST_OCTONION_ACCESSOR_GENERATOR(long double)
@@ -3971,7 +3971,7 @@ namespace boost
#undef BOOST_OCTONION_VALARRAY_LOADER
- // Note: This is the Cayley norm, not the Euclidian norm...
+ // Note: This is the Cayley norm, not the Euclidean norm...
template<typename T>
inline T norm(octonion<T> const & o)
diff --git a/include/boost/math/quaternion.hpp b/include/boost/math/quaternion.hpp
index 3fcbf1eaf..2ffec2897 100644
--- a/include/boost/math/quaternion.hpp
+++ b/include/boost/math/quaternion.hpp
@@ -148,7 +148,7 @@ namespace boost
// but unlike them there is no meaningful notion of "imaginary part".
// Instead there is an "unreal part" which itself is a quaternion, and usually
// nothing simpler (as opposed to the complex number case).
- // However, for practicallity, there are accessors for the other components
+ // However, for practicality, there are accessors for the other components
// (these are necessary for the templated copy constructor, for instance).
BOOST_CONSTEXPR T real() const
@@ -1004,7 +1004,7 @@ inline BOOST_CXX14_CONSTEXPR quaternion operator / (const quaternion& a, c
}
- // Note: This is the Cayley norm, not the Euclidian norm...
+ // Note: This is the Cayley norm, not the Euclidean norm...
template<typename T>
inline BOOST_CXX14_CONSTEXPR T norm(quaternion<T> const & q)
diff --git a/include/boost/math/special_functions/detail/bessel_y1.hpp b/include/boost/math/special_functions/detail/bessel_y1.hpp
index 98389a90e..1d7713482 100644
--- a/include/boost/math/special_functions/detail/bessel_y1.hpp
+++ b/include/boost/math/special_functions/detail/bessel_y1.hpp
@@ -24,7 +24,7 @@
// This is the only way we can avoid
// warning: non-standard suffix on floating constant [-Wpedantic]
// when building with -Wall -pedantic. Neither __extension__
-// nor #pragma dianostic ignored work :(
+// nor #pragma diagnostic ignored work :(
//
#pragma GCC system_header
#endif
@@ -159,7 +159,7 @@ T bessel_y1(T x, const Policy& pol)
if (x <= 0)
{
- return policies::raise_domain_error<T>("bost::math::bessel_y1<%1%>(%1%,%1%)",
+ return policies::raise_domain_error<T>("boost::math::bessel_y1<%1%>(%1%,%1%)",
"Got x == %1%, but x must be > 0, complex result not supported.", x, pol);
}
if (x <= 4) // x in (0, 4]
diff --git a/include/boost/math/special_functions/detail/hypergeometric_1F1_large_abz.hpp b/include/boost/math/special_functions/detail/hypergeometric_1F1_large_abz.hpp
index 03394e885..176b87474 100644
--- a/include/boost/math/special_functions/detail/hypergeometric_1F1_large_abz.hpp
+++ b/include/boost/math/special_functions/detail/hypergeometric_1F1_large_abz.hpp
@@ -136,7 +136,7 @@
T first = 1;
T second = ((1 + crossover_a - b_local) / crossover_a) + ((b_local - 1) / crossover_a) / b_ratio;
//
- // Recurse down to a_local, compare values and re-nomralise first and second:
+ // Recurse down to a_local, compare values and re-normalise first and second:
//
boost::math::detail::hypergeometric_1F1_recurrence_a_coefficients a_coef(crossover_a, b_local, x);
int backwards_scale = 0;
@@ -404,7 +404,7 @@
BOOST_MATH_STD_USING
//
// This is the selection logic to pick the "best" method.
- // We have a,b,z >> 0 and need to comute the approximate cost of each method
+ // We have a,b,z >> 0 and need to compute the approximate cost of each method
// and then select whichever wins out.
//
enum method
diff --git a/include/boost/math/special_functions/detail/hypergeometric_pFq_checked_series.hpp b/include/boost/math/special_functions/detail/hypergeometric_pFq_checked_series.hpp
index 13a29c7c9..521559f95 100644
--- a/include/boost/math/special_functions/detail/hypergeometric_pFq_checked_series.hpp
+++ b/include/boost/math/special_functions/detail/hypergeometric_pFq_checked_series.hpp
@@ -42,7 +42,7 @@
// 3 solutions: The series diverges to a maxima, converges to a minima before diverging again to a second maxima before final convergence.
// 4 solutions: The series converges to a minima before diverging to a maxima, converging to a minima, diverging to a second maxima and then converging.
//
- // The first 2 situations are adequately handled by direct series evalution, while the 2,3 and 4 solutions are not.
+ // The first 2 situations are adequately handled by direct series evaluation, while the 2,3 and 4 solutions are not.
//
Real a = *aj.begin();
Real b = *bj.begin();
diff --git a/include/boost/math/special_functions/detail/ibeta_inverse.hpp b/include/boost/math/special_functions/detail/ibeta_inverse.hpp
index 26c1a83f1..4ec944a9c 100644
--- a/include/boost/math/special_functions/detail/ibeta_inverse.hpp
+++ b/include/boost/math/special_functions/detail/ibeta_inverse.hpp
@@ -264,7 +264,7 @@ T temme_method_2_ibeta_inverse(T /*a*/, T /*b*/, T z, T r, T theta, const Policy
// is 1:1 with the sign of eta and x-sin^2(theta) being the same.
// So we can check if we have the right root of 3.2, and if not
// switch x for 1-x. This transformation is motivated by the fact
- // that the distribution is *almost* symetric so 1-x will be in the right
+ // that the distribution is *almost* symmetric so 1-x will be in the right
// ball park for the solution:
//
if((x - s_2) * eta < 0)
@@ -374,12 +374,12 @@ T temme_method_3_ibeta_inverse(T a, T b, T p, T q, const Policy& pol)
e3 -= (442043 * w_9 + 2054169 * w_8 + 3803094 * w_7 + 3470754 * w_6 + 2141568 * w_5 - 2393568 * w_4 - 19904934 * w_3 - 34714674 * w_2 - 23128299 * w - 5253353) * d / (146966400 * w_6 * w1_3);
e3 -= (116932 * w_10 + 819281 * w_9 + 2378172 * w_8 + 4341330 * w_7 + 6806004 * w_6 + 10622748 * w_5 + 18739500 * w_4 + 30651894 * w_3 + 30869976 * w_2 + 15431867 * w + 2919016) * d_2 / (146966400 * w1_4 * w_7);
//
- // Combine eta0 and the error terms to compute eta (Second eqaution p155):
+ // Combine eta0 and the error terms to compute eta (Second equation p155):
//
T eta = eta0 + e1 / a + e2 / (a * a) + e3 / (a * a * a);
//
// Now we need to solve Eq 4.2 to obtain x. For any given value of
- // eta there are two solutions to this equation, and since the distribtion
+ // eta there are two solutions to this equation, and since the distribution
// may be very skewed, these are not related by x ~ 1-x we used when
// implementing section 3 above. However we know that:
//
@@ -579,7 +579,7 @@ T ibeta_inv_imp(T a, T b, T p, T q, const Policy& pol, T* py)
// When a and b differ by a small amount
// the curve is quite symmetrical and we can use an error
// function to approximate the inverse. This is the cheapest
- // of the three Temme expantions, and the calculated value
+ // of the three Temme expansions, and the calculated value
// for x will never be much larger than p, so we don't have
// to worry about cancellation as long as p is small.
//
diff --git a/include/boost/math/special_functions/gamma.hpp b/include/boost/math/special_functions/gamma.hpp
index bdbff3155..ebaf64072 100644
--- a/include/boost/math/special_functions/gamma.hpp
+++ b/include/boost/math/special_functions/gamma.hpp
@@ -874,7 +874,7 @@ T regularised_gamma_prefix(T a, T z, const Policy& pol, const Lanczos& l)
// We have to treat a < 1 as a special case because our Lanczos
// approximations are optimised against the factorials with a > 1,
// and for high precision types especially (128-bit reals for example)
- // very small values of a can give rather eroneous results for gamma
+ // very small values of a can give rather erroneous results for gamma
// unless we do this:
//
// TODO: is this still required? Lanczos approx should be better now?
@@ -1161,7 +1161,7 @@ T gamma_incomplete_imp(T a, T x, bool normalised, bool invert,
// way from a in value then we can reliably use methods 2 and 4
// below in logarithmic form and go straight to the result.
// Otherwise we let the regularized gamma take the strain
- // (the result is unlikely to unerflow in the central region anyway)
+ // (the result is unlikely to underflow in the central region anyway)
// and combine with lgamma in the hopes that we get a finite result.
//
if(invert && (a * 4 < x))
@@ -1375,7 +1375,7 @@ T gamma_incomplete_imp(T a, T x, bool normalised, bool invert,
// series sum based on what we'll end up subtracting it from
// at the end.
// Have to be careful though that this optimization doesn't
- // lead to spurious numberic overflow. Note that the
+ // lead to spurious numeric overflow. Note that the
// scary/expensive overflow checks below are more often
// than not bypassed in practice for "sensible" input
// values:
@@ -1632,7 +1632,7 @@ T tgamma_delta_ratio_imp(T z, T delta, const Policy& pol)
if((z <= 0) || (z + delta <= 0))
{
- // This isn't very sofisticated, or accurate, but it does work:
+ // This isn't very sophisticated, or accurate, but it does work:
return boost::math::tgamma(z, pol) / boost::math::tgamma(z + delta, pol);
}
@@ -1840,7 +1840,7 @@ struct igamma_initializer
static void do_init(const mpl::int_<N>&)
{
// If std::numeric_limits<T>::digits is zero, we must not call
- // our inituialization code here as the precision presumably
+ // our initialization code here as the precision presumably
// varies at runtime, and will not have been set yet. Plus the
// code requiring initialization isn't called when digits == 0.
if(std::numeric_limits<T>::digits)
diff --git a/include/boost/math/tools/user.hpp b/include/boost/math/tools/user.hpp
index 08a7e53d9..6d3df000c 100644
--- a/include/boost/math/tools/user.hpp
+++ b/include/boost/math/tools/user.hpp
@@ -54,7 +54,7 @@
//
// #define BOOST_MATH_EVALUATION_ERROR_POLICY throw_on_error
//
-// Underfow:
+// Underflow:
//
// #define BOOST_MATH_UNDERFLOW_ERROR_POLICY ignore_error
//
@@ -84,7 +84,7 @@
//
// #define BOOST_MATH_ASSERT_UNDEFINED_POLICY true
//
-// Maximum series iterstions permitted:
+// Maximum series iterations permitted:
//
// #define BOOST_MATH_MAX_SERIES_ITERATION_POLICY 1000000
//
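
As a sketch of how these macros take effect: define them before any Boost.Math header is seen (in user.hpp itself, or project-wide), and the library defaults change accordingly. The particular values below are illustrative:

    // Must be defined before any Boost.Math header is included:
    #define BOOST_MATH_UNDERFLOW_ERROR_POLICY throw_on_error
    #define BOOST_MATH_MAX_SERIES_ITERATION_POLICY 1000000

    #include <boost/math/special_functions/gamma.hpp>
    // boost::math::tgamma and friends now throw on underflow and permit
    // up to 1000000 series iterations by default.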
diff --git a/test/complex_test.cpp b/test/complex_test.cpp
index aa6ae4325..f2a0b6c82 100644
--- a/test/complex_test.cpp
+++ b/test/complex_test.cpp
@@ -83,7 +83,7 @@ bool check_complex(const std::complex& a, const std::complex& b, int max_e
}
if((boost::math::isnan)(a.imag()))
{
- BOOST_ERROR("Found non-finite value for inaginary part: " << a);
+ BOOST_ERROR("Found non-finite value for imaginary part: " << a);
}
T rel = boost::math::fabs((b-a)/b) / eps;
@@ -258,7 +258,7 @@ void test_inverse_trig(T)
//
// check_spots:
-// Various spot values, mostly the C99 special cases (infinites and NAN's).
+// Various spot values, mostly the C99 special cases (infinities and NAN's).
// TODO: add spot checks for the Wolfram spot values.
//
template
diff --git a/test/tanh_sinh_quadrature_test.cpp b/test/tanh_sinh_quadrature_test.cpp
index 0c1501c19..e1473a872 100644
--- a/test/tanh_sinh_quadrature_test.cpp
+++ b/test/tanh_sinh_quadrature_test.cpp
@@ -578,7 +578,7 @@ void test_crc()
}
// There is an alternative way to evaluate the above integral: by noticing that all the area of the integral
// is near zero for p < 0 and near 1 for p > 0 we can substitute exp(-x) for x and remap the integral to the
- // domain (0, INF). Internally we need to expand out the terms and evaluate using logs to avoid spurous overflow,
+ // domain (0, INF). Internally we need to expand out the terms and evaluate using logs to avoid spurious overflow,
// this gives us
// for p > 0:
for (Real p = Real(0.99); p > 0; p -= Real(0.1)) {
@@ -647,7 +647,7 @@ void test_crc()
// in the exp terms. Note that for small x: tan(x) ~= x, so making this
// substitution and evaluating by logs we have:
//
- // exp(-x)/tan(exp(-x))^h ~= exp((h - 1) * x) for x > -log(epsion);
+ // exp(-x)/tan(exp(-x))^h ~= exp((h - 1) * x) for x > -log(epsilon);
//
// Here's how that looks in code:
//
diff --git a/test/test_lambert_w.cpp b/test/test_lambert_w.cpp
index c35e83512..39f2aa02f 100644
--- a/test/test_lambert_w.cpp
+++ b/test/test_lambert_w.cpp
@@ -59,7 +59,7 @@ using boost::multiprecision::float128;
//#include // If available.
#include <boost/math/concepts/real_concept.hpp> // for real_concept tests.
-#include <boost/math/special_functions/fpclassify.hpp> // isnan, ifinite.
+#include <boost/math/special_functions/fpclassify.hpp> // isnan, isfinite.
#include <boost/math/special_functions/next.hpp> // float_next, float_prior
using boost::math::float_next;
using boost::math::float_prior;
@@ -616,7 +616,7 @@ void test_spots(RealType)
// Tests to ensure that all JM rational polynomials are being checked.
- // 1st polynomal if (z < 0.5) // 0.05 < z < 0.5
+ // 1st polynomial if (z < 0.5) // 0.05 < z < 0.5
BOOST_CHECK_CLOSE_FRACTION(lambert_w0(BOOST_MATH_TEST_VALUE(RealType, 0.49)),
BOOST_MATH_TEST_VALUE(RealType, 0.3465058086974944293540338951489158955895910665452626949),
tolerance);
@@ -624,7 +624,7 @@ void test_spots(RealType)
BOOST_MATH_TEST_VALUE(RealType, 0.04858156174600359264950777241723801201748517590507517888),
tolerance);
- // 2st polynomal if 0.5 < z < 2
+ // 2nd polynomial if 0.5 < z < 2
BOOST_CHECK_CLOSE_FRACTION(lambert_w0(BOOST_MATH_TEST_VALUE(RealType, 0.51)),
BOOST_MATH_TEST_VALUE(RealType, 0.3569144916935871518694242462560450385494399307379277704),
tolerance);
diff --git a/tools/lanczos_generator.cpp b/tools/lanczos_generator.cpp
index 4ab3cf3af..a1ee2994f 100644
--- a/tools/lanczos_generator.cpp
+++ b/tools/lanczos_generator.cpp
@@ -66,7 +66,7 @@ lanczos_spot_data sweet_spots[] = {
23, 23.118012, 5.2e-35,
// some more we've found, these are all the points where the first
-// negleted term from the Lanczos series changes sign, there is one
+// neglected term from the Lanczos series changes sign, there is one
// point just above that point, and one just below:
3, 0.58613894134759903, 0.00036580426080686315,
@@ -4331,7 +4331,7 @@ T binomial(int n, int k, T)
return result;
}
//
-// Functions for creating the matrices that generate the coefficents.
+// Functions for creating the matrices that generate the coefficients.
// See http://my.fit.edu/~gabdo/gamma.txt and http://www.rskey.org/gamma.htm
//
template
@@ -5100,7 +5100,7 @@ int main(int argc, char*argv [])
if(argc < 2)
{
std::cout <<
- "Useage:\n"
+ "Usage:\n"
" -float test type float for the best approximation\n"
" -double test type double for the best approximation\n"
" -long-double test type long double for the best approximation\n"