diff --git a/doc/boost_sandbox_numeric_odeint/extend_odeint/adapt_your_own_state_types.html b/doc/boost_sandbox_numeric_odeint/extend_odeint/adapt_your_own_state_types.html new file mode 100644 index 00000000..cc0cc3ea --- /dev/null +++ b/doc/boost_sandbox_numeric_odeint/extend_odeint/adapt_your_own_state_types.html @@ -0,0 +1,482 @@ + + + +Adapt your own state types + + + + + + + + +
+
+
+
+

+Adapt + your own state types +

+

+ One of the main goals of odeint is to provide algorithms that are independent of the underlying state type. The state type is the type representing the state of the ODE, that is, the variable x. As we usually deal with systems of ODEs, the state type is represented by some sort of container. Most often, the value type of the container is simply double, as ODEs are usually defined over real variables. However, it is also possible to use complex types (complex<double>) as the underlying value type. Moreover, you can even adapt odeint to work with any value type as long as the required operations are defined. In the following we describe how your own state types can be used with odeint. We assume that the state type is some sort of container aggregating a number of values representing the state of the ODE. As odeint also takes care of the memory management for intermediate results, it first of all needs to know how to construct/destruct and possibly resize the state type. Additionally, it must be told how basic algebraic operations are performed on state types. So when introducing new state types to odeint, the following points have to be considered:

+
Construction and destruction of the state
+
Resizing of the state (for dynamically sized types)
+
The algebraic operations performed on the state
+

+ Of course, odeint already provides basic interfaces for most of the usual state types. So if you use a std::vector or a boost::array as state type, no additional work is required - they just work out of the box.

+
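+ For example, the following sketch shows all that is needed to use runge_kutta4 with a std::vector state; the one-dimensional system function sys used here is made up purely for illustration:

typedef std::vector< double > state_type;

void sys( const state_type &x , state_type &dxdt , double t )
{
    dxdt[0] = -x[0];    // hypothetical ODE dx/dt = -x
}

state_type x( 1 , 1.0 );                   // one variable with initial value 1.0
runge_kutta4< state_type > stepper;        // works out of the box
stepper.do_step( sys , x , 0.0 , 0.01 );   // one step from t=0 with dt=0.01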

+ We distinguish between two basic kinds of state types: fixed size and dynamically sized. For fixed size state types the default constructor state_type() already allocates the required memory; a prominent example is boost::array<T,N>. Dynamically sized types have to be resized to make sure enough memory is allocated; their default constructor does not take care of the resizing. Examples of this are STL containers like vector<double>.

+

+ The easiest way of getting your own state type to work with odeint is to use a fixed size state, base the calculations on the range_algebra and provide the following functionality:

+
++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + +
+

+ Name +

+
+

+ Expression +

+
+

+ Type +

+
+

+ Semantics +

+
+

+ Construct State +

+
+

+ State x() +

+
+

+ void +

+
+

+ Creates an instance of State + and allocates memory. +

+
+

+ Begin of the sequence +

+
+

+ boost::begin(x) +

+
+

+ Iterator +

+
+

+ Returns an iterator pointing to the begin of the sequence +

+
+

+ End of the sequence +

+
+

+ boost::end(x) +

+
+

+ Iterator +

+
+

+ Returns an iterator pointing to the end of the sequence +

+
+
+ + + + + +
[Caution]Caution

+ If your state type does not allocate memory by default construction, you + must define it as resizeable and provide + resize functionality. Otherwise segmentation faults will occur. +

+

+ So fixed sized arrays supported by Boost.Range + immediately work with odeint. For dynamically sized arrays one has to additionally + supply the resize functionality. First, the state has to be tagged as resizeable + by specializing the struct is_resizeable + which consists of one typedef and one bool value: +

+
++++++ + + + + + + + + + + + + + + + + + + + + +
+

+ Name +

+
+

+ Expression +

+
+

+ Type +

+
+

+ Semantics +

+
+

+ Resizability +

+
+

+ is_resizeable<State>::type +

+
+

+ boost::true_type or boost::false_type +

+
+

+ Determines resizeability of the state type, returns boost::true_type if the state is resizeable. +

+
+

+ Resizability +

+
+

+ is_resizeable<State>::value +

+
+

+ bool +

+
+

+ Same as above, but with bool + value. +

+
+

+ This tells odeint that your state is resizeable. By default, odeint then expects support for boost::size(x) and an x.resize( boost::size(y) ) member function for resizing:

+
++++++ + + + + + + + + + + + + + + + + + + + + +
+

+ Name +

+
+

+ Expression +

+
+

+ Type +

+
+

+ Semantics +

+
+

+ Get size +

+
+

+ boost::size( + x ) +

+
+

+ size_type +

+
+

+ Returns the current size of x. +

+
+

+ Resize +

+
+

+ x.resize( + boost::size( + y ) + ) +

+
+

+ void +

+
+

+ Resizes x to have the same size as y. +

+
+

+ As an example of the above we will adapt ublas::vector from Boost.UBlas to work as a state type in odeint. This is particularly easy because ublas::vector supports Boost.Range, including boost::size. It also has a resize member function, so all that has to be done in this case is to declare resizability:

+

+ +

+
typedef boost::numeric::ublas::vector< double > state_type;
+
+namespace boost { namespace numeric { namespace odeint {
+
+template<>
+struct is_resizeable< state_type >
+{
+    typedef boost::true_type type;
+    const static bool value = type::value;
+};
+
+} } }
+
+

+

+

+ This immediately makes ublas::vector work with odeint, as all other requirements are fulfilled by default in this case. You can find the full example in lorenz_ublas.cpp.

+
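+ A rough sketch of how the adapted ublas state type is then used follows; the Lorenz right-hand side and the initial conditions are written out here for illustration and may differ in detail from lorenz_ublas.cpp:

void lorenz( const state_type &x , state_type &dxdt , double t )
{
    const double sigma = 10.0 , R = 28.0 , b = 8.0 / 3.0;
    dxdt[0] = sigma * ( x[1] - x[0] );
    dxdt[1] = R * x[0] - x[1] - x[0] * x[2];
    dxdt[2] = -b * x[2] + x[0] * x[1];
}

state_type x( 3 );
x[0] = 10.0; x[1] = 5.0; x[2] = 5.0;
integrate_const( runge_kutta4< state_type >() , lorenz , x , 0.0 , 10.0 , 0.01 );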

+ If your state type does work with Boost.Range, + but handles resizing differently you are required to specialize two implementations + used by odeint to check a state's size and to resize: +

+
++++++ + + + + + + + + + + + + + + + + + + + + +
+

+ Name +

+
+

+ Expression +

+
+

+ Type +

+
+

+ Semantics +

+
+

+ Check size +

+
+

+ same_size_impl<State,State>::same_size(x + , y) +

+
+

+ bool +

+
+

+ Returns true if the size of x equals the size of y. +

+
+

+ Resize +

+
+

+ resize_impl<State,State>::resize(x , + y) +

+
+

+ void +

+
+

+ Resizes x to have the same size as y. +

+
+
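+ As a sketch, such specializations could look as follows for a hypothetical container my_state, whose members dimension() and reallocate() are made up for illustration; the specializations live in the boost::numeric::odeint namespace, and is_resizeable must still be specialized as shown above:

namespace boost { namespace numeric { namespace odeint {

template<>
struct same_size_impl< my_state , my_state >
{
    static bool same_size( const my_state &x , const my_state &y )
    {
        return x.dimension() == y.dimension();   // hypothetical member
    }
};

template<>
struct resize_impl< my_state , my_state >
{
    static void resize( my_state &x , const my_state &y )
    {
        x.reallocate( y.dimension() );           // hypothetical member
    }
};

} } }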

+ gsl_vector, gsl_matrix, ublas::matrix, blitz::matrix, thrust +

+
+ + + +
+
+
+ + diff --git a/doc/boost_sandbox_numeric_odeint/odeint_in_detail/steppers.html b/doc/boost_sandbox_numeric_odeint/odeint_in_detail/steppers.html new file mode 100644 index 00000000..a55a0d53 --- /dev/null +++ b/doc/boost_sandbox_numeric_odeint/odeint_in_detail/steppers.html @@ -0,0 +1,1737 @@ + + + +Steppers + + + + + + + + +
+
+
+
+

+Steppers +

+
+
Explicit + steppers
+
Symplectic + solvers
+
Implicit + solvers
+
Multistep + methods
+
Controlled + steppers
+
Dense + output steppers
+
Using + steppers
+
Stepper + overview
+
+

+ Solving an ordinary differential equation numerically is usually done iteratively, that is, a given state of the ordinary differential equation is iterated forward: x(t) -> x(t+dt) -> x(t+2dt). The steppers in odeint perform one single step. The most general stepper type is described by the Stepper concept. The stepper concepts of odeint are described in detail in the section Concepts; here we briefly present the mathematical and numerical details of the steppers. The Stepper has two versions of the do_step method, one with an in-place transform of the current state and one with an out-of-place transform:

+

+ do_step( + sys , + inout , + t , dt ) +

+

+ do_step( + sys , + in , + t , out , dt ) +

+

+ The first parameter is always the system function - a function describing the ODE. In the first version the second parameter is the state, which is updated in-place, and the third and fourth parameters are the time and the step size (the time step). After a call of do_step the time of the ODE is t+dt. In the second version the second argument is the state of the ODE at time t, the third argument is t, the fourth argument is the state at time t+dt, which is filled by do_step, and the fifth argument is the time step.

+

+ System functions +

+

+ Up to now, we have said nothing about the system function. This function depends on the stepper. For the explicit Runge-Kutta steppers this function can be a simple callable object, for example a simple (global) C function or a functor. The parameter syntax is sys( x , dxdt , t ) and it is assumed that it calculates dx/dt = f(x,t).

+
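+ For example, the one-dimensional decay ODE dx/dt = -x could be provided either as a free function or as a functor; this is just a sketch:

void decay( const state_type &x , state_type &dxdt , double t )
{
    dxdt[0] = -x[0];
}

struct decay_functor
{
    void operator()( const state_type &x , state_type &dxdt , double t ) const
    {
        dxdt[0] = -x[0];
    }
};

+ Both decay and decay_functor() can then be passed as sys to do_step.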

+ Other types of system functions represent Hamiltonian systems or systems which also compute the Jacobian needed by implicit steppers. For information on which stepper uses which system function, see the stepper table below. odeint might introduce new system types in the near future. Since the system function is strongly related to the stepper type, the introduction of a new stepper might result in a new type of system function.

+
+ +

+ A first specialization are the explicit steppers. Explicit means that the new state of the ODE can be computed explicitly from the current state without solving implicit equations. These steppers have in common that they evaluate the system at time t such that the result of f(x,t) can be passed to the stepper. In odeint, the explicit steppers have two additional methods:

+

+ do_step( + sys , + inout , + dxdtin , + t , + dt ) +

+

+ do_step( + sys , + in , + dxdtin , + t , + out , + dt ) +

+

+ Here, the additional parameter is the value of the function f + at state x and time t. An example + is the Runge Kutta stepper of fourth order: +

+

+ +

+
runge_kutta4< state_type > rk;
+rk.do_step( sys1 , inout , t , dt );               // In-place transformation of inout
+rk.do_step( sys2 , inout , t , dt );               // Ok
+rk.do_step( sys1 , in , t , out , dt );            // Out-of-place transformation
+rk.do_step( sys1 , inout , dxdtin , t , dt );      // In-place transformation of inout
+rk.do_step( sys1 , in , dxdtin , t , out , dt );   // Out-of-place transformation
+
+

+

+

+ Of course, you do not need to call these two methods. You can always use the simpler do_step( sys , inout , t , dt ), but sometimes the derivative of the state is needed for some external computations or to perform some statistical analysis.

+

+ A special class of the explicit steppers are the FSAL (first-same-as-last) steppers, where the last evaluation of the system function is also the first evaluation of the following step. For such steppers a further do_step method exists:

+

+ do_step( + sys , + in , + dxdtin , + out , + dxdtout , + t , + dt ) +

+

+ This method also fills the derivative at time t+dt into dxdtout. Of course, the performance gain of such FSAL steppers only shows up when combined with error estimation, as in the Runge-Kutta-Dopri5 stepper. The FSAL trick is sometimes also referred to as the Fehlberg trick. An example of how the FSAL steppers can be used is

+

+ +

+
runge_kutta_dopri5< state_type > rk;
+rk.do_step( sys1 , in , t , out , dt );
+rk.do_step( sys2 , in , t , out , dt );         // DON'T do this, sys1 is assumed
+
+rk.do_step( sys2 , in2 , t , out , dt );
+rk.do_step( sys2 , in3 , t , out , dt );        // DON'T do this, in2 is assumed
+
+rk.do_step( sys1 , inout , dxdtinout , t , dt );
+rk.do_step( sys2 , inout , dxdtinout , t , dt );           // Ok, internal derivative is not used, dxdtinout is updated
+
+rk.do_step( sys1 , in , dxdtin , t , out , dxdtout , dt );
+rk.do_step( sys2 , in , dxdtin , t , out , dxdtout , dt ); // Ok, internal derivative is not used
+
+

+

+
+ + + + + +
[Caution]Caution

+ The FSAL steppers save the derivative at time t+dt internally if they are called via do_step( sys , in , t , out , dt ). The first call of do_step will initialize dxdt, and for all following calls it is assumed that the same system and the same state are used. If you use the FSAL steppers within the integrate functions, this is handled automatically. See the Using steppers section for more details, or look into the table below to see which steppers have an internal state.

+
+
+ +

+ As mentioned above, symplectic solvers are used for Hamiltonian systems. Symplectic solvers conserve the phase space volume exactly, and if the Hamiltonian system is energy-conserving they also conserve the energy approximately. A special class of symplectic systems are separable systems, which can be written in the form dq/dt = f1(p), dp/dt = f2(q), where (q,p) is the state of the system. The space of (q,p) is sometimes referred to as the phase space, and q and p are said to be the phase space variables. Symplectic systems in this special form occur widely in nature. For example, all of classical mechanics as written down by Newton, Lagrange and Hamilton can be formulated in this framework. Of course, the separability of the system depends on the specific choice of coordinates.

+

+ Separable systems of this kind can be solved by odeint by means of the symplectic_euler stepper and a symplectic Runge-Kutta-Nystrom method of sixth order. These steppers assume that the system is autonomous, hence the time does not occur explicitly. Further, they fulfill in principle the default Stepper concept, but they expect the system to be a pair of callable objects. The first entry of this pair calculates f1(p) while the second calculates f2(q). The syntax is sys.first(p,dqdt) and sys.second(q,dpdt), where the first and second part can again be simple C functions or functors. An example is the harmonic oscillator:

+

+ +

+
typedef boost::array< double , 1 > vector_type;
+
+void harm_osc_f1( const vector_type &p , vector_type &dqdt )
+{
+    dqdt[0] = p[0];
+}
+
+void harm_osc_f2( const vector_type &q , vector_type &dpdt )
+{
+    dpdt[0] = -q[0];
+}
+
+

+

+

+ The state of such an ODE now consists of two parts: the part for q (also called the coordinates) and the part for p (the momenta). The full example for the harmonic oscillator is now:

+

+ +

+
pair< vector_type , vector_type > x;
+x.first[0] = 1.0; x.second[0] = 0.0;
+symplectic_rkn_sb3a_mclachlan< vector_type > rkn;
+rkn.do_step( make_pair( harm_osc_f1 , harm_osc_f2 ) , x , t , dt );
+
+

+

+

+ If you want to represent the system with one class, you can easily bind two public methods:

+

+ +

+
struct harm_osc
+{
+    void f1( const vector_type &p , vector_type &dqdt ) const
+    {
+        dqdt[0] = p[0];
+    }
+
+    void f2( const vector_type &q , vector_type &dpdt ) const
+    {
+        dpdt[0] = -q[0];
+    }
+};
+
+

+

+

+ +

+
harm_osc h;
+rkn.do_step( make_pair( boost::bind( &harm_osc::f1 , h , _1 , _2 ) , boost::bind( &harm_osc::f2 , h , _1 , _2 ) ) ,
+        x , t , dt );
+
+

+

+

+ Many Hamiltonian systems can be written as dq/dt=p, dp/dt=f(q), which is computationally much easier than the full separable system. Very often, it is also possible to transform the original equations of motion to bring the system into this simplified form. This kind of system can be used in the symplectic solvers by simply passing f(q) to the do_step method; again, f(q) will be represented by a simple C function or a functor. Here, the above example of the harmonic oscillator can be written as

+

+ +

+
pair< vector_type , vector_type > x;
+x.first[0] = 1.0; x.second[0] = 0.0;
+symplectic_rkn_sb3a_mclachlan< vector_type > rkn;
+rkn.do_step( harm_osc_f2 , x , t , dt );
+
+

+

+

+ In this example the function harm_osc_f2 is exactly the same function as in the above examples.

+

+ Note that the state of the ODE need not be constructed explicitly via pair< vector_type , vector_type > x. One can also use a combination of make_pair and ref. Furthermore, a convenience version of do_step exists which takes q and p separately, without combining them into a pair:

+

+ +

+
rkn.do_step( harm_osc_f2 , make_pair( boost::ref( q ) , boost::ref( p ) ) , t , dt );
+rkn.do_step( harm_osc_f2 , q , p , t , dt );
+rkn.do_step( make_pair( harm_osc_f1 , harm_osc_f2 ) , q , p , t , dt );
+
+

+

+
+
+ +
+ + + + + +
[Caution]Caution

+ This section is not up-to-date. +

+

+ For some kinds of systems the stability properties of the classical Runge-Kutta methods are not sufficient, especially if the system is said to be stiff. A stiff system possesses two or more time scales of very different order. Solvers for stiff systems are usually implicit, meaning that they solve equations like x(t+dt) = x(t) + dt * f(x(t+dt)). This particular scheme is the implicit Euler method. Implicit methods usually solve this system of equations with a root-finding algorithm like the Newton method and therefore need to know the Jacobian of the system, J_ij = df_i / dx_j.

+

+ For implicit solvers the system is again a pair, where the first component + computes f(x,t) and the second the Jacobian. The syntax + is sys.first( x , dxdt , t ) and + sys.second( x , J , t ). + For the implicit solver the state_type + is ublas::vector and the Jacobian is represented + by ublas::matrix. +

+
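+ A sketch of such a system pair for the stiff decay equation dx/dt = -1000 x, following the syntax stated above (the functor names are made up, and the exact stepper declaration and Jacobian signature may differ slightly in your odeint version):

typedef boost::numeric::ublas::vector< double > vector_type;
typedef boost::numeric::ublas::matrix< double > matrix_type;

struct stiff_rhs
{
    void operator()( const vector_type &x , vector_type &dxdt , double t ) const
    {
        dxdt[0] = -1000.0 * x[0];
    }
};

struct stiff_jacobian
{
    void operator()( const vector_type &x , matrix_type &J , double t ) const
    {
        J( 0 , 0 ) = -1000.0;
    }
};

implicit_euler< double > stepper;
stepper.do_step( std::make_pair( stiff_rhs() , stiff_jacobian() ) , x , t , dt );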
+
+ +

+ Another large class of solvers are the multistep methods. They save a small part of the history of the solution and compute the next step with the help of this history. Since multistep methods know a part of their history, they do not need to evaluate the system function very often; usually it is evaluated only once per step. This makes multistep methods preferable if a call of the system function is expensive. Examples are ODEs defined on networks, where the computation of the interaction is usually very expensive (and might be of order O(N^2)).

+

+ Multistep methods differ from the normal steppers: they save a part of their history, and this part has to be explicitly calculated and initialized. In the following example an Adams-Bashforth-Moulton stepper with a history of 5 steps is instantiated and initialized:

+

+ +

+
adams_bashforth_moulton< 5 , state_type > abm;
+abm.initialize( sys , inout , t , dt );
+abm.do_step( sys , inout , t , dt );
+
+

+

+

+ The initialization uses a fourth-order Runge-Kutta stepper, and after the call of initialize the state of inout has changed to the current state, such that it can be used immediately by passing it to subsequent calls of do_step. Of course, you can also use your own stepper to initialize the internal state of the Adams-Bashforth-Moulton stepper:

+

+ +

+
abm.initialize( runge_kutta_fehlberg78< state_type >() , sys , inout , t , dt );
+
+

+

+

+ Many multistep methods are also explicit steppers, hence the parameters of the do_step method do not differ from those of the explicit steppers.

+
+ + + + + +
[Caution]Caution

+ The multistep methods have some internal variables which depend on the solution so far. Hence you cannot exchange the system or the state between two consecutive calls of do_step, since then the internal variables would no longer correspond to the ODE and the current solution. Of course, if you use the integrate functions this is taken into account. See the Using steppers section for more details.

+
+
+ +
+ + + + + +
[Caution]Caution

+ This section is not complete. +

+

+ Many of the steppers introduced above can be used with adaptive step size control. Adaptive step size integration works in principle as follows:

+
    +
  1. The error of one step is calculated. This is usually done by performing two steps of different order; the difference between these two results is then used as a measure for the error. Steppers which can calculate this error are error steppers, and they form their own class with a separate concept.
  2. This error is compared against predefined error tolerances. If the tolerances are violated, the step is rejected and the step size is decreased. Otherwise the step is accepted and possibly the step size is increased.
+

+ The class of controlled steppers has its own concept in odeint - the Controlled Stepper concept. They are usually constructed from the underlying error steppers. An example is the controller for the explicit Runge-Kutta steppers. The Runge-Kutta stepper enters the controller as a template argument. Additionally, one can pass the Runge-Kutta stepper to the constructor, but this is not necessary; the stepper is default-constructed if possible.

+
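+ A sketch of how such a controlled stepper could be set up by hand, here with the Cash-Karp error stepper as template argument and the default error tolerances:

typedef runge_kutta_cash_karp54< state_type > error_stepper_type;
controlled_runge_kutta< error_stepper_type > controlled_stepper;
integrate_adaptive( controlled_stepper , sys , x , t_start , t_end , dt );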

+ Different step size controlling mechanisms exist. They all have in common that they somehow compare the error against predefined error tolerances and then reject or accept the step. If a step is rejected, the step size is usually decreased and the step is performed again. The procedure of checking the error tolerances and accepting or rejecting the step is then repeated until the step is accepted. This procedure is implemented in the integration functions.

+

+ To compare the error against the tolerances, one roughly computes a normalized value of the form

+

+ val = |err| / ( eps_abs + eps_rel * |x| )

+

+ (the exact formula differs slightly between the different controllers).

+

+ The step is rejected if val is larger than 1 (up to safety factors); otherwise it is accepted.

+

+ The new step size is then chosen roughly as

+

+ dt_new = dt_old * val^( -1/q ), where q is related to the order of the stepper (again up to safety factors and bounds on how much the step size may change).

+

+ The controlled steppers in odeint implement the following error calculation + schemes and step size adaption methods: +

+
+

Table 1.6. Adaptive step size algorithms

+
+++++ + + + + + + + + + + + + + + + + + + + + + + +
+

+ Stepper +

+
+

+ Tolerance formula +

+
+

+ Step size adaption +

+
+

+ controlled_runge_kutta +

+
+

+ tol=1/2 +

+
+

+ dt_new = dt_old^(1/a)

+
+

+ rosenbrock4_controller +

+
+

+ tol=1/2 +

+
+

+ dt_new = dt_old^(1/a)

+
+

+ bulirsch_stoer +

+
+

+ tol=1/2 +

+
+

+ dt_new = dt_old^(1/a)

+
+
+

+ To ease the generation of controlled steppers, generation functions exist which take the absolute and relative error tolerances and a predefined error stepper and construct from this an appropriate controlled stepper. The generation functions are explained in detail in XYZ.

+
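+ Assuming a make_controlled generation function analogous to the make_dense_output function shown in the next section, the usage could look like this sketch:

integrate_adaptive( make_controlled( 1.0e-6 , 1.0e-6 , runge_kutta_cash_karp54< state_type >() ) ,
                    sys , x , t_start , t_end , dt );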
+
+ +

+ A fourth class of steppers exists: the so-called dense output steppers. Dense output steppers might take larger steps and interpolate the solution between two consecutive points. These interpolated points usually have the same order as the stepper itself. Dense output steppers are often composite steppers which take the underlying method as a template parameter. An example is the dense_output_runge_kutta stepper, which takes a Runge-Kutta stepper with dense output facilities as argument. Not all Runge-Kutta steppers provide dense output calculation; at the moment only the Dormand-Prince 5 stepper provides dense output. An example is

+

+ +

+
dense_output_runge_kutta< controlled_runge_kutta< runge_kutta_dopri5< state_type > > > dense;
+dense.initialize( in , t , dt );
+pair< double , double > times = dense.do_step( sys );
+
+

+

+

+ Dense output steppers have their own concept. The main difference is that they manage the state themselves. If you call do_step, only the ODE is passed as argument. Furthermore, do_step returns the last time interval, hence you can interpolate the solution between these two time points. Another difference is that they must be initialized with initialize; otherwise the internal state of the stepper is default-constructed, which might produce strange errors or bugs.

+
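+ For example, the solution can then be interpolated inside the returned interval via the stepper's calc_state member; this is a sketch assuming x_inter has the proper size:

pair< double , double > times = dense.do_step( sys );
state_type x_inter;
dense.calc_state( 0.5 * ( times.first + times.second ) , x_inter );  // state in the middle of the interval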

+ The construction of the dense output stepper looks a little bit nasty, since in the case of the dense_output_runge_kutta stepper a controlled stepper and an error stepper have to be nested. To simplify the generation of dense output steppers, generation functions exist:

+

+ +

+
result_of::make_dense_output< runge_kutta_dopri5< state_type > >::type dense2 = make_dense_output( 1.0e-6 , 1.0e-6 , runge_kutta_dopri5< state_type >() );
+
+

+

+

+ Of course, this statement is also lengthy; it demonstrates how make_dense_output can be used with the result_of protocol. The parameters to make_dense_output are the absolute error tolerance, the relative error tolerance and the stepper. This explicitly assumes that the underlying stepper is a controlled stepper and that this stepper has an absolute and a relative error tolerance. For details about the generation functions see Generation functions. Of course, the generation functions have been designed for easy use with the integrate functions:

+

+ +

+
integrate_const( make_dense_output( 1.0e-6 , 1.0e-6 , runge_kutta_dopri5< state_type >() ) , sys , inout , t_start , t_end , dt );
+
+

+

+
+
+ +
+ + + + + +
[Caution]Caution

+ This section is not complete +

+

+ This section contains some general information about the usage of the steppers in odeint.

+

+ Steppers are always copied in odeint.

+

+ They are copied both when passed to the integrate functions and when nested inside other steppers.

+

+ Which stepper is a good choice depends on the problem at hand; the stepper table below gives an overview.

+

+ Usage constraints +

+

+ Some steppers need to store some information about the state of the ODE between two steps. Examples are the multistep methods, which store a part of the solution during the evolution of the ODE, and the FSAL steppers, which store the last derivative at time t+dt since this derivative is used in the next step. In these cases the stepper expects that consecutive calls of do_step belong to the same solution and the same ODE.

+
+
+ +
+

Table 1.7. Stepper Algorithms

+
+++++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+

+ Algorithm +

+
+

+ Class +

+
+

+ Concept +

+
+

+ System Concept +

+
+

+ Order +

+
+

+ Error Estimation +

+
+

+ Dense Output +

+
+

+ Internal state +

+
+

+ Remarks +

+
+

+ Explicit Euler +

+
+

+ euler +

+
+

+ Dense + Output Stepper +

+
+

+ System +

+
+

+ 1 +

+
+

+ No +

+
+

+ Yes +

+
+

+ No +

+
+

+ Very simple, only for demonstration purposes

+
+

+ Modified Midpoint +

+
+

+ modified_midpoint +

+
+

+ Stepper +

+
+

+ System +

+
+

+ configurable (2) +

+
+

+ No +

+
+

+ No +

+
+

+ No +

+
+

+ Used in Bulirsch-Stoer implementation +

+
+

+ Runge-Kutta 4 +

+
+

+ runge_kutta4 +

+
+

+ Stepper +

+
+

+ System +

+
+

+ 4 +

+
+

+ No +

+
+

+ No +

+
+

+ No +

+
+

+ The classical Runge Kutta scheme, good general scheme without + error control +

+
+

+ Cash-Karp +

+
+

+ runge_kutta_cash_karp54 +

+
+

+ Error + Stepper +

+
+

+ System +

+
+

+ 5 +

+
+

+ Yes (4) +

+
+

+ No +

+
+

+ No +

+
+

+ Good general scheme with error estimation, to be used in controlled_error_stepper +

+
+

+ Dormand-Prince 5 +

+
+

+ runge_kutta_dopri5 +

+
+

+ Error + Stepper +

+
+

+ System +

+
+

+ 5 +

+
+

+ Yes (4) +

+
+

+ Yes +

+
+

+ Yes +

+
+

+ Standard method with error control and dense output, to be used in controlled_error_stepper and in dense_output_controlled_explicit_fsal.

+
+

+ Fehlberg 78 +

+
+

+ runge_kutta_fehlberg78 +

+
+

+ Error + Stepper +

+
+

+ System +

+
+

+ 8 +

+
+

+ Yes (7) +

+
+

+ No +

+
+

+ No +

+
+

+ Good high order method with error estimation, to be used in controlled_error_stepper. +

+
+

+ Adams Bashforth +

+
+

+ adams_bashforth +

+
+

+ Stepper +

+
+

+ System +

+
+

+ configurable +

+
+

+ No +

+
+

+ No +

+
+

+ Yes +

+
+

+ Multistep method +

+
+

+ Adams Moulton +

+
+

+ adams_moulton +

+
+

+ Stepper +

+
+

+ System +

+
+

+ configurable +

+
+

+ No +

+
+

+ No +

+
+

+ Yes +

+
+

+ Multistep method +

+
+

+ Adams Bashforth Moulton +

+
+

+ adams_bashforth_moulton +

+
+

+ Stepper +

+
+

+ System +

+
+

+ configurable +

+
+

+ No +

+
+

+ No +

+
+

+ Yes +

+
+

+ Combined multistep method +

+
+

+ Controlled Runge Kutta +

+
+

+ controlled_runge_kutta +

+
+

+ Controlled + Stepper +

+
+

+ System +

+
+

+ depends +

+
+

+ Yes +

+
+

+ No +

+
+

+ depends +

+
+

+ Error control for Error + Stepper. Requires an Error + Stepper from above. Order depends on the given ErrorStepper +

+
+

+ Dense Output Runge Kutta +

+
+

+ dense_output_runge_kutta +

+
+

+ Dense + Output Stepper +

+
+

+ System +

+
+

+ depends +

+
+

+ No +

+
+

+ Yes +

+
+

+ Yes +

+
+

+ Dense output for Stepper and Error Stepper from above if they provide dense output functionality (like euler and runge_kutta_dopri5). Order depends on the given stepper.

+
+

+ Bulirsch-Stoer +

+
+

+ bulirsch_stoer +

+
+

+ Controlled + Stepper +

+
+

+ System +

+
+

+ variable +

+
+

+ Yes +

+
+

+ No +

+
+

+ No +

+
+

+ Stepper with step size and order control. Very good if high precision + is required. +

+
+

+ Bulirsch-Stoer Dense Output +

+
+

+ bulirsch_stoer_dense_out +

+
+

+ Dense + Output Stepper +

+
+

+ System +

+
+

+ variable +

+
+

+ Yes +

+
+

+ Yes +

+
+

+ No +

+
+

+ Stepper with step size and order control as well as dense output. Very good if high precision and dense output are required.

+
+

+ Implicit Euler +

+
+

+ implicit_euler +

+
+

+ Stepper +

+
+

+ Implicit + System +

+
+

+ 1 +

+
+

+ No +

+
+

+ No +

+
+

+ No +

+
+

+ Basic implicit routine. Requires the Jacobian. Works only with + Boost.UBlas + vectors as state types. +

+
+

+ Rosenbrock 4 +

+
+

+ rosenbrock4 +

+
+

+ Error + Stepper +

+
+

+ Implicit + System +

+
+

+ 4 +

+
+

+ Yes +

+
+

+ Yes +

+
+

+ No +

+
+

+ Good for stiff systems. Works only with Boost.UBlas + vectors as state types. +

+
+

+ Controlled Rosenbrock 4 +

+
+

+ rosenbrock4_controller +

+
+

+ Controlled + Stepper +

+
+

+ Implicit + System +

+
+

+ 4 +

+
+

+ Yes +

+
+

+ Yes +

+
+

+ No +

+
+

+ Rosenbrock 4 with error control. Works only with Boost.UBlas + vectors as state types. +

+
+

+ Dense Output Rosenbrock 4

+
+

+ rosenbrock4_dense_output

+
+

+ Dense + Output Stepper +

+
+

+ Implicit + System +

+
+

+ 4 +

+
+

+ Yes +

+
+

+ Yes +

+
+

+ No +

+
+

+ Controlled Rosenbrock 4 with dense output. Works only with Boost.UBlas + vectors as state types. +

+
+

+ Symplectic Euler +

+
+

+ symplectic_euler +

+
+

+ Stepper +

+
+

+ Symplectic + System Simple + Symplectic System +

+
+

+ 1 +

+
+

+ No +

+
+

+ No +

+
+

+ No +

+
+

+ Basic symplectic solver for separable Hamiltonian system +

+
+

+ Symplectic RKN McLachlan +

+
+

+ symplectic_rkn_sb3a_mclachlan +

+
+

+ Stepper +

+
+

+ Symplectic + System Simple + Symplectic System +

+
+

+ 6 +

+
+

+ No +

+
+

+ No +

+
+

+ No +

+
+

+ Symplectic solver for separable Hamiltonian system with order + 6 +

+
+
+
+
+
+ + + +
+
+
+ + diff --git a/doc/index.html b/doc/index.html new file mode 100644 index 00000000..59d5894d --- /dev/null +++ b/doc/index.html @@ -0,0 +1,143 @@ + + + +Chapter 1. boost.sandbox.numeric.odeint + + + + + + +
+
+
+
+
+

+Chapter 1. boost.sandbox.numeric.odeint

+

+Karsten Ahnert +

+

+Mario Mulansky +

+
+
+

+ Distributed under the Boost Software License, Version 1.0. (See accompanying + file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +

+
+
+
+

Table of Contents

+
+
Getting started
+
+
Overview
+
Usage, + Compilation, Headers
+
Short + Example
+
+
Tutorial
+
+
Harmonic + oscillator
+
Solar + system
+
Chaotic + systems and Lyapunov exponents
+
Stiff + systems
+
Special + topics
+
Using + Cuda and Thrust
+
All + examples
+
References
+
+
Odeint in + detail
+
+
Steppers
+
Generation + functions
+
Integrate + functions
+
Algebras + and operations
+
Using + boost::ref
+
Using + boost::range
+
+
Extend odeint
+
+
Adapt + your own state types
+
Write + own steppers
+
Adapt + your own operations
+
+
Concepts
+
+
System
+
Symplectic + System
+
Simple + Symplectic System
+
Implicit + System
+
Observer
+
Stepper
+
Error + Stepper
+
Controlled + Stepper
+
Dense + Output Stepper
+
State + Algebra Operations
+
State + Wrapper
+
+
Reference
+
Old Concepts
+
+
Basic + stepper
+
Error + stepper
+
Controlled + stepper
+
Dense output stepper
+
Size + adjusting stepper
+
CompositeStepper
+
+
Old Reference
+
+
Stepper + classes
+
Integration + functions
+
Algebras
+
Operations
+
Resizing
+
+
+
+
+ + + +

Last revised: November 16, 2011 at 20:59:19 GMT

+
+
+ +