diff --git a/doc/boost_sandbox_numeric_odeint/extend_odeint/adapt_your_own_state_types.html b/doc/boost_sandbox_numeric_odeint/extend_odeint/adapt_your_own_state_types.html new file mode 100644 index 00000000..cc0cc3ea --- /dev/null +++ b/doc/boost_sandbox_numeric_odeint/extend_odeint/adapt_your_own_state_types.html @@ -0,0 +1,482 @@ + +
+ +
+ One of the main goals of odeint is to provide algorithms independent from
+ the underlying state type. The state type is a type representing the state
+ of the ODE, that is the variable x. As we usually deal with systems of ODEs,
+ the state type is represented by some sort of container. Most often, the
+ value type of the container is simply double,
+ as usually ODEs are defined as systems of real variables. However, it is
+ also possible to use complex types (complex<double>) as underlying value type. Moreover,
+ you can even adapt odeint to work with any
+ value type as long as the required operations are defined. In the
+ following we describe how your own state types can be made to work with
+ odeint. We assume that the state type is some sort of container aggregating
+ a number of values representing the state of the ODE. As odeint also takes
+ care of the memory management for intermediate results, it first
+ of all needs to know how to construct/destruct and possibly resize the state
+ type. Additionally, it must be told how basic algebraic operations
+ are performed on state types. So when introducing new state types to
+ odeint, the following points have to be considered:
+
+ Of course, odeint already provides basic interfaces for most of the usual
+ state types. So if you use a std::vector,
+ or a boost::array as state type, no additional work
+ is required; they just work out of the box.
+
+ We distinguish between two basic state types: fixed sized and dynamically
+ sized. For fixed size state types the default constructor state_type() already allocates the required memory;
+ a prominent example is boost::array<T,N>.
+ Dynamically sized types have to be resized to make sure enough memory is
+ allocated; the default constructor does not take care of the resizing. Examples
+ for this are the STL containers like vector<double>.
+
+ The easiest way to get your own state type working with odeint is to
+ use a fixed size state, base calculations on the range_algebra, and provide
+ the following functionality:
+|
+ + Name + + |
+
+ + Expression + + |
+
+ + Type + + |
+
+ + Semantics + + |
+
|---|---|---|---|
|
+ + Construct State + + |
+
+
+ |
+
+
+ |
+
+
+ Creates an instance of |
+
|
+ + Begin of the sequence + + |
+
+ + boost::begin(x) + + |
+
+ + Iterator + + |
+
+ + Returns an iterator pointing to the begin of the sequence + + |
+
|
+ + End of the sequence + + |
+
+ + boost::end(x) + + |
+
+ + Iterator + + |
+
+ + Returns an iterator pointing to the end of the sequence + + |
+
| | Caution |
|---|---|
+ If your state type does not allocate memory by default construction, you + must define it as resizeable and provide + resize functionality. Otherwise segmentation faults will occur. + |
+ So fixed sized arrays supported by Boost.Range
+ immediately work with odeint. For dynamically sized arrays one has to additionally
+ supply the resize functionality. First, the state has to be tagged as resizeable
+ by specializing the struct is_resizeable
+ which consists of one typedef and one bool value:
+
|
+ + Name + + |
+
+ + Expression + + |
+
+ + Type + + |
+
+ + Semantics + + |
+
|---|---|---|---|
|
+ + Resizability + + |
+
+
+ |
+
+
+ |
+
+
+ Determines resizeability of the state type, returns |
+
|
+ + Resizability + + |
+
+
+ |
+
+
+ |
+
+
+ Same as above, but with |
+
+ This tells odeint that your state is resizeable. By default, odeint now expects
+ the support of boost::size(x) and an x.resize( boost::size(y)
+ ) member function for resizing:
+
|
+ + Name + + |
+
+ + Expression + + |
+
+ + Type + + |
+
+ + Semantics + + |
+
|---|---|---|---|
|
+ + Get size + + |
+
+
+ |
+
+
+ |
+
+ + Returns the current size of x. + + |
+
|
+ + Resize + + |
+
+
+ |
+
+
+ |
+
+ + Resizes x to have the same size as y. + + |
+
+ As an example of the above we will adapt ublas::vector
+ from Boost.UBlas
+ to work as a state type in odeint. This is particularly easy because ublas::vector supports Boost.Range,
+ including boost::size. It also has a resize member function
+ so all that has to be done in this case is to declare resizability:
+
+ +
+typedef boost::numeric::ublas::vector< double > state_type; + +namespace boost { namespace numeric { namespace odeint { + +template<> +struct is_resizeable< state_type > +{ + typedef boost::true_type type; + const static bool value = type::value; +}; + +} } } ++
+
+
+ This immediately makes ublas::vector
+ work with odeint, as all other requirements are fulfilled by default in
+ this case. You can find the full example in lorenz_ublas.cpp.
+
+ If your state type does work with Boost.Range,
+ but handles resizing differently, you must specialize two implementations
+ used by odeint to check a state's size and to resize it:
+|
+ + Name + + |
+
+ + Expression + + |
+
+ + Type + + |
+
+ + Semantics + + |
+
|---|---|---|---|
|
+ + Check size + + |
+
+
+ |
+
+
+ |
+
+ + Returns true if the size of x equals the size of y. + + |
+
|
+ + Resize + + |
+
+
+ |
+
+
+ |
+
+ + Resizes x to have the same size as y. + + |
+
+ gsl_vector, gsl_matrix, ublas::matrix, blitz::matrix, thrust +
+| + | + |
+ Solving an ordinary differential equation numerically is usually done iteratively:
+ a given state of an ordinary differential equation is iterated forward,
+ x(t) -> x(t+dt) -> x(t+2dt). The steppers in odeint
+ perform one single step. The most general stepper type is described by the
+ Stepper
+ concept. The stepper concepts of odeint are described in detail in section
+ Concepts, here
+ we briefly present the mathematical and numerical details of the steppers.
+ The Stepper
+ has two versions of the do_step
+ method, one with an in-place transform of the current state and one with
+ an out-of-place transform:
+
+ do_step(
+ sys ,
+ inout ,
+ t , dt )
+
+ do_step(
+ sys ,
+ in ,
+ t , out , dt )
+
+ The first parameter is always the system function - a function describing
+ the ODE. In the first version the second parameter is the state, which is
+ updated in-place, and the third and fourth parameters are the time and
+ step size (the time step). After a call of do_step
+ the time of the ODE is t+dt. In the second version the
+ second argument is the state of the ODE at time t, the
+ third argument is t, the fourth argument is the state at time t+dt
+ which is filled by do_step
+ and the fifth argument is the time step.
+
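+ As an illustration, the two signatures can be sketched with a minimal
+ hand-written explicit Euler stepper (a simplified sketch for a fixed-size
+ state, not odeint's actual implementation; the name euler_stepper is ours):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

using state_type = std::array< double , 2 >;

// Minimal explicit Euler stepper showing both do_step signatures.
struct euler_stepper
{
    // In-place: inout holds x(t) on entry and x(t+dt) on exit.
    template< class System >
    void do_step( System sys , state_type &inout , double t , double dt )
    {
        state_type dxdt;
        sys( inout , dxdt , t );
        for( std::size_t i = 0 ; i < inout.size() ; ++i )
            inout[i] += dt * dxdt[i];
    }

    // Out-of-place: in stays untouched, x(t+dt) is written to out.
    template< class System >
    void do_step( System sys , const state_type &in , double t ,
                  state_type &out , double dt )
    {
        state_type dxdt;
        sys( in , dxdt , t );
        for( std::size_t i = 0 ; i < in.size() ; ++i )
            out[i] = in[i] + dt * dxdt[i];
    }
};
```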
+ System functions +
+
+ Up to now, we have said nothing about the system function. This function
+ depends on the stepper. For the explicit Runge-Kutta steppers this function
+ can be a simple callable object, hence a simple (global) C-function or a functor.
+ The parameter syntax is sys( x ,
+ dxdt ,
+ t )
+ and it is assumed that it calculates dx/dt = f(x,t).
+
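+ For instance, a system function for the Lorenz system can be a plain
+ C-function (the parameter values below are the classical ones; the function
+ name is our choice for this sketch):

```cpp
#include <array>
#include <cassert>

typedef std::array< double , 3 > state_type;

// Computes dx/dt = f(x,t) for the Lorenz system.  The time t is
// unused because the system is autonomous, but the signature keeps it.
void lorenz( const state_type &x , state_type &dxdt , double /* t */ )
{
    const double sigma = 10.0 , R = 28.0 , b = 8.0 / 3.0;
    dxdt[0] = sigma * ( x[1] - x[0] );
    dxdt[1] = R * x[0] - x[1] - x[0] * x[2];
    dxdt[2] = -b * x[2] + x[0] * x[1];
}
```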
+ Other types of system functions represent Hamiltonian systems or systems which
+ also compute the Jacobian needed in implicit steppers. For information on which
+ stepper uses which system function see the stepper table below. odeint might
+ introduce new system types in the near future.
+ Since the system function is strongly related to the stepper type, such an
+ introduction of a new stepper might result in a new type of system function.
+ A first specialization are the explicit steppers. Explicit means that the
+ new state of the ODE can be computed explicitly from the current state
+ without solving implicit equations. These steppers have in common that
+ they evaluate the system at time t such that the result
+ of f(x,t) can be passed to the stepper. In odeint,
+ the explicit steppers have two additional methods
+
+ do_step(
+ sys ,
+ inout ,
+ dxdtin ,
+ t ,
+ dt )
+
+ do_step(
+ sys ,
+ in ,
+ dxdtin ,
+ t ,
+ out ,
+ dt )
+
+ Here, the additional parameter is the value of the function f + at state x and time t. An example + is the Runge Kutta stepper of fourth order: +
++ +
+runge_kutta4< state_type > rk;
+rk.do_step( sys1 , inout , t , dt ); // In-place transformation of inout
+rk.do_step( sys2 , inout , t , dt ); // Ok
+rk.do_step( sys1 , in , t , out , dt ); // Out-of-place transformation
+rk.do_step( sys1 , inout , dxdtin , t , dt ); // In-place transformation of inout
+rk.do_step( sys1 , in , dxdtin , t , out , dt ); // Out-of-place transformation
+
+
+
+ Of course, you do not need to call these two methods. You can always use
+ the simpler do_step(
+ sys ,
+ inout ,
+ t ,
+ dt ),
+ but sometimes the derivative of the state is needed to do some external
+ computations or to perform some statistical analysis.
+
+ A special class of the explicit steppers are the FSAL (first-same-as-last)
+ steppers, where the last evaluation of the system function is also the
+ first evaluation of the following step. For such steppers a further do_step method exists:
+
+ do_step(
+ sys ,
+ in ,
+ dxdtin ,
+ out ,
+ dxdtout ,
+ t ,
+ dt )
+
+ This method also fills the derivative at time t+dt
+ into dxdtout. Of course,
+ the performance gain of such FSAL steppers only appears in combination
+ with error estimation, like in the Runge-Kutta-Dopri5 stepper.
+ The FSAL trick is sometimes also referred to as the Fehlberg trick. An example
+ of how the FSAL steppers can be used is
+
+ +
+runge_kutta_dopri5< state_type > rk; +rk.do_step( sys1 , in , t , out , dt ); +rk.do_step( sys2 , in , t , out , dt ); // DONT do this, sys1 is assumed + +rk.do_step( sys2 , in2 , t , out , dt ); +rk.do_step( sys2 , in3 , t , out , dt ); // DONT do this, in2 is assumed + +rk.do_step( sys1 , inout , dxdtinout , t , dt ); +rk.do_step( sys2 , inout , dxdtinout , t , dt ); // Ok, internal derivative is not used, dxdtinout is updated + +rk.do_step( sys1 , in , dxdtin , t , out , dxdtout , dt ); +rk.do_step( sys2 , in , dxdtin , t , out , dxdtout , dt ); // Ok, internal derivative is not used ++
+
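+ To make the reuse of the last derivative concrete, here is a hand-written
+ scalar Bogacki-Shampine third-order scheme, a classic FSAL method (a sketch
+ of the idea only; the structure and names are ours, unrelated to odeint's
+ internals):

```cpp
#include <cassert>
#include <cmath>

// FSAL (first-same-as-last) illustrated with the Bogacki-Shampine
// third-order scheme for a scalar autonomous ODE dx/dt = f(x): the
// last stage k1 = f(x_new) of a step is exactly the first stage of
// the next step, so it is stored and reused instead of recomputed.
struct bs3_fsal
{
    double k1 = 0.0;
    bool have_k1 = false;

    template< class F >
    double do_step( F f , double x , double dt )
    {
        if( !have_k1 ) { k1 = f( x ); have_k1 = true; }  // only the very first call
        const double k2 = f( x + 0.5 * dt * k1 );
        const double k3 = f( x + 0.75 * dt * k2 );
        const double x_new = x + dt * ( 2.0/9.0 * k1 + 1.0/3.0 * k2 + 4.0/9.0 * k3 );
        k1 = f( x_new );   // saved: first evaluation of the next step
        return x_new;
    }
};
```

This is also why consecutive calls must belong to the same ODE and the same
solution: the saved k1 is only valid for them.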
+| | Caution |
|---|---|
+ The FSAL-steppers save the derivative at time t+dt
+ internally if they are called via |
+ As mentioned above, symplectic solvers are used for Hamiltonian systems.
+ Symplectic solvers conserve the phase space volume exactly, and if the Hamiltonian
+ system conserves energy they also conserve the energy approximately.
+ A special class of symplectic systems are separable systems, which can be
+ written in the form dq/dt = f1(p), dp/dt
+ = f2(q), where (q,p) is the state of the system.
+ The space of (q,p) is sometimes referred to as the phase
+ space and q and p are said to
+ be the phase space variables. Symplectic systems in this special form occur
+ widely in nature. For example, all of classical mechanics as written
+ down by Newton, Lagrange and Hamilton can be formulated in this framework.
+ Of course, the separability of the system depends on the specific choice
+ of coordinates.
+
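+ The structure of such separable systems can be sketched with a hand-written
+ symplectic Euler step (illustration only, with our own names; odeint provides
+ symplectic_euler for real use):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

typedef std::array< double , 1 > vector_type;

// One symplectic Euler step for dq/dt = f1(p), dp/dt = f2(q):
// first q is advanced with the old p, then p is advanced with the
// *updated* q.  Using the new q in the second half-step is what
// makes the scheme symplectic.
template< class F1 , class F2 >
void symplectic_euler_step( F1 f1 , F2 f2 ,
                            vector_type &q , vector_type &p , double dt )
{
    vector_type dqdt , dpdt;
    f1( p , dqdt );
    for( std::size_t i = 0 ; i < q.size() ; ++i ) q[i] += dt * dqdt[i];
    f2( q , dpdt );
    for( std::size_t i = 0 ; i < p.size() ; ++i ) p[i] += dt * dpdt[i];
}
```

Run on the harmonic oscillator, the energy (p^2+q^2)/2 stays close to its
initial value over many steps, which is the hallmark of symplectic schemes.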
+ Integrable symplectic systems can be solved by odeint by means of the symplectic_euler
+ stepper and a symplectic Runge-Kutta-Nystrom method of sixth order. These
+ steppers assume that the system is autonomous, hence the time will not explicitly
+ occur. Further, they fulfil in principle the default Stepper concept, but
+ they expect the system to be a pair of callable objects. The first entry
+ of this pair calculates f1(p) while the second calculates
+ f2(q). The syntax is sys.first(p,dqdt)
+ and sys.second(q,dpdt), where the first and second part can
+ again be simple C-functions or functors. An example is the harmonic oscillator:
+
+ +
+typedef boost::array< double , 1 > vector_type; + +void harm_osc_f1( const vector_type &p , vector_type &dqdt ) +{ + dqdt[0] = p[0]; +} + +void harm_osc_f2( const vector_type &q , vector_type &dpdt ) +{ + dpdt[0] = -q[0]; +} ++
+
+ The state of such an ODE now consists of two parts: the part for q
+ (also called the coordinates) and the part for p (the momenta). The full
+ example for the harmonic oscillator is now:
++ +
+pair< vector_type , vector_type > x; +x.first[0] = 1.0; x.second[0] = 0.0; +symplectic_rkn_sb3a_mclachlan< vector_type > rkn; +rkn.do_step( make_pair( harm_osc_f1 , harm_osc_f2 ) , x , t , dt ); ++
+
+ If you like to represent the system with one class you can easily bind
+ two public methods:
++ +
+struct harm_osc +{ + void f1( const vector_type &p , vector_type &dqdt ) const + { + dqdt[0] = p[0]; + } + + void f2( const vector_type &q , vector_type &dpdt ) const + { + dpdt[0] = -q[0]; + } +}; ++
+
++ +
+harm_osc h; +rkn.do_step( make_pair( boost::bind( &harm_osc::f1 , h , _1 , _2 ) , boost::bind( &harm_osc::f2 , h , _1 , _2 ) ) , + x , t , dt ); ++
+
+
+ Many Hamiltonian systems can be written as dq/dt=p,
+ dp/dt=f(q), which is computationally much easier than
+ the full separable system. Very often, it is also possible to transform
+ the original equations of motion to bring the system into this simplified
+ form. This kind of system can be used in the symplectic solvers by simply
+ passing f(q) to the do_step
+ method; again, f(q) will be represented by a simple
+ C-function or a functor. Here, the above example of the harmonic oscillator
+ can be written as
+
+ +
+pair< vector_type , vector_type > x; +x.first[0] = 1.0; x.second[0] = 0.0; +symplectic_rkn_sb3a_mclachlan< vector_type > rkn; +rkn.do_step( harm_osc_f1 , x , t , dt ); ++
+
+
+ In this example the function harm_osc_f1
+ is exactly the same function as in the above examples.
+
+ Note that the state of the ODE need not be constructed explicitly via
+ pair<
+ vector_type ,
+ vector_type >
+ x. One can also use a combination
+ of make_pair and ref. Furthermore, a convenience version
+ of do_step exists which
+ takes q and p without combining them into a pair:
+
+ +
+rkn.do_step( harm_osc_f1 , make_pair( boost::ref( q ) , boost::ref( p ) ) , t , dt ); +rkn.do_step( harm_osc_f1 , q , p , t , dt ); +rkn.do_step( make_pair( harm_osc_f1 , harm_osc_f2 ) , q , p , t , dt ); ++
+
+| | Caution |
|---|---|
+ This section is not up-to-date. + |
+ For some kinds of systems the stability properties of the classical Runge-Kutta
+ methods are not sufficient, especially if the system is said to be stiff. A stiff
+ system possesses two or more time scales of very different order. Solvers
+ for stiff systems are usually implicit, meaning that they solve equations
+ like x(t+dt) = x(t) + dt * f(x(t+dt)). This particular
+ scheme is the implicit Euler method. Implicit methods usually solve the
+ system of equations by a root finding algorithm like the Newton method
+ and therefore need to know the Jacobian of the system Jij = dfi /
+ dxj.
+
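+ For a scalar ODE the scheme can be sketched as follows: the Newton iteration
+ solves x1 - x0 - dt*f(x1) = 0 using the (here 1x1) Jacobian (a sketch only,
+ not odeint's implicit_euler):

```cpp
#include <cassert>
#include <cmath>

// One implicit Euler step x1 = x0 + dt * f(x1) for a scalar ODE,
// solved by Newton iteration on g(x1) = x1 - x0 - dt*f(x1) = 0 with
// g'(x1) = 1 - dt*f'(x1).  f is the right-hand side and df its
// derivative, i.e. the 1x1 "Jacobian".
template< class F , class DF >
double implicit_euler_step( F f , DF df , double x0 , double dt )
{
    double x1 = x0;                        // initial Newton guess
    for( int i = 0 ; i < 50 ; ++i )
    {
        const double g     = x1 - x0 - dt * f( x1 );
        const double dg    = 1.0 - dt * df( x1 );
        const double delta = g / dg;
        x1 -= delta;
        if( std::fabs( delta ) < 1e-14 ) break;
    }
    return x1;
}
```

For the stiff linear test problem f(x) = -10 x this remains stable even for
step sizes where explicit Euler would blow up.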
+ For implicit solvers the system is again a pair, where the first component
+ computes f(x,t) and the second the Jacobian. The syntax
+ is sys.first( x , dxdt , t ) and
+ sys.second( x , J , t ).
+ For the implicit solver the state_type
+ is ublas::vector and the Jacobian is represented
+ by ublas::matrix.
+
+ Another large class of solvers are the multistep methods. They save a small
+ part of the history of the solution and compute the next step with the
+ help of this history. Since multistep methods know a part of their history
+ they do not need to compute the system function very often; usually it
+ is only computed once. This makes multistep methods preferable if a call
+ of the system function is expensive. Examples are ODEs defined on networks,
+ where the computation of the interaction is usually very expensive (and
+ might be of order O(N^2)).
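+ The role of the saved history can be sketched with a hand-written two-step
+ Adams-Bashforth scheme for a scalar ODE (illustration only; names and
+ structure are ours, not odeint's):

```cpp
#include <cassert>
#include <cmath>

// Two-step Adams-Bashforth for a scalar ODE dx/dt = f(x):
// x_{n+1} = x_n + dt * ( 3/2 f(x_n) - 1/2 f(x_{n-1}) ).
// The derivative of the previous step is the "history" that has to
// be initialized before the multistep formula can be used.
struct adams_bashforth2
{
    double f_prev = 0.0;   // saved history f(x_{n-1})

    // fill the history with one explicit Euler step
    template< class F >
    double initialize( F f , double x , double dt )
    {
        f_prev = f( x );
        return x + dt * f_prev;
    }

    template< class F >
    double do_step( F f , double x , double dt )
    {
        const double f_cur = f( x );
        const double x_new = x + dt * ( 1.5 * f_cur - 0.5 * f_prev );
        f_prev = f_cur;    // shift the history
        return x_new;
    }
};
```

Note that each do_step needs only one new evaluation of f.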
+ Multistep methods differ from the normal steppers: they save a part of
+ their history, and this part has to be explicitly calculated and initialized.
+ In the following example an Adams-Bashforth stepper with a history of 5
+ steps is instantiated and initialized:
++ +
+adams_bashforth_moulton< 5 , state_type > abm; +abm.initialize( sys , inout , t , dt ); +abm.do_step( sys , inout , t , dt ); ++
+
+
+ The initialization uses a fourth-order Runge-Kutta stepper, and after the
+ call of initialize the
+ state of inout has changed
+ to the current state, such that it can immediately be used by passing it to
+ following calls of do_step.
+ Of course, you can also use your own steppers to initialize the internal
+ state of the Adams-Bashforth stepper:
+
+ +
+abm.initialize( runge_kutta_fehlberg78< state_type >() , sys , inout , t , dt ); ++
+
+
+ Many multistep methods are also explicit steppers, hence the parameters
+ of the do_step method do not
+ differ from those of the explicit steppers.
+
| | Caution |
|---|---|
+ The multistep methods have some internal variables which depend on the
+ explicit solution. Hence you cannot exchange the system or the state
+ between two consecutive calls of |
| | Caution |
|---|---|
+ This section is not complete. + |
+ Many of the above introduced steppers possess the possibility to use adaptive + stepsize control. Adaptive step size integration works in principle as + follows: +
+ The class of controlled steppers has its own concept in odeint - the Controlled
+ Stepper concept. They are usually constructed from the underlying
+ error steppers. An example is the controller for the explicit Runge-Kutta
+ steppers. The Runge-Kutta steppers enter the controller as a template argument.
+ Additionally, one can pass the Runge-Kutta stepper to the constructor, but
+ this step is not necessary; the stepper is default-constructed if possible.
+ Different step size controlling mechanisms exist. They all have in common
+ that they somehow compare predefined error tolerances against the error
+ estimate and that they might reject or accept a step. If a step is rejected,
+ the step size is usually decreased and the step is tried again; the procedure
+ of checking the error tolerances and accepting or rejecting a step is
+ repeated until the step is accepted. The procedure is implemented
+ in the integration functions.
+ To judge a step, a normalized error value is computed, schematically
+
+ value = |err| / ( eps_abs + eps_rel * |x| )
+
+ (the exact formulas differ between the controllers). The step is rejected
+ if value is larger than 1 (modulo safety factors). The new step size is then
+ obtained from a power law of the form
+
+ dt_new ~ dt_old * value^( -1/(q+1) )
+
+ where q is related to the order of the stepper.
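+ The whole accept/reject cycle can be sketched for a scalar ODE with an
+ Euler/Heun pair as error estimator (a schematic of the control logic only;
+ odeint's controllers use stepper-specific formulas and safety factors):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// One adaptive step for a scalar ODE dx/dt = f(x) using an
// Euler/Heun pair: the difference of the two results serves as the
// error estimate.  Rejected steps shrink dt and are retried inside
// the loop; accepted steps may grow dt for the next call.
template< class F >
void adaptive_step( F f , double &x , double &t , double &dt ,
                    double eps_abs , double eps_rel )
{
    for( ;; )
    {
        const double k1 = f( x );
        const double k2 = f( x + dt * k1 );
        const double x_euler = x + dt * k1;                  // order 1
        const double x_heun  = x + 0.5 * dt * ( k1 + k2 );   // order 2
        const double err   = std::fabs( x_heun - x_euler );
        const double value = err / ( eps_abs + eps_rel * std::fabs( x_heun ) );
        if( value <= 1.0 )                                   // accept
        {
            t += dt;
            x = x_heun;
            dt *= std::min( 5.0 , 0.9 * std::pow( std::max( value , 1e-10 ) , -0.5 ) );
            return;
        }
        dt *= std::max( 0.2 , 0.9 * std::pow( value , -0.5 ) );  // reject and retry
    }
}
```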
++ The controlled steppers in odeint implement the following error calculation + schemes and step size adaption methods: +
+Table 1.6. Adaptive step size algorithms
+|
+ + Stepper + + |
+
+ + Tolerance formula + + |
+
+ + Step size adaption + + |
+
|---|---|---|
|
+ + controlled_runge_kutta + + |
+
+ + tol=1/2 + + |
+
+ + dtnew = dtold1/a + + |
+
|
+ + rosenbrock4_controller + + |
+
+ + tol=1/2 + + |
+
+ + dtnew = dtold1/a + + |
+
|
+ + bulirsch_stoer + + |
+
+ + tol=1/2 + + |
+
+ + dtnew = dtold1/a + + |
+
+ To ease the generation of controlled steppers, generation functions exist
+ which take the absolute and relative error tolerances and a predefined
+ error stepper and construct from this knowledge an appropriate controlled
+ stepper. The generation functions are explained in detail in XYZ.
+
+ A fourth class of steppers exists: the so-called dense output steppers.
+ Dense-output steppers might take larger steps and interpolate the solution
+ between two consecutive points. These interpolated points usually have the
+ same order as the order of the stepper. Dense-output steppers are often
+ composite steppers which take the underlying method as a template parameter.
+ An example is the dense_output_runge_kutta
+ stepper which takes a Runge-Kutta stepper with dense-output facilities
+ as argument. Not all Runge-Kutta steppers provide dense-output calculation;
+ at the moment only the Dormand-Prince 5 stepper provides dense output.
+ An example is
+
+ +
+dense_output_runge_kutta< controlled_runge_kutta< runge_kutta_dopri5< state_type > > > dense; +dense.initialize( in , t , dt ); +pair< double , double > times = dense.do_step( sys ); ++
+
+
+ Dense output steppers have their own concept. The main difference is that
+ they control the state by themselves. If you call do_step,
+ only the ODE is passed as argument. Furthermore, do_step
+ returns the last time interval, hence you can interpolate the solution between
+ these two time points. Another difference is that they must be initialized
+ with initialize, otherwise
+ the internal state of the stepper is default-constructed, which might produce
+ funny errors or bugs.
+
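+ The generic idea of dense output, interpolating from the states and
+ derivatives at both ends of a step, can be sketched with a cubic Hermite
+ polynomial (odeint's dopri5 dense output uses its own higher-order
+ coefficients; this only shows the principle):

```cpp
#include <cassert>
#include <cmath>

// Dense output in a nutshell: given the state and derivative at both
// ends of a step [t0, t1], a cubic Hermite polynomial interpolates
// the solution at any t in between.
inline double hermite_interpolate( double t0 , double x0 , double d0 ,
                                   double t1 , double x1 , double d1 , double t )
{
    const double h = t1 - t0;
    const double s = ( t - t0 ) / h;                 // normalized position
    const double h00 = ( 1 + 2 * s ) * ( 1 - s ) * ( 1 - s );
    const double h10 = s * ( 1 - s ) * ( 1 - s );
    const double h01 = s * s * ( 3 - 2 * s );
    const double h11 = s * s * ( s - 1 );
    return h00 * x0 + h10 * h * d0 + h01 * x1 + h11 * h * d1;
}
```

The interpolant reproduces state and derivative exactly at both endpoints,
which is why the solution stays smooth across steps.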
+ The construction of the dense output stepper looks a little bit nasty,
+ since in the case of the dense_output_runge_kutta
+ stepper a controlled stepper and an error stepper have to be nested. To
+ simplify the generation of dense output steppers, generation functions
+ exist:
+
+ +
+result_of::make_dense_output< runge_kutta_dopri5< state_type > >::type dense2 = make_dense_output( 1.0e-6 , 1.0e-6 , runge_kutta_dopri5< state_type >() ); ++
+
+
+ Of course, this statement is also lengthy; it demonstrates how make_dense_output can be used with the
+ result_of protocol. The
+ parameters to make_dense_output
+ are the absolute error tolerance, the relative error tolerance and the
+ stepper. This explicitly assumes that the underlying stepper is a controlled
+ stepper and that this stepper has an absolute and a relative error tolerance.
+ For details about the generation functions see Generation
+ functions. Of course, the generation functions have been designed
+ for easy use with the integrate functions:
+
+ +
+integrate_const( make_dense_output( 1.0e-6 , 1.0e-6 , runge_kutta_dopri5< state_type >() ) , sys , inout , t_start , t_end , dt ); ++
+
+| | Caution |
|---|---|
+ This section is not complete + |
+ This section contains some general information about the usage of the
+ steppers in odeint.
+ Steppers are always copied in odeint, for example in the integrate
+ functions or in nested steppers.
++ which steppers are good +
++ Usage constraints +
+ Some steppers need to store some information about the state of the
+ ODE between two steps. Examples are the multistep methods, which store a
+ part of the solution during the evolution of the ODE, and the FSAL steppers,
+ which store the last derivative at time t+dt, since this
+ derivative is used in the next step. In these cases the stepper expects that
+ consecutive calls of do_step belong to the same solution and the same ODE.
+Table 1.7. Stepper Algorithms
+|
+ + Algorithm + + |
+
+ + Class + + |
+
+ + Concept + + |
+
+ + System Concept + + |
+
+ + Order + + |
+
+ + Error Estimation + + |
+
+ + Dense Output + + |
+
+ + Internal state + + |
+
+ + Remarks + + |
+
|---|---|---|---|---|---|---|---|---|
|
+ + Explicit Euler + + |
+
+
+ |
++ + | +
+ + System + + |
+
+ + 1 + + |
+
+ + No + + |
+
+ + Yes + + |
+
+ + No + + |
+
+ + Very simple, only for demonstration purposes + + |
+
|
+ + Modified Midpoint + + |
+
+
+ |
+
+ + Stepper + + |
+
+ + System + + |
+
+ + configurable (2) + + |
+
+ + No + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Used in Bulirsch-Stoer implementation + + |
+
|
+ + Runge-Kutta 4 + + |
+
+
+ |
+
+ + Stepper + + |
+
+ + System + + |
+
+ + 4 + + |
+
+ + No + + |
+
+ + No + + |
+
+ + No + + |
+
+ + The classical Runge Kutta scheme, good general scheme without + error control + + |
+
|
+ + Cash-Karp + + |
+
+
+ |
+
+ + Error + Stepper + + |
+
+ + System + + |
+
+ + 5 + + |
+
+ + Yes (4) + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Good general scheme with error estimation, to be used in controlled_error_stepper + + |
+
|
+ + Dormand-Prince 5 + + |
+
+
+ |
+
+ + Error + Stepper + + |
+
+ + System + + |
+
+ + 5 + + |
+
+ + Yes (4) + + |
+
+ + Yes + + |
+
+ + Yes + + |
+
+ + Standard method with error control and dense output, to be used + in controlled_error_stepper and in dense_output_controlled_explicit_fsal. + + |
+
|
+ + Fehlberg 78 + + |
+
+
+ |
+
+ + Error + Stepper + + |
+
+ + System + + |
+
+ + 8 + + |
+
+ + Yes (7) + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Good high order method with error estimation, to be used in controlled_error_stepper. + + |
+
|
+ + Adams Bashforth + + |
+
+
+ |
+
+ + Stepper + + |
+
+ + System + + |
+
+ + configurable + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Yes + + |
+
+ + Multistep method + + |
+
|
+ + Adams Moulton + + |
+
+
+ |
+
+ + Stepper + + |
+
+ + System + + |
+
+ + configurable + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Yes + + |
+
+ + Multistep method + + |
+
|
+ + Adams Bashforth Moulton + + |
+
+
+ |
+
+ + Stepper + + |
+
+ + System + + |
+
+ + configurable + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Yes + + |
+
+ + Combined multistep method + + |
+
|
+ + Controlled Runge Kutta + + |
+
+
+ |
++ + | +
+ + System + + |
+
+ + depends + + |
+
+ + Yes + + |
+
+ + No + + |
+
+ + depends + + |
+
+ + Error control for Error + Stepper. Requires an Error + Stepper from above. Order depends on the given ErrorStepper + + |
+
|
+ + Dense Output Runge Kutta + + |
+
+
+ |
++ + | +
+ + System + + |
+
+ + depends + + |
+
+ + No + + |
+
+ + Yes + + |
+
+ + Yes + + |
+
+
+ Dense output for Stepper
+ and Error
+ Stepper from above if they provide dense output functionality
+ (like |
+
|
+ + Bulirsch-Stoer + + |
+
+
+ |
++ + | +
+ + System + + |
+
+ + variable + + |
+
+ + Yes + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Stepper with step size and order control. Very good if high precision + is required. + + |
+
|
+ + Bulirsch-Stoer Dense Output + + |
+
+
+ |
++ + | +
+ + System + + |
+
+ + variable + + |
+
+ + Yes + + |
+
+ + Yes + + |
+
+ + No + + |
+
+ + Stepper with step size and order control as well as dense output. + Very good if high precision and dense output is required. + + |
+
|
+ + Implicit Euler + + |
+
+
+ |
+
+ + Stepper + + |
++ + | +
+ + 1 + + |
+
+ + No + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Basic implicit routine. Requires the Jacobian. Works only with + Boost.UBlas + vectors as state types. + + |
+
|
+ + Rosenbrock 4 + + |
+
+
+ |
+
+ + Error + Stepper + + |
++ + | +
+ + 4 + + |
+
+ + Yes + + |
+
+ + Yes + + |
+
+ + No + + |
+
+ + Good for stiff systems. Works only with Boost.UBlas + vectors as state types. + + |
+
|
+ + Controlled Rosenbrock 4 + + |
+
+
+ |
++ + | ++ + | +
+ + 4 + + |
+
+ + Yes + + |
+
+ + Yes + + |
+
+ + No + + |
+
+ + Rosenbrock 4 with error control. Works only with Boost.UBlas + vectors as state types. + + |
+
|
+ + Dense Output Rosenbrock 4 + + |
+
+
+ |
++ + | ++ + | +
+ + 4 + + |
+
+ + Yes + + |
+
+ + Yes + + |
+
+ + No + + |
+
+ + Controlled Rosenbrock 4 with dense output. Works only with Boost.UBlas + vectors as state types. + + |
+
|
+ + Symplectic Euler + + |
+
+
+ |
+
+ + Stepper + + |
++ + | +
+ + 1 + + |
+
+ + No + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Basic symplectic solver for separable Hamiltonian system + + |
+
|
+ + Symplectic RKN McLachlan + + |
+
+
+ |
+
+ + Stepper + + |
++ + | +
+ + 6 + + |
+
+ + No + + |
+
+ + No + + |
+
+ + No + + |
+
+ + Symplectic solver for separable Hamiltonian system with order + 6 + + |
+
Copyright © 2009-2011 Karsten Ahnert + and Mario Mulansky
+ Distributed under the Boost Software License, Version 1.0. (See accompanying + file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +
+Last revised: November 16, 2011 at 20:59:19 GMT |