diff --git a/doc/callbacks.qbk b/doc/callbacks.qbk index e6d16d1e..4a8500f5 100644 --- a/doc/callbacks.qbk +++ b/doc/callbacks.qbk @@ -34,7 +34,7 @@ The significant points about each of `init_write()` and `init_read()` are: We would like to wrap these asynchronous methods in functions that appear synchronous by blocking the calling fiber until the operation completes. This -lets us use the wrapper function's return value to deliver relevant data. +lets us use the wrapper function[s] return value to deliver relevant data. [tip [template_link promise] and [template_link future] are your friends here.] @@ -56,7 +56,7 @@ All we have to do is: [note This tactic for resuming a pending fiber works even if the callback is called on a different thread than the one on which the initiating fiber is -running. In fact, [@../../examples/adapt_callbacks.cpp the example program's] +running. In fact, [@../../examples/adapt_callbacks.cpp the example program[s]] dummy `AsyncAPI` implementation illustrates that: it simulates async I/O by launching a new thread that sleeps briefly and then calls the relevant callback.] @@ -76,7 +76,7 @@ messy boilerplate: normal encapsulation works. [endsect] [section Return Errorcode or Data] -Things get a bit more interesting when the async operation's callback passes +Things get a bit more interesting when the async operation[s] callback passes multiple data items of interest. One approach would be to use `std::pair<>` to capture both: @@ -94,9 +94,9 @@ identical to `write_ec()`. You can call it like this: But a more natural API for a function that obtains data is to return only the data on success, throwing an exception on error. -As with `write()` above, it's certainly possible to code a `read()` wrapper in +As with `write()` above, it[s] certainly possible to code a `read()` wrapper in terms of `read_ec()`. 
But since a given application is unlikely to need both, -let's code `read()` from scratch, leveraging [member_link +let[s] code `read()` from scratch, leveraging [member_link promise..set_exception]: [callbacks_read] @@ -147,9 +147,9 @@ hypothetical `AsyncAPI` asynchronous operations. Fortunately we need not. Boost.Asio incorporates a mechanism[footnote This mechanism has been proposed as a conventional way to allow the caller of an -async function to specify completion handling: +arbitrary async function to specify completion handling: [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4045.pdf N4045].] -by which the caller can customize the notification behavior of every async +by which the caller can customize the notification behavior of any async operation. Therefore we can construct a ['completion token] which, when passed to a __boost_asio__ async operation, requests blocking for the calling fiber. @@ -160,32 +160,25 @@ A typical Asio async function might look something like this:[footnote per [@htt async_something( ... , CompletionToken&& token) { // construct handler_type instance from CompletionToken - handler_type::type handler(token); + handler_type::type ``[*[`handler(token)]]``; // construct async_result instance from handler_type - async_result result(handler); + async_result ``[*[`result(handler)]]``; // ... arrange to call handler on completion ... // ... initiate actual I/O operation ... - return result.get(); + return ``[*[`result.get()]]``; } -We will engage that mechanism, which is based on specializing Asio's +We will engage that mechanism, which is based on specializing Asio[s] `handler_type<>` template for the `CompletionToken` type and the signature of the specific callback. The remainder of this discussion will refer back to `async_something()` as the Asio async function under consideration. 
The implementation described below uses lower-level facilities than `promise` -and `future` for two reasons: - -# The `promise` mechanism interacts badly with - [@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/stop.html - `io_service::stop()`]. It produces `broken_promise` exceptions. -# If more than one thread is calling the - [@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/run.html - `io_service::run()`] method, the implementation described below allows - resuming the suspended fiber on whichever thread gets there first with - completion notification. More on this later. +and `future` because the `promise` mechanism interacts badly with +[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/stop.html +`io_service::stop()`]. It produces `broken_promise` exceptions. `boost::fibers::asio::yield` is a completion token of this kind. `yield` is an instance of `yield_t`: @@ -198,13 +191,10 @@ customization. It can bind a `boost::system::error_code`] for use by the actual handler. -In fact there are two canonical instances of `yield_t` [mdash] `yield` and -`yield_hop`: +`yield` is declared as: [fibers_asio_yield] -We'll get to the differences between these shortly. - Asio customization is engaged by specializing [@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/handler_type.html `boost::asio::handler_type<>`] @@ -214,26 +204,25 @@ for `yield_t`: (There are actually four different specializations in [@../../examples/asio/detail/yield.hpp detail/yield.hpp], -one for each of the four Asio async callback signatures we expect to have to -support.) +one for each of the four Asio async callback signatures we expect.) The above directs Asio to use `yield_handler` as the actual handler for an -async operation to which `yield` is passed. There's a generic +async operation to which `yield` is passed. 
There[s] a generic
`yield_handler<T>` implementation and a `yield_handler<void>` specialization.
-Let's start with the `void` specialization:
+Let[s] start with the `void` specialization:

[fibers_asio_yield_handler_void]

`async_something()`, having consulted the `handler_type<>` traits
specialization, instantiates a `yield_handler<void>` to be passed as the
-actual callback for the async operation. `yield_handler<void>`'s constructor accepts
+actual callback for the async operation. `yield_handler<void>`[s] constructor accepts
the `yield_t` instance (the `yield` object passed to the async function) and
passes it along to `yield_handler_base`:

[fibers_asio_yield_handler_base]

`yield_handler_base` stores a copy of the `yield_t` instance [mdash] which, as
-shown above, is only an `error_code` and a `bool`. It also captures the
+shown above, contains only an `error_code*`. It also captures the
[class_link context]* for the currently-running fiber by calling
[member_link context..active].

@@ -254,22 +243,22 @@ Naturally that leads us straight to `async_result_base`:

[fibers_asio_async_result_base]

This is how `yield_handler_base::ycomp_` becomes non-null:
-`async_result_base`'s constructor injects a pointer back to its own
+`async_result_base`[s] constructor injects a pointer back to its own
`yield_completion` member.

-Recall that both of the canonical `yield_t` instances `yield` and `yield_hop`
-initialize their `error_code*` member `ec_` to `nullptr`. If either of these
-instances is passed to `async_something()` (`ec_` is still `nullptr`), the
-copy stored in `yield_handler_base` will likewise have null `ec_`.
-`async_result_base`'s constructor sets `yield_handler_base`'s `yield_t`'s
-`ec_` member to point to its own `error_code` member.
+Recall that the canonical `yield_t` instance `yield` initializes its
+`error_code*` member `ec_` to `nullptr`. If this instance is passed to
+`async_something()` (`ec_` is still `nullptr`), the copy stored in
+`yield_handler_base` will likewise have null `ec_`.
`async_result_base`[s]
+constructor sets `yield_handler_base`[s] `yield_t`[s] `ec_` member to point to
+its own `error_code` member.

The stage is now set. `async_something()` initiates the actual async
operation, arranging to call its `yield_handler<void>` instance on completion.
-Let's say, for the sake of argument, that the actual async operation's
+Let[s] say, for the sake of argument, that the actual async operation[s]
callback has signature `void(error_code)`.

-But since it's an async operation, control returns at once to
+But since it[s] an async operation, control returns at once to
`async_something()`. `async_something()` calls
`async_result<yield_handler<void>>::get()`, and will return its return
value.

@@ -288,57 +277,35 @@ Other fibers will now have a chance to run.

Some time later, the async operation completes. It calls
`yield_handler<void>::operator()(error_code const&)` with an `error_code` indicating
-either success or failure. We'll consider both cases.
+either success or failure. We[,]ll consider both cases.

`yield_handler<void>` explicitly inherits `operator()(error_code const&)` from
`yield_handler_base`.

`yield_handler_base::operator()(error_code const&)` first sets
-`yield_completion::completed_` `true`. This way, if `async_something()`'s
+`yield_completion::completed_` `true`. This way, if `async_something()`[s]
async operation completes immediately [mdash] if
`yield_handler_base::operator()` is called even before
`async_result_base::get()` [mdash] the calling fiber will ['not] suspend.

The actual `error_code` produced by the async operation is then stored through
-the stored `yield_t::ec_` pointer. If `async_something()`'s caller used (e.g.)
+the stored `yield_t::ec_` pointer. If `async_something()`[s] caller used (e.g.)
`yield[my_ec]` to bind a local `error_code` instance, the actual `error_code`
-value is stored into the caller's variable. Otherwise, it is stored into
+value is stored into the caller[s] variable. Otherwise, it is stored into
`async_result_base::ec_`.
-Finally we get to the distinction between `yield` and `yield_hop`. - -As described for [member_link context..is_context], a `pinned_context` fiber -is special to the library and must never be passed to [member_link -context..migrate]. We must detect and avoid that case here. - -The `yield_t::allow_hop_` `bool` indicates whether `async_something()`'s -caller is willing to allow the running fiber to ["hop] to another thread -(`yield_hop`) or whether s/he insists that the fiber resume on the same thread -(`yield`). - -If the caller passed `yield_hop` to `async_something()`, and the running fiber -isn't a `pinned_context`, `yield_handler_base::operator()` passes the -`context` of the original fiber [mdash] the one on which `async_something()` -was called, captured in `yield_handler_base`'s constructor [mdash] to the -current thread's [member_link context..migrate]. - -If the running application has more than one thread calling -[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/run.html -`io_service::run()`], that fiber could return from `async_something()` on a -different thread (the one calling `yield_handler_base::operator()`) than the -one on which it entered `async_something()`. - -In any case, the fiber is marked as ready to run by passing it to [member_link +If the stored fiber context `yield_handler_base::ctx_` is not already running, +it is marked as ready to run by passing it to [member_link context..set_ready]. Control then returns from `yield_handler_base::operator()`: the callback is done. -In due course, the fiber `yield_handler_base::ctx_` is resumed. Control -returns from [member_link context..suspend] to `yield_completion::wait()`, -which returns to `async_result_base::get()`. +In due course, that fiber is resumed. Control returns from [member_link +context..suspend] to `yield_completion::wait()`, which returns to +`async_result_base::get()`. 
* If the original caller passed `yield[my_ec]` to `async_something()` to bind a local `error_code` instance, then `yield_handler_base::operator()` stored - its `error_code` to the caller's `my_ec` instance, leaving + its `error_code` to the caller[s] `my_ec` instance, leaving `async_result_base::ec_` initialized to success. * If the original caller passed `yield` to `async_something()` without binding a local `error_code` variable, then `yield_handler_base::operator()` stored @@ -349,7 +316,7 @@ which returns to `async_result_base::get()`. error [mdash] `async_result_base::get()` throws `system_error` with that `error_code`. -The case in which `async_something()`'s completion callback has signature +The case in which `async_something()`[s] completion callback has signature `void()` is similar. `yield_handler::operator()()` invokes the machinery above with a ["success] `error_code`. @@ -379,10 +346,10 @@ data member. `async_result>::value_`. Then it passes control to `yield_handler_base::operator()(error_code)` to deal -with waking (and possibly migrating) the original fiber as described above. +with waking the original fiber as described above. When `async_result>::get()` resumes, it returns the stored -`value_` to `async_something()` and ultimately to `async_something()`'s +`value_` to `async_something()` and ultimately to `async_something()`[s] caller. The case of a callback signature `void(T)` is handled by having diff --git a/doc/fibers.qbk b/doc/fibers.qbk index 033500d6..5608b6db 100644 --- a/doc/fibers.qbk +++ b/doc/fibers.qbk @@ -33,69 +33,75 @@ [def __not_a_fiber__ ['not-a-fiber]] [def __rendezvous__ ['rendezvous]] -[template mdash[] '''—'''] -[template "[text] '''“'''[text]'''”'''] -[template superscript[exp] ''''''[exp]''''''] +[/ important especially for [,] to avoid a space between empty argument + brackets and expansion: the space, if any, becomes part of the expansion!] 
+[template mdash[]'''—'''] +[template ,[]'''’'''] +[template "[text]'''“'''[text]'''”'''] +[template superscript[exp]''''''[exp]''''''] +[/ "isn[t]" is slightly more readable than "isn[,]t", and so forth] +[template s[][,]s] +[template t[][,]t] [template class_heading[class_name] [hding class_[class_name]..Class [`[class_name]]] ] -[template class_link[class_name] [dblink class_[class_name]..[`[class_name]]]] +[template class_link[class_name][dblink class_[class_name]..[`[class_name]]]] [template template_heading[class_name] [hding class_[class_name]..Template [`[class_name]<>]] ] -[template template_link[class_name] [dblink class_[class_name]..[`[class_name]<>]]] +[template template_link[class_name][dblink class_[class_name]..[`[class_name]<>]]] [template member_heading[class_name method_name] [operator_heading [class_name]..[method_name]..[method_name]] ] -[template member_link[class_name method_name] [operator_link [class_name]..[method_name]..[method_name]]] +[template member_link[class_name method_name][operator_link [class_name]..[method_name]..[method_name]]] [template operator_heading[class_name method_name method_text] [hding [class_name]_[method_name]..Member function [`[method_text]]()] ] -[template operator_link[class_name method_name method_text] [dblink [class_name]_[method_name]..[`[class_name]::[method_text]()]]] +[template operator_link[class_name method_name method_text][dblink [class_name]_[method_name]..[`[class_name]::[method_text]()]]] [template template_member_heading[class_name method_name] [hding [class_name]_[method_name]..Templated member function [`[method_name]]()] ] -[template template_member_link[class_name method_name] [member_link [class_name]..[method_name]]] +[template template_member_link[class_name method_name][member_link [class_name]..[method_name]]] [template static_member_heading[class_name method_name] [hding [class_name]_[method_name]..Static member function [`[method_name]]()] ] -[template static_member_link[class_name 
method_name] [member_link [class_name]..[method_name]]] +[template static_member_link[class_name method_name][member_link [class_name]..[method_name]]] [template data_member_heading[class_name member_name] [hding [class_name]_[member_name]..Data member [`[member_name]]] ] -[template data_member_link[class_name member_name] [dblink [class_name]_[member_name]..[`[class_name]::[member_name]]]] +[template data_member_link[class_name member_name][dblink [class_name]_[member_name]..[`[class_name]::[member_name]]]] [template function_heading[function_name] [hding [function_name]..Non-member function [`[function_name]()]] ] -[template function_link[function_name] [dblink [function_name]..[`[function_name]()]]] +[template function_link[function_name][dblink [function_name]..[`[function_name]()]]] [template function_heading_for[function_name arg] [hding [function_name]\_for\_[arg]..Non-member function [`[function_name]()]] ] -[template function_link_for[function_name arg] [dblink [function_name]\_for\_[arg]..[`[function_name]()]]] +[template function_link_for[function_name arg][dblink [function_name]\_for\_[arg]..[`[function_name]()]]] [template ns_function_heading[namespace function_name] [hding [namespace]_[function_name]..Non-member function [`[namespace]::[function_name]()]] ] -[template ns_function_link[namespace function_name] [dblink [namespace]_[function_name]..[`[namespace]::[function_name]()]]] +[template ns_function_link[namespace function_name][dblink [namespace]_[function_name]..[`[namespace]::[function_name]()]]] -[template anchor[name] ''''''] +[template anchor[name]''''''] [template hding[name title] ''' '''[title]''' ''' ] -[template dblink[id text] ''''''[text]''''''] -[template `[text] ''''''[text]''''''] +[template dblink[id text]''''''[text]''''''] +[template `[text]''''''[text]''''''] [def __allocator_arg_t__ [@http://en.cppreference.com/w/cpp/memory/allocator_arg_t `std::allocator_arg_t`]] [def __Allocator__ 
[@http://en.cppreference.com/w/cpp/concept/Allocator `Allocator`]] @@ -155,12 +161,12 @@ [def __fsp__ [class_link fiber_specific_ptr]] [def __future_get__ [member_link future..get]] [def __get_id__ [member_link fiber..get_id]] -[def __io_service__ `boost::asio::io_service`] +[def __io_service__ [@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service.html `boost::asio::io_service`]] [def __join__ [member_link fiber..join]] [def __migrate__ [member_link context..migrate]] [def __mutex_lock__ [member_link mutex..lock]] [def __mutex_try_lock__ [member_link mutex..try_lock]] -[def __run_service__ `boost::fibers::asio::run_service()`] +[def __run_service__ `boost::fibers::asio::run_svc()`] [def __shared_future_get__ [member_link shared_future..get]] [def __sleep_for__ [ns_function_link this_fiber..sleep_for]] [def __sleep_until__ [ns_function_link this_fiber..sleep_until]] diff --git a/doc/fibers.xml b/doc/fibers.xml index 3f3faef5..a97c8ccb 100644 --- a/doc/fibers.xml +++ b/doc/fibers.xml @@ -1,6 +1,6 @@ - @@ -118,7 +118,7 @@ Unless migrated, a fiber may access thread-local storage; however that storage will be shared among all fibers running on the same thread. For fiber-local - storage, please see fiber_specific_ptr. + storage, please see fiber_specific_ptr. @@ -153,7 +153,7 @@ role="identifier">std::future::get() would block the whole thread, preventing the other fiber from delivering its - value. Use future<> instead. + value. Use future<> instead. Similarly, a fiber that invokes a normal blocking I/O operation will block @@ -213,20 +213,14 @@ template< typename PROPS > PROPS & properties(); -void interruption_point(); -bool interruption_requested() noexcept; -bool interruption_enabled() noexcept; -class disable_interruption; -class restore_interruption; - }} Tutorial - Each fiber represents a micro-thread which will be launched and managed - cooperatively by a scheduler. Objects of type fiber are move-only. 
+ Each fiber represents a micro-thread which will be launched and managed + cooperatively by a scheduler. Objects of type fiber are move-only. boost::fibers::fiber f1; // not-a-fiber @@ -262,7 +256,7 @@ // this leads to undefined behaviour - The spawned fiber does not immediately start running. It is enqueued + The spawned fiber does not immediately start running. It is enqueued in the list of ready-to-run fibers, and will run when the scheduler gets around to it. @@ -271,19 +265,19 @@ Exceptions - An exception escaping from the function or callable object passed to the fiber + An exception escaping from the function or callable object passed to the fiber constructor calls std::terminate(). - If you need to know which exception was thrown, use future<> or - packaged_task<>. + If you need to know which exception was thrown, use future<> or + packaged_task<>. Detaching - A fiber can be detached by explicitly invoking the fiber::detach() member - function. After fiber::detach() is called on a fiber object, that + A fiber can be detached by explicitly invoking the fiber::detach() member + function. After fiber::detach() is called on a fiber object, that object represents not-a-fiber. The fiber object may then safely be destroyed. @@ -292,31 +286,21 @@ constructor Boost.Fiber provides a number of ways to wait for a running fiber to complete. You can coordinate even with a detached fiber - using a mutex, or condition_variable, or + using a mutex, or condition_variable, or any of the other synchronization objects provided by the library. If a detached fiber is still running when the thread's main fiber terminates, - that fiber will be interrupted and shut - down. - - This treatment of detached fibers depends on a detached fiber eventually - either terminating or reaching one of the specified interruption-points. 
- Note that since this_fiber::yield() is not - an interruption point, a detached fiber whose only interaction with the Fiber - library is yield() - cannot cleanly be terminated. - - + the thread will not shut down. Joining - In order to wait for a fiber to finish, the fiber::join() member function - of the fiber object can be used. fiber::join() will block - until the fiber object has completed. + In order to wait for a fiber to finish, the fiber::join() member function + of the fiber object can be used. fiber::join() will block + until the fiber object has completed. void some_fn() { ... @@ -327,17 +311,17 @@ constructor f.join(); - If the fiber has already completed, then fiber::join() returns immediately - and the joined fiber object becomes not-a-fiber. + If the fiber has already completed, then fiber::join() returns immediately + and the joined fiber object becomes not-a-fiber. Destruction - When a fiber object representing a valid execution context (the fiber - is fiber::joinable()) is destroyed, the program terminates. If - you intend the fiber to outlive the fiber object that launched it, - use the fiber::detach() method. + When a fiber object representing a valid execution context (the fiber + is fiber::joinable()) is destroyed, the program terminates. If + you intend the fiber to outlive the fiber object that launched it, + use the fiber::detach() method. { boost::fibers::fiber f( some_fn); @@ -348,219 +332,18 @@ constructor f.detach(); } // okay, program continues - - - Interruption - - - A valid fiber can be interrupted by invoking its fiber::interrupt() member - function. The next time that fiber executes one of the specific interruption-points - with interruption enabled, a fiber_interrupted - exception will be thrown. If this exception is not caught, the fiber will be - terminated, its stack unwound, its stack objects properly destroyed. - - - (fiber_interrupted, being thrown - by the library, is similarly caught by the library. 
It does not cause the program - to terminate.) - - - With disable_interruption a fiber can defer being - interrupted. - -// interruption enabled at this point -{ - boost::this_fiber::disable_interruption di1; - // interruption disabled - { - boost::this::fiber::disable_interruption di2; - // interruption still disabled - } // di2 destroyed; interruption state restored - // interruption still disabled -} // di destroyed; interruption state restored -// interruption enabled - - - At any point, the interruption state for the current thread can be queried - by calling this_fiber::interruption_enabled(). - - - (But consider fiber f1 running - a packaged_task<>. Suppose f1 - is interrupted. Its associated future<> will be set with - fiber_interrupted. When fiber - f2 calls future::get() on - that future, get() will - immediately rethrow fiber_interrupted - — regardless of any disable_interruption instance f2 might have constructed.) - - - The following interruption-points - are defined and will throw fiber_interrupted - if this_fiber::interruption_requested() and - this_fiber::interruption_enabled(). 
- - - - - fiber::join() - - - - - barrier::wait() - - - - - condition_variable::wait() - - - - - condition_variable::wait_for() - - - - - condition_variable::wait_until() - - - - - condition_variable_any::wait() - - - - - condition_variable_any::wait_for() - - - - - condition_variable_any::wait_until() - - - - - this_fiber::sleep_for() - - - - - this_fiber::sleep_until() - - - - - this_fiber::interruption_point() - - - - - bounded_channel::push() - - - - - bounded_channel::push_wait_for() - - - - - bounded_channel::push_wait_until() - - - - - bounded_channel::pop() - - - - - bounded_channel::value_pop() - - - - - bounded_channel::pop_wait_for() - - - - - bounded_channel::pop_wait_until() - - - - - unbounded_channel::push() - - - - - unbounded_channel::pop() - - - - - unbounded_channel::value_pop() - - - - - unbounded_channel::pop_wait_for() - - - - - unbounded_channel::pop_wait_until() - - - - - future::wait() - - - - - future::get() - - - - - future::get_exception_ptr() - - - - - shared_future::wait() - - - - - shared_future::get() - - - - - shared_future::get_exception_ptr() - - - - + Fiber IDs Objects of class fiber::id can be - used to identify fibers. Each running fiber has a unique fiber has a unique fiber::id obtainable - from the corresponding fiber -by calling the fiber::get_id() member + from the corresponding fiber +by calling the fiber::get_id() member function. Objects of class fiber::id can be copied, and used as keys in associative containers: the full range of comparison @@ -578,6 +361,72 @@ by calling the fiber::get_id() fiber::id. + + + Enumeration + launch_policy + + + launch_policy specifies whether + control passes immediately into a newly-launched fiber. + +enum class launch_policy { + dispatch, + post +}; + + + dispatch + + + + + Effects: + + + A fiber launched with launch_policy + == dispatch + is entered immediately. 
In other words, launching a fiber with dispatch suspends the caller (the previously-running + fiber) until the fiber scheduler has a chance to resume it later. + + + + + + post + + + + + Effects: + + + A fiber launched with launch_policy + == post + is passed to the fiber scheduler as ready, but it is not yet entered. + The caller (the previously-running fiber) continues executing. The newly-launched + fiber will be entered when the fiber scheduler has a chance to resume + it later. + + + + + Note: + + + If launch_policy is not + explicitly specified, post + is the default. + + + +
<anchor id="class_fiber"/><link linkend="fiber.fiber_mgmt.fiber">Class <code><phrase role="identifier">fiber</phrase></code></link> @@ -592,9 +441,15 @@ by calling the fiber::get_id() template< typename Fn, typename ... Args > fiber( Fn &&, Args && ...); + template< typename Fn, typename ... Args > + fiber( launch_policy, Fn &&, Args && ...); + template< typename StackAllocator, typename Fn, typename ... Args > fiber( std::allocator_arg_t, StackAllocator, Fn &&, Args && ...); + template< typename StackAllocator, typename Fn, typename ... Args > + fiber( launch_policy, std::allocator_arg_t, StackAllocator, Fn &&, Args && ...); + ~fiber(); fiber( fiber const&) = delete; @@ -615,8 +470,6 @@ by calling the fiber::get_id() void join(); - void interrupt() noexcept; - template< typename PROPS > PROPS & properties(); }; @@ -642,7 +495,7 @@ by calling the fiber::get_id() Effects: - Constructs a fiber instance that refers to not-a-fiber. + Constructs a fiber instance that refers to not-a-fiber. @@ -674,8 +527,15 @@ by calling the fiber::get_id() template< typename Fn, typename ... Args > fiber( Fn && fn, Args && ... args); +template< typename Fn, typename ... Args > +fiber( launch_policy lpol, Fn && fn, Args && ... args); + template< typename StackAllocator, typename Fn, typename ... Args > fiber( std::allocator_arg_t, StackAllocator salloc, Fn && fn, Args && ... args); + +template< typename StackAllocator, typename Fn, typename ... Args > +fiber( launch_policy lpol, std::allocator_arg_t, StackAllocator salloc, + Fn && fn, Args && ... args); @@ -693,7 +553,12 @@ by calling the fiber::get_id() fn is copied or moved - into internal storage for access by the new fiber. + into internal storage for access by the new fiber. If launch_policy is + specified (or defaulted) to post, + the new fiber is marked ready and will be entered at + the next opportunity. If launch_policy + is specified as dispatch, + the calling fiber is suspended and the new fiber is entered immediately. 
@@ -724,7 +589,7 @@ by calling the fiber::get_id() If StackAllocator is not explicitly passed, the default stack allocator depends on BOOST_USE_SEGMENTED_STACKS: if defined, - you will get a segmented_stack, else a fixedsize_stack. + you will get a segmented_stack, else a fixedsize_stack. @@ -753,7 +618,7 @@ by calling the fiber::get_id() Transfers ownership of the fiber managed by other - to the newly constructed fiber instance. + to the newly constructed fiber instance. @@ -835,7 +700,7 @@ by calling the fiber::get_id() Effects: - If the fiber is fiber::joinable(), calls std::terminate. + If the fiber is fiber::joinable(), calls std::terminate. Destroys *this. @@ -845,9 +710,9 @@ by calling the fiber::get_id() The programmer must ensure that the destructor is never executed while - the fiber is still fiber::joinable(). Even if you know - that the fiber has completed, you must still call either fiber::join() or - fiber::detach() before destroying the fiber + the fiber is still fiber::joinable(). Even if you know + that the fiber has completed, you must still call either fiber::join() or + fiber::detach() before destroying the fiber object. @@ -897,7 +762,7 @@ by calling the fiber::get_id() Preconditions: - the fiber is fiber::joinable(). + the fiber is fiber::joinable(). @@ -923,8 +788,7 @@ by calling the fiber::get_id() Throws: - fiber_interrupted if - the current fiber is interrupted or fiber_error + fiber_error @@ -939,16 +803,7 @@ by calling the fiber::get_id() role="special">::this_fiber::get_id(). invalid_argument: - if the fiber is not fiber::joinable(). - - - - - Notes: - - - join() - is one of the predefined interruption-points. + if the fiber is not fiber::joinable(). @@ -967,7 +822,7 @@ by calling the fiber::get_id() Preconditions: - the fiber is fiber::joinable(). + the fiber is fiber::joinable(). @@ -976,7 +831,7 @@ by calling the fiber::get_id() The fiber of execution becomes detached, and no longer has an associated - fiber object. + fiber object. 
@@ -1002,7 +857,7 @@ by calling the fiber::get_id() invalid_argument: if the fiber is - not fiber::joinable(). + not fiber::joinable(). @@ -1042,39 +897,7 @@ by calling the fiber::get_id() See also: - this_fiber::get_id() - - - - - - - - Member function interrupt() - - -void interrupt() noexcept; - - - - - Effects: - - - If *this - refers to a fiber of execution, request that the fiber will be interrupted - the next time it enters one of the predefined interruption-points - with interruption enabled, or if it is currently blocked - in a call to one of the predefined interruption-points - with interruption enabled. - - - - - Throws: - - - Nothing + this_fiber::get_id() @@ -1096,8 +919,8 @@ by calling the fiber::get_id() *this - refers to a fiber of execution. use_scheduling_algorithm() has - been called from this thread with a subclass of sched_algorithm_with_properties<> with + refers to a fiber of execution. use_scheduling_algorithm() has + been called from this thread with a subclass of sched_algorithm_with_properties<> with the same template argument PROPS. @@ -1126,7 +949,7 @@ by calling the fiber::get_id() Note: - sched_algorithm_with_properties<> provides + sched_algorithm_with_properties<> provides a way for a user-coded scheduler to associate extended properties, such as priority, with a fiber instance. This method allows access to those user-provided properties. @@ -1271,7 +1094,7 @@ by calling the fiber::get_id() Directs Boost.Fiber to use SchedAlgo, which must be a concrete - subclass of sched_algorithm, as the scheduling + subclass of sched_algorithm, as the scheduling algorithm for all fibers in the current thread. Pass any required SchedAlgo constructor arguments as args. @@ -1287,7 +1110,7 @@ by calling the fiber::get_id() role="special">() before any other Boost.Fiber entry point. 
If no scheduler has been set for the current thread by the time Boost.Fiber needs to use - it, the library will create a default round_robin instance + it, the library will create a default round_robin instance for this thread. @@ -1620,13 +1443,6 @@ by calling the fiber::get_id() That is, in many respects the main fiber on each thread can be treated like an explicitly-launched fiber. - - However, unlike an explicitly-launched fiber, if fiber_interrupted - is thrown (or rethrown) on a thread's main fiber without being caught, the - Fiber library cannot catch it: std::terminate() will be called. - namespace boost { namespace this_fiber { @@ -1639,12 +1455,6 @@ by calling the fiber::get_id() template< typename PROPS > PROPS & properties(); -void interruption_point(); -bool interruption_requested() noexcept; -bool interruption_enabled() noexcept; -class disable_interruption; -class restore_interruption; - }} @@ -1706,17 +1516,7 @@ by calling the fiber::get_id() Throws: - fiber_interrupted if - the current fiber is interrupted or timeout-related exceptions. - - - - - Note: - - - sleep_until() - is one of the predefined interruption-points. + timeout-related exceptions. @@ -1771,17 +1571,7 @@ by calling the fiber::get_id() Throws: - fiber_interrupted if - the current fiber is interrupted or timeout-related exceptions. - - - - - Note: - - - sleep_for() - is one of the predefined interruption-points. + timeout-related exceptions. @@ -1829,11 +1619,9 @@ by calling the fiber::get_id() Note: - yield() - is not an interruption point. A fiber that calls - yield() - is not suspended: it is immediately passed to the scheduler as ready - to run. + A fiber that calls yield() is not suspended: it is immediately + passed to the scheduler as ready to run. 
@@ -1856,8 +1644,8 @@ by calling the fiber::get_id() Preconditions: - use_scheduling_algorithm() has been called from - this thread with a subclass of sched_algorithm_with_properties<> with + use_scheduling_algorithm() has been called from + this thread with a subclass of sched_algorithm_with_properties<> with the same template argument PROPS. @@ -1886,7 +1674,7 @@ by calling the fiber::get_id() Note: - sched_algorithm_with_properties<> provides + sched_algorithm_with_properties<> provides a way for a user-coded scheduler to associate extended properties, such as priority, with a fiber instance. This function allows access to those user-provided properties. @@ -1911,357 +1699,6 @@ by calling the fiber::get_id() - - - - Non-member - function this_fiber::interruption_point() - - -#include <boost/fiber/interruption.hpp> - -void interruption_point(); - - - - - Effects: - - - Check to see if the current fiber has been interrupted. - - - - - Throws: - - - fiber_interrupted if - this_fiber::interruption_enabled() and - this_fiber::interruption_requested() both - return true. - - - - - - - - Non-member - function this_fiber::interruption_requested() - - -#include <boost/fiber/interruption.hpp> - -bool interruption_requested() noexcept; - - - - - Returns: - - - true if interruption has - been requested for the current fiber, false - otherwise. - - - - - Throws: - - - Nothing. - - - - - - - - Non-member - function this_fiber::interruption_enabled() - - -#include <boost/fiber/interruption.hpp> - -bool interruption_enabled() noexcept; - - - - - Returns: - - - true if interruption is - enabled for the current fiber, false - otherwise. - - - - - Throws: - - - Nothing. - - - - - Note: - - - Interruption is enabled by default. 
- - - - - - - - Class - disable_interruption - - -#include <boost/fiber/interruption.hpp> - -class disable_interruption { -public: - disable_interruption() noexcept; - ~disable_interruption() noexcept; - disable_interruption(const disable_interruption&) = delete; - disable_interruption& operator=(const disable_interruption&) = delete; -}; - - - Constructor - -disable_interruption() noexcept; - - - - - Effects: - - - Stores the current state of this_fiber::interruption_enabled() and - disables interruption for the current fiber. - - - - - Postconditions: - - - this_fiber::interruption_enabled() returns - false for the current - fiber. - - - - - Throws: - - - Nothing. - - - - - Note: - - - Nesting of disable_interruption - instances matters. Constructing a disable_interruption - while this_fiber::interruption_enabled() == false - has no effect. - - - - - - Destructor - -~disable_interruption() noexcept; - - - - - Preconditions: - - - Must be called from the same fiber on which *this was constructed. - - - - - Effects: - - - Restores the state of this_fiber::interruption_enabled() for - the current fiber to the state saved at construction of *this. - - - - - Postconditions: - - - this_fiber::interruption_enabled() for - the current fiber returns the value stored by the constructor of *this. - - - - - Note: - - - Destroying a disable_interruption - constructed while this_fiber::interruption_enabled() == false - has no effect. 
- - - - - - - - Class - restore_interruption - - -#include <boost/fiber/interruption.hpp> - -class restore_interruption { -public: - explicit restore_interruption(disable_interruption&) noexcept; - ~restore_interruption() noexcept; - restore_interruption(const restore_interruption&) = delete; - restore_interruption& operator=(const restore_interruption&) = delete; -}; - - - Constructor - -explicit restore_interruption(disable_interruption& disabler) noexcept; - - - - - Preconditions: - - - Must be called from the same fiber on which disabler - was constructed. - - - - - Effects: - - - Restores the current state of this_fiber::interruption_enabled() for - the current fiber to that saved in disabler. - - - - - Postconditions: - - - this_fiber::interruption_enabled() for - the current fiber returns the value stored in the constructor of disabler. - - - - - Throws: - - - Nothing. - - - - - Note: - - - Nesting of restore_interruption - instances does not matter: only the disable_interruption instance - passed to the constructor matters. Constructing a restore_interruption - with a disable_interruption - constructed while this_fiber::interruption_enabled() == false - has no effect. - - - - - - Destructor - -~restore_interruption() noexcept; - - - - - Preconditions: - - - Must be called from the same fiber on which *this was constructed. - - - - - Effects: - - - Disables interruption for the current fiber. - - - - - Postconditions: - - - this_fiber::interruption_enabled() for - the current fiber returns false. - - - - - Note: - - - Destroying a restore_interruption - constructed with a disable_interruption constructed - while this_fiber::interruption_enabled() == false - has no effect. 
- - - - -void foo() { - // interruption is enabled - { - boost::this_fiber::disable_interruption di; - // interruption is disabled - { - boost::this_fiber::restore_interruption ri( di); - // interruption now enabled - } // ri destroyed, interruption disabled again - } // di destructed, interruption state restored - // interruption now enabled -} -
@@ -2280,10 +1717,10 @@ by calling the fiber::get_id() Each thread has its own scheduler. Different threads in a process may use different schedulers. By default, Boost.Fiber implicitly - instantiates round_robin as the scheduler for each thread. + instantiates round_robin as the scheduler for each thread. - You are explicitly permitted to code your own sched_algorithm subclass. + You are explicitly permitted to code your own sched_algorithm subclass. For the most part, your sched_algorithm subclass need not defend against cross-thread calls: the fiber manager intercepts and defers such calls. Most sched_algorithm @@ -2292,7 +1729,7 @@ by calling the fiber::get_id() Your sched_algorithm subclass - is engaged on a particular thread by calling use_scheduling_algorithm(): + is engaged on a particular thread by calling use_scheduling_algorithm(): void thread_fn() { boost::fibers::use_scheduling_algorithm< my_fiber_scheduler >(); @@ -2300,8 +1737,8 @@ by calling the fiber::get_id() } - A scheduler class must implement interface sched_algorithm. - Boost.Fiber provides one scheduler: round_robin. + A scheduler class must implement interface sched_algorithm. + Boost.Fiber provides one scheduler: round_robin. @@ -2348,7 +1785,7 @@ by calling the fiber::get_id() Informs the scheduler that fiber f is ready to run. Fiber f might be newly launched, or it might have been blocked but has just been - awakened, or it might have called this_fiber::yield(). + awakened, or it might have called this_fiber::yield(). @@ -2366,7 +1803,7 @@ by calling the fiber::get_id() See also: - round_robin + round_robin @@ -2405,7 +1842,7 @@ by calling the fiber::get_id() See also: - round_robin + round_robin @@ -2459,7 +1896,7 @@ by calling the fiber::get_id() environment in whatever way makes sense. The fiber manager is stating that suspend_until() need not return until abs_time - — or sched_algorithm::notify() is called — whichever + — or sched_algorithm::notify() is called — whichever comes first. 
The interaction with notify() means that, for instance, calling fiber::get_id() role="identifier">this_thread::sleep_until(abs_time) - would be too simplistic. round_robin::suspend_until() uses + would be too simplistic. round_robin::suspend_until() uses a std::condition_variable to coordinate - with round_robin::notify(). + with round_robin::notify(). @@ -2504,7 +1941,7 @@ by calling the fiber::get_id() Effects: - Requests the scheduler to return from a pending call to sched_algorithm::suspend_until(). + Requests the scheduler to return from a pending call to sched_algorithm::suspend_until(). @@ -2529,7 +1966,7 @@ by calling the fiber::get_id() - This class implements sched_algorithm, scheduling fibers + This class implements sched_algorithm, scheduling fibers in round-robin fashion. #include <boost/fiber/round_robin.hpp> @@ -2691,7 +2128,7 @@ by calling the fiber::get_id() Effects: - Wake up a pending call to round_robin::suspend_until(), + Wake up a pending call to round_robin::suspend_until(), some fibers might be ready. This implementation wakes suspend_until() via std:: fiber::get_id() Scheduler Fiber Properties - A scheduler class directly derived from sched_algorithm can - use any information available from context to implement the sched_algorithm can + use any information available from context to implement the sched_algorithm interface. But a custom scheduler might need to track additional properties for a fiber. For instance, a priority-based scheduler would need to track a fiber's priority. @@ -2795,8 +2232,8 @@ by calling the fiber::get_id() Effects: - Pass control to the custom sched_algorithm_with_properties<> subclass's - sched_algorithm_with_properties::property_change() method. + Pass control to the custom sched_algorithm_with_properties<> subclass's + sched_algorithm_with_properties::property_change() method. 
@@ -2812,8 +2249,8 @@ by calling the fiber::get_id() Note: - A custom scheduler's sched_algorithm_with_properties::pick_next() method - might dynamically select from the ready fibers, or sched_algorithm_with_properties::awakened() might + A custom scheduler's sched_algorithm_with_properties::pick_next() method + might dynamically select from the ready fibers, or sched_algorithm_with_properties::awakened() might instead insert each ready fiber into some form of ready queue for pick_next(). In the latter case, if application code modifies a fiber property (e.g. @@ -2847,7 +2284,7 @@ by calling the fiber::get_id() role="identifier">sched_algorithm_with_properties<PROPS>. PROPS should be derived from - fiber_properties. + fiber_properties. #include <boost/fiber/algorithm.hpp> @@ -2886,7 +2323,7 @@ by calling the fiber::get_id() Informs the scheduler that fiber f - is ready to run, like sched_algorithm::awakened(). + is ready to run, like sched_algorithm::awakened(). Passes the fiber's associated PROPS instance. @@ -2945,7 +2382,7 @@ by calling the fiber::get_id() Note: - same as sched_algorithm::pick_next() + same as sched_algorithm::pick_next() @@ -2982,7 +2419,7 @@ by calling the fiber::get_id() Note: - same as sched_algorithm::has_ready_fibers() + same as sched_algorithm::has_ready_fibers() @@ -3011,7 +2448,7 @@ by calling the fiber::get_id() Note: - same as sched_algorithm::suspend_until() + same as sched_algorithm::suspend_until() @@ -3031,7 +2468,7 @@ by calling the fiber::get_id() Effects: - Requests the scheduler to return from a pending call to sched_algorithm_with_properties::suspend_until(). + Requests the scheduler to return from a pending call to sched_algorithm_with_properties::suspend_until(). 
@@ -3039,7 +2476,7 @@ by calling the fiber::get_id() Note: - same as sched_algorithm::notify() + same as sched_algorithm::notify() @@ -3077,11 +2514,11 @@ by calling the fiber::get_id() The fiber's associated PROPS - instance is already passed to sched_algorithm_with_properties::awakened() and - sched_algorithm_with_properties::property_change(). - However, every sched_algorithm subclass is expected - to track a collection of ready context instances. This method - allows your custom scheduler to retrieve the fiber_properties subclass + instance is already passed to sched_algorithm_with_properties::awakened() and + sched_algorithm_with_properties::property_change(). + However, every sched_algorithm subclass is expected + to track a collection of ready context instances. This method + allows your custom scheduler to retrieve the fiber_properties subclass instance for any context in its collection. @@ -3122,8 +2559,8 @@ by calling the fiber::get_id() Note: - This method is only called when a custom fiber_properties subclass - explicitly calls fiber_properties::notify(). + This method is only called when a custom fiber_properties subclass + explicitly calls fiber_properties::notify(). @@ -3143,7 +2580,7 @@ by calling the fiber::get_id() Returns: - A new instance of fiber_properties subclass fiber_properties subclass PROPS. @@ -3189,10 +2626,10 @@ by calling the fiber::get_id() role="identifier">fibers::scheduler::ready_queue_t. This hook is reserved for - use by sched_algorithm implementations. (For instance, - round_robin contains a ready_queue_t - instance to manage its ready fibers.) See context::ready_is_linked(), - context::ready_link(), context::ready_unlink(). + use by sched_algorithm implementations. (For instance, + round_robin contains a ready_queue_t + instance to manage its ready fibers.) See context::ready_is_linked(), + context::ready_link(), context::ready_unlink(). 
Your sched_algorithm implementation @@ -3311,7 +2748,7 @@ by calling the fiber::get_id() See also: - fiber::get_id() + fiber::get_id() @@ -3393,7 +2830,7 @@ by calling the fiber::get_id() for which is_context(pinned_context) is true - — must never be passed to context::migrate() for any other + — must never be passed to context::migrate() for any other thread. @@ -3453,7 +2890,7 @@ by calling the fiber::get_id() true if *this is stored in a sched_algorithm + role="keyword">this is stored in a sched_algorithm implementation's ready-queue. @@ -3471,7 +2908,7 @@ implementation's Note: - Specifically, this method indicates whether context::ready_link() has + Specifically, this method indicates whether context::ready_link() has been called on *this. ready_is_linked() has no information about participation in any other containers. @@ -3514,8 +2951,8 @@ implementation's A context signaled as ready by another thread is first stored in the fiber manager's remote-ready-queue. - This is the mechanism by which the fiber manager protects a sched_algorithm implementation - from cross-thread sched_algorithm::awakened() calls. + This is the mechanism by which the fiber manager protects a sched_algorithm implementation + from cross-thread sched_algorithm::awakened() calls. @@ -3704,7 +3141,7 @@ implementation's Removes *this - from ready-queue: undoes the effect of context::ready_link(). + from ready-queue: undoes the effect of context::ready_link(). @@ -3791,7 +3228,7 @@ implementation's Suspends the running fiber (the fiber associated with *this) until some other fiber passes this to context::set_ready(). + role="keyword">this to context::set_ready(). *this is marked as not-ready, and control passes to the scheduler to select another fiber to run. @@ -3846,8 +3283,8 @@ implementation's role="identifier">ctx as being ready to run. This does not immediately resume that fiber; rather it passes the fiber to the scheduler for subsequent resumption. 
If the scheduler is idle (has not - returned from a call to sched_algorithm::suspend_until()), - sched_algorithm::notify() is called to wake it up. + returned from a call to sched_algorithm::suspend_until()), + sched_algorithm::notify() is called to wake it up. @@ -3886,7 +3323,7 @@ implementation's Note: - See context::migrate() for a way to migrate the suspended + See context::migrate() for a way to migrate the suspended thread to the thread calling set_ready(). @@ -3930,7 +3367,7 @@ implementation's
<anchor id="stack"/><link linkend="fiber.stack">Stack allocation</link> - A fiber uses internally an execution_context + A fiber uses internally an execution_context which manages a set of registers and a stack. The memory used by the stack is allocated/deallocated via a stack_allocator which is required to model a stack-allocator @@ -3939,7 +3376,7 @@ implementation's A stack_allocator can be passed to fiber::fiber() or to fibers::async(). + role="special">() or to fibers::async(). @@ -4073,7 +3510,7 @@ implementation's - Boost.Fiber provides the class protected_fixedsize_stack which + Boost.Fiber provides the class protected_fixedsize_stack which models the stack-allocator concept. It appends a guard page at the end of each stack to protect against exceeding the stack. If the guard page is accessed (read @@ -4082,10 +3519,10 @@ implementation's - Using protected_fixedsize_stack is expensive. + Using protected_fixedsize_stack is expensive. Launching a new fiber with a stack of this type incurs the overhead of setting the memory protection; once allocated, this stack is just as efficient to - use as fixedsize_stack. + use as fixedsize_stack. @@ -4194,9 +3631,9 @@ implementation's - Boost.Fiber provides the class pooled_fixedsize_stack which + Boost.Fiber provides the class pooled_fixedsize_stack which models the stack-allocator - concept. In contrast to protected_fixedsize_stack it + concept. In contrast to protected_fixedsize_stack it does not append a guard page at the end of each stack. The memory is managed internally by boost::pool - Boost.Fiber provides the class fixedsize_stack which + Boost.Fiber provides the class fixedsize_stack which models the stack-allocator - concept. In contrast to protected_fixedsize_stack it + concept. In contrast to protected_fixedsize_stack it does not append a guard page at the end of each stack. 
The memory is simply managed by std::malloc() @@ -4445,19 +3882,19 @@ implementation's - Boost.Fiber supports usage of a segmented_stack, + Boost.Fiber supports usage of a segmented_stack, i.e. the stack grows on demand. The fiber is created with a minimal stack size - which will be increased as required. Class segmented_stack models + which will be increased as required. Class segmented_stack models the stack-allocator concept. - In contrast to protected_fixedsize_stack and - fixedsize_stack it creates a stack which grows on demand. + In contrast to protected_fixedsize_stack and + fixedsize_stack it creates a stack which grows on demand. Segmented stacks are currently only supported by gcc from version 4.7 and clang from version 3.4 onwards. In order to use - a segmented_stack Boost.Fiber + a segmented_stack Boost.Fiber must be built with property segmented-stacks, e.g. toolset=gcc segmented-stacks=on at @@ -4557,7 +3994,7 @@ implementation's - If the library is compiled for segmented stacks, segmented_stack is + If the library is compiled for segmented stacks, segmented_stack is the only available stack allocator. @@ -4605,8 +4042,8 @@ implementation's }; - mutex provides an exclusive-ownership mutex. At most one fiber - can own the lock on a given instance of mutex at any time. Multiple + mutex provides an exclusive-ownership mutex. At most one fiber + can own the lock on a given instance of mutex at any time. Multiple concurrent calls to lock(), try_lock() and unlock}; - timed_mutex provides an exclusive-ownership mutex. At most - one fiber can own the lock on a given instance of timed_mutex at + timed_mutex provides an exclusive-ownership mutex. At most + one fiber can own the lock on a given instance of timed_mutex at any time. Multiple concurrent calls to lock(), try_lock(), try_lock_until Attempt to obtain ownership for the current fiber. Blocks until ownership can be obtained, or the specified time is reached. 
If the specified - time has already passed, behaves as timed_mutex::try_lock(). + time has already passed, behaves as timed_mutex::try_lock(). @@ -5051,7 +4488,7 @@ implementation's Attempt to obtain ownership for the current fiber. Blocks until ownership can be obtained, or the specified time is reached. If the specified - time has already passed, behaves as timed_mutex::try_lock(). + time has already passed, behaves as timed_mutex::try_lock(). @@ -5110,13 +4547,13 @@ implementation's }; - recursive_mutex provides an exclusive-ownership recursive - mutex. At most one fiber can own the lock on a given instance of recursive_mutex at + recursive_mutex provides an exclusive-ownership recursive + mutex. At most one fiber can own the lock on a given instance of recursive_mutex at any time. Multiple concurrent calls to lock(), try_lock() and unlock() shall be permitted. A fiber that already - has exclusive ownership of a given recursive_mutex instance + has exclusive ownership of a given recursive_mutex instance can call lock() or try_lock() to acquire an additional level of ownership of the mutex. unlockThrows: - fiber_interrupted + Nothing @@ -5258,16 +4695,16 @@ implementation's }; - recursive_timed_mutex provides an exclusive-ownership + recursive_timed_mutex provides an exclusive-ownership recursive mutex. At most one fiber can own the lock on a given instance of - recursive_timed_mutex at any time. Multiple concurrent + recursive_timed_mutex at any time. Multiple concurrent calls to lock(), try_lock(), try_lock_for(), try_lock_until() and unlock() shall be permitted. A fiber that already has exclusive ownership of a given - recursive_timed_mutex instance can call recursive_timed_mutex instance can call lock(), try_lock(), try_lock_for() @@ -5299,7 +4736,7 @@ implementation's Throws: - fiber_interrupted + Nothing @@ -5401,7 +4838,7 @@ implementation's Attempt to obtain ownership for the current fiber. 
Blocks until ownership can be obtained, or the specified time is reached. If the specified - time has already passed, behaves as recursive_timed_mutex::try_lock(). + time has already passed, behaves as recursive_timed_mutex::try_lock(). @@ -5442,7 +4879,7 @@ implementation's Attempt to obtain ownership for the current fiber. Blocks until ownership can be obtained, or the specified time is reached. If the specified - time has already passed, behaves as recursive_timed_mutex::try_lock(). + time has already passed, behaves as recursive_timed_mutex::try_lock(). @@ -5480,7 +4917,7 @@ implementation's class condition_variable_any; - The class condition_variable provides a mechanism + The class condition_variable provides a mechanism for a fiber to wait for notification from another fiber. When the fiber awakens from the wait, then it checks to see if the appropriate condition is now true, and continues if so. If the condition is not true, then the fiber calls @@ -5505,9 +4942,9 @@ implementation's Notice that the lk is passed - to condition_variable::wait(): waitcondition_variable::wait(): wait() will atomically add the fiber to the set - of fibers waiting on the condition variable, and unlock the mutex. + of fibers waiting on the condition variable, and unlock the mutex. When the fiber is awakened, the mutex will be locked again before the call to wait() returns. This allows other fibers to acquire @@ -5531,8 +4968,8 @@ implementation's In the meantime, another fiber sets data_ready to true, and then calls either - condition_variable::notify_one() or condition_variable::notify_all() on - the condition_variable cond + condition_variable::notify_one() or condition_variable::notify_all() on + the condition_variable cond to wake one waiting fiber or all the waiting fibers respectively. 
void retrieve_data(); @@ -5549,9 +4986,9 @@ implementation's } - Note that the same mutex is locked before the shared data is updated, + Note that the same mutex is locked before the shared data is updated, but that the mutex does not - have to be locked across the call to condition_variable::notify_one(). + have to be locked across the call to condition_variable::notify_one(). Locking is important because the synchronization objects provided by - Boost.Fiber provides both condition_variable and - condition_variable_any. boostBoost.Fiber provides both condition_variable and + condition_variable_any. boost::fibers::condition_variable can only wait on std::unique_lock< boost::fibers:: mutex > + role="special">::mutex > while boost::fibers::condition_variable_any can wait on user-defined @@ -5580,10 +5017,10 @@ implementation's Wakeups - Neither condition_variable nor condition_variable_any are - subject to spurious wakeup: condition_variable::wait() can - only wake up when condition_variable::notify_one() or - condition_variable::notify_all() is called. Even + Neither condition_variable nor condition_variable_any are + subject to spurious wakeup: condition_variable::wait() can + only wake up when condition_variable::notify_one() or + condition_variable::notify_all() is called. Even so, it is prudent to use one of the wait( lock, predicate Because producer fibers might push() - items to the queue in bursts, they call condition_variable::notify_all() rather - than condition_variable::notify_one(). + items to the queue in bursts, they call condition_variable::notify_all() rather + than condition_variable::notify_one(). - But a given consumer fiber might well wake up from condition_variable::wait() and + But a given consumer fiber might well wake up from condition_variable::wait() and find the queue empty(), because other consumer fibers might already have processed all pending items. 
@@ -5921,9 +5358,7 @@ implementation's fiber_error if an error - occurs, fiber_interrupted - if the wait was interrupted by a call to fiber::interrupt() on - the fiber object associated with the current fiber of execution. + occurs. @@ -5935,7 +5370,7 @@ implementation's concurrently calling wait on *this must wait on lk objects - governing the same mutex. Three distinct + governing the same mutex. Three distinct objects are involved in any condition_variable_any::wait() call: the condition_variable_any itself, the mutex coordinating access between fibers and a local lock object (e.g. fiber_error if an error - occurs, fiber_interrupted - if the wait was interrupted by a call to fiber::interrupt() on - the fiber object associated with the current fiber of execution - or timeout-related exceptions. + occurs or timeout-related exceptions. @@ -6078,7 +5510,7 @@ implementation's Note: - See Note for condition_variable_any::wait(). + See Note for condition_variable_any::wait(). @@ -6174,10 +5606,7 @@ implementation's fiber_error if an error - occurs, fiber_interrupted - if the wait was interrupted by a call to fiber::interrupt() on - the fiber object associated with the current fiber of execution - or timeout-related exceptions. + occurs or timeout-related exceptions. @@ -6212,7 +5641,7 @@ implementation's Note: - See Note for condition_variable_any::wait(). + See Note for condition_variable_any::wait(). @@ -6476,9 +5905,7 @@ implementation's fiber_error if an error - occurs, fiber_interrupted - if the wait was interrupted by a call to fiber::interrupt() on - the fiber object associated with the current fiber of execution. + occurs. @@ -6490,7 +5917,7 @@ implementation's concurrently calling wait on *this must wait on lk objects - governing the same mutex. Three distinct + governing the same mutex. 
Three distinct objects are involved in any condition_variable::wait() call: the condition_variable itself, the mutex coordinating access between fibers and a local lock object (e.g. fiber_error if an error - occurs, fiber_interrupted - if the wait was interrupted by a call to fiber::interrupt() on - the fiber object associated with the current fiber of execution - or timeout-related exceptions. + occurs or timeout-related exceptions. @@ -6632,7 +6056,7 @@ implementation's Note: - See Note for condition_variable::wait(). + See Note for condition_variable::wait(). @@ -6728,10 +6152,7 @@ implementation's fiber_error if an error - occurs, fiber_interrupted - if the wait was interrupted by a call to fiber::interrupt() on - the fiber object associated with the current fiber of execution - or timeout-related exceptions. + occurs or timeout-related exceptions. @@ -6766,7 +6187,7 @@ implementation's Note: - See Note for condition_variable::wait(). + See Note for condition_variable::wait(). @@ -6790,7 +6211,7 @@ implementation's the first of them has completed. You might be tempted to use a barrier(2) as the synchronization - mechanism, making each new fiber call its barrier::wait() method, + mechanism, making each new fiber call its barrier::wait() method, then calling wait() in the launching fiber to wait until the first other fiber completes. @@ -6883,7 +6304,7 @@ implementation's }; - Instances of barrier are not copyable or movable. + Instances of barrier are not copyable or movable. Constructor @@ -6960,15 +6381,6 @@ implementation's - - Notes: - - - wait() - is one of the predefined interruption-points. - - -
@@ -7291,7 +6703,7 @@ implementation's Throws: - fiber_interrupted + Nothing @@ -7325,7 +6737,7 @@ implementation's fiber_error if *this - is closed or fiber_interrupted + is closed @@ -7410,7 +6822,6 @@ implementation's Throws: - fiber_interrupted or timeout-related exceptions. @@ -7455,7 +6866,6 @@ implementation's Throws: - fiber_interrupted or timeout-related exceptions. @@ -7758,7 +7168,6 @@ implementation's Throws: - fiber_interrupted or exceptions thrown by memory allocation and copying or moving va. @@ -7817,7 +7226,6 @@ implementation's Throws: - fiber_interrupted, exceptions thrown by memory allocation and copying or moving va or timeout-related exceptions. @@ -7874,7 +7282,6 @@ implementation's Throws: - fiber_interrupted or exceptions thrown by memory allocation and copying or moving va or timeout-related exceptions. @@ -7956,7 +7363,7 @@ implementation's Throws: - fiber_interrupted + Nothing @@ -7995,7 +7402,7 @@ implementation's fiber_error if *this - is closed or fiber_interrupted + is closed @@ -8090,7 +7497,6 @@ implementation's Throws: - fiber_interrupted or timeout-related exceptions. @@ -8141,7 +7547,6 @@ implementation's Throws: - fiber_interrupted or timeout-related exceptions. @@ -8159,34 +7564,34 @@ implementation's in response to external stimuli, or on-demand. - This is done through the provision of four class templates: future<> and - shared_future<> which are used to retrieve the asynchronous - results, and promise<> and packaged_task<> which + This is done through the provision of four class templates: future<> and + shared_future<> which are used to retrieve the asynchronous + results, and promise<> and packaged_task<> which are used to generate the asynchronous results. - An instance of future<> holds the one and only reference + An instance of future<> holds the one and only reference to a result. 
Ownership can be transferred between instances using the move constructor or move-assignment operator, but at most one instance holds a reference to a given asynchronous result. When the result is ready, it is - returned from future::get() by rvalue-reference to allow the result + returned from future::get() by rvalue-reference to allow the result to be moved or copied as appropriate for the type. - On the other hand, many instances of shared_future<> may + On the other hand, many instances of shared_future<> may reference the same result. Instances can be freely copied and assigned, and - shared_future::get() + shared_future::get() returns a const - reference so that multiple calls to shared_future::get() + reference so that multiple calls to shared_future::get() are - safe. You can move an instance of future<> into an instance - of shared_future<>, thus transferring ownership + safe. You can move an instance of future<> into an instance + of shared_future<>, thus transferring ownership of the associated asynchronous result, but not vice-versa. - fibers::async() is a simple way of running asynchronous tasks. + fibers::async() is a simple way of running asynchronous tasks. A call to async() - spawns a fiber and returns a future<> that will deliver + spawns a fiber and returns a future<> that will deliver the result of the fiber function. @@ -8195,15 +7600,15 @@ are asynchronous values - You can set the value in a future with either a promise<> or - a packaged_task<>. A packaged_task<> is + You can set the value in a future with either a promise<> or + a packaged_task<>. A packaged_task<> is a callable object with void return that wraps a function or callable object returning the specified type. - When the packaged_task<> is invoked, it invokes the + When the packaged_task<> is invoked, it invokes the contained function in turn, and populates a future with the contained function's return value. 
This is an answer to the perennial question: How do I return a value from a fiber? Package the function you wish to run - as a packaged_task<> and pass the packaged task to + as a packaged_task<> and pass the packaged task to the fiber constructor. The future retrieved from the packaged task can then be used to obtain the return value. If the function throws an exception, that is stored in the future in place of the return value. @@ -8224,7 +7629,7 @@ are assert(fi.get()==42); - A promise<> is a bit more low level: it just provides explicit + A promise<> is a bit more low level: it just provides explicit functions to store a value or an exception in the associated future. A promise can therefore be used where the value might come from more than one possible source. @@ -8251,22 +7656,22 @@ are state - Behind a promise<> and its future<> lies + Behind a promise<> and its future<> lies an unspecified object called their shared state. The shared state is what will actually hold the async result (or the exception). - The shared state is instantiated along with the promise<>. + The shared state is instantiated along with the promise<>. Aside from its originating promise<>, a future<> holds - a unique reference to a particular shared state. However, multiple shared_future<> instances + role="special"><>, a future<> holds + a unique reference to a particular shared state. However, multiple shared_future<> instances can reference the same underlying shared state. - As packaged_task<> and fibers::async() are - implemented using promise<>, discussions of shared state + As packaged_task<> and fibers::async() are + implemented using promise<>, discussions of shared state apply to them as well. @@ -8276,7 +7681,7 @@ are future_status - Timed wait-operations ( future::wait_for() and future::wait_until()) + Timed wait-operations (future::wait_for() and future::wait_until()) return the state of the future. 
enum class future_status { @@ -8330,7 +7735,7 @@ are - A future<> contains a shared + A future<> contains a shared state which is not shared with any other future. template< typename R > @@ -8457,18 +7862,18 @@ are - instantiate promise<> + instantiate promise<> obtain its future<> - via promise::get_future() + via promise::get_future() - launch fiber, capturing promisefiber, capturing promise<> @@ -8479,7 +7884,7 @@ are - call promise::set_value() + call promise::set_value() @@ -8564,7 +7969,7 @@ are Effects: - Move the state to a shared_future<>. + Move the state to a shared_future<>. @@ -8572,7 +7977,7 @@ are Returns: - a shared_future<> containing the shared + a shared_future<> containing the shared state formerly belonging to *this. @@ -8623,9 +8028,9 @@ are Returns: - Waits until promise::set_value() or promise::set_exception() is - called. If promise::set_value() is called, returns - the value. If promise::set_exception() is called, + Waits until promise::set_value() or promise::set_exception() is + called. If promise::set_value() is called, returns + the value. If promise::set_exception() is called, throws the indicated exception. @@ -8646,7 +8051,6 @@ are future_error with error condition future_errc::no_state, - fiber_interrupted, future_errc::broken_promise. Any exception passed to promise::Returns: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called. If set_value() is called, returns a default-constructed std:: future_error with error condition future_errc::no_state - or fiber_interrupted. + role="special">::no_state. @@ -8708,7 +8111,7 @@ are get_exception_ptr() does not invalidate the future. After calling get_exception_ptr(), you may still call future::get(). + role="special">(), you may still call future::get(). 
@@ -8727,7 +8130,7 @@ are Effects: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called. @@ -8738,8 +8141,7 @@ are future_error with error condition future_errc::no_state - or fiber_interrupted. + role="special">::no_state. @@ -8760,7 +8162,7 @@ are Effects: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_duration has passed. @@ -8782,7 +8184,6 @@ are future_error with error condition future_errc::no_state - or fiber_interrupted or timeout-related exceptions. @@ -8804,7 +8205,7 @@ are Effects: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_time has passed. @@ -8826,7 +8227,6 @@ are future_error with error condition future_errc::no_state - or fiber_interrupted or timeout-related exceptions. @@ -8840,8 +8240,8 @@ are - A shared_future<> contains a shared - state which might be shared with other shared_future<> instances. + A shared_future<> contains a shared + state which might be shared with other shared_future<> instances. template< typename R > class shared_future { @@ -9084,9 +8484,9 @@ are Returns: - Waits until promise::set_value() or promise::set_exception() is - called. If promise::set_value() is called, returns - the value. If promise::set_exception() is called, + Waits until promise::set_value() or promise::set_exception() is + called. If promise::set_value() is called, returns + the value. If promise::set_exception() is called, throws the indicated exception. @@ -9107,7 +8507,6 @@ are future_error with error condition future_errc::no_state, - fiber_interrupted, future_errc::broken_promise. Any exception passed to promise::Returns: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called. 
If set_value() is called, returns a default-constructed std:: future_error with error condition future_errc::no_state - or fiber_interrupted. + role="special">::no_state. @@ -9169,7 +8567,7 @@ are get_exception_ptr() does not invalidate the shared_future. After calling get_exception_ptr(), you may still call shared_future::get(). + role="special">(), you may still call shared_future::get(). @@ -9189,7 +8587,7 @@ are Effects: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called. @@ -9200,8 +8598,7 @@ are future_error with error condition future_errc::no_state - or fiber_interrupted. + role="special">::no_state. @@ -9222,7 +8619,7 @@ are Effects: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_duration has passed. @@ -9244,7 +8641,6 @@ are future_error with error condition future_errc::no_state - or fiber_interrupted or timeout-related exceptions. @@ -9266,7 +8662,7 @@ are Effects: - Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_time has passed. @@ -9288,7 +8684,6 @@ are future_error with error condition future_errc::no_state - or fiber_interrupted or timeout-related exceptions. @@ -9310,6 +8705,14 @@ are > async( Function && fn, Args && ... args); +template< class Function, class ... Args > +future< + std::result_of_t< + std::decay_t< Function >( std::decay_t< Args > ... ) + > +> +async( launch_policy lpol, Function && fn, Args && ... args); + template< typename StackAllocator, class Function, class ... Args > future< std::result_of_t< @@ -9317,6 +8720,15 @@ are > > async( std::allocator_arg_t, StackAllocator salloc, Function && fn, Args && ... args); + +template< typename StackAllocator, class Function, class ... 
Args > +future< + std::result_of_t< + std::decay_t< Function >( std::decay_t< Args > ... ) + > +> +async( launch_policy lpol, std::allocator_arg_t, StackAllocator salloc, + Function && fn, Args && ... args); @@ -9325,7 +8737,7 @@ are Executes fn in a - fiber and returns an associated future<>. + fiber and returns an associated future<>. @@ -9356,11 +8768,17 @@ are Notes: - The overload accepting std::allocator_arg_t uses the + The overloads accepting std::allocator_arg_t use the passed StackAllocator when constructing the launched fiber. + The overloads accepting launch_policy use the passed + launch_policy when + constructing the launched fiber. + The default launch_policy + is post, as for the + fiber constructor. @@ -9375,8 +8793,8 @@ are <anchor id="class_promise"/><link linkend="fiber.synchronization.futures.promise">Template <code><phrase role="identifier">promise</phrase><phrase role="special"><></phrase></code></link> - A promise<> provides a mechanism to store a value (or - exception) that can later be retrieved from the corresponding future<> object. + A promise<> provides a mechanism to store a value (or + exception) that can later be retrieved from the corresponding future<> object. promise<> and future<> communicate via their underlying shared state. @@ -9528,7 +8946,7 @@ are if shared state is ready; otherwise stores future_error with error condition future_errc::broken_promise - as if by promise::set_exception(): the shared + as if by promise::set_exception(): the shared state is set ready. @@ -9615,7 +9033,7 @@ are Returns: - A future<> with the same shared + A future<> with the same shared state.
@@ -9730,7 +9148,7 @@ are <anchor id="class_packaged_task"/><link linkend="fiber.synchronization.futures.packaged_task">Template <code><phrase role="identifier">packaged_task</phrase><phrase role="special"><></phrase></code></link> - A packaged_task<> wraps a callable target that + A packaged_task<> wraps a callable target that returns a value so that the return value can be computed asynchronously. @@ -9747,20 +9165,20 @@ are - Call packaged_task::get_future() and capture - the returned future<> instance. + Call packaged_task::get_future() and capture + the returned future<> instance. - Launch a fiber to run the new packaged_taskfiber to run the new packaged_task<>, passing any arguments required by the original callable. - Call fiber::detach() on the newly-launched fiber::detach() on the newly-launched fiber. @@ -9772,7 +9190,7 @@ are - This is, in fact, pretty much what fibers::async() + This is, in fact, pretty much what fibers::async() encapsulates. template< class R, typename ... Args > @@ -9945,7 +9363,7 @@ encapsulates. if shared state is ready; otherwise stores future_error with error condition future_errc::broken_promise - as if by promise::set_exception(): the shared + as if by promise::set_exception(): the shared state is set ready. @@ -10063,7 +9481,7 @@ encapsulates. Returns: - A future<> with the same shared + A future<> with the same shared state. @@ -10099,9 +9517,9 @@ encapsulates. Invokes the stored callable target. Any exception thrown by the callable target fn is stored in the shared state as if by - promise::set_exception(). Otherwise, the value + promise::set_exception(). Otherwise, the value returned by fn is - stored in the shared state as if by promise::set_value(). + stored in the shared state as if by promise::set_value(). @@ -10187,11 +9605,11 @@ encapsulates. 
at fiber exit - When a fiber exits, the objects associated with each fiber_specific_ptr instance + When a fiber exits, the objects associated with each fiber_specific_ptr instance are destroyed. By default, the object pointed to by a pointer p is destroyed by invoking delete p, - but this can be overridden for a specific instance of fiber_specific_ptr by + but this can be overridden for a specific instance of fiber_specific_ptr by providing a cleanup routine func to the constructor. In this case, the object is destroyed by invoking func(pEffects: - Construct a fiber_specific_ptr object for storing + Construct a fiber_specific_ptr object for storing a pointer to an object of type T specific to each fiber. When reset() is called, or the fiber exits, fiber_specific_ptr calls + role="special">() is called, or the fiber exits, fiber_specific_ptr calls fn(this->get()). @@ -10290,7 +9708,7 @@ encapsulates. Requires: - All the fiber specific instances associated to this fiber_specific_ptr + All the fiber specific instances associated to this fiber_specific_ptr (except maybe the one associated to this fiber) must be nullptr. @@ -10314,7 +9732,7 @@ encapsulates. The requirement is an implementation restriction. If the destructor promised to delete instances for all fibers, the implementation would be forced to maintain a list of all the fibers having an associated specific ptr, - which is against the goal of fiber specific data. In general, a fiber_specific_ptr should + which is against the goal of fiber specific data. In general, a fiber_specific_ptr should outlive the fibers that use it. @@ -10323,7 +9741,7 @@ encapsulates. Care needs to be taken to ensure that any fibers still running after an instance - of fiber_specific_ptr has been destroyed do not call + of fiber_specific_ptr has been destroyed do not call any member functions on that instance. @@ -10357,7 +9775,7 @@ encapsulates. 
- The initial value associated with an instance of fiber_specific_ptr is + The initial value associated with an instance of fiber_specific_ptr is nullptr for each fiber. @@ -10549,7 +9967,7 @@ encapsulates. role="bold">Boost.Fiber implicitly creates a dispatcher fiber for each thread — this cannot migrate either. - , + , Of course it would be problematic to migrate a fiber that relies on thread-local storage. @@ -10569,13 +9987,13 @@ encapsulates. due to increased latency of memory access. - Only fibers that are contained in sched_algorithm's ready + Only fibers that are contained in sched_algorithm's ready queue can migrate between threads. You cannot migrate a running fiber, nor one that is blocked. InBoost.Fiber a fiber is migrated by invoking - context::migrate() on the context instance for a + context::migrate() on the context instance for a fiber already associated with the destination thread, passing the context for the fiber to be migrated. @@ -10587,7 +10005,7 @@ encapsulates. In the example work_sharing.cpp multiple worker fibers are created on the main thread. Each fiber gets a character as parameter at construction. This character is printed out ten times. Between - each iteration the fiber calls this_fiber::yield(). That puts + each iteration the fiber calls this_fiber::yield(). That puts the fiber in the ready queue of the fiber-scheduler shared_ready_queue, running in the current thread. The next fiber ready to be executed is dequeued from the shared ready queue and resumed by shared_ready_queue @@ -10696,7 +10114,7 @@ encapsulates. The start of the threads is synchronized with a barrier. The main fiber of each thread (including main thread) is suspended until all worker fibers are - complete. When the main fiber returns from condition_variable::wait(), + complete. When the main fiber returns from condition_variable::wait(), the thread terminates: the main thread joins all other threads. @@ -10813,13 +10231,13 @@ encapsulates. 
The fiber scheduler shared_ready_queue is like round_robin, except that it shares a common ready queue among all participating threads. A thread - participates in this pool by executing use_scheduling_algorithm() + participates in this pool by executing use_scheduling_algorithm() before any other Boost.Fiber operation. The important point about the ready queue is that it's a class static, common - to all instances of shared_ready_queue. Fibers that are enqueued via sched_algorithm::awakened() (fibers + to all instances of shared_ready_queue. Fibers that are enqueued via sched_algorithm::awakened() (fibers that are ready to be resumed) are thus available to all threads. It is required to reserve a separate, scheduler-specific queue for the thread's main fiber and dispatcher fibers: these may not be shared between @@ -10854,7 +10272,7 @@ before - When sched_algorithm::pick_next() gets called inside + When sched_algorithm::pick_next() gets called inside one thread, a fiber is dequeued from rqueue_ and will be resumed in that thread. @@ -10904,22 +10322,21 @@ before
<anchor id="callbacks"/><link linkend="fiber.callbacks">Integrating Fibers with Asynchronous Callbacks</link> - - Overview - - - One of the primary benefits of Boost.Fiber - is the ability to use asynchronous operations for efficiency, while at the - same time structuring the calling code as if the operations - were synchronous. Asynchronous operations provide completion notification in - a variety of ways, but most involve a callback function of some kind. This - section discusses tactics for interfacing Boost.Fiber - with an arbitrary async operation. - - - For purposes of illustration, consider the following hypothetical API: - - +
+ <link linkend="fiber.callbacks.overview">Overview</link> + + One of the primary benefits of Boost.Fiber + is the ability to use asynchronous operations for efficiency, while at the + same time structuring the calling code as if the operations + were synchronous. Asynchronous operations provide completion notification + in a variety of ways, but most involve a callback function of some kind. + This section discusses tactics for interfacing Boost.Fiber + with an arbitrary async operation. + + + For purposes of illustration, consider the following hypothetical API: + + class AsyncAPI { public: // constructor acquires some resource that can be read and written @@ -10938,51 +10355,50 @@ before // ... other operations ... }; - - - The significant points about each of init_write() and init_read() are: - - - - - The AsyncAPI method only - initiates the operation. It returns immediately, while the requested operation - is still pending. - - - - - The method accepts a callback. When the operation completes, the callback - is called with relevant parameters (error code, data if applicable). - - - - - We would like to wrap these asynchronous methods in functions that appear synchronous - by blocking the calling fiber until the operation completes. This lets us use - the wrapper function's return value to deliver relevant data. - - - - promise<> and future<> are your friends - here. - - - Return - Errorcode - - - The AsyncAPI::init_write() - callback passes only an errorcode. - If we simply want the blocking wrapper to return that errorcode, - this is an extremely straightforward use of promise<> and - future<>: - - + + The significant points about each of init_write() and init_read() are: + + + + + The AsyncAPI method only + initiates the operation. It returns immediately, while the requested + operation is still pending. + + + + + The method accepts a callback. When the operation completes, the callback + is called with relevant parameters (error code, data if applicable). 
+ + + + + We would like to wrap these asynchronous methods in functions that appear + synchronous by blocking the calling fiber until the operation completes. + This lets us use the wrapper function’s return value to deliver relevant data. + + + + promise<> and future<> are your friends + here. + + +
+
+ <link linkend="fiber.callbacks.return_errorcode">Return Errorcode</link> + + The AsyncAPI::init_write() + callback passes only an errorcode. + If we simply want the blocking wrapper to return that errorcode, + this is an extremely straightforward use of promise<> and + future<>: + + AsyncAPI::errorcode write_ec( AsyncAPI & api, std::string const& data) { boost::fibers::promise< AsyncAPI::errorcode > promise; boost::fibers::future< AsyncAPI::errorcode > future( promise.get_future() ); @@ -11001,54 +10417,53 @@ before return future.get(); } - - - All we have to do is: - - - - - Instantiate a promise<> - of correct type. - - - - - Obtain its future<>. - - - - - Arrange for the callback to call promise::set_value(). - - - - - Block on future::get(). - - - - - - This tactic for resuming a pending fiber works even if the callback is called - on a different thread than the one on which the initiating fiber is running. - In fact, the example program's - dummy AsyncAPI implementation - illustrates that: it simulates async I/O by launching a new thread that sleeps - briefly and then calls the relevant callback. - - - Success - or Exception - - - A wrapper more aligned with modern C++ practice would use an exception, rather - than an errorcode, to communicate - failure to its caller. This is straightforward to code in terms of write_ec(): - - + + All we have to do is: + + + + + Instantiate a promise<> of correct type. + + + + + Obtain its future<>. + + + + + Arrange for the callback to call promise::set_value(). + + + + + Block on future::get(). + + + + + + This tactic for resuming a pending fiber works even if the callback is + called on a different thread than the one on which the initiating fiber + is running. In fact, the + example program’s dummy AsyncAPI + implementation illustrates that: it simulates async I/O by launching a + new thread that sleeps briefly and then calls the relevant callback. + + +
+
+ <link linkend="fiber.callbacks.success_or_exception">Success or Exception</link> + + A wrapper more aligned with modern C++ practice would use an exception, rather + than an errorcode, to communicate + failure to its caller. This is straightforward to code in terms of write_ec(): + + void write( AsyncAPI & api, std::string const& data) { AsyncAPI::errorcode ec = write_ec( api, data); if ( ec) { @@ -11056,22 +10471,22 @@ before } } - - - The point is that since each fiber has its own stack, you need not repeat messy - boilerplate: normal encapsulation works. - - - Return - Errorcode or Data - - - Things get a bit more interesting when the async operation's callback passes - multiple data items of interest. One approach would be to use std::pair<> to capture both: - - + + + The point is that since each fiber has its own stack, you need not repeat + messy boilerplate: normal encapsulation works. + +
+
+ <link linkend="fiber.callbacks.return_errorcode_or_data">Return Errorcode + or Data</link> + + Things get a bit more interesting when the async operation’s callback passes + multiple data items of interest. One approach would be to use std::pair<> to capture both: + + std::pair< AsyncAPI::errorcode, std::string > read_ec( AsyncAPI & api) { typedef std::pair< AsyncAPI::errorcode, std::string > result_pair; boost::fibers::promise< result_pair > promise; @@ -11084,35 +10499,35 @@ before return future.get(); } - - - Once you bundle the interesting data in std::pair<>, - the code is effectively identical to write_ec(). You can call it like this: - - + + + Once you bundle the interesting data in std::pair<>, the code is effectively identical + to write_ec(). + You can call it like this: + + std::tie( ec, data) = read_ec( api); - - - - Data - or Exception - - - But a more natural API for a function that obtains data is to return only the - data on success, throwing an exception on error. - - - As with write() - above, it's certainly possible to code a read() wrapper in terms of read_ec(). But since a given application is unlikely - to need both, let's code read() from scratch, leveraging promise::set_exception(): - - + +
+
+ <anchor id="Data_or_Exception"/><link linkend="fiber.callbacks.data_or_exception">Data + or Exception</link> + + But a more natural API for a function that obtains data is to return only + the data on success, throwing an exception on error. + + + As with write() + above, it’s certainly possible to code a read() wrapper in terms of read_ec(). But since a given application is unlikely + to need both, let’s code read() from scratch, leveraging promise::set_exception(): + + std::string read( AsyncAPI & api) { boost::fibers::promise< std::string > promise; boost::fibers::future< std::string > future( promise.get_future() ); @@ -11130,25 +10545,25 @@ before return future.get(); } - - - future::get() will do the right thing, either returning std::string - or throwing an exception. - - - Success/Error - Virtual Methods - - - One classic approach to completion notification is to define an abstract base - class with success() - and error() - methods. Code wishing to perform async I/O must derive a subclass, override - each of these methods and pass the async operation a pointer to a subclass - instance. The abstract base class might look like this: - - + + + future::get() will do the right thing, either returning std::string + or throwing an exception. + +
+
+ <link linkend="fiber.callbacks.success_error_virtual_methods">Success/Error + Virtual Methods</link> + + One classic approach to completion notification is to define an abstract + base class with success() + and error() + methods. Code wishing to perform async I/O must derive a subclass, override + each of these methods and pass the async operation a pointer to a subclass + instance. The abstract base class might look like this: + + // every async operation receives a subclass instance of this abstract base // class through which to communicate its result struct Response { @@ -11161,20 +10576,20 @@ before virtual void error( AsyncAPIBase::errorcode ec) = 0; }; - - - Now the AsyncAPI operation - might look more like this: - - + + + Now the AsyncAPI operation + might look more like this: + + // derive Response subclass, instantiate, pass Response::ptr void init_read( Response::ptr); - - - We can address this by writing a one-size-fits-all PromiseResponse: - - + + + We can address this by writing a one-size-fits-all PromiseResponse: + + class PromiseResponse: public Response { public: // called if the operation succeeds @@ -11197,13 +10612,13 @@ before boost::fibers::promise< std::string > promise_; }; - - - Now we can simply obtain the future<> from that PromiseResponse - and wait on its get(): - - + + + Now we can simply obtain the future<> from that PromiseResponse + and wait on its get(): + + std::string read( AsyncAPI & api) { // Because init_read() requires a shared_ptr, we must allocate our // ResponsePromise on the heap, even though we know its lifespan. @@ -11215,104 +10630,84 @@ before return future.get(); } - - - The source code above is found in adapt_callbacks.cpp - and adapt_method_calls.cpp. - - - - Then - There's Boost.Asio - - - Since the simplest form of Boost.Asio asynchronous operation completion token - is a callback function, we could apply the same tactics for Asio as for our - hypothetical AsyncAPI asynchronous - operations. 
- - - Fortunately we need not. Boost.Asio incorporates a mechanism - - This mechanism has been proposed as a conventional way to allow the caller - of an async function to specify completion handling: N4045. - by which the caller can customize the notification behavior of - every async operation. Therefore we can construct a completion token - which, when passed to a Boost.Asio - async operation, requests blocking for the calling fiber. - - - A typical Asio async function might look something like this: - per N4045 + The source code above is found in adapt_callbacks.cpp + and adapt_method_calls.cpp. + +
+
+ <anchor id="callbacks_asio"/><link linkend="fiber.callbacks.then_there_s____boost_asio__">Then + There’s <ulink url="http://www.boost.org/doc/libs/release/libs/asio/index.html">Boost.Asio</ulink></link> + + Since the simplest form of Boost.Asio asynchronous operation completion token + is a callback function, we could apply the same tactics for Asio as for our + hypothetical AsyncAPI asynchronous + operations. + + + Fortunately we need not. Boost.Asio incorporates a mechanism + + This mechanism has been proposed as a conventional way to allow the caller + of an arbitrary async function to specify completion handling: N4045. + + by which the caller can customize the notification behavior of + any async operation. Therefore we can construct a completion token + which, when passed to a Boost.Asio + async operation, requests blocking for the calling fiber. + + + A typical Asio async function might look something like this: + + per N4045 + + - - template < ..., class CompletionToken > deduced_return_type async_something( ... , CompletionToken&& token) { // construct handler_type instance from CompletionToken - handler_type<CompletionToken, ...>::type handler(token); + handler_type<CompletionToken, ...>::type handler(token); // construct async_result instance from handler_type - async_result<decltype(handler)> result(handler); + async_result<decltype(handler)> result(handler); // ... arrange to call handler on completion ... // ... initiate actual I/O operation ... - return result.get(); + return result.get(); } - - We will engage that mechanism, which is based on specializing Asio's handler_type<> - template for the CompletionToken - type and the signature of the specific callback. The remainder of this discussion - will refer back to async_something() as the Asio async function under consideration. 
- - - The implementation described below uses lower-level facilities than promise and future - for two reasons: - - - - - The promise mechanism interacts - badly with io_service::stop(). - It produces broken_promise - exceptions. - - - - - If more than one thread is calling the io_service::run() - method, the implementation described below allows resuming the suspended - fiber on whichever thread gets there first with completion notification. - More on this later. - - - - - boost::fibers::asio::yield - is a completion token of this kind. yield - is an instance of yield_t: - - + + We will engage that mechanism, which is based on specializing Asio’s handler_type<> + template for the CompletionToken + type and the signature of the specific callback. The remainder of this discussion + will refer back to async_something() as the Asio async function under consideration. + + + The implementation described below uses lower-level facilities than promise and future + because the promise mechanism + interacts badly with io_service::stop(). + It produces broken_promise + exceptions. + + + boost::fibers::asio::yield is a completion token of this kind. + yield is an instance of + yield_t: + + class yield_t { public: - yield_t( bool hop) : - allow_hop_( hop) { - } + yield_t() = default; /** * @code @@ -11335,40 +10730,34 @@ before //private: // ptr to bound error_code instance if any boost::system::error_code * ec_{ nullptr }; - // allow calling fiber to "hop" to another thread if it could resume more - // quickly that way - bool allow_hop_; }; - - - yield_t is in fact only a placeholder, - a way to trigger Boost.Asio customization. It can bind a boost::system::error_code - for use by the actual handler. 
- - - In fact there are two canonical instances of yield_t - — yield and yield_hop: - - -// canonical instance with allow_hop_ == false -thread_local yield_t yield{ false }; -// canonical instance with allow_hop_ == true -thread_local yield_t yield_hop{ true }; + + + yield_t is in fact only a + placeholder, a way to trigger Boost.Asio customization. It can bind a boost::system::error_code for use by the actual + handler. + + + yield is declared as: + + +// canonical instance +thread_local yield_t yield{}; - - - We'll get to the differences between these shortly. - - - Asio customization is engaged by specializing boost::asio::handler_type<> for yield_t: - - + + + Asio customization is engaged by specializing boost::asio::handler_type<> + for yield_t: + + // Handler type specialisation for fibers::asio::yield. // When 'yield' is passed as a completion handler which accepts only // error_code, use yield_handler<void>. yield_handler will take care of the @@ -11377,23 +10766,22 @@ before struct handler_type< fibers::asio::yield_t, ReturnType( boost::system::error_code) > { typedef fibers::asio::detail::yield_handler< void > type; }; - - - (There are actually four different specializations in detail/yield.hpp, - one for each of the four Asio async callback signatures we expect to have to - support.) - - - The above directs Asio to use yield_handler - as the actual handler for an async operation to which yield - is passed. There's a generic yield_handler<T> - implementation and a yield_handler<void> - specialization. Let's start with the <void> specialization: - - + + + (There are actually four different specializations in detail/yield.hpp, + one for each of the four Asio async callback signatures we expect.) + + + The above directs Asio to use yield_handler + as the actual handler for an async operation to which yield + is passed. There’s a generic yield_handler<T> + implementation and a yield_handler<void> + specialization. 
Let’s start with the <void> specialization: + + // yield_handler<void> is like yield_handler<T> without value_. In fact it's // just like yield_handler_base. template<> @@ -11412,19 +10800,19 @@ before using yield_handler_base::operator(); }; - - - async_something(), - having consulted the handler_type<> traits specialization, instantiates - a yield_handler<void> to - be passed as the actual callback for the async operation. yield_handler's - constructor accepts the yield_t - instance (the yield object - passed to the async function) and passes it along to yield_handler_base: - - + + + async_something(), + having consulted the handler_type<> traits specialization, instantiates + a yield_handler<void> to + be passed as the actual callback for the async operation. yield_handler’s + constructor accepts the yield_t + instance (the yield object + passed to the async function) and passes it along to yield_handler_base: + + // This class encapsulates common elements between yield_handler<T> (capturing // a value to return from asio async function) and yield_handler<void> (no // such value). See yield_handler<T> and its <void> specialization below. Both @@ -11458,16 +10846,14 @@ before ycomp_->completed_ = true; // set the error_code bound by yield_t * yt_.ec_ = ec; - // Are we permitted to wake up the suspended fiber on this thread, the - // thread that called the completion handler? - if ( ( ! ctx_->is_context( fibers::type::pinned_context) ) && yt_.allow_hop_) { - // We must not migrate a pinned_context to another thread. If this - // isn't a pinned_context, and the application passed yield_hop - // rather than yield, migrate this fiber to the running thread. - fibers::context::active()->migrate( ctx_); + // If ctx_ is still active, e.g. because the async operation + // immediately called its callback (this method!) before the asio + // async function called async_result_base::get(), we must not set it + // ready. 
+ if ( fibers::context::active() != ctx_ ) { + // wake the fiber + fibers::context::active()->set_ready( ctx_); } - // either way, wake the fiber - fibers::context::active()->set_ready( ctx_); } //private: @@ -11478,38 +10864,38 @@ before yield_completion * ycomp_{ nullptr }; }; - - - yield_handler_base stores a - copy of the yield_t instance - — which, as shown above, is only an error_code - and a bool. It also captures the - context* for the currently-running fiber by calling context::active(). - - - You will notice that yield_handler_base - has one more data member (ycomp_) - that is initialized to nullptr - by its constructor — though its operator()() method relies on ycomp_ - being non-null. More on this in a moment. - - - Having constructed the yield_handler<void> - instance, async_something() - goes on to construct an async_result - specialized for the handler_type<>::type: - in this case, async_result<yield_handler<void>>. - It passes the yield_handler<void> - instance to the new async_result - instance. - - + + + yield_handler_base stores + a copy of the yield_t instance + — which, as shown above, contains only an error_code*. It also captures the context* + for the currently-running fiber by calling context::active(). + + + You will notice that yield_handler_base + has one more data member (ycomp_) + that is initialized to nullptr + by its constructor — though its operator()() method relies on ycomp_ + being non-null. More on this in a moment. + + + Having constructed the yield_handler<void> + instance, async_something() goes on to construct an async_result + specialized for the handler_type<>::type: + in this case, async_result<yield_handler<void>>. + It passes the yield_handler<void> + instance to the new async_result + instance. + + // Without the need to handle a passed value, our yield_handler<void> // specialization is just like async_result_base. 
template<> @@ -11523,11 +10909,11 @@ before } }; - - - Naturally that leads us straight to async_result_base: - - + + + Naturally that leads us straight to async_result_base: + + // Factor out commonality between async_result<yield_handler<T>> and // async_result<yield_handler<void>> class async_result_base { @@ -11555,7 +10941,6 @@ before if ( ec_) { throw_exception( boost::system::system_error{ ec_ } ); } - boost::this_fiber::interruption_point(); } private: @@ -11566,65 +10951,65 @@ before yield_completion ycomp_{}; }; - - - This is how yield_handler_base::ycomp_ - becomes non-null: async_result_base's - constructor injects a pointer back to its own yield_completion - member. - - - Recall that both of the canonical yield_t - instances yield and yield_hop initialize their error_code* - member ec_ to nullptr. If either of these instances is passed - to async_something() - (ec_ is still nullptr), the copy stored in yield_handler_base - will likewise have null ec_. - async_result_base's constructor - sets yield_handler_base's - yield_t's ec_ - member to point to its own error_code - member. - - - The stage is now set. async_something() initiates the actual async operation, arranging - to call its yield_handler<void> instance - on completion. Let's say, for the sake of argument, that the actual async operation's - callback has signature void(error_code). - - - But since it's an async operation, control returns at once to async_something(). - async_something() - calls async_result<yield_handler<void>>::get(), and - will return its return value. - - - async_result<yield_handler<void>>::get() inherits - async_result_base::get(). - - - async_result_base::get() immediately - calls yield_completion::wait(). - - + + + This is how yield_handler_base::ycomp_ + becomes non-null: async_result_base’s + constructor injects a pointer back to its own yield_completion + member. 
+ + + Recall that the canonical yield_t + instance yield initializes + its error_code* + member ec_ to nullptr. If this instance is passed to async_something() + (ec_ is still nullptr), the copy stored in yield_handler_base will likewise have null + ec_. async_result_base’s + constructor sets yield_handler_base’s + yield_t’s ec_ + member to point to its own error_code + member. + + + The stage is now set. async_something() initiates the actual async operation, arranging + to call its yield_handler<void> + instance on completion. Let’s say, for the sake of argument, that the actual + async operation’s callback has signature void(error_code). + + + But since it’s an async operation, control returns at once to async_something(). + async_something() + calls async_result<yield_handler<void>>::get(), + and will return its return value. + + + async_result<yield_handler<void>>::get() inherits + async_result_base::get(). + + + async_result_base::get() immediately + calls yield_completion::wait(). + + // Bundle a completion bool flag with a spinlock to protect it. struct yield_completion { typedef fibers::detail::spinlock mutex_t; @@ -11647,186 +11032,147 @@ before } }; - - - Supposing that the pending async operation has not yet completed, yield_completion::completed_ will still be false, - and wait() - will call context::suspend() on the currently-running fiber. - - - Other fibers will now have a chance to run. - - - Some time later, the async operation completes. It calls yield_handler<void>::operator()(error_code const&) with an error_code - indicating either success or failure. We'll consider both cases. - - - yield_handler<void> explicitly - inherits operator()(error_code const&) from yield_handler_base. - - - yield_handler_base::operator()(error_code const&) first sets yield_completion::completed_ - true. 
This way, if async_something()'s - async operation completes immediately — if yield_handler_base::operator() - is called even before async_result_base::get() - — the calling fiber will not suspend. - - - The actual error_code produced - by the async operation is then stored through the stored yield_t::ec_ pointer. - If async_something()'s - caller used (e.g.) yield[my_ec] to - bind a local error_code instance, - the actual error_code value - is stored into the caller's variable. Otherwise, it is stored into async_result_base::ec_. - - - Finally we get to the distinction between yield - and yield_hop. - - - As described for context::is_context(), a pinned_context - fiber is special to the library and must never be passed to context::migrate(). - We must detect and avoid that case here. - - - The yield_t::allow_hop_ bool - indicates whether async_something()'s caller is willing to allow the running - fiber to hop to another thread (yield_hop) - or whether s/he insists that the fiber resume on the same thread (yield). - - - If the caller passed yield_hop - to async_something(), - and the running fiber isn't a pinned_context, - yield_handler_base::operator() passes - the context of the original - fiber — the one on which async_something() was called, captured in yield_handler_base's - constructor — to the current thread's context::migrate(). - - - If the running application has more than one thread calling io_service::run(), - that fiber could return from async_something() on a different thread (the one calling yield_handler_base::operator()) - than the one on which it entered async_something(). - - - In any case, the fiber is marked as ready to run by passing it to context::set_ready(). - Control then returns from yield_handler_base::operator(): - the callback is done. - - - In due course, the fiber yield_handler_base::ctx_ is - resumed. Control returns from context::suspend() to yield_completion::wait(), which - returns to async_result_base::get(). 
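The `yield_completion` handshake described above can be sketched with standard-library primitives. In this std-only model a mutex and condition_variable stand in for the fiber spinlock and `suspend()`/`set_ready()`; the class and method names are illustrative, not Boost.Fiber's. The essential ordering is preserved: the callback sets the completed flag before waking the waiter, so an immediately-completing operation (callback runs before anyone waits) is not lost.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Sketch of the completed_-flag handshake between the completion handler
// and the suspended caller.
class completion {
public:
    // called from the completion handler
    void complete() {
        std::lock_guard< std::mutex > lk{ mtx_ };
        completed_ = true;      // set the flag first...
        cv_.notify_one();       // ...then wake any suspended waiter
    }

    // called from the get() path: returns at once if the callback
    // already ran, otherwise suspends until complete() is called
    void wait() {
        std::unique_lock< std::mutex > lk{ mtx_ };
        cv_.wait( lk, [this]{ return completed_; });
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    bool completed_{ false };
};
```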
- - - - - If the original caller passed yield[my_ec] to async_something() to bind a local error_code - instance, then yield_handler_base::operator() stored its error_code - to the caller's my_ec instance, - leaving async_result_base::ec_ - initialized to success. - - - - - If the original caller passed yield - to async_something() - without binding a local error_code - variable, then yield_handler_base::operator() stored its error_code - into async_result_base::ec_. - If in fact that error_code - is success, then all is well. - - - - - Otherwise — the original caller did not bind a local error_code - and yield_handler_base::operator() was called with an error_code - indicating error — async_result_base::get() throws system_error - with that error_code. - - - - - The case in which async_something()'s completion callback has signature void() is similar. - yield_handler<void>::operator()() invokes the machinery above with a success - error_code. - - - A completion callback with signature void(error_code, T) - (that is: in addition to error_code, - callback receives some data item) is handled somewhat differently. For this - kind of signature, handler_type<>::type - specifies yield_handler<T> (for - T other than void). - - - A yield_handler<T> reserves - a value_ pointer to a value - of type T: - - + + + Supposing that the pending async operation has not yet completed, yield_completion::completed_ will still be false, and wait() will call context::suspend() on + the currently-running fiber. + + + Other fibers will now have a chance to run. + + + Some time later, the async operation completes. It calls yield_handler<void>::operator()(error_code const&) with an error_code + indicating either success or failure. We’ll consider both cases. + + + yield_handler<void> explicitly + inherits operator()(error_code const&) from yield_handler_base. + + + yield_handler_base::operator()(error_code const&) first sets yield_completion::completed_ + true. 
This way, if async_something()’s + async operation completes immediately — if yield_handler_base::operator() is called even before async_result_base::get() + — the calling fiber will not suspend. + + + The actual error_code produced + by the async operation is then stored through the stored yield_t::ec_ pointer. + If async_something()’s + caller used (e.g.) yield[my_ec] to bind a local error_code + instance, the actual error_code + value is stored into the caller’s variable. Otherwise, it is stored into + async_result_base::ec_. + + + If the stored fiber context yield_handler_base::ctx_ + is not already running, it is marked as ready to run by passing it to context::set_ready(). + Control then returns from yield_handler_base::operator(): the callback is done. + + + In due course, that fiber is resumed. Control returns from context::suspend() to + yield_completion::wait(), + which returns to async_result_base::get(). + + + + + If the original caller passed yield[my_ec] to async_something() to bind a local error_code + instance, then yield_handler_base::operator() stored its error_code + to the caller’s my_ec + instance, leaving async_result_base::ec_ + initialized to success. + + + + + If the original caller passed yield + to async_something() + without binding a local error_code + variable, then yield_handler_base::operator() stored its error_code + into async_result_base::ec_. + If in fact that error_code + is success, then all is well. + + + + + Otherwise — the original caller did not bind a local error_code + and yield_handler_base::operator() was called with an error_code + indicating error — async_result_base::get() throws system_error + with that error_code. + + + + + The case in which async_something()’s completion callback has signature void() is + similar. yield_handler<void>::operator()() + invokes the machinery above with a success error_code. 
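The error_code dispatch just described, store through the bound pointer if the caller passed `yield[my_ec]`, otherwise store into the result object and let `get()` throw on failure, can be sketched with std types (free functions with hypothetical names; the real logic lives in `yield_handler_base::operator()` and `async_result_base::get()`):

```cpp
#include <cassert>
#include <system_error>

// 'bound' plays the role of yield_t::ec_ after async_result_base's
// constructor has run; 'fallback' plays the role of async_result_base::ec_.
void deliver_result( std::error_code const& result,
                     std::error_code * bound,
                     std::error_code & fallback) {
    if ( bound) {
        * bound = result;       // caller inspects my_ec; get() never throws
    } else {
        fallback = result;      // stored for get() to examine
    }
}

// models async_result_base::get()'s final check
void get_result( std::error_code const& stored) {
    if ( stored) {
        throw std::system_error{ stored };
    }
}
```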
+ + + A completion callback with signature void(error_code, T) + (that is: in addition to error_code, + callback receives some data item) is handled somewhat differently. For this + kind of signature, handler_type<>::type + specifies yield_handler<T> (for + T other than void). + + + A yield_handler<T> reserves + a value_ pointer to a value + of type T: + + // asio uses handler_type<completion token type, signature>::type to decide // what to instantiate as the actual handler. Below, we specialize // handler_type< yield_t, ... > to indicate yield_handler<>. So when you pass @@ -11864,17 +11210,17 @@ before T * value_{ nullptr }; }; - - - This pointer is initialized to nullptr. - - - When async_something() - instantiates async_result<yield_handler<T>>: - - + + + This pointer is initialized to nullptr. + + + When async_something() + instantiates async_result<yield_handler<T>>: + + // asio constructs an async_result<> instance from the yield_handler specified // by handler_type<>::type. A particular asio async method constructs the // yield_handler, constructs this async_result specialization from it, then @@ -11903,67 +11249,69 @@ before type value_{}; }; - - - this async_result<> - specialization reserves a member of type T - to receive the passed data item, and sets yield_handler<T>::value_ to point to its own data member. - - - async_result<yield_handler<T>> - overrides get(). - The override calls async_result_base::get(), - so the calling fiber suspends as described above. - - - yield_handler<T>::operator()(error_code, T) - stores its passed T value into - async_result<yield_handler<T>>::value_. - - - Then it passes control to yield_handler_base::operator()(error_code) - to deal with waking (and possibly migrating) the original fiber as described - above. - - - When async_result<yield_handler<T>>::get() resumes, - it returns the stored value_ - to async_something() - and ultimately to async_something()'s caller. 
- - - The case of a callback signature void(T) - is handled by having yield_handler<T>::operator()(T) engage - the void(error_code, T) machinery, - passing a success error_code. - - - The source code above is found in yield.hpp - and detail/yield.hpp. - + + + this async_result<> + specialization reserves a member of type T + to receive the passed data item, and sets yield_handler<T>::value_ to point to its own data member. + + + async_result<yield_handler<T>> + overrides get(). + The override calls async_result_base::get(), + so the calling fiber suspends as described above. + + + yield_handler<T>::operator()(error_code, T) stores + its passed T value into + async_result<yield_handler<T>>::value_. + + + Then it passes control to yield_handler_base::operator()(error_code) to deal with waking the original fiber as + described above. + + + When async_result<yield_handler<T>>::get() resumes, + it returns the stored value_ + to async_something() + and ultimately to async_something()’s caller. + + + The case of a callback signature void(T) + is handled by having yield_handler<T>::operator()(T) engage + the void(error_code, T) machinery, + passing a success error_code. + + + The source code above is found in yield.hpp + and detail/yield.hpp. + +
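The value plumbing for `void(error_code, T)` callbacks has the same shape stripped of the fiber machinery: the result object owns storage for `T` and points the handler's `value_` at it. A std-only sketch (simplified names, no error_code path):

```cpp
#include <cassert>

// plays the role of yield_handler<T>: the completion callback stores the
// produced value through value_
template< typename T >
struct value_handler {
    void operator()( T t) {
        * value_ = t;
    }
    T * value_{ nullptr };
};

// plays the role of async_result<yield_handler<T>>: owns the T storage and
// wires the handler's value_ pointer to it on construction
template< typename T >
class value_result {
public:
    explicit value_result( value_handler< T > & h) {
        h.value_ = & value_;    // handler writes into our storage
    }
    T get() const {             // after the wait, hand the value back
        return value_;
    }
private:
    T value_{};
};
```

As in the real code, the handler can be copied around by the I/O machinery while the pointer keeps referring back to the single result object the caller is blocked on.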
<anchor id="nonblocking"/><link linkend="fiber.nonblocking">Integrating @@ -12001,7 +11349,7 @@ before <emphasis role="bold">Boost.Fiber</emphasis> can simplify this problem immensely. Once you have integrated with the application's main loop as described in <link linkend="integration">Sharing a Thread with Another Main Loop</link>, - waiting for the next main-loop iteration is as simple as calling <link linkend="this_fiber_yield"> <code>this_fiber::yield()</code></link>. + waiting for the next main-loop iteration is as simple as calling <link linkend="this_fiber_yield"><code>this_fiber::yield()</code></link>. </para> <bridgehead renderas="sect3" id="fiber.nonblocking.h1"> <phrase id="fiber.nonblocking.example_nonblocking_api"/><link linkend="fiber.nonblocking.example_nonblocking_api">Example @@ -12127,7 +11475,7 @@ before </programlisting> </para> <para> - Once we can transparently wait for the next main-loop iteration using <link linkend="this_fiber_yield"> <code>this_fiber::yield()</code></link>, + Once we can transparently wait for the next main-loop iteration using <link linkend="this_fiber_yield"><code>this_fiber::yield()</code></link>, ordinary encapsulation Just Works. 
</para> <para> @@ -12285,7 +11633,7 @@ before <para> <anchor id="wait_done"/>For this we introduce a <code><phrase role="identifier">Done</phrase></code> class to wrap a <code><phrase role="keyword">bool</phrase></code> variable - with a <link linkend="class_condition_variable"> <code>condition_variable</code></link> and a <link linkend="class_mutex"> <code>mutex</code></link>: + with a <link linkend="class_condition_variable"><code>condition_variable</code></link> and a <link linkend="class_mutex"><code>mutex</code></link>: </para> <para> <programlisting><phrase role="comment">// Wrap canonical pattern for condition_variable + bool flag</phrase> @@ -12405,8 +11753,8 @@ before One tactic would be to adapt our <link linkend="wait_done"><code><phrase role="identifier">Done</phrase></code></link> class to store the first of the return values, rather than a simple <code><phrase role="keyword">bool</phrase></code>. - However, we choose instead to use a <link linkend="class_unbounded_channel"> <code>unbounded_channel<></code></link>. - We'll only need to enqueue the first value, so we'll <link linkend="unbounded_channel_close"> <code>unbounded_channel::close()</code></link> it + However, we choose instead to use a <link linkend="class_unbounded_channel"><code>unbounded_channel<></code></link>. + We'll only need to enqueue the first value, so we'll <link linkend="unbounded_channel_close"><code>unbounded_channel::close()</code></link> it once we've retrieved that value. Subsequent <code><phrase role="identifier">push</phrase><phrase role="special">()</phrase></code> calls will return <code><phrase role="identifier">closed</phrase></code>. </para> @@ -12479,17 +11827,17 @@ before </para> <para> Let's at least ensure that such an exception would propagate to the fiber - awaiting the first result. We can use <link linkend="class_future"> <code>future<></code></link> to transport + awaiting the first result. 
We can use <link linkend="class_future"><code>future<></code></link> to transport either a return value or an exception. Therefore, we will change <link linkend="wait_first_value"><code><phrase role="identifier">wait_first_value</phrase><phrase - role="special">()</phrase></code></link>'s <link linkend="class_unbounded_channel"> <code>unbounded_channel<></code></link> to + role="special">()</phrase></code></link>'s <link linkend="class_unbounded_channel"><code>unbounded_channel<></code></link> to hold <code><phrase role="identifier">future</phrase><phrase role="special"><</phrase> <phrase role="identifier">T</phrase> <phrase role="special">></phrase></code> items instead of simply <code><phrase role="identifier">T</phrase></code>. </para> <para> Once we have a <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code> - in hand, all we need do is call <link linkend="future_get"> <code>future::get()</code></link>, which will either + in hand, all we need do is call <link linkend="future_get"><code>future::get()</code></link>, which will either return the value or rethrow the exception. </para> <para> @@ -12520,10 +11868,10 @@ before <para> So far so good — but there's a timing issue. How should we obtain the <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code> - to <link linkend="unbounded_channel_push"> <code>unbounded_channel::push()</code></link> on the channel? + to <link linkend="unbounded_channel_push"><code>unbounded_channel::push()</code></link> on the channel? </para> <para> - We could call <link linkend="fibers_async"> <code>fibers::async()</code></link>. That would certainly produce + We could call <link linkend="fibers_async"><code>fibers::async()</code></link>. That would certainly produce a <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code> for the task function. The trouble is that it would return too quickly! 
We only want <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code> @@ -12539,7 +11887,7 @@ before completes most quickly. </para> <para> - Calling <link linkend="future_get"> <code>future::get()</code></link> on the future returned by <code><phrase + Calling <link linkend="future_get"><code>future::get()</code></link> on the future returned by <code><phrase role="identifier">async</phrase><phrase role="special">()</phrase></code> wouldn't be right. You can only call <code><phrase role="identifier">get</phrase><phrase role="special">()</phrase></code> once per <code><phrase role="identifier">future</phrase><phrase @@ -12548,7 +11896,7 @@ before end of the channel, rather than propagated to the consumer end. </para> <para> - We could call <link linkend="future_wait"> <code>future::wait()</code></link>. That would block the helper fiber + We could call <link linkend="future_wait"><code>future::wait()</code></link>. That would block the helper fiber until the <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code> became ready, at which point we could <code><phrase role="identifier">push</phrase><phrase role="special">()</phrase></code> it to be retrieved by <code><phrase role="identifier">wait_first_outcome</phrase><phrase @@ -12556,7 +11904,7 @@ before </para> <para> That would work — but there's a simpler tactic that avoids creating an extra - fiber. We can wrap the task function in a <link linkend="class_packaged_task"> <code>packaged_task<></code></link>. + fiber. We can wrap the task function in a <link linkend="class_packaged_task"><code>packaged_task<></code></link>. While one naturally thinks of passing a <code><phrase role="identifier">packaged_task</phrase><phrase role="special"><></phrase></code> to a new fiber — that is, in fact, what <code><phrase role="identifier">async</phrase><phrase role="special">()</phrase></code> @@ -12710,17 +12058,17 @@ before items. 
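The packaged-task tactic can be sketched std-only: obtain the future *before* running the task, run the task on the current context, and only then hand the now-ready future to the consumer. Here `std::packaged_task` stands in for Boost.Fiber's `packaged_task<>` and the function name is hypothetical:

```cpp
#include <cassert>
#include <future>
#include <stdexcept>
#include <utility>

// Run fn via a packaged_task and return its future only once it is ready:
// a value or an exception is already captured when the caller receives it.
template< typename Fn >
std::future< int > run_packaged( Fn fn) {
    std::packaged_task< int() > task{ std::move( fn) };
    std::future< int > f{ task.get_future() }; // future obtained up front
    task();   // run here; the future becomes ready, success or exception
    return f; // safe to push to the channel: get() returns or rethrows
}
```

This is exactly the ordering the text wants: no extra fiber is spawned just to `wait()` on a prematurely returned future.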
Of course we must limit that iteration! If we launch only <code><phrase role="identifier">count</phrase></code> producer fibers, the <code><phrase role="special">(</phrase><phrase role="identifier">count</phrase><phrase - role="special">+</phrase><phrase role="number">1</phrase><phrase role="special">)</phrase></code> - <superscript>st</superscript> - <link linkend="unbounded_channel_pop"> <code>unbounded_channel::pop()</code></link> call would block forever. + role="special">+</phrase><phrase role="number">1</phrase><phrase role="special">)</phrase></code><superscript>st</superscript> +<link linkend="unbounded_channel_pop"><code>unbounded_channel::pop()</code></link> call + would block forever. </para> <para> Given a ready <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code>, - we can distinguish failure by calling <link linkend="future_get_exception_ptr"> <code>future::get_exception_ptr()</code></link>. + we can distinguish failure by calling <link linkend="future_get_exception_ptr"><code>future::get_exception_ptr()</code></link>. If the <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code> in fact contains a result rather than an exception, <code><phrase role="identifier">get_exception_ptr</phrase><phrase role="special">()</phrase></code> returns <code><phrase role="keyword">nullptr</phrase></code>. - In that case, we can confidently call <link linkend="future_get"> <code>future::get()</code></link> to return + In that case, we can confidently call <link linkend="future_get"><code>future::get()</code></link> to return that result to our caller. </para> <para> @@ -12865,13 +12213,13 @@ before Certain topics in C++ can arouse strong passions, and exceptions are no exception. 
We cannot resist mentioning — for purely informational purposes — that when you need only the <emphasis>first</emphasis> result from some - number of concurrently-running fibers, it would be possible to pass a <code>shared_ptr< - <link linkend="class_promise"> <code>promise<></code></link>></code> to the participating fibers, then cause - the initiating fiber to call <link linkend="future_get"> <code>future::get()</code></link> on its <link linkend="class_future"> <code>future<></code></link>. - The first fiber to call <link linkend="promise_set_value"> <code>promise::set_value()</code></link> on that shared - <code><phrase role="identifier">promise</phrase></code> will succeed; subsequent - <code><phrase role="identifier">set_value</phrase><phrase role="special">()</phrase></code> - calls on the same <code><phrase role="identifier">promise</phrase></code> + number of concurrently-running fibers, it would be possible to pass a + <literal>shared_ptr<<link linkend="class_promise"><code>promise<></code></link>></literal> to the + participating fibers, then cause the initiating fiber to call <link linkend="future_get"><code>future::get()</code></link> on + its <link linkend="class_future"><code>future<></code></link>. The first fiber to call <link linkend="promise_set_value"><code>promise::set_value()</code></link> on + that shared <code><phrase role="identifier">promise</phrase></code> will + succeed; subsequent <code><phrase role="identifier">set_value</phrase><phrase + role="special">()</phrase></code> calls on the same <code><phrase role="identifier">promise</phrase></code> instance will throw <code><phrase role="identifier">future_error</phrase></code>. </para> <para> @@ -12891,8 +12239,8 @@ before role="special">()</phrase></code> that looks remarkably like <link linkend="wait_first_simple"><code><phrase role="identifier">wait_first_simple</phrase><phrase role="special">()</phrase></code></link>. 
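The shared-promise tactic mentioned above can be demonstrated with std types, whose promise/future behave the same way here as Boost.Fiber's: the first `set_value()` wins, and later calls throw `future_error`, which the losing producers simply swallow. A minimal sketch (hypothetical function, producers run inline rather than as fibers):

```cpp
#include <cassert>
#include <future>
#include <memory>

// Deliver the first of two produced values through one shared promise.
int first_of( int a, int b) {
    auto p = std::make_shared< std::promise< int > >();
    std::future< int > f{ p->get_future() };
    for ( int v : { a, b }) {
        try {
            p->set_value( v);           // only the first call succeeds
        } catch ( std::future_error const&) {
            // promise_already_satisfied: a later producer lost the race
        }
    }
    return f.get();                     // value from the winning producer
}
```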
The difference is that instead of our <link linkend="wait_done"><code><phrase - role="identifier">Done</phrase></code></link> class, we instantiate a <link linkend="class_barrier"> <code>barrier</code></link> and - call its <link linkend="barrier_wait"> <code>barrier::wait()</code></link>. + role="identifier">Done</phrase></code></link> class, we instantiate a <link linkend="class_barrier"><code>barrier</code></link> and + call its <link linkend="barrier_wait"><code>barrier::wait()</code></link>. </para> <para> We initialize the <code><phrase role="identifier">barrier</phrase></code> @@ -12984,8 +12332,8 @@ before role="special"><</phrase><phrase role="identifier">T</phrase><phrase role="special">>></phrase></code>.<footnote id="fiber.when_any.when_all_functionality.when_all__return_values.f0"> <para> - We could have used either <link linkend="class_bounded_channel"> <code>bounded_channel<></code></link> or - <link linkend="class_unbounded_channel"> <code>unbounded_channel<></code></link>. We chose <code><phrase + We could have used either <link linkend="class_bounded_channel"><code>bounded_channel<></code></link> or + <link linkend="class_unbounded_channel"><code>unbounded_channel<></code></link>. We chose <code><phrase role="identifier">unbounded_channel</phrase><phrase role="special"><></phrase></code> on the assumption that its simpler semantics imply a cheaper implementation. </para> @@ -13035,9 +12383,9 @@ before As you can see from the loop in <code><phrase role="identifier">wait_all_values</phrase><phrase role="special">()</phrase></code>, instead of requiring its caller to count values, we define <code><phrase role="identifier">wait_all_values_source</phrase><phrase - role="special">()</phrase></code> to <link linkend="unbounded_channel_close"> <code>unbounded_channel::close()</code></link> the + role="special">()</phrase></code> to <link linkend="unbounded_channel_close"><code>unbounded_channel::close()</code></link> the channel when done. 
But how do we do that? Each producer fiber is independent. - It has no idea whether it is the last one to <link linkend="unbounded_channel_push"> <code>unbounded_channel::push()</code></link> a + It has no idea whether it is the last one to <link linkend="unbounded_channel_push"><code>unbounded_channel::push()</code></link> a value. </para> <para> @@ -13175,7 +12523,7 @@ before <anchor id="wait_all_until_error"/><code><phrase role="identifier">wait_all_until_error</phrase><phrase role="special">()</phrase></code> pops that <code><phrase role="identifier">future</phrase><phrase role="special"><</phrase> <phrase role="identifier">T</phrase> <phrase - role="special">></phrase></code> and calls its <link linkend="future_get"> <code>future::get()</code></link>: + role="special">></phrase></code> and calls its <link linkend="future_get"><code>future::get()</code></link>: </para> <para> <programlisting><phrase role="keyword">template</phrase><phrase role="special"><</phrase> <phrase role="keyword">typename</phrase> <phrase role="identifier">Fn</phrase><phrase role="special">,</phrase> <phrase role="keyword">typename</phrase> <phrase role="special">...</phrase> <phrase role="identifier">Fns</phrase> <phrase role="special">></phrase> @@ -13414,8 +12762,8 @@ before <para> The trouble with this tactic is that it would serialize all the task functions. The runtime makes a single pass through <code><phrase role="identifier">functions</phrase></code>, - calling <link linkend="fibers_async"> <code>fibers::async()</code></link> for each and then immediately calling - <link linkend="future_get"> <code>future::get()</code></link> on its returned <code><phrase role="identifier">future</phrase><phrase + calling <link linkend="fibers_async"><code>fibers::async()</code></link> for each and then immediately calling + <link linkend="future_get"><code>future::get()</code></link> on its returned <code><phrase role="identifier">future</phrase><phrase role="special"><></phrase></code>. 
That blocks the implicit loop. The above is almost equivalent to writing: </para> @@ -13468,114 +12816,574 @@ before <section id="fiber.integration"> <title><anchor id="integration"/><link linkend="fiber.integration">Sharing a Thread with Another Main Loop</link> - - Overview - - - As always with cooperative concurrency, it is important not to let any one - fiber monopolize the processor too long: that could starve other - ready fibers. This section discusses a couple of solutions. - - - Event-Driven - Program - - - Consider a classic event-driven program, organized around a main loop that - fetches and dispatches incoming I/O events. You are introducing Boost.Fiber - because certain asynchronous I/O sequences are logically sequential, and for - those you want to write and maintain code that looks and acts sequential. - - - You are launching fibers on the application's main thread because certain of - their actions will affect its user interface, and the application's UI framework - permits UI operations only on the main thread. Or perhaps those fibers need - access to main-thread data, and it would be too expensive in runtime (or development - time) to robustly defend every such data item with thread synchronization primitives. - - - You must ensure that the application's main loop itself - doesn't monopolize the processor: that the fibers it launches will get the - CPU cycles they need. - - - The solution is the same as for any fiber that might claim the CPU for an extended - time: introduce calls to this_fiber::yield(). The most straightforward - approach is to call yield() - on every iteration of your existing main loop. In effect, this unifies the - application's main loop with Boost.Fiber's - internal main loop. yield() - allows the fiber manager to run any fibers that have become ready since the - previous iteration of the application's main loop. 
When these fibers have had - a turn, control passes to the thread's main fiber, which returns from yield() and - resumes the application's main loop. - - - Integrating - with Boost.Asio - - - More challenging is when the application's main loop is embedded in some other - library or framework. Such an application will typically, after performing - all necessary setup, pass control to some form of run() function from which control does not return - until application shutdown. - - - A Boost.Asio - program might call io_service::run() - in this way. - - - The trick here is to arrange to pass control to this_fiber::yield() frequently. - You can use an Asio - timer for this purpose. Instantiate the timer, arranging to call a - handler function when the timer expires: - - - [run_service] - - - The handler function calls yield(), then resets the timer and arranges to wake - up again on expiration: - - - [timer_handler] - - - Then instead of directly calling io_service::run(), - your application would call the above run_service(io_service&) wrapper. - - - Since, in this example, we always pass control to the fiber manager via yield(), - the calling fiber is never blocked. Therefore there is always at least one - ready fiber. Therefore the fiber manager never sleeps. - - - Using std::chrono::seconds(0) for every - keepalive timer interval would be unfriendly to other threads. When all I/O - is pending and all fibers are blocked, the io_service and the fiber manager - would simply spin the CPU, passing control back and forth to each other. Resetting - the timer for keepalive_iterval - allows tuning the responsiveness of this thread relative to others in the same - way as when Boost.Fiber is running without - Boost.Asio. - - - The source code above is found in round_robin.hpp. - +
+ <link linkend="fiber.integration.overview">Overview</link> + + As always with cooperative concurrency, it is important not to let any one + fiber monopolize the processor too long: that could starve + other ready fibers. This section discusses a couple of solutions. + +
+
+ <link linkend="fiber.integration.event_driven_program">Event-Driven + Program</link> + + Consider a classic event-driven program, organized around a main loop that + fetches and dispatches incoming I/O events. You are introducing Boost.Fiber because certain asynchronous I/O sequences + are logically sequential, and for those you want to write and maintain code + that looks and acts sequential. + + + You are launching fibers on the application’s main thread because certain + of their actions will affect its user interface, and the application’s UI + framework permits UI operations only on the main thread. Or perhaps those + fibers need access to main-thread data, and it would be too expensive in + runtime (or development time) to robustly defend every such data item with + thread synchronization primitives. + + + You must ensure that the application’s main loop itself + doesn’t monopolize the processor: that the fibers it launches will get the + CPU cycles they need. + + + The solution is the same as for any fiber that might claim the CPU for an + extended time: introduce calls to this_fiber::yield(). The + most straightforward approach is to call yield() on every iteration of your existing main + loop. In effect, this unifies the application’s main loop with Boost.Fiber’s + internal main loop. yield() allows the fiber manager to run any fibers + that have become ready since the previous iteration of the application’s main + loop. When these fibers have had a turn, control passes to the thread’s main + fiber, which returns from yield() and resumes the application’s main loop. + +
+
+ <anchor id="embedded_main_loop"/><link linkend="fiber.integration.embedded_main_loop">Embedded + Main Loop</link> + + More challenging is when the application’s main loop is embedded in some other + library or framework. Such an application will typically, after performing + all necessary setup, pass control to some form of run() function from which control does not return + until application shutdown. + + + A Boost.Asio + program might call io_service::run() + in this way. + + + In general, the trick is to arrange to pass control to this_fiber::yield() frequently. + You could use an Asio + timer for that purpose. You could instantiate the timer, arranging + to call a handler function when the timer expires. The handler function could + call yield(), + then reset the timer and arrange to wake up again on its next expiration. + + + Since, in this thought experiment, we always pass control to the fiber manager + via yield(), + the calling fiber is never blocked. Therefore there is always at least one + ready fiber. Therefore the fiber manager never calls sched_algorithm::suspend_until(). + + + Using io_service::post() + instead of setting a timer for some nonzero interval would be unfriendly + to other threads. When all I/O is pending and all fibers are blocked, the + io_service and the fiber manager would simply spin the CPU, passing control + back and forth to each other. Using a timer allows tuning the responsiveness + of this thread relative to others. + +
+
+ <link linkend="fiber.integration.deeper_dive_into___boost_asio__">Deeper + Dive into <ulink url="http://www.boost.org/doc/libs/release/libs/asio/index.html">Boost.Asio</ulink></link> + + By now the alert reader is thinking: but surely, with Asio in particular, + we ought to be able to do much better than periodic polling pings! + + + This turns out to be surprisingly tricky. We present a possible approach + in examples/asio/round_robin.hpp. + + + One consequence of using Boost.Asio + is that you must always let Asio suspend the running thread. Since Asio is + aware of pending I/O requests, it can arrange to suspend the thread in such + a way that the OS will wake it on I/O completion. No one else has sufficient + knowledge. + + + So the fiber scheduler must depend on Asio for suspension and resumption. + It requires Asio handler calls to wake it. + + + One dismaying implication is that we cannot support multiple threads calling + io_service::run() + on the same io_service instance. + The reason is that Asio provides no way to constrain a particular handler + to be called only on a specified thread. A fiber scheduler instance is locked + to a particular thread: that instance cannot manage any other thread’s fibers. + Yet if we allow multiple threads to call io_service::run() + on the same io_service instance, + a fiber scheduler which needs to sleep can have no guarantee that it will + reawaken in a timely manner. It can set an Asio timer, as described above + — but that timer’s handler may well execute on a different thread! + + + Another implication is that since an Asio-aware fiber scheduler (not to mention + boost::fibers::asio::yield) + depends on handler calls from the io_service, + it is the application’s responsibility to ensure that io_service::stop() + is not called until every fiber has terminated. 
+ + + It is easier to reason about the behavior of the presented asio::round_robin scheduler if we require that + after initial setup, the thread’s main fiber is the fiber that calls io_service::run(), + so let’s impose that requirement. + + + Naturally, the first thing we must do on each thread using a custom fiber + scheduler is call use_scheduling_algorithm(). However, + since asio::round_robin requires an io_service + instance, we must first declare that. + + +boost::asio::io_service io_svc; +boost::fibers::use_scheduling_algorithm< boost::fibers::asio::round_robin >( io_svc); + + + + use_scheduling_algorithm() instantiates asio::round_robin, + which naturally calls its constructor: + + +round_robin( boost::asio::io_service & io_svc) : + io_svc_( io_svc), + suspend_timer_( io_svc_) { + // We use add_service() very deliberately. This will throw + // service_already_exists if you pass the same io_service instance to + // more than one round_robin instance. + boost::asio::add_service( io_svc_, new service( io_svc_)); +} + + + + asio::round_robin binds the passed io_service reference and initializes a + boost::asio::steady_timer: + + +boost::asio::io_service & io_svc_; +boost::asio::steady_timer suspend_timer_; + + + + Then it calls boost::asio::add_service() + with a nested service struct: + + +struct service : public boost::asio::io_service::service { + static boost::asio::io_service::id id; + + std::unique_ptr< boost::asio::io_service::work > work_; + + service( boost::asio::io_service & io_svc) : + boost::asio::io_service::service( io_svc), + work_{ new boost::asio::io_service::work( io_svc) } { + io_svc.post([&io_svc](){ + + + + ... + + + }); + } + + virtual ~service() {} + + service( service const&) = delete; + service & operator=( service const&) = delete; + + void shutdown_service() override final { + work_.reset(); + } +}; + + + + The service struct has a + couple of roles. 
+ + + Its foremost role is to manage a std::unique_ptr<boost::asio::io_service::work>. We want the + io_service instance to continue + its main loop even when there is no pending Asio I/O. + + + But when boost::asio::io_service::service::shutdown_service() + is called, we discard the io_service::work + instance so the io_service + can shut down properly. + + + Its other purpose is to post() + a lambda (not yet shown). Let’s walk further through the example program before + coming back to explain that lambda. + + + The service constructor returns + to asio::round_robin’s constructor, which returns + to use_scheduling_algorithm(), which returns to the application code. + + + Once it has called use_scheduling_algorithm(), the application may now launch some number + of fibers: + + +// server +tcp::acceptor a( io_svc, tcp::endpoint( tcp::v4(), 9999) ); +boost::fibers::fiber( server, std::ref( io_svc), std::ref( a) ).detach(); +// client +const unsigned iterations = 2; +const unsigned clients = 3; +boost::fibers::barrier b( clients); +for ( unsigned i = 0; i < clients; ++i) { + boost::fibers::fiber( + client, std::ref( io_svc), std::ref( a), std::ref( b), iterations).detach(); +} + + + + Since we don’t specify a launch_policy, these fibers are + ready to run, but have not yet been entered. + + + Having set everything up, the application calls io_service::run(): + + +io_svc.run(); + + + + Now what? + + + Because this io_service instance + owns an io_service::work instance, run() does not immediately return. But — none of + the fibers that will perform actual work has even been entered yet! + + + Without that initial post() call in service’s + constructor, nothing would happen. The application would + hang right here. + + + So, what should the post() handler execute? Simply this_fiber::yield()? + + + That would be a promising start. But we have no guarantee that any of the + other fibers will initiate any Asio operations to keep the ball rolling. 
+ For all we know, every other fiber could reach a similar this_fiber::yield() call first. Control would return to the + post() + handler, which would return to Asio, and... the application would hang. + + + The post() + handler could post() + itself again. But as discussed in the + previous section, once there are actual I/O operations in flight — once + we reach a state in which no fiber is ready — +that would cause the thread to + spin. + + + We could, of course, set an Asio timer — again as previously + discussed. But in this deeper dive, we’re trying to + do a little better. + + + The key to doing better is that since we’re in a fiber, we can run an actual + loop — not just a chain of callbacks. We can wait for something to happen + by calling io_service::run_one() + — or we can execute already-queued Asio handlers by calling io_service::poll(). + + + Here’s the body of the lambda passed to the post() call. + + +while ( ! io_svc.stopped() ) { + if ( boost::fibers::has_ready_fibers() ) { + // run all pending handlers in round_robin + while ( io_svc.poll() ); + // run pending (ready) fibers + this_fiber::yield(); + } else { + // run one handler inside io_service + // if no handler available, block this thread + if ( ! io_svc.run_one() ) { + break; + } + } +} + + + + We want this loop to exit once the io_service + instance has been stopped(). + + + As long as there are ready fibers, we interleave running ready Asio handlers + with running ready fibers. + + + If there are no ready fibers, we wait by calling run_one(). Once any Asio handler has been called + — no matter which — run_one() + returns. That handler may have transitioned some fiber to ready state, so + we loop back to check again. + + + (We won’t describe awakened(), pick_next() or has_ready_fibers(), as these are just like round_robin::awakened(), + round_robin::pick_next() and round_robin::has_ready_fibers().) + + + That leaves suspend_until() and notify(). 
+ + + Doubtless you have been asking yourself: why are we calling io_service::run_one() + in the lambda loop? Why not call it in suspend_until(), whose very API was designed for just such + a purpose? + + + Under normal circumstances, when the fiber manager finds no ready fibers, + it calls sched_algorithm::suspend_until(). Why + test has_ready_fibers() + in the lambda loop? Why not leverage the normal mechanism? + + + The answer is: it matters who’s asking. + + + Consider the lambda loop shown above. The only Boost.Fiber + APIs it engages are has_ready_fibers() and this_fiber::yield(). + yield() + does not block the calling fiber: the calling fiber + does not become unready. It is immediately passed back to sched_algorithm::awakened(), + to be resumed in its turn when all other ready fibers have had a chance to + run. In other words: during a yield() call, there is always at least + one ready fiber. + + + As long as this lambda loop is still running, the fiber manager does not + call suspend_until() + because it always has a fiber ready to run. + + + However, the lambda loop itself can detect the case + when no other fibers are ready to run: the running fiber + is not ready but running. + + + That said, suspend_until() and notify() are in fact called during orderly shutdown + processing, so let’s try a plausible implementation. + + +void suspend_until( std::chrono::steady_clock::time_point const& abs_time) noexcept { + // Set a timer so at least one handler will eventually fire, causing + // run_one() to eventually return. Set a timer even if abs_time == + // time_point::max() so the timer can be canceled by our notify() + // method -- which calls the handler. + if ( suspend_timer_.expires_at() != abs_time) { + // Each expires_at(time_point) call cancels any previous pending + // call. 
We could inadvertently spin like this: + // dispatcher calls suspend_until() with earliest wake time + // suspend_until() sets suspend_timer_ + // lambda loop calls run_one() + // some other asio handler runs before timer expires + // run_one() returns to lambda loop + // lambda loop yields to dispatcher + // dispatcher finds no ready fibers + // dispatcher calls suspend_until() with SAME wake time + // suspend_until() sets suspend_timer_ to same time, canceling + // previous async_wait() + // lambda loop calls run_one() + // asio calls suspend_timer_ handler with operation_aborted + // run_one() returns to lambda loop... etc. etc. + // So only actually set the timer when we're passed a DIFFERENT + // abs_time value. + suspend_timer_.expires_at( abs_time); + // It really doesn't matter what the suspend_timer_ handler does, + // or even whether it's called because the timer ran out or was + // canceled. The whole point is to cause the run_one() call to + // return. So just pass a no-op lambda with proper signature. + suspend_timer_.async_wait([](boost::system::error_code const&){}); + } +} + + + + As you might expect, suspend_until() sets an asio::steady_timer to expires_at() + the passed std::chrono::steady_clock::time_point. + Usually. + + + As indicated in comments, we avoid setting suspend_timer_ + multiple times to the same time_point + value since every expires_at() call cancels any previous async_wait() + call. There is a chance that we could spin. Reaching suspend_until() means the fiber manager intends to yield + the processor to Asio. Cancelling the previous async_wait() call would fire its handler, causing run_one() + to return, potentially causing the fiber manager to call suspend_until() again with the same time_point + value... + + + Given that we suspend the thread by calling io_service::run_one(), what’s important is that our async_wait() + call will cause a handler to run, which will cause run_one() to return. 
It’s not so important specifically + what that handler does. + + +void notify() noexcept { + // Something has happened that should wake one or more fibers BEFORE + // suspend_timer_ expires. Reset the timer to cause it to fire + // immediately, causing the run_one() call to return. In theory we + // could use cancel() because we don't care whether suspend_timer_'s + // handler is called with operation_aborted or success. However -- + // cancel() doesn't change the expiration time, and we use + // suspend_timer_'s expiration time to decide whether it's already + // set. If suspend_until() set some specific wake time, then notify() + // canceled it, then suspend_until() was called again with the same + // wake time, it would match suspend_timer_'s expiration time and we'd + // refrain from setting the timer. So instead of simply calling + // cancel(), reset the timer, which cancels the pending sleep AND sets + // a new expiration time. This will cause us to spin the loop twice -- + // once for the operation_aborted handler, once for timer expiration + // -- but that shouldn't be a big problem. + suspend_timer_.expires_at( std::chrono::steady_clock::now() ); +} + + + + Since an expires_at() + call cancels any previous async_wait() call, we can make notify() simply call steady_timer::expires_at(). That should cause the io_service + to call the async_wait() + handler with operation_aborted. + + + The comments in notify() + explain why we call expires_at() rather than cancel(). + + + This boost::fibers::asio::round_robin implementation is used in + examples/asio/autoecho.cpp. + + + It seems possible that you could put together a more elegant Fiber / Asio + integration. But as noted at the outset: it’s tricky. + +
<link linkend="fiber.performance">Performance</link> @@ -14114,7 +13922,7 @@ before As noted in the Scheduling section, by default - Boost.Fiber uses its own round_robin scheduler + Boost.Fiber uses its own round_robin scheduler for each thread. To control the way Boost.Fiber schedules ready fibers on a particular thread, in general you must follow several steps. This section discusses those steps, whereas Scheduling @@ -14152,14 +13960,14 @@ before - One might suggest deriving a custom fiber subclass to store such + One might suggest deriving a custom fiber subclass to store such properties. There are a couple of reasons for the present mechanism. Boost.Fiber provides a number of different - ways to launch a fiber. (Consider fibers::async().) Higher-level + ways to launch a fiber. (Consider fibers::async().) Higher-level libraries might introduce additional such wrapper functions. A custom scheduler must associate its custom properties with every fiber in the thread, not only the ones explicitly launched by instantiating a @@ -14178,7 +13986,7 @@ before - The fiber class is actually just a handle to internal context data. + The fiber class is actually just a handle to internal context data. A subclass of fiber would not add data to context. @@ -14190,8 +13998,8 @@ before the rest of your application. - Instead of deriving a custom scheduler fiber properties subclass from fiber, - you must instead derive it from fiber_properties. + Instead of deriving a custom scheduler fiber properties subclass from fiber, + you must instead derive it from fiber_properties. class priority_props : public boost::fibers::fiber_properties { @@ -14229,7 +14037,7 @@ before - Your subclass constructor must accept a context* + Your subclass constructor must accept a context* and pass it to the fiber_properties constructor. 
@@ -14258,7 +14066,7 @@ before Scheduler Class - Now we can derive a custom scheduler from sched_algorithm_with_properties<>, + Now we can derive a custom scheduler from sched_algorithm_with_properties<>, specifying our custom property class priority_props as the template parameter. @@ -14361,7 +14169,7 @@ before - You must override the sched_algorithm_with_properties::awakened() + You must override the sched_algorithm_with_properties::awakened() method. This is how your scheduler receives notification of a fiber that has become ready to run. @@ -14375,7 +14183,7 @@ before - You must override the sched_algorithm_with_properties::pick_next() + You must override the sched_algorithm_with_properties::pick_next() method. This is how your scheduler actually advises the fiber manager of the next fiber to run. @@ -14383,14 +14191,14 @@ before - You must override sched_algorithm_with_properties::has_ready_fibers() + You must override sched_algorithm_with_properties::has_ready_fibers() to inform the fiber manager of the state of your ready queue. - Overriding sched_algorithm_with_properties::property_change() + Overriding sched_algorithm_with_properties::property_change() is optional. This override handles the case in which the running fiber changes the priority of another ready fiber: a fiber already in our queue. In that @@ -14408,7 +14216,7 @@ before Our example priority_scheduler - doesn't override sched_algorithm_with_properties::new_properties(): + doesn't override sched_algorithm_with_properties::new_properties(): we're content with allocating priority_props instances on the heap. @@ -14417,9 +14225,9 @@ before Default Scheduler - You must call use_scheduling_algorithm() at the start + You must call use_scheduling_algorithm() at the start of each thread on which you want Boost.Fiber - to use your custom scheduler rather than its own default round_robin. + to use your custom scheduler rather than its own default round_robin. 
Specifically, you must call use_scheduling_algorithm() before performing any other Boost.Fiber operations on that thread. @@ -14437,8 +14245,8 @@ before Properties - The running fiber can access its own fiber_properties subclass - instance by calling this_fiber::properties(). Although + The running fiber can access its own fiber_properties subclass + instance by calling this_fiber::properties(). Although properties<>() is a nullary function, you must pass, as a template parameter, the fiber_properties subclass. @@ -14448,9 +14256,9 @@ before - Given a fiber instance still connected with a running fiber (that - is, not fiber::detach()ed), you may access that fiber's properties - using fiber::properties(). As with this_fiberfiber instance still connected with a running fiber (that + is, not fiber::detach()ed), you may access that fiber's properties + using fiber::properties(). As with this_fiber::properties<>(), you must pass your fiber_properties subclass as the template @@ -14474,7 +14282,7 @@ before As shown in the launch() function above, it is reasonable to launch a fiber and immediately set relevant properties -- such as, for instance, its priority. Your custom scheduler can - then make use of this information next time the fiber manager calls sched_algorithm_with_properties::pick_next(). + then make use of this information next time the fiber manager calls sched_algorithm_with_properties::pick_next().
@@ -14558,7 +14366,7 @@ before - condition_variable is not subject to spurious wakeup. + condition_variable is not subject to spurious wakeup. Nonetheless it is prudent to test the business-logic condition in a wait() loop — or, equivalently, use one of the wait Support for migrating fibers between threads has been integrated. The user-defined - scheduler must call context::migrate() on a fiber-context on + scheduler must call context::migrate() on a fiber-context on the destination thread, passing migrate() the fiber-context to migrate. (For more information about custom schedulers, see Customization.) @@ -14592,13 +14400,14 @@ before for Boost.Asio - Support for Boost.Asio's + Support for Boost.Asio’s async-result is not part of the official API. However, - to integrate with a boost::asio::io_service, + to integrate with a boost::asio::io_service, see Sharing a Thread with Another Main Loop. To interface smoothly with an arbitrary Asio async I/O operation, see Then There's Boost.Asio. + linkend="callbacks_asio">Then There’s Boost.Asio. tested diff --git a/doc/html/fiber/callbacks.html b/doc/html/fiber/callbacks.html index 2aab6d84..1c2b1522 100644 --- a/doc/html/fiber/callbacks.html +++ b/doc/html/fiber/callbacks.html @@ -7,7 +7,7 @@ - + @@ -20,964 +20,26 @@

-PrevUpHomeNext +PrevUpHomeNext
-

- - Overview -

-

- One of the primary benefits of Boost.Fiber - is the ability to use asynchronous operations for efficiency, while at the - same time structuring the calling code as if the operations - were synchronous. Asynchronous operations provide completion notification in - a variety of ways, but most involve a callback function of some kind. This - section discusses tactics for interfacing Boost.Fiber - with an arbitrary async operation. -

-

- For purposes of illustration, consider the following hypothetical API: -

-

-

-
class AsyncAPI {
-public:
-    // constructor acquires some resource that can be read and written
-    AsyncAPI();
-
-    // callbacks accept an int error code; 0 == success
-    typedef int errorcode;
-
-    // write callback only needs to indicate success or failure
-    void init_write( std::string const& data,
-                     std::function< void( errorcode) > const& callback);
-
-    // read callback needs to accept both errorcode and data
-    void init_read( std::function< void( errorcode, std::string const&) > const&);
-
-    // ... other operations ...
-};
-
-

-

-

- The significant points about each of init_write() and init_read() are: -

-
    -
  • - The AsyncAPI method only - initiates the operation. It returns immediately, while the requested operation - is still pending. -
  • -
  • - The method accepts a callback. When the operation completes, the callback - is called with relevant parameters (error code, data if applicable). -
  • -
-

- We would like to wrap these asynchronous methods in functions that appear synchronous - by blocking the calling fiber until the operation completes. This lets us use - the wrapper function's return value to deliver relevant data. -

-
- - - - - -
[Tip]Tip

- promise<> and future<> are your friends - here. -

-

- - Return - Errorcode -

-

- The AsyncAPI::init_write() - callback passes only an errorcode. - If we simply want the blocking wrapper to return that errorcode, - this is an extremely straightforward use of promise<> and - future<>: -

-

-

-
AsyncAPI::errorcode write_ec( AsyncAPI & api, std::string const& data) {
-    boost::fibers::promise< AsyncAPI::errorcode > promise;
-    boost::fibers::future< AsyncAPI::errorcode > future( promise.get_future() );
-    // In general, even though we block waiting for future::get() and therefore
-    // won't destroy 'promise' until promise::set_value() has been called, we
-    // are advised that with threads it's possible for ~promise() to be
-    // entered before promise::set_value() has returned. While that shouldn't
-    // happen with fibers::promise, a robust way to deal with the lifespan
-    // issue is to bind 'promise' into our lambda. Since promise is move-only,
-    // use initialization capture.
-    api.init_write(
-        data,
-        [&promise]( AsyncAPI::errorcode ec) mutable {
-                            promise.set_value( ec);
-                  });
-    return future.get();
-}
-
-

-

-

- All we have to do is: -

-
    -
  1. - Instantiate a promise<> - of correct type. -
  2. -
  3. - Obtain its future<>. -
  4. -
  5. - Arrange for the callback to call promise::set_value(). -
  6. -
  7. - Block on future::get(). -
  8. -
-
- - - - - -
[Note]Note

- This tactic for resuming a pending fiber works even if the callback is called - on a different thread than the one on which the initiating fiber is running. - In fact, the example program's - dummy AsyncAPI implementation - illustrates that: it simulates async I/O by launching a new thread that sleeps - briefly and then calls the relevant callback. -

-

- - Success - or Exception -

-

- A wrapper more aligned with modern C++ practice would use an exception, rather - than an errorcode, to communicate - failure to its caller. This is straightforward to code in terms of write_ec(): -

-

-

-
void write( AsyncAPI & api, std::string const& data) {
-    AsyncAPI::errorcode ec = write_ec( api, data);
-    if ( ec) {
-        throw make_exception("write", ec);
-    }
-}
-
-

-

-

- The point is that since each fiber has its own stack, you need not repeat messy - boilerplate: normal encapsulation works. -

-

- - Return - Errorcode or Data -

-

- Things get a bit more interesting when the async operation's callback passes - multiple data items of interest. One approach would be to use std::pair<> to capture both: -

-

-

-
std::pair< AsyncAPI::errorcode, std::string > read_ec( AsyncAPI & api) {
-    typedef std::pair< AsyncAPI::errorcode, std::string > result_pair;
-    boost::fibers::promise< result_pair > promise;
-    boost::fibers::future< result_pair > future( promise.get_future() );
-    // We promise that both 'promise' and 'future' will survive until our
-    // lambda has been called.
-    api.init_read([&promise]( AsyncAPI::errorcode ec, std::string const& data) mutable {
-                            promise.set_value( result_pair( ec, data) );
-                  });
-    return future.get();
-}
-
-

-

-

- Once you bundle the interesting data in std::pair<>, - the code is effectively identical to write_ec(). You can call it like this: -

-

-

-
std::tie( ec, data) = read_ec( api);
-
-

-

-

- - Data - or Exception -

-

- But a more natural API for a function that obtains data is to return only the - data on success, throwing an exception on error. -

-

- As with write() - above, it's certainly possible to code a read() wrapper in terms of read_ec(). But since a given application is unlikely - to need both, let's code read() from scratch, leveraging promise::set_exception(): -

-

-

-
std::string read( AsyncAPI & api) {
-    boost::fibers::promise< std::string > promise;
-    boost::fibers::future< std::string > future( promise.get_future() );
-    // Both 'promise' and 'future' will survive until our lambda has been
-    // called.
-    api.init_read([&promise]( AsyncAPI::errorcode ec, std::string const& data) mutable {
-                           if ( ! ec) {
-                               promise.set_value( data);
-                           } else {
-                               promise.set_exception(
-                                       std::make_exception_ptr(
-                                           make_exception("read", ec) ) );
-                           }
-                  });
-    return future.get();
-}
-
-

-

-

- future::get() will do the right thing, either returning std::string - or throwing an exception. -

-

- - Success/Error - Virtual Methods -

-

- One classic approach to completion notification is to define an abstract base - class with success() - and error() - methods. Code wishing to perform async I/O must derive a subclass, override - each of these methods and pass the async operation a pointer to a subclass - instance. The abstract base class might look like this: -

-

-

-
// every async operation receives a subclass instance of this abstract base
-// class through which to communicate its result
-struct Response {
-    typedef std::shared_ptr< Response > ptr;
-
-    // called if the operation succeeds
-    virtual void success( std::string const& data) = 0;
-
-    // called if the operation fails
-    virtual void error( AsyncAPIBase::errorcode ec) = 0;
-};
-
-

-

-

- Now the AsyncAPI operation - might look more like this: -

-

-

-
// derive Response subclass, instantiate, pass Response::ptr
-void init_read( Response::ptr);
-
-

-

-

- We can address this by writing a one-size-fits-all PromiseResponse: -

-

-

-
class PromiseResponse: public Response {
-public:
-    // called if the operation succeeds
-    virtual void success( std::string const& data) {
-        promise_.set_value( data);
-    }
-
-    // called if the operation fails
-    virtual void error( AsyncAPIBase::errorcode ec) {
-        promise_.set_exception(
-                std::make_exception_ptr(
-                    make_exception("read", ec) ) );
-    }
-
-    boost::fibers::future< std::string > get_future() {
-        return promise_.get_future();
-    }
-
-private:
-    boost::fibers::promise< std::string >   promise_;
-};
-
-

-

-

- Now we can simply obtain the future<> from that PromiseResponse - and wait on its get(): -

-

-

-
std::string read( AsyncAPI & api) {
-    // Because init_read() requires a shared_ptr, we must allocate our
-    // PromiseResponse on the heap, even though we know its lifespan.
-    auto promisep( std::make_shared< PromiseResponse >() );
-    boost::fibers::future< std::string > future( promisep->get_future() );
-    // Both 'promisep' and 'future' will survive until our lambda has been
-    // called.
-    api.init_read( promisep);
-    return future.get();
-}
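The whole virtual-methods adaptation can be sketched single-threaded, with a stand-in `init_read()` that completes synchronously. `Response` and `PromiseResponse` follow the shapes above, but `errorcode` is plain `int` and the promise/future pair is replaced by a simple value-or-exception slot, so nothing here is the library's actual implementation.

```cpp
#include <cassert>
#include <exception>
#include <memory>
#include <stdexcept>
#include <string>

// The abstract callback interface from the text (errorcode is illustrative).
struct Response {
    typedef std::shared_ptr< Response > ptr;
    typedef int errorcode;
    virtual ~Response() {}
    virtual void success( std::string const& data) = 0;
    virtual void error( errorcode ec) = 0;
};

// One-size-fits-all adapter: routes either virtual call into a single result
// slot that get() can inspect, mirroring PromiseResponse.
class PromiseResponse: public Response {
public:
    virtual void success( std::string const& data) { data_ = data; }
    virtual void error( errorcode ec) {
        exception_ = std::make_exception_ptr(
            std::runtime_error( "read failed: " + std::to_string( ec)));
    }
    std::string get() {
        if ( exception_) std::rethrow_exception( exception_);
        return data_;
    }
private:
    std::string        data_;
    std::exception_ptr exception_;
};

// Stand-in for AsyncAPI::init_read(Response::ptr): completes synchronously.
void init_read( Response::ptr response) {
    response->success( "payload");
}
```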
-
-

-

-

- The source code above is found in adapt_callbacks.cpp - and adapt_method_calls.cpp. -

-

- - Then - There's Boost.Asio -

-

- Since the simplest form of Boost.Asio asynchronous operation completion token - is a callback function, we could apply the same tactics for Asio as for our - hypothetical AsyncAPI asynchronous - operations. -

-

- Fortunately we need not. Boost.Asio incorporates a mechanism[5] by which the caller can customize the notification behavior of - every async operation. Therefore we can construct a completion token - which, when passed to a Boost.Asio - async operation, requests blocking for the calling fiber. -

-

- A typical Asio async function might look something like this:[6] -

-
template < ..., class CompletionToken >
-deduced_return_type
-async_something( ... , CompletionToken&& token)
-{
-    // construct handler_type instance from CompletionToken
-    handler_type<CompletionToken, ...>::type handler(token);
-    // construct async_result instance from handler_type
-    async_result<decltype(handler)> result(handler);
-
-    // ... arrange to call handler on completion ...
-    // ... initiate actual I/O operation ...
-
-    return result.get();
-}
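The customization-point pattern itself is easy to model in miniature. The sketch below defines toy `handler_type<>` and `async_result<>` templates — the names are borrowed from Asio, but these are purely illustrative local types — plus a hypothetical `capture_t` token; `async_something()` completes synchronously so the whole flow runs in one thread.

```cpp
#include <cassert>
#include <string>

// Toy customization point: maps a completion token type to a handler type.
template< typename CompletionToken >
struct handler_type;

// A token requesting "capture the result for the caller".
struct capture_t {};

// Its handler stores the callback argument wherever async_result points it.
struct capture_handler {
    explicit capture_handler( capture_t) {}
    void operator()( std::string const& s) { *slot_ = s; }
    std::string * slot_{ nullptr };
};
template<> struct handler_type< capture_t > { typedef capture_handler type; };

// async_result decides the async function's return value and owns the slot.
template< typename Handler >
struct async_result {
    typedef std::string type;
    explicit async_result( Handler & h) { h.slot_ = &value_; }
    type get() { return value_; }
    std::string value_;
};

// The async_something() skeleton from the text, completing synchronously.
template< typename CompletionToken >
typename async_result< typename handler_type< CompletionToken >::type >::type
async_something( CompletionToken token) {
    typename handler_type< CompletionToken >::type handler( token);
    async_result< decltype( handler) > result( handler);
    handler( "done");       // "... arrange to call handler on completion ..."
    return result.get();
}
```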
-
-

- We will engage that mechanism, which is based on specializing Asio's handler_type<> - template for the CompletionToken - type and the signature of the specific callback. The remainder of this discussion - will refer back to async_something() as the Asio async function under consideration. -

-

- The implementation described below uses lower-level facilities than promise and future - for two reasons: -

-
    -
  1. - The promise mechanism interacts - badly with io_service::stop(). - It produces broken_promise - exceptions. -
  2. -
  3. - If more than one thread is calling the io_service::run() - method, the implementation described below allows resuming the suspended - fiber on whichever thread gets there first with completion notification. - More on this later. -
  4. -
-

- boost::fibers::asio::yield - is a completion token of this kind. yield - is an instance of yield_t: -

-

-

-
class yield_t {
-public:
-    yield_t( bool hop) :
-        allow_hop_( hop) {
-    }
-
-    /**
-     * @code
-     * static yield_t yield;
-     * boost::system::error_code myec;
-     * func(yield[myec]);
-     * @endcode
-     * @c yield[myec] returns an instance of @c yield_t whose @c ec_ points
-     * to @c myec. The expression @c yield[myec] "binds" @c myec to that
-     * (anonymous) @c yield_t instance, instructing @c func() to store any
-     * @c error_code it might produce into @c myec rather than throwing @c
-     * boost::system::system_error.
-     */
-    yield_t operator[]( boost::system::error_code & ec) const {
-        yield_t tmp{ * this };
-        tmp.ec_ = & ec;
-        return tmp;
-    }
-
-//private:
-    // ptr to bound error_code instance if any
-    boost::system::error_code   *   ec_{ nullptr };
-    // allow calling fiber to "hop" to another thread if it could resume more
-    // quickly that way
-    bool                            allow_hop_;
-};
-
-

-

-

- yield_t is in fact only a placeholder, - a way to trigger Boost.Asio customization. It can bind a boost::system::error_code - for use by the actual handler. -
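The operator[] binding trick can be shown in isolation. Here `token_t` and `errcode` are stand-ins (`errcode` is plain `int`, not boost::system::error_code); the point is that `yield[myec]` copies the token and points its `ec_` at the caller's variable, which downstream code then inspects to decide between storing an error and throwing.

```cpp
#include <cassert>

typedef int errcode;  // illustrative stand-in for boost::system::error_code

class token_t {
public:
    // tok[myec] returns a copy of the token bound to the caller's variable
    token_t operator[]( errcode & ec) const {
        token_t tmp{ *this };
        tmp.ec_ = &ec;
        return tmp;
    }
    errcode * ec_{ nullptr };  // bound error variable, if any
};

// Consumer side: if an error variable was bound, store into it and report
// success; a real implementation would throw system_error otherwise.
bool report( token_t const& t, errcode ec) {
    if ( t.ec_) { *t.ec_ = ec; return true; }  // stored, no throw
    return false;                               // would throw instead
}
```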

-

- In fact there are two canonical instances of yield_t - — yield and yield_hop: -

-

-

-
// canonical instance with allow_hop_ == false
-thread_local yield_t yield{ false };
-// canonical instance with allow_hop_ == true
-thread_local yield_t yield_hop{ true };
-
-

-

-

- We'll get to the differences between these shortly. -

-

- Asio customization is engaged by specializing boost::asio::handler_type<> for yield_t: -

-

-

-
// Handler type specialisation for fibers::asio::yield.
-// When 'yield' is passed as a completion handler which accepts only
-// error_code, use yield_handler<void>. yield_handler will take care of the
-// error_code one way or another.
-template< typename ReturnType >
-struct handler_type< fibers::asio::yield_t, ReturnType( boost::system::error_code) >
-{ typedef fibers::asio::detail::yield_handler< void >    type; };
-
-

-

-

- (There are actually four different specializations in detail/yield.hpp, - one for each of the four Asio async callback signatures we expect to have to - support.) -

-

- The above directs Asio to use yield_handler - as the actual handler for an async operation to which yield - is passed. There's a generic yield_handler<T> - implementation and a yield_handler<void> - specialization. Let's start with the <void> specialization: -

-

-

-
// yield_handler<void> is like yield_handler<T> without value_. In fact it's
-// just like yield_handler_base.
-template<>
-class yield_handler< void >: public yield_handler_base {
-public:
-    explicit yield_handler( yield_t const& y) :
-        yield_handler_base{ y } {
-    }
-
-    // nullary completion callback
-    void operator()() {
-        ( * this)( boost::system::error_code() );
-    }
-
-    // inherit operator()(error_code) overload from base class
-    using yield_handler_base::operator();
-};
-
-

-

-

- async_something(), - having consulted the handler_type<> traits specialization, instantiates - a yield_handler<void> to - be passed as the actual callback for the async operation. yield_handler's - constructor accepts the yield_t - instance (the yield object - passed to the async function) and passes it along to yield_handler_base: -

-

-

-
// This class encapsulates common elements between yield_handler<T> (capturing
-// a value to return from asio async function) and yield_handler<void> (no
-// such value). See yield_handler<T> and its <void> specialization below. Both
-// yield_handler<T> and yield_handler<void> are passed by value through
-// various layers of asio functions. In other words, they're potentially
-// copied multiple times. So key data such as the yield_completion instance
-// must be stored in our async_result<yield_handler<>> specialization, which
-// should be instantiated only once.
-class yield_handler_base {
-public:
-    yield_handler_base( yield_t const& y) :
-        // capture the context* associated with the running fiber
-        ctx_{ boost::fibers::context::active() },
-        // capture the passed yield_t
-        yt_{ y } {
-    }
-
-    // completion callback passing only (error_code)
-    void operator()( boost::system::error_code const& ec) {
-        BOOST_ASSERT_MSG( ycomp_,
-                          "Must inject yield_completion* "
-                          "before calling yield_handler_base::operator()()");
-        BOOST_ASSERT_MSG( yt_.ec_,
-                          "Must inject boost::system::error_code* "
-                          "before calling yield_handler_base::operator()()");
-        // If originating fiber is busy testing completed_ flag, wait until it
-        // has observed (! completed_).
-        yield_completion::lock_t lk{ ycomp_->mtx_ };
-        // Notify a subsequent yield_completion::wait() call that it need not
-        // suspend.
-        ycomp_->completed_ = true;
-        // set the error_code bound by yield_t
-        * yt_.ec_ = ec;
-        // Are we permitted to wake up the suspended fiber on this thread, the
-        // thread that called the completion handler?
-        if ( ( ! ctx_->is_context( fibers::type::pinned_context) ) && yt_.allow_hop_) {
-            // We must not migrate a pinned_context to another thread. If this
-            // isn't a pinned_context, and the application passed yield_hop
-            // rather than yield, migrate this fiber to the running thread.
-            fibers::context::active()->migrate( ctx_);
-        }
-        // either way, wake the fiber
-        fibers::context::active()->set_ready( ctx_);
-    }
-
-//private:
-    boost::fibers::context      *   ctx_;
-    yield_t                         yt_;
-    // We depend on this pointer to yield_completion, which will be injected
-    // by async_result.
-    yield_completion            *   ycomp_{ nullptr };
-};
-
-

-

-

- yield_handler_base stores a - copy of the yield_t instance - — which, as shown above, is only an error_code - and a bool. It also captures the - context* for the currently-running fiber by calling context::active(). -

-

- You will notice that yield_handler_base - has one more data member (ycomp_) - that is initialized to nullptr - by its constructor — though its operator()() method relies on ycomp_ - being non-null. More on this in a moment. -

-

- Having constructed the yield_handler<void> - instance, async_something() - goes on to construct an async_result - specialized for the handler_type<>::type: - in this case, async_result<yield_handler<void>>. - It passes the yield_handler<void> - instance to the new async_result - instance. -

-

-

-
// Without the need to handle a passed value, our yield_handler<void>
-// specialization is just like async_result_base.
-template<>
-class async_result< boost::fibers::asio::detail::yield_handler< void > > :
-    public boost::fibers::asio::detail::async_result_base {
-public:
-    typedef void type;
-
-    explicit async_result( boost::fibers::asio::detail::yield_handler< void > & h):
-        boost::fibers::asio::detail::async_result_base{ h } {
-    }
-};
-
-

-

-

- Naturally that leads us straight to async_result_base: -

-

-

-
// Factor out commonality between async_result<yield_handler<T>> and
-// async_result<yield_handler<void>>
-class async_result_base {
-public:
-    explicit async_result_base( yield_handler_base & h) {
-        // Inject ptr to our yield_completion instance into this
-        // yield_handler<>.
-        h.ycomp_ = & this->ycomp_;
-        // if yield_t didn't bind an error_code, make yield_handler_base's
-        // error_code* point to an error_code local to this object so
-        // yield_handler_base::operator() can unconditionally store through
-        // its error_code*
-        if ( ! h.yt_.ec_) {
-            h.yt_.ec_ = & ec_;
-        }
-    }
-
-    void get() {
-        // Unless yield_handler_base::operator() has already been called,
-        // suspend the calling fiber until that call.
-        ycomp_.wait();
-        // The only way our own ec_ member could have a non-default value is
-        // if our yield_handler did not have a bound error_code AND the
-        // completion callback passed a non-default error_code.
-        if ( ec_) {
-            throw_exception( boost::system::system_error{ ec_ } );
-        }
-        boost::this_fiber::interruption_point();
-    }
-
-private:
-    // If yield_t does not bind an error_code instance, store into here.
-    boost::system::error_code       ec_{};
-    // async_result_base owns the yield_completion because, unlike
-    // yield_handler<>, async_result<> is only instantiated once.
-    yield_completion                ycomp_{};
-};
-
-

-

-

- This is how yield_handler_base::ycomp_ - becomes non-null: async_result_base's - constructor injects a pointer back to its own yield_completion - member. -

-

- Recall that both of the canonical yield_t - instances yield and yield_hop initialize their error_code* - member ec_ to nullptr. If either of these instances is passed - to async_something() - with ec_ still nullptr, the copy stored in yield_handler_base - will likewise have a null ec_. - async_result_base's constructor - then points the ec_ member of yield_handler_base's - stored yield_t - at its own error_code - member. -

-

- The stage is now set. async_something() initiates the actual async operation, arranging - to call its yield_handler<void> instance - on completion. Let's say, for the sake of argument, that the actual async operation's - callback has signature void(error_code). -

-

- But since it's an async operation, control returns at once to async_something(). - async_something() - calls async_result<yield_handler<void>>::get() and - returns its return value. -

-

- async_result<yield_handler<void>>::get() inherits - async_result_base::get(). -

-

- async_result_base::get() immediately - calls yield_completion::wait(). -

-

-

-
// Bundle a completion bool flag with a spinlock to protect it.
-struct yield_completion {
-    typedef fibers::detail::spinlock    mutex_t;
-    typedef std::unique_lock< mutex_t > lock_t;
-
-    mutex_t mtx_{};
-    bool    completed_{ false };
-
-    void wait() {
-        // yield_handler_base::operator()() will set completed_ true and
-        // attempt to wake a suspended fiber. It would be Bad if that call
-        // happened between our detecting (! completed_) and suspending.
-        lock_t lk{ mtx_ };
-        // If completed_ is already set, we're done here: don't suspend.
-        if ( ! completed_) {
-            // suspend(unique_lock<spinlock>) unlocks the lock in the act of
-            // resuming another fiber
-            fibers::context::active()->suspend( lk);
-        }
-    }
-};
-
-

-

-

- Supposing that the pending async operation has not yet completed, yield_completion::completed_ will still be false, - and wait() - will call context::suspend() on the currently-running fiber. -
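The completed-before-wait contract is the heart of yield_completion. This single-threaded sketch drops the spinlock (which only matters when the callback can run concurrently with wait()) and merely counts suspensions instead of really suspending a fiber, so it shows only the control flow: a complete() that arrives before wait() means no suspension at all.

```cpp
#include <cassert>

// Single-threaded sketch of yield_completion's contract. suspend is modeled
// as a counter; real code calls fibers::context::active()->suspend(lk) under
// a spinlock.
struct completion {
    bool completed_{ false };
    int  suspend_count_{ 0 };

    void complete() { completed_ = true; }  // callback side
    void wait() {                            // initiating side
        // If completed_ is already set, don't suspend.
        if ( !completed_) {
            ++suspend_count_;                // real code suspends here
        }
    }
};
```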

-

- Other fibers will now have a chance to run. -

-

- Some time later, the async operation completes. It calls yield_handler<void>::operator()(error_code const&) with an error_code - indicating either success or failure. We'll consider both cases. -

-

- yield_handler<void> explicitly - inherits operator()(error_code const&) from yield_handler_base. -

-

- yield_handler_base::operator()(error_code const&) first sets yield_completion::completed_ - true. This way, if async_something()'s - async operation completes immediately — if yield_handler_base::operator() - is called even before async_result_base::get() - — the calling fiber will not suspend. -

-

- The actual error_code produced - by the async operation is then stored through the stored yield_t::ec_ pointer. - If async_something()'s - caller used (e.g.) yield[my_ec] to - bind a local error_code instance, - the actual error_code value - is stored into the caller's variable. Otherwise, it is stored into async_result_base::ec_. -

-

- Finally we get to the distinction between yield - and yield_hop. -

-

- As described for context::is_context(), a pinned_context - fiber is special to the library and must never be passed to context::migrate(). - We must detect and avoid that case here. -

-

- The yield_t::allow_hop_ bool - indicates whether async_something()'s caller is willing to allow the running - fiber to hop to another thread (yield_hop) - or requires that the fiber resume on the same thread (yield). -

-

- If the caller passed yield_hop - to async_something(), - and the running fiber isn't a pinned_context, - yield_handler_base::operator() passes - the context of the original - fiber — the one on which async_something() was called, captured in yield_handler_base's - constructor — to the current thread's context::migrate(). -

-

- If the running application has more than one thread calling io_service::run(), - that fiber could return from async_something() on a different thread (the one calling yield_handler_base::operator()) - than the one on which it entered async_something(). -

-

- In any case, the fiber is marked as ready to run by passing it to context::set_ready(). - Control then returns from yield_handler_base::operator(): - the callback is done. -

-

- In due course, the fiber yield_handler_base::ctx_ is - resumed. Control returns from context::suspend() to yield_completion::wait(), which - returns to async_result_base::get(). -

-
    -
  • - If the original caller passed yield[my_ec] to async_something() to bind a local error_code - instance, then yield_handler_base::operator() stored its error_code - to the caller's my_ec instance, - leaving async_result_base::ec_ - initialized to success. -
  • -
  • - If the original caller passed yield - to async_something() - without binding a local error_code - variable, then yield_handler_base::operator() stored its error_code - into async_result_base::ec_. - If in fact that error_code - is success, then all is well. -
  • -
  • - Otherwise — the original caller did not bind a local error_code - and yield_handler_base::operator() was called with an error_code - indicating error — async_result_base::get() throws system_error - with that error_code. -
  • -
-

- The case in which async_something()'s completion callback has signature void() is similar. - yield_handler<void>::operator()() invokes the machinery above with a success - error_code. -

-

- A completion callback with signature void(error_code, T) - (that is: in addition to error_code, - callback receives some data item) is handled somewhat differently. For this - kind of signature, handler_type<>::type - specifies yield_handler<T> (for - T other than void). -

-

- A yield_handler<T> reserves - a value_ pointer to a value - of type T: -

-

-

-
// asio uses handler_type<completion token type, signature>::type to decide
-// what to instantiate as the actual handler. Below, we specialize
-// handler_type< yield_t, ... > to indicate yield_handler<>. So when you pass
-// an instance of yield_t as an asio completion token, asio selects
-// yield_handler<> as the actual handler class.
-template< typename T >
-class yield_handler: public yield_handler_base {
-public:
-    // asio passes the completion token to the handler constructor
-    explicit yield_handler( yield_t const& y) :
-        yield_handler_base{ y } {
-    }
-
-    // completion callback passing only value (T)
-    void operator()( T t) {
-        // just like callback passing success error_code
-        (*this)( boost::system::error_code(), std::move(t) );
-    }
-
-    // completion callback passing (error_code, T)
-    void operator()( boost::system::error_code const& ec, T t) {
-        BOOST_ASSERT_MSG( value_,
-                          "Must inject value ptr "
-                          "before calling yield_handler<T>::operator()()");
-        // move the value to async_result<> instance BEFORE waking up a
-        // suspended fiber
-        * value_ = std::move( t);
-        // forward the call to base-class completion handler
-        yield_handler_base::operator()( ec);
-    }
-
-//private:
-    // pointer to destination for eventual value
-    // this must be injected by async_result before operator()() is called
-    T                           *   value_{ nullptr };
-};
-
-

-

-

- This pointer is initialized to nullptr. -

-

- When async_something() - instantiates async_result<yield_handler<T>>: -

-

-

-
// asio constructs an async_result<> instance from the yield_handler specified
-// by handler_type<>::type. A particular asio async method constructs the
-// yield_handler, constructs this async_result specialization from it, then
-// returns the result of calling its get() method.
-template< typename T >
-class async_result< boost::fibers::asio::detail::yield_handler< T > > :
-    public boost::fibers::asio::detail::async_result_base {
-public:
-    // type returned by get()
-    typedef T type;
-
-    explicit async_result( boost::fibers::asio::detail::yield_handler< T > & h) :
-        boost::fibers::asio::detail::async_result_base{ h } {
-        // Inject ptr to our value_ member into yield_handler<>: result will
-        // be stored here.
-        h.value_ = & value_;
-    }
-
-    // asio async method returns result of calling get()
-    type get() {
-        boost::fibers::asio::detail::async_result_base::get();
-        return std::move( value_);
-    }
-
-private:
-    type                            value_{};
-};
-
-

-

-

- this async_result<> - specialization reserves a member of type T - to receive the passed data item, and sets yield_handler<T>::value_ to point to its own data member. -

-

- async_result<yield_handler<T>> - overrides get(). - The override calls async_result_base::get(), - so the calling fiber suspends as described above. -

-

- yield_handler<T>::operator()(error_code, T) - stores its passed T value into - async_result<yield_handler<T>>::value_. -

-

- Then it passes control to yield_handler_base::operator()(error_code) - to deal with waking (and possibly migrating) the original fiber as described - above. -

-

- When async_result<yield_handler<T>>::get() resumes, - it returns the stored value_ - to async_something() - and ultimately to async_something()'s caller. -
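The pointer-injection handshake can be sketched with two plain structs: a hypothetical `string_handler` that holds only pointers, and a `string_result` that owns the destination value and injects those pointers at construction. The crucial ordering — store the value before signalling completion — is preserved; actual fiber suspension is replaced by an assertion.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Handler side: holds only injected pointers, cheap to copy (as the real
// yield_handler<T> must be, since asio copies handlers freely).
struct string_handler {
    std::string * value_{ nullptr };
    bool        * completed_{ nullptr };
    void operator()( std::string s) {
        *value_ = std::move( s);   // store the value BEFORE waking the waiter
        *completed_ = true;
    }
};

// Result side: owns the destination; constructed exactly once.
struct string_result {
    explicit string_result( string_handler & h) {
        h.value_     = &value_;    // inject destination pointers
        h.completed_ = &completed_;
    }
    std::string get() {
        // real code suspends until completed_; this sketch assumes the
        // handler already ran
        assert( completed_);
        return std::move( value_);
    }
    std::string value_;
    bool        completed_{ false };
};
```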

-

- The case of a callback signature void(T) - is handled by having yield_handler<T>::operator()(T) engage - the void(error_code, T) machinery, - passing a success error_code. -

-

- The source code above is found in yield.hpp - and detail/yield.hpp. -

-
-

-

[5] - This mechanism has been proposed as a conventional way to allow the caller - of an async function to specify completion handling: N4045. -

-

[6] - per N4045 -

-
+
@@ -989,7 +51,7 @@

-PrevUpHomeNext +PrevUpHomeNext
diff --git a/doc/html/fiber/callbacks/data_or_exception.html b/doc/html/fiber/callbacks/data_or_exception.html new file mode 100644 index 00000000..2ce9ef1a --- /dev/null +++ b/doc/html/fiber/callbacks/data_or_exception.html @@ -0,0 +1,78 @@ + + + +Data or Exception + + + + + + + + + + + + + + + +
Boost C++ LibrariesHomeLibrariesPeopleFAQMore
+
+
+
+ +

+ But a more natural API for a function that obtains data is to return only + the data on success, throwing an exception on error. +

+

+ As with write() + above, it’s certainly possible to code a read() wrapper in terms of read_ec(). But since a given application is unlikely + to need both, let’s code read() from scratch, leveraging promise::set_exception(): +

+

+

+
std::string read( AsyncAPI & api) {
+    boost::fibers::promise< std::string > promise;
+    boost::fibers::future< std::string > future( promise.get_future() );
+    // Both 'promise' and 'future' will survive until our lambda has been
+    // called.
+    api.init_read([&promise]( AsyncAPI::errorcode ec, std::string const& data) mutable {
+                           if ( ! ec) {
+                               promise.set_value( data);
+                           } else {
+                               promise.set_exception(
+                                       std::make_exception_ptr(
+                                           make_exception("read", ec) ) );
+                           }
+                  });
+    return future.get();
+}
+
+

+

+

+ future::get() will do the right thing, either returning std::string + or throwing an exception. +

+
+ + + +
+
+
+ + diff --git a/doc/html/fiber/callbacks/overview.html b/doc/html/fiber/callbacks/overview.html new file mode 100644 index 00000000..603b28df --- /dev/null +++ b/doc/html/fiber/callbacks/overview.html @@ -0,0 +1,106 @@ + + + +Overview + + + + + + + + + + + + + + + +
+
+
+
+ +

+ One of the primary benefits of Boost.Fiber + is the ability to use asynchronous operations for efficiency, while at the + same time structuring the calling code as if the operations + were synchronous. Asynchronous operations provide completion notification + in a variety of ways, but most involve a callback function of some kind. + This section discusses tactics for interfacing Boost.Fiber + with an arbitrary async operation. +

+

+ For purposes of illustration, consider the following hypothetical API: +

+

+

+
class AsyncAPI {
+public:
+    // constructor acquires some resource that can be read and written
+    AsyncAPI();
+
+    // callbacks accept an int error code; 0 == success
+    typedef int errorcode;
+
+    // write callback only needs to indicate success or failure
+    void init_write( std::string const& data,
+                     std::function< void( errorcode) > const& callback);
+
+    // read callback needs to accept both errorcode and data
+    void init_read( std::function< void( errorcode, std::string const&) > const&);
+
+    // ... other operations ...
+};
+
+

+

+

+ The significant points about each of init_write() and init_read() are: +

+
    +
  • + The AsyncAPI method only + initiates the operation. It returns immediately, while the requested + operation is still pending. +
  • +
  • + The method accepts a callback. When the operation completes, the callback + is called with relevant parameters (error code, data if applicable). +
  • +
+

+ We would like to wrap these asynchronous methods in functions that appear + synchronous by blocking the calling fiber until the operation completes. + This lets us use the wrapper function’s return value to deliver relevant data. +

+
+ + + + + +
[Tip]Tip

+ promise<> and future<> are your friends + here. +

+
+ + + +
+
+
+ + diff --git a/doc/html/fiber/callbacks/return_errorcode.html b/doc/html/fiber/callbacks/return_errorcode.html new file mode 100644 index 00000000..32b6fa76 --- /dev/null +++ b/doc/html/fiber/callbacks/return_errorcode.html @@ -0,0 +1,103 @@ + + + +Return Errorcode + + + + + + + + + + + + + + + +
+
+
+
+ +

+ The AsyncAPI::init_write() + callback passes only an errorcode. + If we simply want the blocking wrapper to return that errorcode, + this is an extremely straightforward use of promise<> and + future<>: +

+

+

+
AsyncAPI::errorcode write_ec( AsyncAPI & api, std::string const& data) {
+    boost::fibers::promise< AsyncAPI::errorcode > promise;
+    boost::fibers::future< AsyncAPI::errorcode > future( promise.get_future() );
+    // In general, even though we block waiting for future::get() and therefore
+    // won't destroy 'promise' until promise::set_value() has been called, we
+    // are advised that with threads it's possible for ~promise() to be
+    // entered before promise::set_value() has returned. While that shouldn't
+    // happen with fibers::promise, a robust way to deal with the lifespan
+    // issue is to bind 'promise' into our lambda. Since promise is move-only,
+    // use initialization capture.
+    api.init_write(
+        data,
+        [&promise]( AsyncAPI::errorcode ec) mutable {
+                            promise.set_value( ec);
+                  });
+    return future.get();
+}
+
+

+

+

+ All we have to do is: +

+
    +
  1. + Instantiate a promise<> of correct type. +
  2. +
  3. + Obtain its future<>. +
  4. +
  5. + Arrange for the callback to call promise::set_value(). +
  6. +
  7. + Block on future::get(). +
  8. +
+
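The four steps can be condensed into a stand-alone sketch. `FakeAsyncAPI` is hypothetical and completes synchronously, so a plain local variable stands in for the promise/future pair here; with the real AsyncAPI, the future is what lets the calling fiber block until the callback fires on another thread.

```cpp
#include <cassert>
#include <functional>
#include <string>

// Hypothetical stand-in: invokes the callback synchronously. The real
// AsyncAPI completes later, possibly on another thread.
struct FakeAsyncAPI {
    typedef int errorcode;  // 0 == success
    void init_write( std::string const& data,
                     std::function< void( errorcode) > const& callback) {
        callback( data.empty() ? 1 : 0);  // pretend empty writes fail
    }
};

// The four steps from the text: a result slot stands in for steps 1-2
// (promise + future), the lambda is step 3 (set_value), and reading the
// slot after init_write() returns is step 4 (future::get()).
FakeAsyncAPI::errorcode write_ec( FakeAsyncAPI & api, std::string const& data) {
    FakeAsyncAPI::errorcode result = -1;
    api.init_write(
        data,
        [&result]( FakeAsyncAPI::errorcode ec) {
            result = ec;
        });
    return result;
}
```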
+ + + + + +
[Note]Note

+ This tactic for resuming a pending fiber works even if the callback is + called on a different thread than the one on which the initiating fiber + is running. In fact, the + example program’s dummy AsyncAPI + implementation illustrates that: it simulates async I/O by launching a + new thread that sleeps briefly and then calls the relevant callback. +

+
+ + + +
+
+
+ + diff --git a/doc/html/fiber/callbacks/return_errorcode_or_data.html b/doc/html/fiber/callbacks/return_errorcode_or_data.html new file mode 100644 index 00000000..a6605947 --- /dev/null +++ b/doc/html/fiber/callbacks/return_errorcode_or_data.html @@ -0,0 +1,75 @@ + + + +Return Errorcode or Data + + + + + + + + + + + + + + + +
+
+
+
+ +

+ Things get a bit more interesting when the async operation’s callback passes + multiple data items of interest. One approach would be to use std::pair<> to capture both: +

+

+

+
std::pair< AsyncAPI::errorcode, std::string > read_ec( AsyncAPI & api) {
+    typedef std::pair< AsyncAPI::errorcode, std::string > result_pair;
+    boost::fibers::promise< result_pair > promise;
+    boost::fibers::future< result_pair > future( promise.get_future() );
+    // We promise that both 'promise' and 'future' will survive until our
+    // lambda has been called.
+    api.init_read([&promise]( AsyncAPI::errorcode ec, std::string const& data) mutable {
+                            promise.set_value( result_pair( ec, data) );
+                  });
+    return future.get();
+}
+
+

+

+

+ Once you bundle the interesting data in std::pair<>, the code is effectively identical + to write_ec(). + You can call it like this: +

+

+

+
std::tie( ec, data) = read_ec( api);
+
+

+

+
+ + + +
+
+
+ + diff --git a/doc/html/fiber/callbacks/success_error_virtual_methods.html b/doc/html/fiber/callbacks/success_error_virtual_methods.html new file mode 100644 index 00000000..c2a40b15 --- /dev/null +++ b/doc/html/fiber/callbacks/success_error_virtual_methods.html @@ -0,0 +1,131 @@ + + + +Success/Error Virtual Methods + + + + + + + + + + + + + + + +
+
+
+
+ +

+ One classic approach to completion notification is to define an abstract + base class with success() + and error() + methods. Code wishing to perform async I/O must derive a subclass, override + each of these methods and pass the async operation a pointer to a subclass + instance. The abstract base class might look like this: +

+

+

+
// every async operation receives a subclass instance of this abstract base
+// class through which to communicate its result
+struct Response {
+    typedef std::shared_ptr< Response > ptr;
+
+    // called if the operation succeeds
+    virtual void success( std::string const& data) = 0;
+
+    // called if the operation fails
+    virtual void error( AsyncAPIBase::errorcode ec) = 0;
+};
+
+

+

+

+ Now the AsyncAPI operation + might look more like this: +

+

+

+
// derive Response subclass, instantiate, pass Response::ptr
+void init_read( Response::ptr);
+
+

+

+

+ We can address this by writing a one-size-fits-all PromiseResponse: +

+

+

+
class PromiseResponse: public Response {
+public:
+    // called if the operation succeeds
+    virtual void success( std::string const& data) {
+        promise_.set_value( data);
+    }
+
+    // called if the operation fails
+    virtual void error( AsyncAPIBase::errorcode ec) {
+        promise_.set_exception(
+                std::make_exception_ptr(
+                    make_exception("read", ec) ) );
+    }
+
+    boost::fibers::future< std::string > get_future() {
+        return promise_.get_future();
+    }
+
+private:
+    boost::fibers::promise< std::string >   promise_;
+};
+
+

+

+

+ Now we can simply obtain the future<> from that PromiseResponse + and wait on its get(): +

+

+

+
std::string read( AsyncAPI & api) {
+    // Because init_read() requires a shared_ptr, we must allocate our
+    // PromiseResponse on the heap, even though we know its lifespan.
+    auto promisep( std::make_shared< PromiseResponse >() );
+    boost::fibers::future< std::string > future( promisep->get_future() );
+    // Both 'promisep' and 'future' will survive until the Response
+    // callback has been called.
+    api.init_read( promisep);
+    return future.get();
+}
+
+

+

+

+ The source code above is found in adapt_callbacks.cpp + and adapt_method_calls.cpp. +
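For readers without Boost at hand, the adaptation can be sketched with standard C++ facilities. This is an illustrative analogue only: std::promise/std::future and std::thread stand in for their Boost.Fiber counterparts, and a simplified Response interface (with an int error code) stands in for the one above.

```cpp
#include <future>
#include <memory>
#include <stdexcept>
#include <string>
#include <thread>

// Simplified analogue of the Response interface above (int stands in for
// AsyncAPIBase::errorcode).
struct Response {
    virtual ~Response() = default;
    virtual void success( std::string const& data) = 0;
    virtual void error( int ec) = 0;
};

// One-size-fits-all adapter: routes either callback into a promise.
class PromiseResponse : public Response {
public:
    void success( std::string const& data) override { promise_.set_value( data); }
    void error( int ec) override {
        promise_.set_exception( std::make_exception_ptr(
            std::runtime_error( "read failed: " + std::to_string( ec) ) ) );
    }
    std::future< std::string > get_future() { return promise_.get_future(); }
private:
    std::promise< std::string > promise_;
};

// Stand-in async API: invokes the callback object from another thread.
void init_read( std::shared_ptr< Response > response) {
    std::thread( [response]{ response->success( "hello"); }).detach();
}

// Synchronous-looking wrapper: blocks the caller until the callback fires.
std::string read() {
    auto promisep = std::make_shared< PromiseResponse >();
    auto future = promisep->get_future();
    init_read( promisep);
    return future.get();
}
```

Here future.get() plays the role that boost::fibers::future<>::get() plays above: it parks the caller until one of the virtual methods delivers a result or an exception.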

+
+ + + +
+
+
+PrevUpHomeNext +
+ + diff --git a/doc/html/fiber/callbacks/success_or_exception.html b/doc/html/fiber/callbacks/success_or_exception.html new file mode 100644 index 00000000..6a440d6c --- /dev/null +++ b/doc/html/fiber/callbacks/success_or_exception.html @@ -0,0 +1,63 @@ + + + +Success or Exception + + + + + + + + + + + + + + + +
Boost C++ LibrariesHomeLibrariesPeopleFAQMore
+
+
+PrevUpHomeNext +
+
+ +

+ A wrapper more aligned with modern C++ practice would use an exception, rather + than an errorcode, to communicate + failure to its caller. This is straightforward to code in terms of write_ec(): +

+

+

+
void write( AsyncAPI & api, std::string const& data) {
+    AsyncAPI::errorcode ec = write_ec( api, data);
+    if ( ec) {
+        throw make_exception("write", ec);
+    }
+}
+
+

+

+

+ The point is that since each fiber has its own stack, you need not repeat + messy boilerplate: normal encapsulation works. +
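The same errorcode-to-exception shape can be shown in portable C++. In this sketch, write_ec_stub() is a hypothetical stand-in for the fiber-blocking write_ec() wrapper; only the conversion logic mirrors the write() above.

```cpp
#include <string>
#include <system_error>

// Hypothetical stand-in for write_ec(): fails on empty input.
std::error_code write_ec_stub( std::string const& data) {
    return data.empty() ? make_error_code( std::errc::invalid_argument)
                        : std::error_code{};
}

// Exception-throwing wrapper, mirroring write() above.
void write( std::string const& data) {
    std::error_code ec = write_ec_stub( data);
    if ( ec) {
        throw std::system_error( ec, "write");
    }
}
```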

+
+ + + +
+
+
+PrevUpHomeNext +
+ + diff --git a/doc/html/fiber/callbacks/then_there_s____boost_asio__.html b/doc/html/fiber/callbacks/then_there_s____boost_asio__.html new file mode 100644 index 00000000..45f32e56 --- /dev/null +++ b/doc/html/fiber/callbacks/then_there_s____boost_asio__.html @@ -0,0 +1,606 @@ + + + +Then There’s Boost.Asio + + + + + + + + + + + + + + + +
Boost C++ LibrariesHomeLibrariesPeopleFAQMore
+
+
+PrevUpHomeNext +
+
+ +

+ Since the simplest form of Boost.Asio asynchronous operation completion token + is a callback function, we could apply the same tactics for Asio as for our + hypothetical AsyncAPI asynchronous + operations. +

+

+ Fortunately we need not. Boost.Asio incorporates a mechanism[4] by which the caller can customize the notification behavior of + any async operation. Therefore we can construct a completion token + which, when passed to a Boost.Asio + async operation, requests blocking for the calling fiber. +

+

+ A typical Asio async function might look something like this:[5] +

+
template < ..., class CompletionToken >
+deduced_return_type
+async_something( ... , CompletionToken&& token)
+{
+    // construct handler_type instance from CompletionToken
+    handler_type<CompletionToken, ...>::type handler(token);
+    // construct async_result instance from handler_type
+    async_result<decltype(handler)> result(handler);
+
+    // ... arrange to call handler on completion ...
+    // ... initiate actual I/O operation ...
+
+    return result.get();
+}
+
+

+ We will engage that mechanism, which is based on specializing Asio’s handler_type<> + template for the CompletionToken + type and the signature of the specific callback. The remainder of this discussion + will refer back to async_something() as the Asio async function under consideration. +

+

+ The implementation described below uses lower-level facilities than promise and future + because the promise mechanism + interacts badly with io_service::stop(). + It produces broken_promise + exceptions. +

+

+ boost::fibers::asio::yield is a completion token of this kind. + yield is an instance of + yield_t: +

+

+

+
class yield_t {
+public:
+    yield_t() = default;
+
+    /**
+     * @code
+     * static yield_t yield;
+     * boost::system::error_code myec;
+     * func(yield[myec]);
+     * @endcode
+     * @c yield[myec] returns an instance of @c yield_t whose @c ec_ points
+     * to @c myec. The expression @c yield[myec] "binds" @c myec to that
+     * (anonymous) @c yield_t instance, instructing @c func() to store any
+     * @c error_code it might produce into @c myec rather than throwing @c
+     * boost::system::system_error.
+     */
+    yield_t operator[]( boost::system::error_code & ec) const {
+        yield_t tmp{ * this };
+        tmp.ec_ = & ec;
+        return tmp;
+    }
+
+//private:
+    // ptr to bound error_code instance if any
+    boost::system::error_code   *   ec_{ nullptr };
+};
+
+

+

+

+ yield_t is in fact only a + placeholder, a way to trigger Boost.Asio customization. It can bind a boost::system::error_code for use by the actual + handler. +

+

+ yield is declared as: +

+

+

+
// canonical instance
+thread_local yield_t yield{};
+
+

+

+

+ Asio customization is engaged by specializing boost::asio::handler_type<> + for yield_t: +

+

+

+
// Handler type specialisation for fibers::asio::yield.
+// When 'yield' is passed as a completion handler which accepts only
+// error_code, use yield_handler<void>. yield_handler will take care of the
+// error_code one way or another.
+template< typename ReturnType >
+struct handler_type< fibers::asio::yield_t, ReturnType( boost::system::error_code) >
+{ typedef fibers::asio::detail::yield_handler< void >    type; };
+
+

+

+

+ (There are actually four different specializations in detail/yield.hpp, + one for each of the four Asio async callback signatures we expect.) +

+

+ The above directs Asio to use yield_handler + as the actual handler for an async operation to which yield + is passed. There’s a generic yield_handler<T> + implementation and a yield_handler<void> + specialization. Let’s start with the <void> specialization: +

+

+

+
// yield_handler<void> is like yield_handler<T> without value_. In fact it's
+// just like yield_handler_base.
+template<>
+class yield_handler< void >: public yield_handler_base {
+public:
+    explicit yield_handler( yield_t const& y) :
+        yield_handler_base{ y } {
+    }
+
+    // nullary completion callback
+    void operator()() {
+        ( * this)( boost::system::error_code() );
+    }
+
+    // inherit operator()(error_code) overload from base class
+    using yield_handler_base::operator();
+};
+
+

+

+

+ async_something(), + having consulted the handler_type<> traits specialization, instantiates + a yield_handler<void> to + be passed as the actual callback for the async operation. yield_handler’s + constructor accepts the yield_t + instance (the yield object + passed to the async function) and passes it along to yield_handler_base: +

+

+

+
// This class encapsulates common elements between yield_handler<T> (capturing
+// a value to return from asio async function) and yield_handler<void> (no
+// such value). See yield_handler<T> and its <void> specialization below. Both
+// yield_handler<T> and yield_handler<void> are passed by value through
+// various layers of asio functions. In other words, they're potentially
+// copied multiple times. So key data such as the yield_completion instance
+// must be stored in our async_result<yield_handler<>> specialization, which
+// should be instantiated only once.
+class yield_handler_base {
+public:
+    yield_handler_base( yield_t const& y) :
+        // capture the context* associated with the running fiber
+        ctx_{ boost::fibers::context::active() },
+        // capture the passed yield_t
+        yt_{ y } {
+    }
+
+    // completion callback passing only (error_code)
+    void operator()( boost::system::error_code const& ec) {
+        BOOST_ASSERT_MSG( ycomp_,
+                          "Must inject yield_completion* "
+                          "before calling yield_handler_base::operator()()");
+        BOOST_ASSERT_MSG( yt_.ec_,
+                          "Must inject boost::system::error_code* "
+                          "before calling yield_handler_base::operator()()");
+        // If originating fiber is busy testing completed_ flag, wait until it
+        // has observed (! completed_).
+        yield_completion::lock_t lk{ ycomp_->mtx_ };
+        // Notify a subsequent yield_completion::wait() call that it need not
+        // suspend.
+        ycomp_->completed_ = true;
+        // set the error_code bound by yield_t
+        * yt_.ec_ = ec;
+        // If ctx_ is still active, e.g. because the async operation
+        // immediately called its callback (this method!) before the asio
+        // async function called async_result_base::get(), we must not set it
+        // ready.
+        if ( fibers::context::active() != ctx_ ) {
+            // wake the fiber
+            fibers::context::active()->set_ready( ctx_);
+        }
+    }
+
+//private:
+    boost::fibers::context      *   ctx_;
+    yield_t                         yt_;
+    // We depend on this pointer to yield_completion, which will be injected
+    // by async_result.
+    yield_completion            *   ycomp_{ nullptr };
+};
+
+

+

+

+ yield_handler_base stores + a copy of the yield_t instance + — which, as shown above, contains only an error_code*. It also captures the context* + for the currently-running fiber by calling context::active(). +

+

+ You will notice that yield_handler_base + has one more data member (ycomp_) + that is initialized to nullptr + by its constructor — though its operator()() method relies on ycomp_ + being non-null. More on this in a moment. +

+

+ Having constructed the yield_handler<void> + instance, async_something() goes on to construct an async_result + specialized for the handler_type<>::type: + in this case, async_result<yield_handler<void>>. + It passes the yield_handler<void> + instance to the new async_result + instance. +

+

+

+
// Without the need to handle a passed value, our yield_handler<void>
+// specialization is just like async_result_base.
+template<>
+class async_result< boost::fibers::asio::detail::yield_handler< void > > :
+    public boost::fibers::asio::detail::async_result_base {
+public:
+    typedef void type;
+
+    explicit async_result( boost::fibers::asio::detail::yield_handler< void > & h):
+        boost::fibers::asio::detail::async_result_base{ h } {
+    }
+};
+
+

+

+

+ Naturally that leads us straight to async_result_base: +

+

+

+
// Factor out commonality between async_result<yield_handler<T>> and
+// async_result<yield_handler<void>>
+class async_result_base {
+public:
+    explicit async_result_base( yield_handler_base & h) {
+        // Inject ptr to our yield_completion instance into this
+        // yield_handler<>.
+        h.ycomp_ = & this->ycomp_;
+        // if yield_t didn't bind an error_code, make yield_handler_base's
+        // error_code* point to an error_code local to this object so
+        // yield_handler_base::operator() can unconditionally store through
+        // its error_code*
+        if ( ! h.yt_.ec_) {
+            h.yt_.ec_ = & ec_;
+        }
+    }
+
+    void get() {
+        // Unless yield_handler_base::operator() has already been called,
+        // suspend the calling fiber until that call.
+        ycomp_.wait();
+        // The only way our own ec_ member could have a non-default value is
+        // if our yield_handler did not have a bound error_code AND the
+        // completion callback passed a non-default error_code.
+        if ( ec_) {
+            throw_exception( boost::system::system_error{ ec_ } );
+        }
+    }
+
+private:
+    // If yield_t does not bind an error_code instance, store into here.
+    boost::system::error_code       ec_{};
+    // async_result_base owns the yield_completion because, unlike
+    // yield_handler<>, async_result<> is only instantiated once.
+    yield_completion                ycomp_{};
+};
+
+

+

+

+ This is how yield_handler_base::ycomp_ + becomes non-null: async_result_base’s + constructor injects a pointer back to its own yield_completion + member. +

+

+ Recall that the canonical yield_t + instance yield initializes + its error_code* + member ec_ to nullptr. If this instance is passed to async_something() + (ec_ is still nullptr), the copy stored in yield_handler_base will likewise have null + ec_. async_result_base’s + constructor sets yield_handler_base’s + yield_t’s ec_ + member to point to its own error_code + member. +

+

+ The stage is now set. async_something() initiates the actual async operation, arranging + to call its yield_handler<void> + instance on completion. Let’s say, for the sake of argument, that the actual + async operation’s callback has signature void(error_code). +

+

+ But since it’s an async operation, control returns at once to async_something(). + async_something() + calls async_result<yield_handler<void>>::get(), + and will return its return value. +

+

+ async_result<yield_handler<void>>::get() inherits + async_result_base::get(). +

+

+ async_result_base::get() immediately + calls yield_completion::wait(). +

+

+

+
// Bundle a completion bool flag with a spinlock to protect it.
+struct yield_completion {
+    typedef fibers::detail::spinlock    mutex_t;
+    typedef std::unique_lock< mutex_t > lock_t;
+
+    mutex_t mtx_{};
+    bool    completed_{ false };
+
+    void wait() {
+        // yield_handler_base::operator()() will set completed_ true and
+        // attempt to wake a suspended fiber. It would be Bad if that call
+        // happened between our detecting (! completed_) and suspending.
+        lock_t lk{ mtx_ };
+        // If completed_ is already set, we're done here: don't suspend.
+        if ( ! completed_) {
+            // suspend(unique_lock<spinlock>) unlocks the lock in the act of
+            // resuming another fiber
+            fibers::context::active()->suspend( lk);
+        }
+    }
+};
+
+

+

+

+ Supposing that the pending async operation has not yet completed, yield_completion::completed_ will still be false, and wait() will call context::suspend() on + the currently-running fiber. +

+

+ Other fibers will now have a chance to run. +
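The completed-flag handshake can be illustrated with threads instead of fibers. This is an analogy only (std::condition_variable::wait() replaces context::suspend(), and notify_one() replaces set_ready()), but it shows why testing completed_ under the lock makes an immediate completion safe.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Thread-based analogue of yield_completion: wait() only blocks if the
// completion callback has not already run.
struct completion {
    std::mutex              mtx_;
    std::condition_variable cv_;
    bool                    completed_{ false };

    void wait() {
        std::unique_lock< std::mutex > lk{ mtx_ };
        // If completed_ is already set, return without suspending at all.
        cv_.wait( lk, [this]{ return completed_; });
    }

    void notify() {
        // Set the flag under the lock so wait() cannot miss it, then wake
        // any suspended waiter.
        { std::lock_guard< std::mutex > lk{ mtx_ }; completed_ = true; }
        cv_.notify_one();
    }
};
```

Calling notify() before wait() — the "operation completed immediately" case — is harmless: wait() observes completed_ under the lock and returns at once.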

+

+ Some time later, the async operation completes. It calls yield_handler<void>::operator()(error_code const&) with an error_code + indicating either success or failure. We’ll consider both cases. +

+

+ yield_handler<void> explicitly + inherits operator()(error_code const&) from yield_handler_base. +

+

+ yield_handler_base::operator()(error_code const&) first sets yield_completion::completed_ + true. This way, if async_something()’s + async operation completes immediately — if yield_handler_base::operator() is called even before async_result_base::get() + — the calling fiber will not suspend. +

+

+ The actual error_code produced + by the async operation is then stored through the stored yield_t::ec_ pointer. + If async_something()’s + caller used (e.g.) yield[my_ec] to bind a local error_code + instance, the actual error_code + value is stored into the caller’s variable. Otherwise, it is stored into + async_result_base::ec_. +

+

+ If the stored fiber context yield_handler_base::ctx_ + is not already running, it is marked as ready to run by passing it to context::set_ready(). + Control then returns from yield_handler_base::operator(): the callback is done. +

+

+ In due course, that fiber is resumed. Control returns from context::suspend() to + yield_completion::wait(), + which returns to async_result_base::get(). +

+
    +
  • + If the original caller passed yield[my_ec] to async_something() to bind a local error_code + instance, then yield_handler_base::operator() stored its error_code + to the caller’s my_ec + instance, leaving async_result_base::ec_ + initialized to success. +
  • + If the original caller passed yield + to async_something() + without binding a local error_code + variable, then yield_handler_base::operator() stored its error_code + into async_result_base::ec_. + If in fact that error_code + is success, then all is well. +
  • + Otherwise — the original caller did not bind a local error_code + and yield_handler_base::operator() was called with an error_code + indicating error — async_result_base::get() throws system_error + with that error_code. +
+

+ The case in which async_something()’s completion callback has signature void() is + similar. yield_handler<void>::operator()() + invokes the machinery above with a success error_code. +

+

+ A completion callback with signature void(error_code, T) + (that is: in addition to error_code, + callback receives some data item) is handled somewhat differently. For this + kind of signature, handler_type<>::type + specifies yield_handler<T> (for + T other than void). +

+

+ A yield_handler<T> reserves + a value_ pointer to a value + of type T: +

+

+

+
// asio uses handler_type<completion token type, signature>::type to decide
+// what to instantiate as the actual handler. Below, we specialize
+// handler_type< yield_t, ... > to indicate yield_handler<>. So when you pass
+// an instance of yield_t as an asio completion token, asio selects
+// yield_handler<> as the actual handler class.
+template< typename T >
+class yield_handler: public yield_handler_base {
+public:
+    // asio passes the completion token to the handler constructor
+    explicit yield_handler( yield_t const& y) :
+        yield_handler_base{ y } {
+    }
+
+    // completion callback passing only value (T)
+    void operator()( T t) {
+        // just like callback passing success error_code
+        (*this)( boost::system::error_code(), std::move(t) );
+    }
+
+    // completion callback passing (error_code, T)
+    void operator()( boost::system::error_code const& ec, T t) {
+        BOOST_ASSERT_MSG( value_,
+                          "Must inject value ptr "
+                          "before calling yield_handler<T>::operator()()");
+        // move the value to async_result<> instance BEFORE waking up a
+        // suspended fiber
+        * value_ = std::move( t);
+        // forward the call to base-class completion handler
+        yield_handler_base::operator()( ec);
+    }
+
+//private:
+    // pointer to destination for eventual value
+    // this must be injected by async_result before operator()() is called
+    T                           *   value_{ nullptr };
+};
+
+

+

+

+ This pointer is initialized to nullptr. +

+

+ When async_something() + instantiates async_result<yield_handler<T>>: +

+

+

+
// asio constructs an async_result<> instance from the yield_handler specified
+// by handler_type<>::type. A particular asio async method constructs the
+// yield_handler, constructs this async_result specialization from it, then
+// returns the result of calling its get() method.
+template< typename T >
+class async_result< boost::fibers::asio::detail::yield_handler< T > > :
+    public boost::fibers::asio::detail::async_result_base {
+public:
+    // type returned by get()
+    typedef T type;
+
+    explicit async_result( boost::fibers::asio::detail::yield_handler< T > & h) :
+        boost::fibers::asio::detail::async_result_base{ h } {
+        // Inject ptr to our value_ member into yield_handler<>: result will
+        // be stored here.
+        h.value_ = & value_;
+    }
+
+    // asio async method returns result of calling get()
+    type get() {
+        boost::fibers::asio::detail::async_result_base::get();
+        return std::move( value_);
+    }
+
+private:
+    type                            value_{};
+};
+
+

+

+

+ this async_result<> + specialization reserves a member of type T + to receive the passed data item, and sets yield_handler<T>::value_ to point to its own data member. +

+

+ async_result<yield_handler<T>> + overrides get(). + The override calls async_result_base::get(), + so the calling fiber suspends as described above. +

+

+ yield_handler<T>::operator()(error_code, T) stores + its passed T value into + async_result<yield_handler<T>>::value_. +

+

+ Then it passes control to yield_handler_base::operator()(error_code) to deal with waking the original fiber as + described above. +

+

+ When async_result<yield_handler<T>>::get() resumes, + it returns the stored value_ + to async_something() + and ultimately to async_something()’s caller. +

+

+ The case of a callback signature void(T) + is handled by having yield_handler<T>::operator()(T) engage + the void(error_code, T) machinery, + passing a success error_code. +

+

+ The source code above is found in yield.hpp + and detail/yield.hpp. +

+
+

+

[4] + This mechanism has been proposed as a conventional way to allow the caller + of an arbitrary async function to specify completion handling: N4045. +

+

[5] + per N4045 +

+
+
+ + + +
+
+
+PrevUpHomeNext +
+ + diff --git a/doc/html/fiber/custom.html b/doc/html/fiber/custom.html index bf5fa87f..52c4c1f9 100644 --- a/doc/html/fiber/custom.html +++ b/doc/html/fiber/custom.html @@ -32,7 +32,7 @@

As noted in the Scheduling section, by default - Boost.Fiber uses its own round_robin scheduler + Boost.Fiber uses its own round_robin scheduler for each thread. To control the way Boost.Fiber schedules ready fibers on a particular thread, in general you must follow several steps. This section discusses those steps, whereas Scheduling @@ -61,16 +61,16 @@

The first essential point is that we must associate an integer priority with - each fiber.[10] + each fiber.[9]

- One might suggest deriving a custom fiber subclass to store such + One might suggest deriving a custom fiber subclass to store such properties. There are a couple of reasons for the present mechanism.

  1. Boost.Fiber provides a number of different - ways to launch a fiber. (Consider fibers::async().) Higher-level + ways to launch a fiber. (Consider fibers::async().) Higher-level libraries might introduce additional such wrapper functions. A custom scheduler must associate its custom properties with every fiber in the thread, not only the ones explicitly launched by instantiating a @@ -85,7 +85,7 @@ a fiber on that thread.
  2. - The fiber class is actually just a handle to internal context data. + The fiber class is actually just a handle to internal context data. A subclass of fiber would not add data to context.
  3. @@ -96,8 +96,8 @@ the rest of your application.

    - Instead of deriving a custom scheduler fiber properties subclass from fiber, - you must instead derive it from fiber_properties. + Instead of deriving a custom scheduler fiber properties subclass from fiber, + you must instead derive it from fiber_properties.

    @@ -138,7 +138,7 @@

    1

    - Your subclass constructor must accept a context* + Your subclass constructor must accept a context* and pass it to the fiber_properties constructor.

    @@ -170,7 +170,7 @@ Scheduler Class

    - Now we can derive a custom scheduler from sched_algorithm_with_properties<>, + Now we can derive a custom scheduler from sched_algorithm_with_properties<>, specifying our custom property class priority_props as the template parameter.

    @@ -277,7 +277,7 @@

    2

    - You must override the sched_algorithm_with_properties::awakened() + You must override the sched_algorithm_with_properties::awakened() method. This is how your scheduler receives notification of a fiber that has become ready to run. @@ -293,7 +293,7 @@

    4

    - You must override the sched_algorithm_with_properties::pick_next() + You must override the sched_algorithm_with_properties::pick_next() method. This is how your scheduler actually advises the fiber manager of the next fiber to run. @@ -302,7 +302,7 @@

    5

    - You must override sched_algorithm_with_properties::has_ready_fibers() + You must override sched_algorithm_with_properties::has_ready_fibers() to inform the fiber manager of the state of your ready queue.

    @@ -310,7 +310,7 @@

    6

    - Overriding sched_algorithm_with_properties::property_change() + Overriding sched_algorithm_with_properties::property_change() is optional. This override handles the case in which the running fiber changes the priority of another ready fiber: a fiber already in our queue. In that @@ -328,7 +328,7 @@

Our example priority_scheduler - doesn't override sched_algorithm_with_properties::new_properties(): + doesn't override sched_algorithm_with_properties::new_properties(): we're content with allocating priority_props instances on the heap.

@@ -338,9 +338,9 @@ Default Scheduler

- You must call use_scheduling_algorithm() at the start + You must call use_scheduling_algorithm() at the start of each thread on which you want Boost.Fiber - to use your custom scheduler rather than its own default round_robin. + to use your custom scheduler rather than its own default round_robin. Specifically, you must call use_scheduling_algorithm() before performing any other Boost.Fiber operations on that thread.

@@ -360,8 +360,8 @@ Properties

- The running fiber can access its own fiber_properties subclass - instance by calling this_fiber::properties(). Although + The running fiber can access its own fiber_properties subclass + instance by calling this_fiber::properties(). Although properties<>() is a nullary function, you must pass, as a template parameter, the fiber_properties subclass.

@@ -372,9 +372,9 @@

- Given a fiber instance still connected with a running fiber (that - is, not fiber::detach()ed), you may access that fiber's properties - using fiber::properties(). As with this_fiber::properties<>(), you must pass your fiber_properties subclass as the template + Given a fiber instance still connected with a running fiber (that + is, not fiber::detach()ed), you may access that fiber's properties + using fiber::properties(). As with this_fiber::properties<>(), you must pass your fiber_properties subclass as the template parameter.

@@ -397,11 +397,11 @@ As shown in the launch() function above, it is reasonable to launch a fiber and immediately set relevant properties -- such as, for instance, its priority. Your custom scheduler can - then make use of this information next time the fiber manager calls sched_algorithm_with_properties::pick_next(). + then make use of this information next time the fiber manager calls sched_algorithm_with_properties::pick_next().



-

[10] +

[9] A previous version of the Fiber library implicitly tracked an int priority for each fiber, even though the default scheduler ignored it. This has been dropped, since the library now supports arbitrary scheduler-specific fiber diff --git a/doc/html/fiber/fiber_mgmt.html b/doc/html/fiber/fiber_mgmt.html index 5e916862..c0bbd23c 100644 --- a/doc/html/fiber/fiber_mgmt.html +++ b/doc/html/fiber/fiber_mgmt.html @@ -66,12 +66,6 @@ template< typename PROPS > PROPS & properties(); -void interruption_point(); -bool interruption_requested() noexcept; -bool interruption_enabled() noexcept; -class disable_interruption; -class restore_interruption; - }}

@@ -79,8 +73,8 @@ Tutorial

- Each fiber represents a micro-thread which will be launched and managed - cooperatively by a scheduler. Objects of type fiber are move-only. + Each fiber represents a micro-thread which will be launched and managed + cooperatively by a scheduler. Objects of type fiber are move-only.

boost::fibers::fiber f1; // not-a-fiber
 
@@ -117,7 +111,7 @@
   // this leads to undefined behaviour
 

- The spawned fiber does not immediately start running. It is enqueued + The spawned fiber does not immediately start running. It is enqueued in the list of ready-to-run fibers, and will run when the scheduler gets around to it.

@@ -126,19 +120,19 @@ Exceptions

- An exception escaping from the function or callable object passed to the fiber + An exception escaping from the function or callable object passed to the fiber constructor calls std::terminate(). - If you need to know which exception was thrown, use future<> or - packaged_task<>. + If you need to know which exception was thrown, use future<> or + packaged_task<>.

Detaching

- A fiber can be detached by explicitly invoking the fiber::detach() member - function. After fiber::detach() is called on a fiber object, that + A fiber can be detached by explicitly invoking the fiber::detach() member + function. After fiber::detach() is called on a fiber object, that object represents not-a-fiber. The fiber object may then safely be destroyed.

@@ -147,23 +141,22 @@ constructor

Boost.Fiber provides a number of ways to wait for a running fiber to complete. You can coordinate even with a detached fiber - using a mutex, or condition_variable, or + using a mutex, or condition_variable, or any of the other synchronization objects provided by the library.

If a detached fiber is still running when the thread's main fiber terminates, - that fiber will be interrupted and shut - down.[1] + the thread will not shut down.

Joining

- In order to wait for a fiber to finish, the fiber::join() member function - of the fiber object can be used. fiber::join() will block - until the fiber object has completed. + In order to wait for a fiber to finish, the fiber::join() member function + of the fiber object can be used. fiber::join() will block + until the fiber object has completed.

void some_fn() {
     ...
@@ -174,18 +167,18 @@ constructor
 f.join();
 

- If the fiber has already completed, then fiber::join() returns immediately - and the joined fiber object becomes not-a-fiber. + If the fiber has already completed, then fiber::join() returns immediately + and the joined fiber object becomes not-a-fiber.

Destruction

- When a fiber object representing a valid execution context (the fiber - is fiber::joinable()) is destroyed, the program terminates. If - you intend the fiber to outlive the fiber object that launched it, - use the fiber::detach() method. + When a fiber object representing a valid execution context (the fiber + is fiber::joinable()) is destroyed, the program terminates. If + you intend the fiber to outlive the fiber object that launched it, + use the fiber::detach() method.

{
     boost::fibers::fiber f( some_fn);
@@ -196,157 +189,16 @@ constructor
     f.detach();
 } // okay, program continues
 
-

- - Interruption -

-

- A valid fiber can be interrupted by invoking its fiber::interrupt() member - function. The next time that fiber executes one of the specific interruption-points - with interruption enabled, a fiber_interrupted - exception will be thrown. If this exception is not caught, the fiber will be - terminated, its stack unwound, its stack objects properly destroyed. -

-

- (fiber_interrupted, being thrown - by the library, is similarly caught by the library. It does not cause the program - to terminate.) -

-

- With disable_interruption a fiber can defer being - interrupted. -

-
// interruption enabled at this point
-{
-    boost::this_fiber::disable_interruption di1;
-    // interruption disabled
-    {
-        boost::this::fiber::disable_interruption di2;
-        // interruption still disabled
-    } // di2 destroyed; interruption state restored
-    // interruption still disabled
-} // di destroyed; interruption state restored
-// interruption enabled
-
-

- At any point, the interruption state for the current thread can be queried - by calling this_fiber::interruption_enabled(). -

-

- (But consider fiber f1 running - a packaged_task<>. Suppose f1 - is interrupted. Its associated future<> will be set with - fiber_interrupted. When fiber - f2 calls future::get() on - that future, get() will - immediately rethrow fiber_interrupted - — regardless of any disable_interruption instance f2 might have constructed.) -

-

- The following interruption-points - are defined and will throw fiber_interrupted - if this_fiber::interruption_requested() and - this_fiber::interruption_enabled(). -

-

- + Fiber IDs

Objects of class fiber::id can be - used to identify fibers. Each running fiber has a unique fiber::id obtainable - from the corresponding fiber -by calling the fiber::get_id() member + used to identify fibers. Each running fiber has a unique fiber::id obtainable + from the corresponding fiber +by calling the fiber::get_id() member function. Objects of class fiber::id can be copied, and used as keys in associative containers: the full range of comparison operators is provided. They can also be written to an output stream using the @@ -360,16 +212,59 @@ by calling the fiber::id yield a total order for every non-equal fiber::id.

-
-

-

[1] - This treatment of detached fibers depends on a detached fiber eventually - either terminating or reaching one of the specified interruption-points. - Note that since this_fiber::yield() is not - an interruption point, a detached fiber whose only interaction with the Fiber - library is yield() - cannot cleanly be terminated. -

+

+ + Enumeration + launch_policy +

+

+ launch_policy specifies whether + control passes immediately into a newly-launched fiber. +

+
enum class launch_policy {
+    dispatch,
+    post
+};
+
+

+ + dispatch +

+
+

+
+
Effects:
+

+ A fiber launched with launch_policy + == dispatch + is entered immediately. In other words, launching a fiber with dispatch suspends the caller (the previously-running + fiber) until the fiber scheduler has a chance to resume it later. +

+
+
+

+ + post +

+
+

+
+
Effects:
+

+ A fiber launched with launch_policy + == post + is passed to the fiber scheduler as ready, but it is not yet entered. + The caller (the previously-running fiber) continues executing. The newly-launched + fiber will be entered when the fiber scheduler has a chance to resume + it later. +

+
Note:
+

+ If launch_policy is not + explicitly specified, post + is the default. +

+
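The difference between the two policies is easiest to see as an event ordering. The sketch below is a Boost-free toy model, not Boost.Fiber itself: `toy_scheduler`, `trace` and `launch` are illustrative names, and where a real `dispatch` fiber has its own stack and suspends the caller mid-function, the model approximates that by running the callable to completion before the caller continues.

```cpp
#include <deque>
#include <functional>
#include <string>
#include <vector>

// Toy model of the two launch policies. A "fiber" here is just a callable
// and the "scheduler" a FIFO ready queue; that is enough to show the
// ordering difference between dispatch and post.
enum class launch_policy { dispatch, post };

struct toy_scheduler {
    std::deque<std::function<void()>> ready;

    void launch(launch_policy lpol, std::function<void()> fn) {
        if (lpol == launch_policy::dispatch) {
            fn();                  // entered immediately; caller waits
        } else {
            ready.push_back(fn);   // marked ready; caller continues
        }
    }

    void run() {                   // drain the ready queue
        while (!ready.empty()) {
            auto fn = ready.front();
            ready.pop_front();
            fn();
        }
    }
};

// Record the order of events for a given policy.
std::vector<std::string> trace(launch_policy lpol) {
    std::vector<std::string> events;
    toy_scheduler sched;
    events.push_back("launch");
    sched.launch(lpol, [&] { events.push_back("fiber"); });
    events.push_back("caller continues");
    sched.run();
    return events;
}
```

With dispatch the trace reads launch, fiber, caller continues; with post it reads launch, caller continues, fiber.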
diff --git a/doc/html/fiber/fiber_mgmt/fiber.html b/doc/html/fiber/fiber_mgmt/fiber.html index b85a408f..714d7ad3 100644 --- a/doc/html/fiber/fiber_mgmt/fiber.html +++ b/doc/html/fiber/fiber_mgmt/fiber.html @@ -38,9 +38,15 @@ template<typenameFn,typename...Args>fiber(Fn&&,Args&&...); + template<typenameFn,typename...Args> + fiber(launch_policy,Fn&&,Args&&...); + template<typenameStackAllocator,typenameFn,typename...Args>fiber(std::allocator_arg_t,StackAllocator,Fn&&,Args&&...); + template<typenameStackAllocator,typenameFn,typename...Args> + fiber(launch_policy,std::allocator_arg_t,StackAllocator,Fn&&,Args&&...); + ~fiber();fiber(fiberconst&)=delete; @@ -61,8 +67,6 @@ voidjoin(); - voidinterrupt()noexcept; - template<typenamePROPS>PROPS&properties();}; @@ -88,7 +92,7 @@
Effects:

- Constructs a fiber instance that refers to not-a-fiber. + Constructs a fiber instance that refers to not-a-fiber.

Postconditions:

@@ -108,8 +112,15 @@

template< typename Fn, typename ... Args >
 fiber( Fn && fn, Args && ... args);
 
+template< typename Fn, typename ... Args >
+fiber( launch_policy lpol, Fn && fn, Args && ... args);
+
 template< typename StackAllocator, typename Fn, typename ... Args >
 fiber( std::allocator_arg_t, StackAllocator salloc, Fn && fn, Args && ... args);
+
+template< typename StackAllocator, typename Fn, typename ... Args >
+fiber( launch_policy lpol, std::allocator_arg_t, StackAllocator salloc,
+       Fn && fn, Args && ... args);
 

@@ -122,7 +133,12 @@
Effects:

fn is copied or moved - into internal storage for access by the new fiber. + into internal storage for access by the new fiber. If launch_policy is + specified (or defaulted) to post, + the new fiber is marked ready and will be entered at + the next opportunity. If launch_policy + is specified as dispatch, + the calling fiber is suspended and the new fiber is entered immediately.

Postconditions:

@@ -140,7 +156,7 @@ is required to allocate a stack for the internal execution_context. If StackAllocator is not explicitly passed, the default stack allocator depends on BOOST_USE_SEGMENTED_STACKS: if defined, - you will get a segmented_stack, else a fixedsize_stack. + you will get a segmented_stack, else a fixedsize_stack.

See also:

@@ -162,7 +178,7 @@

Effects:

Transfers ownership of the fiber managed by other - to the newly constructed fiber instance. + to the newly constructed fiber instance.

Postconditions:

@@ -214,15 +230,15 @@

Effects:

- If the fiber is fiber::joinable(), calls std::terminate. + If the fiber is fiber::joinable(), calls std::terminate. Destroys *this.

Note:

The programmer must ensure that the destructor is never executed while - the fiber is still fiber::joinable(). Even if you know - that the fiber has completed, you must still call either fiber::join() or - fiber::detach() before destroying the fiber + the fiber is still fiber::joinable(). Even if you know + that the fiber has completed, you must still call either fiber::join() or + fiber::detach() before destroying the fiber object.

@@ -269,7 +285,7 @@
Preconditions:

- the fiber is fiber::joinable(). + the fiber is fiber::joinable().

Effects:

@@ -282,20 +298,14 @@

Throws:

- fiber_interrupted if - the current fiber is interrupted or fiber_error + fiber_error

Error Conditions:

resource_deadlock_would_occur: if this->get_id() == boost::this_fiber::get_id(). invalid_argument: - if the fiber is not fiber::joinable(). -

-
Notes:
-

- join() - is one of the predefined interruption-points. + if the fiber is not fiber::joinable().

@@ -315,12 +325,12 @@
Preconditions:

- the fiber is fiber::joinable(). + the fiber is fiber::joinable().

Effects:

The fiber of execution becomes detached, and no longer has an associated - fiber object. + fiber object.

Postconditions:

@@ -334,7 +344,7 @@

Error Conditions:

invalid_argument: if the fiber is - not fiber::joinable(). + not fiber::joinable().

@@ -364,36 +374,7 @@

See also:

- this_fiber::get_id() -

-
- -

-

-
- - - Member function interrupt() -
-

-

-
void interrupt() noexcept;
-
-
-

-
-
Effects:
-

- If *this - refers to a fiber of execution, request that the fiber will be interrupted - the next time it enters one of the predefined interruption-points - with interruption enabled, or if it is currently blocked - in a call to one of the predefined interruption-points - with interruption enabled. -

-
Throws:
-

- Nothing + this_fiber::get_id()

@@ -416,8 +397,8 @@
Preconditions:

*this - refers to a fiber of execution. use_scheduling_algorithm() has - been called from this thread with a subclass of sched_algorithm_with_properties<> with + refers to a fiber of execution. use_scheduling_algorithm() has + been called from this thread with a subclass of sched_algorithm_with_properties<> with the same template argument PROPS.

Returns:
@@ -431,7 +412,7 @@

Note:

- sched_algorithm_with_properties<> provides + sched_algorithm_with_properties<> provides a way for a user-coded scheduler to associate extended properties, such as priority, with a fiber instance. This method allows access to those user-provided properties. @@ -544,7 +525,7 @@

Effects:

Directs Boost.Fiber to use SchedAlgo, which must be a concrete - subclass of sched_algorithm, as the scheduling + subclass of sched_algorithm, as the scheduling algorithm for all fibers in the current thread. Pass any required SchedAlgo constructor arguments as args. @@ -555,7 +536,7 @@ make that thread call use_scheduling_algorithm() before any other Boost.Fiber entry point. If no scheduler has been set for the current thread by the time Boost.Fiber needs to use - it, the library will create a default round_robin instance + it, the library will create a default round_robin instance for this thread.

Throws:
diff --git a/doc/html/fiber/fiber_mgmt/this_fiber.html b/doc/html/fiber/fiber_mgmt/this_fiber.html index e8f7b99a..522c336c 100644 --- a/doc/html/fiber/fiber_mgmt/this_fiber.html +++ b/doc/html/fiber/fiber_mgmt/this_fiber.html @@ -34,11 +34,6 @@ That is, in many respects the main fiber on each thread can be treated like an explicitly-launched fiber.

-

- However, unlike an explicitly-launched fiber, if fiber_interrupted - is thrown (or rethrown) on a thread's main fiber without being caught, the - Fiber library cannot catch it: std::terminate() will be called. -

namespace boost {
 namespace this_fiber {
 
@@ -51,12 +46,6 @@
 template< typename PROPS >
 PROPS & properties();
 
-void interruption_point();
-bool interruption_requested() noexcept;
-bool interruption_enabled() noexcept;
-class disable_interruption;
-class restore_interruption;
-
 }}
 

@@ -111,13 +100,7 @@

Throws:

- fiber_interrupted if - the current fiber is interrupted or timeout-related exceptions. -

-
Note:
-

- sleep_until() - is one of the predefined interruption-points. + timeout-related exceptions.

Note:

@@ -160,13 +143,7 @@

Throws:

- fiber_interrupted if - the current fiber is interrupted or timeout-related exceptions. -

-
Note:
-

- sleep_for() - is one of the predefined interruption-points. + timeout-related exceptions.

Note:

@@ -203,11 +180,8 @@

Note:

- yield() - is not an interruption point. A fiber that calls - yield() - is not suspended: it is immediately passed to the scheduler as ready - to run. + A fiber that calls yield() is not suspended: it is immediately + passed to the scheduler as ready to run.

@@ -231,8 +205,8 @@
Preconditions:

- use_scheduling_algorithm() has been called from - this thread with a subclass of sched_algorithm_with_properties<> with + use_scheduling_algorithm() has been called from + this thread with a subclass of sched_algorithm_with_properties<> with the same template argument PROPS.

Returns:
@@ -247,7 +221,7 @@

Note:

- sched_algorithm_with_properties<> provides + sched_algorithm_with_properties<> provides a way for a user-coded scheduler to associate extended properties, such as priority, with a fiber instance. This function allows access to those user-provided properties. @@ -263,285 +237,6 @@

-

-

-
- - - Non-member - function this_fiber::interruption_point() -
-

-

-
#include <boost/fiber/interruption.hpp>
-
-void interruption_point();
-
-
-

-
-
Effects:
-

- Check to see if the current fiber has been interrupted. -

-
Throws:
-

- fiber_interrupted if - this_fiber::interruption_enabled() and - this_fiber::interruption_requested() both - return true. -

-
-
-

-

-
- - - Non-member - function this_fiber::interruption_requested() -
-

-

-
#include <boost/fiber/interruption.hpp>
-
-bool interruption_requested() noexcept;
-
-
-

-
-
Returns:
-

- true if interruption has - been requested for the current fiber, false - otherwise. -

-
Throws:
-

- Nothing. -

-
-
-

-

-
- - - Non-member - function this_fiber::interruption_enabled() -
-

-

-
#include <boost/fiber/interruption.hpp>
-
-bool interruption_enabled() noexcept;
-
-
-

-
-
Returns:
-

- true if interruption is - enabled for the current fiber, false - otherwise. -

-
Throws:
-

- Nothing. -

-
Note:
-

- Interruption is enabled by default. -

-
-
-

-

-
- - - Class - disable_interruption -
-

-

-
#include <boost/fiber/interruption.hpp>
-
-class disable_interruption {
-public:
-    disable_interruption() noexcept;
-    ~disable_interruption() noexcept;
-    disable_interruption(const disable_interruption&) = delete;
-    disable_interruption& operator=(const disable_interruption&) = delete;
-};
-
-
- - Constructor -
-
disable_interruption() noexcept;
-
-
-

-
-
Effects:
-

- Stores the current state of this_fiber::interruption_enabled() and - disables interruption for the current fiber. -

-
Postconditions:
-

- this_fiber::interruption_enabled() returns - false for the current - fiber. -

-
Throws:
-

- Nothing. -

-
Note:
-

- Nesting of disable_interruption - instances matters. Constructing a disable_interruption - while this_fiber::interruption_enabled() == false - has no effect. -

-
-
-
- - Destructor -
-
~disable_interruption() noexcept;
-
-
-

-
-
Preconditions:
-

- Must be called from the same fiber on which *this was constructed. -

-
Effects:
-

- Restores the state of this_fiber::interruption_enabled() for - the current fiber to the state saved at construction of *this. -

-
Postconditions:
-

- this_fiber::interruption_enabled() for - the current fiber returns the value stored by the constructor of *this. -

-
Note:
-

- Destroying a disable_interruption - constructed while this_fiber::interruption_enabled() == false - has no effect. -

-
-
-

-

-
- - - Class - restore_interruption -
-

-

-
#include <boost/fiber/interruption.hpp>
-
-class restore_interruption {
-public:
-    explicit restore_interruption(disable_interruption&) noexcept;
-    ~restore_interruption() noexcept;
-    restore_interruption(const restore_interruption&) = delete;
-    restore_interruption& operator=(const restore_interruption&) = delete;
-};
-
-
- - Constructor -
-
explicit restore_interruption(disable_interruption& disabler) noexcept;
-
-
-

-
-
Preconditions:
-

- Must be called from the same fiber on which disabler - was constructed. -

-
Effects:
-

- Restores the current state of this_fiber::interruption_enabled() for - the current fiber to that saved in disabler. -

-
Postconditions:
-

- this_fiber::interruption_enabled() for - the current fiber returns the value stored in the constructor of disabler. -

-
Throws:
-

- Nothing. -

-
Note:
-

- Nesting of restore_interruption - instances does not matter: only the disable_interruption instance - passed to the constructor matters. Constructing a restore_interruption - with a disable_interruption - constructed while this_fiber::interruption_enabled() == false - has no effect. -

-
-
-
- - Destructor -
-
~restore_interruption() noexcept;
-
-
-

-
-
Preconditions:
-

- Must be called from the same fiber on which *this was constructed. -

-
Effects:
-

- Disables interruption for the current fiber. -

-
Postconditions:
-

- this_fiber::interruption_enabled() for - the current fiber returns false. -

-
Note:
-

- Destroying a restore_interruption - constructed with a disable_interruption constructed - while this_fiber::interruption_enabled() == false - has no effect. -

-
-
-
void foo() {
-    // interruption is enabled
-    {
-        boost::this_fiber::disable_interruption di;
-        // interruption is disabled
-        {
-            boost::this_fiber::restore_interruption ri( di);
-            // interruption now enabled
-        } // ri destroyed, interruption disabled again
-    } // di destructed, interruption state restored
-    // interruption now enabled
-}
-
diff --git a/doc/html/fiber/fls.html b/doc/html/fiber/fls.html index f493b807..9ad3f56a 100644 --- a/doc/html/fiber/fls.html +++ b/doc/html/fiber/fls.html @@ -40,9 +40,9 @@ at fiber exit

- When a fiber exits, the objects associated with each fiber_specific_ptr instance + When a fiber exits, the objects associated with each fiber_specific_ptr instance are destroyed. By default, the object pointed to by a pointer p is destroyed by invoking delete p, - but this can be overridden for a specific instance of fiber_specific_ptr by + but this can be overridden for a specific instance of fiber_specific_ptr by providing a cleanup routine func to the constructor. In this case, the object is destroyed by invoking func(p). The cleanup functions are called in an unspecified order. @@ -100,9 +100,9 @@

Effects:

- Construct a fiber_specific_ptr object for storing + Construct a fiber_specific_ptr object for storing a pointer to an object of type T - specific to each fiber. When reset() is called, or the fiber exits, fiber_specific_ptr calls + specific to each fiber. When reset() is called, or the fiber exits, fiber_specific_ptr calls fn(this->get()). If the no-arguments constructor is used, the default delete-based cleanup function will be used to destroy the fiber-local objects. @@ -125,7 +125,7 @@

Requires:

- All the fiber specific instances associated to this fiber_specific_ptr + All the fiber specific instances associated to this fiber_specific_ptr (except maybe the one associated to this fiber) must be nullptr.

@@ -140,7 +140,7 @@ The requirement is an implementation restriction. If the destructor promised to delete instances for all fibers, the implementation would be forced to maintain a list of all the fibers having an associated specific ptr, - which is against the goal of fiber specific data. In general, a fiber_specific_ptr should + which is against the goal of fiber specific data. In general, a fiber_specific_ptr should outlive the fibers that use it.

@@ -152,7 +152,7 @@

Care needs to be taken to ensure that any fibers still running after an instance - of fiber_specific_ptr has been destroyed do not call + of fiber_specific_ptr has been destroyed do not call any member functions on that instance.

@@ -187,7 +187,7 @@ Note

- The initial value associated with an instance of fiber_specific_ptr is + The initial value associated with an instance of fiber_specific_ptr is nullptr for each fiber.

diff --git a/doc/html/fiber/integration.html b/doc/html/fiber/integration.html index 24e71806..f5545147 100644 --- a/doc/html/fiber/integration.html +++ b/doc/html/fiber/integration.html @@ -7,7 +7,7 @@ - + @@ -20,111 +20,22 @@

-PrevUpHomeNext +PrevUpHomeNext
-

- - Overview -

-

- As always with cooperative concurrency, it is important not to let any one - fiber monopolize the processor too long: that could starve other - ready fibers. This section discusses a couple of solutions. -

-

- - Event-Driven - Program -

-

- Consider a classic event-driven program, organized around a main loop that - fetches and dispatches incoming I/O events. You are introducing Boost.Fiber - because certain asynchronous I/O sequences are logically sequential, and for - those you want to write and maintain code that looks and acts sequential. -

-

- You are launching fibers on the application's main thread because certain of - their actions will affect its user interface, and the application's UI framework - permits UI operations only on the main thread. Or perhaps those fibers need - access to main-thread data, and it would be too expensive in runtime (or development - time) to robustly defend every such data item with thread synchronization primitives. -

-

- You must ensure that the application's main loop itself - doesn't monopolize the processor: that the fibers it launches will get the - CPU cycles they need. -

-

- The solution is the same as for any fiber that might claim the CPU for an extended - time: introduce calls to this_fiber::yield(). The most straightforward - approach is to call yield() - on every iteration of your existing main loop. In effect, this unifies the - application's main loop with Boost.Fiber's - internal main loop. yield() - allows the fiber manager to run any fibers that have become ready since the - previous iteration of the application's main loop. When these fibers have had - a turn, control passes to the thread's main fiber, which returns from yield() and - resumes the application's main loop. -

-

- - Integrating - with Boost.Asio -

-

- More challenging is when the application's main loop is embedded in some other - library or framework. Such an application will typically, after performing - all necessary setup, pass control to some form of run() function from which control does not return - until application shutdown. -

-

- A Boost.Asio - program might call io_service::run() - in this way. -

-

- The trick here is to arrange to pass control to this_fiber::yield() frequently. - You can use an Asio - timer for this purpose. Instantiate the timer, arranging to call a - handler function when the timer expires: -

-

- [run_service] -

-

- The handler function calls yield(), then resets the timer and arranges to wake - up again on expiration: -

-

- [timer_handler] -

-

- Then instead of directly calling io_service::run(), - your application would call the above run_service(io_service&) wrapper. -

-

- Since, in this example, we always pass control to the fiber manager via yield(), - the calling fiber is never blocked. Therefore there is always at least one - ready fiber. Therefore the fiber manager never sleeps. -

-

- Using std::chrono::seconds(0) for every - keepalive timer interval would be unfriendly to other threads. When all I/O - is pending and all fibers are blocked, the io_service and the fiber manager - would simply spin the CPU, passing control back and forth to each other. Resetting - the timer for keepalive_interval - allows tuning the responsiveness of this thread relative to others in the same - way as when Boost.Fiber is running without - Boost.Asio. -

-

- The source code above is found in round_robin.hpp. -

+
@@ -136,7 +47,7 @@

diff --git a/doc/html/fiber/integration/deeper_dive_into___boost_asio__.html b/doc/html/fiber/integration/deeper_dive_into___boost_asio__.html new file mode 100644 index 00000000..e067bc86 --- /dev/null +++ b/doc/html/fiber/integration/deeper_dive_into___boost_asio__.html @@ -0,0 +1,445 @@ + + + +Deeper Dive into Boost.Asio + + + + + + + + + + + + + + + +
Boost C++ LibrariesHomeLibrariesPeopleFAQMore
+
+
+
+ +

+ By now the alert reader is thinking: but surely, with Asio in particular, + we ought to be able to do much better than periodic polling pings! +

+

+ This turns out to be surprisingly tricky. We present a possible approach + in examples/asio/round_robin.hpp. +

+

+ One consequence of using Boost.Asio + is that you must always let Asio suspend the running thread. Since Asio is + aware of pending I/O requests, it can arrange to suspend the thread in such + a way that the OS will wake it on I/O completion. No one else has sufficient + knowledge. +

+

+ So the fiber scheduler must depend on Asio for suspension and resumption. + It requires Asio handler calls to wake it. +

+

+ One dismaying implication is that we cannot support multiple threads calling + io_service::run() + on the same io_service instance. + The reason is that Asio provides no way to constrain a particular handler + to be called only on a specified thread. A fiber scheduler instance is locked + to a particular thread: that instance cannot manage any other thread’s fibers. + Yet if we allow multiple threads to call io_service::run() + on the same io_service instance, + a fiber scheduler which needs to sleep can have no guarantee that it will + reawaken in a timely manner. It can set an Asio timer, as described above + — but that timer’s handler may well execute on a different thread! +

+

+ Another implication is that since an Asio-aware fiber scheduler (not to mention + boost::fibers::asio::yield) + depends on handler calls from the io_service, + it is the application’s responsibility to ensure that io_service::stop() + is not called until every fiber has terminated. +

+

+ It is easier to reason about the behavior of the presented asio::round_robin scheduler if we require that + after initial setup, the thread’s main fiber is the fiber that calls io_service::run(), + so let’s impose that requirement. +

+

+ Naturally, the first thing we must do on each thread using a custom fiber + scheduler is call use_scheduling_algorithm(). However, + since asio::round_robin requires an io_service + instance, we must first declare that. +

+

+

+
boost::asio::io_service io_svc;
+boost::fibers::use_scheduling_algorithm< boost::fibers::asio::round_robin >( io_svc);
+
+

+

+

+ use_scheduling_algorithm() instantiates asio::round_robin, + which naturally calls its constructor: +

+

+

+
round_robin( boost::asio::io_service & io_svc) :
+    io_svc_( io_svc),
+    suspend_timer_( io_svc_) {
+    // We use add_service() very deliberately. This will throw
+    // service_already_exists if you pass the same io_service instance to
+    // more than one round_robin instance.
+    boost::asio::add_service( io_svc_, new service( io_svc_));
+}
+
+

+

+

+ asio::round_robin binds the passed io_service reference and initializes a + boost::asio::steady_timer: +

+

+

+
boost::asio::io_service                     &   io_svc_;
+boost::asio::steady_timer                       suspend_timer_;
+
+

+

+

+ Then it calls boost::asio::add_service() + with a nested service struct: +

+

+

+
struct service : public boost::asio::io_service::service {
+    static boost::asio::io_service::id                  id;
+
+    std::unique_ptr< boost::asio::io_service::work >    work_;
+
+    service( boost::asio::io_service & io_svc) :
+        boost::asio::io_service::service( io_svc),
+        work_{ new boost::asio::io_service::work( io_svc) } {
+        io_svc.post([&io_svc](){
+
+

+

+

+ ... +

+

+

+
        });
+    }
+
+    virtual ~service() {}
+
+    service( service const&) = delete;
+    service & operator=( service const&) = delete;
+
+    void shutdown_service() override final {
+        work_.reset();
+    }
+};
+
+

+

+

+ The service struct has a + couple of roles. +

+

+ Its foremost role is to manage a std::unique_ptr<boost::asio::io_service::work>. We want the + io_service instance to continue + its main loop even when there is no pending Asio I/O. +

+

+ But when boost::asio::io_service::service::shutdown_service() + is called, we discard the io_service::work + instance so the io_service + can shut down properly. +

+

+ Its other purpose is to post() + a lambda (not yet shown). Let’s walk further through the example program before + coming back to explain that lambda. +

+

+ The service constructor returns + to asio::round_robin’s constructor, which returns + to use_scheduling_algorithm(), which returns to the application code. +

+

+ Once it has called use_scheduling_algorithm(), the application may now launch some number + of fibers: +

+

+

+
// server
+tcp::acceptor a( io_svc, tcp::endpoint( tcp::v4(), 9999) );
+boost::fibers::fiber( server, std::ref( io_svc), std::ref( a) ).detach();
+// client
+const unsigned iterations = 2;
+const unsigned clients = 3;
+boost::fibers::barrier b( clients);
+for ( unsigned i = 0; i < clients; ++i) {
+    boost::fibers::fiber(
+            client, std::ref( io_svc), std::ref( a), std::ref( b), iterations).detach();
+}
+
+

+

+

+ Since we don’t specify a launch_policy, these fibers are + ready to run, but have not yet been entered. +

+

+ Having set everything up, the application calls io_service::run(): +

+

+

+
io_svc.run();
+
+

+

+

+ Now what? +

+

+ Because this io_service instance + owns an io_service::work instance, run() does not immediately return. But — none of + the fibers that will perform actual work has even been entered yet! +

+

+ Without that initial post() call in service’s + constructor, nothing would happen. The application would + hang right here. +

+

+ So, what should the post() handler execute? Simply this_fiber::yield()? +

+

+ That would be a promising start. But we have no guarantee that any of the + other fibers will initiate any Asio operations to keep the ball rolling. + For all we know, every other fiber could reach a similar this_fiber::yield() call first. Control would return to the + post() + handler, which would return to Asio, and... the application would hang. +

+

+ The post() + handler could post() + itself again. But as discussed in the + previous section, once there are actual I/O operations in flight — once + we reach a state in which no fiber is ready — +that would cause the thread to + spin. +

+

+ We could, of course, set an Asio timer — again as previously + discussed. But in this deeper dive, we’re trying to + do a little better. +

+

+ The key to doing better is that since we’re in a fiber, we can run an actual + loop — not just a chain of callbacks. We can wait for something to happen + by calling io_service::run_one() + — or we can execute already-queued Asio handlers by calling io_service::poll(). +

+

+ Here’s the body of the lambda passed to the post() call. +

+

+

+
while ( ! io_svc.stopped() ) {
+    if ( boost::fibers::has_ready_fibers() ) {
+        // run all pending handlers in round_robin
+        while ( io_svc.poll() );
+        // run pending (ready) fibers
+        this_fiber::yield();
+    } else {
+        // run one handler inside io_service
+        // if no handler available, block this thread
+        if ( ! io_svc.run_one() ) {
+            break;
+        }
+    }
+}
+
+

+

+

+ We want this loop to exit once the io_service + instance has been stopped(). +

+

+ As long as there are ready fibers, we interleave running ready Asio handlers + with running ready fibers. +

+

+ If there are no ready fibers, we wait by calling run_one(). Once any Asio handler has been called + — no matter which — run_one() + returns. That handler may have transitioned some fiber to ready state, so + we loop back to check again. +

+
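The control flow of that loop can be sketched without Boost at all. In this toy model (all names illustrative, not Boost APIs), plain callables on a queue stand in for ready fibers and a second queue stands in for Asio's pending completion handlers; the real loop differs in that yield() resumes a genuine fiber with its own stack rather than invoking a callable.

```cpp
#include <cstddef>
#include <deque>
#include <functional>

// Boost-free sketch of the lambda loop posted by service's constructor.
struct toy_loop {
    std::deque<std::function<void()>> handlers;  // stand-in for io_service
    std::deque<std::function<void()>> fibers;    // stand-in for ready fibers
    bool stopped = false;

    std::size_t poll() {             // run all currently queued handlers
        std::size_t n = 0;
        while (!handlers.empty()) {
            auto h = handlers.front();
            handlers.pop_front();
            h();
            ++n;
        }
        return n;
    }

    bool run_one() {                 // run one handler, or report none left
        if (handlers.empty()) return false;
        auto h = handlers.front();
        handlers.pop_front();
        h();
        return true;
    }

    void run() {
        while (!stopped) {
            if (!fibers.empty()) {
                while (poll() != 0) {}   // run all pending handlers
                auto f = fibers.front(); // then let one ready fiber run
                fibers.pop_front();
                f();
            } else if (!run_one()) {     // no ready fibers: wait on one
                break;                   // handler; nothing queued -> exit
            }
        }
    }
};
```

A fiber that enqueues a handler keeps the loop alive for exactly one more round trip, mirroring how actual I/O completions reawaken the scheduler.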

+ (We won’t describe awakened(), pick_next() or has_ready_fibers(), as these are just like round_robin::awakened(), + round_robin::pick_next() and round_robin::has_ready_fibers().) +

+

+ That leaves suspend_until() and notify(). +

+

+ Doubtless you have been asking yourself: why are we calling io_service::run_one() + in the lambda loop? Why not call it in suspend_until(), whose very API was designed for just such + a purpose? +

+

+ Under normal circumstances, when the fiber manager finds no ready fibers, + it calls sched_algorithm::suspend_until(). Why + test has_ready_fibers() + in the lambda loop? Why not leverage the normal mechanism? +

+

+ The answer is: it matters who’s asking. +

+

+ Consider the lambda loop shown above. The only Boost.Fiber + APIs it engages are has_ready_fibers() and this_fiber::yield(). + yield() + does not block the calling fiber: the calling fiber + does not become unready. It is immediately passed back to sched_algorithm::awakened(), + to be resumed in its turn when all other ready fibers have had a chance to + run. In other words: during a yield() call, there is always at least + one ready fiber. +

+

+ As long as this lambda loop is still running, the fiber manager does not + call suspend_until() + because it always has a fiber ready to run. +

+
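That invariant can be made concrete with a toy ready queue (illustrative names, not the Boost.Fiber scheduler): yield() pushes the caller straight back onto the queue, so the queue observed during the call is never empty.

```cpp
#include <deque>

// Toy round-robin queue showing why a fiber that calls yield() never
// leaves the scheduler empty-handed: yielding re-enqueues the caller as
// ready instead of suspending it.
struct toy_ready_queue {
    std::deque<int> ready;   // fiber ids; front = next to run

    // The running fiber calls yield(): it goes straight back on the
    // ready queue, so has_ready_fibers() stays true during the call.
    void yield(int running_fiber) {
        ready.push_back(running_fiber);
    }

    bool has_ready_fibers() const { return !ready.empty(); }

    int pick_next() {        // scheduler resumes the front fiber
        int id = ready.front();
        ready.pop_front();
        return id;
    }
};
```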

+ However, the lambda loop itself can detect the case when no other fibers are ready to run: the running fiber itself does not count as ready, precisely because it is running. +

+

+ That said, suspend_until() and notify() are in fact called during orderly shutdown + processing, so let’s try a plausible implementation. +

+

+

+
void suspend_until( std::chrono::steady_clock::time_point const& abs_time) noexcept {
+    // Set a timer so at least one handler will eventually fire, causing
+    // run_one() to eventually return. Set a timer even if abs_time ==
+    // time_point::max() so the timer can be canceled by our notify()
+    // method -- which calls the handler.
+    if ( suspend_timer_.expires_at() != abs_time) {
+        // Each expires_at(time_point) call cancels any previous pending
+        // call. We could inadvertently spin like this:
+        // dispatcher calls suspend_until() with earliest wake time
+        // suspend_until() sets suspend_timer_
+        // lambda loop calls run_one()
+        // some other asio handler runs before timer expires
+        // run_one() returns to lambda loop
+        // lambda loop yields to dispatcher
+        // dispatcher finds no ready fibers
+        // dispatcher calls suspend_until() with SAME wake time
+        // suspend_until() sets suspend_timer_ to same time, canceling
+        // previous async_wait()
+        // lambda loop calls run_one()
+        // asio calls suspend_timer_ handler with operation_aborted
+        // run_one() returns to lambda loop... etc. etc.
+        // So only actually set the timer when we're passed a DIFFERENT
+        // abs_time value.
+        suspend_timer_.expires_at( abs_time);
+        // It really doesn't matter what the suspend_timer_ handler does,
+        // or even whether it's called because the timer ran out or was
+        // canceled. The whole point is to cause the run_one() call to
+        // return. So just pass a no-op lambda with proper signature.
+        suspend_timer_.async_wait([](boost::system::error_code const&){});
+    }
+}
+
+

+

+

+ As you might expect, suspend_until() sets an asio::steady_timer to expires_at() + the passed std::chrono::steady_clock::time_point. + Usually. +

+

+ As indicated in comments, we avoid setting suspend_timer_ + multiple times to the same time_point + value since every expires_at() call cancels any previous async_wait() + call. There is a chance that we could spin. Reaching suspend_until() means the fiber manager intends to yield + the processor to Asio. Cancelling the previous async_wait() call would fire its handler, causing run_one() + to return, potentially causing the fiber manager to call suspend_until() again with the same time_point + value... +

+

+ Given that we suspend the thread by calling io_service::run_one(), what’s important is that our async_wait() call will cause a handler to run, which will cause run_one() to return. It’s not so important specifically what that handler does.

+

+

+
void notify() noexcept {
+    // Something has happened that should wake one or more fibers BEFORE
+    // suspend_timer_ expires. Reset the timer to cause it to fire
+    // immediately, causing the run_one() call to return. In theory we
+    // could use cancel() because we don't care whether suspend_timer_'s
+    // handler is called with operation_aborted or success. However --
+    // cancel() doesn't change the expiration time, and we use
+    // suspend_timer_'s expiration time to decide whether it's already
+    // set. If suspend_until() set some specific wake time, then notify()
+    // canceled it, then suspend_until() was called again with the same
+    // wake time, it would match suspend_timer_'s expiration time and we'd
+    // refrain from setting the timer. So instead of simply calling
+    // cancel(), reset the timer, which cancels the pending sleep AND sets
+    // a new expiration time. This will cause us to spin the loop twice --
+    // once for the operation_aborted handler, once for timer expiration
+    // -- but that shouldn't be a big problem.
+    suspend_timer_.expires_at( std::chrono::steady_clock::now() );
+}
+
+

+

+

+ Since an expires_at() call cancels any previous async_wait() call, we can make notify() simply call steady_timer::expires_at(). That should cause the io_service to call the async_wait() handler with operation_aborted.

+

+ The comments in notify() explain why we call expires_at() rather than cancel().

+

+ This boost::fibers::asio::round_robin implementation is used in examples/asio/autoecho.cpp.

+

+ It seems possible that you could put together a more elegant Fiber / Asio integration. But as noted at the outset: it’s tricky.

+
diff --git a/doc/html/fiber/integration/embedded_main_loop.html b/doc/html/fiber/integration/embedded_main_loop.html
new file mode 100644
index 00000000..f2d1bf75
--- /dev/null
+++ b/doc/html/fiber/integration/embedded_main_loop.html
@@ -0,0 +1,77 @@
+Embedded Main Loop

+ More challenging is when the application’s main loop is embedded in some other library or framework. Such an application will typically, after performing all necessary setup, pass control to some form of run() function from which control does not return until application shutdown.

+

+ A Boost.Asio program might call io_service::run() in this way.

+

+ In general, the trick is to arrange to pass control to this_fiber::yield() frequently. You could use an Asio timer for that purpose: instantiate the timer, arranging to call a handler function when the timer expires. The handler function could call yield(), then reset the timer and arrange to wake up again on its next expiration.

+

+ Since, in this thought experiment, we always pass control to the fiber manager via yield(), the calling fiber is never blocked. Therefore there is always at least one ready fiber. Therefore the fiber manager never calls sched_algorithm::suspend_until().

+

+ Using io_service::post() instead of setting a timer for some nonzero interval would be unfriendly to other threads. When all I/O is pending and all fibers are blocked, the io_service and the fiber manager would simply spin the CPU, passing control back and forth to each other. Using a timer allows tuning the responsiveness of this thread relative to others.

+
diff --git a/doc/html/fiber/integration/event_driven_program.html b/doc/html/fiber/integration/event_driven_program.html
new file mode 100644
index 00000000..183b01b6
--- /dev/null
+++ b/doc/html/fiber/integration/event_driven_program.html
@@ -0,0 +1,73 @@
+Event-Driven Program

+ Consider a classic event-driven program, organized around a main loop that fetches and dispatches incoming I/O events. You are introducing Boost.Fiber because certain asynchronous I/O sequences are logically sequential, and for those you want to write and maintain code that looks and acts sequential.

+

+ You are launching fibers on the application’s main thread because certain of their actions will affect its user interface, and the application’s UI framework permits UI operations only on the main thread. Or perhaps those fibers need access to main-thread data, and it would be too expensive in runtime (or development time) to robustly defend every such data item with thread synchronization primitives.

+

+ You must ensure that the application’s main loop itself doesn’t monopolize the processor: that the fibers it launches will get the CPU cycles they need.

+

+ The solution is the same as for any fiber that might claim the CPU for an extended time: introduce calls to this_fiber::yield(). The most straightforward approach is to call yield() on every iteration of your existing main loop. In effect, this unifies the application’s main loop with Boost.Fiber’s internal main loop. yield() allows the fiber manager to run any fibers that have become ready since the previous iteration of the application’s main loop. When these fibers have had a turn, control passes to the thread’s main fiber, which returns from yield() and resumes the application’s main loop.

+
diff --git a/doc/html/fiber/integration/overview.html b/doc/html/fiber/integration/overview.html
new file mode 100644
index 00000000..d91b100d
--- /dev/null
+++ b/doc/html/fiber/integration/overview.html
@@ -0,0 +1,48 @@
+Overview

+ As always with cooperative concurrency, it is important not to let any one fiber monopolize the processor too long: that could starve other ready fibers. This section discusses a couple of solutions.

+
diff --git a/doc/html/fiber/migration.html b/doc/html/fiber/migration.html
index 1df65a89..d98ffb8e 100644
--- a/doc/html/fiber/migration.html
+++ b/doc/html/fiber/migration.html
@@ -34,7 +34,7 @@

Each fiber owns a stack and manages its execution state, including all registers and CPU flags, the instruction pointer and the stack pointer. That means, in - general, a fiber is not bound to a specific thread.[3] ,[4] + general, a fiber is not bound to a specific thread.[2],[3]

Migrating a fiber from a logical CPU with heavy workload to another logical @@ -49,13 +49,13 @@ due to increased latency of memory access.

- Only fibers that are contained in sched_algorithm's ready + Only fibers that are contained in sched_algorithm's ready queue can migrate between threads. You cannot migrate a running fiber, nor one that is blocked.

In Boost.Fiber a fiber is migrated by invoking - context::migrate() on the context instance for a + context::migrate() on the context instance for a fiber already associated with the destination thread, passing the context for the fiber to be migrated.

@@ -67,7 +67,7 @@ In the example work_sharing.cpp multiple worker fibers are created on the main thread. Each fiber gets a character as parameter at construction. This character is printed out ten times. Between - each iteration the fiber calls this_fiber::yield(). That puts + each iteration the fiber calls this_fiber::yield(). That puts the fiber in the ready queue of the fiber-scheduler shared_ready_queue, running in the current thread. The next fiber ready to be executed is dequeued from the shared ready queue and resumed by shared_ready_queue @@ -182,7 +182,7 @@

The start of the threads is synchronized with a barrier. The main fiber of each thread (including main thread) is suspended until all worker fibers are - complete. When the main fiber returns from condition_variable::wait(), + complete. When the main fiber returns from condition_variable::wait(), the thread terminates: the main thread joins all other threads.

@@ -311,13 +311,13 @@ The fiber scheduler shared_ready_queue is like round_robin, except that it shares a common ready queue among all participating threads. A thread - participates in this pool by executing use_scheduling_algorithm() + participates in this pool by executing use_scheduling_algorithm() before any other Boost.Fiber operation.

The important point about the ready queue is that it's a class static, common - to all instances of shared_ready_queue. Fibers that are enqueued via sched_algorithm::awakened() (fibers + to all instances of shared_ready_queue. Fibers that are enqueued via sched_algorithm::awakened() (fibers that are ready to be resumed) are thus available to all threads. It is required to reserve a separate, scheduler-specific queue for the thread's main fiber and dispatcher fibers: these may not be shared between @@ -356,7 +356,7 @@ before

- When sched_algorithm::pick_next() gets called inside + When sched_algorithm::pick_next() gets called inside one thread, a fiber is dequeued from rqueue_ and will be resumed in that thread.

@@ -409,12 +409,12 @@ before



-

[3] +

[2] The main fiber on each thread, that is, the fiber on which the thread is launched, cannot migrate to any other thread. Also Boost.Fiber implicitly creates a dispatcher fiber for each thread — this cannot migrate either.

-

[4] +

[3] Of course it would be problematic to migrate a fiber that relies on thread-local storage.

diff --git a/doc/html/fiber/nonblocking.html b/doc/html/fiber/nonblocking.html index ea50d932..558502bb 100644 --- a/doc/html/fiber/nonblocking.html +++ b/doc/html/fiber/nonblocking.html @@ -6,7 +6,7 @@ - + @@ -20,7 +20,7 @@
-PrevUpHomeNext +PrevUpHomeNext

@@ -60,7 +60,7 @@ Boost.Fiber can simplify this problem immensely. Once you have integrated with the application's main loop as described in Sharing a Thread with Another Main Loop, - waiting for the next main-loop iteration is as simple as calling this_fiber::yield(). + waiting for the next main-loop iteration is as simple as calling this_fiber::yield().

@@ -199,7 +199,7 @@

- Once we can transparently wait for the next main-loop iteration using this_fiber::yield(), + Once we can transparently wait for the next main-loop iteration using this_fiber::yield(), ordinary encapsulation Just Works.

@@ -216,7 +216,7 @@


-PrevUpHomeNext +PrevUpHomeNext
diff --git a/doc/html/fiber/overview.html b/doc/html/fiber/overview.html index 98e73120..2bf3eb66 100644 --- a/doc/html/fiber/overview.html +++ b/doc/html/fiber/overview.html @@ -120,7 +120,7 @@

Unless migrated, a fiber may access thread-local storage; however that storage will be shared among all fibers running on the same thread. For fiber-local - storage, please see fiber_specific_ptr. + storage, please see fiber_specific_ptr.

@@ -152,7 +152,7 @@ instance, when a fiber wants to wait for a value from another fiber in the same thread, using std::future would be unfortunate: std::future::get() would block the whole thread, preventing the other fiber from delivering its - value. Use future<> instead. + value. Use future<> instead.

Similarly, a fiber that invokes a normal blocking I/O operation will block diff --git a/doc/html/fiber/performance.html b/doc/html/fiber/performance.html index 86afd93c..2fb8cd45 100644 --- a/doc/html/fiber/performance.html +++ b/doc/html/fiber/performance.html @@ -6,7 +6,7 @@ - + @@ -20,7 +20,7 @@


-PrevUpHomeNext +PrevUpHomeNext

@@ -29,7 +29,7 @@

Performance measurements were taken using std::chrono::highresolution_clock, with overhead corrections. The code was compiled using the build options: variant - = release, optimization = speed [8]. + = release, optimization = speed [7].

The columns labeled fiber (atomics) were compiled @@ -490,7 +490,7 @@

Numbers of the microbenchmark - syknet from Alexander Temerev [9]: + syknet from Alexander Temerev [8]:

Table 1.7. performance of N=100000 actors/goroutines/fibers

@@ -560,10 +560,10 @@



-

[8] +

[7] Intel Core2 Q6700, x86_64, 3GHz

-

[9] +

[8] Intel Core2 Q6700, x86_64, 3GHz

@@ -578,7 +578,7 @@
-PrevUpHomeNext +PrevUpHomeNext
diff --git a/doc/html/fiber/rationale.html b/doc/html/fiber/rationale.html index 58ac5794..51d47df1 100644 --- a/doc/html/fiber/rationale.html +++ b/doc/html/fiber/rationale.html @@ -47,7 +47,7 @@ When a coroutine yields, it passes control directly to its caller (or, in the case of symmetric coroutines, a designated other coroutine). When a fiber blocks, it implicitly passes control to the fiber scheduler. Coroutines have no scheduler - because they need no scheduler.[11]. + because they need no scheduler.[10].

@@ -86,10 +86,10 @@ may still be false. Spurious wakeup can happen repeatedly and is caused on some multiprocessor systems where making std::condition_variable wakeup completely predictable would slow down all std::condition_variable - operations.[12] + operations.[11]

- condition_variable is not subject to spurious wakeup. + condition_variable is not subject to spurious wakeup. Nonetheless it is prudent to test the business-logic condition in a wait() loop — or, equivalently, use one of the wait( lock, predicate ) @@ -105,7 +105,7 @@

Support for migrating fibers between threads has been integrated. The user-defined - scheduler must call context::migrate() on a fiber-context on + scheduler must call context::migrate() on a fiber-context on the destination thread, passing migrate() the fiber-context to migrate. (For more information about custom schedulers, see Customization.) Examples work_sharing and @@ -122,11 +122,11 @@ for Boost.Asio

- Support for Boost.Asio's + Support for Boost.Asio’s async-result is not part of the official API. However, - to integrate with a boost::asio::io_service, + to integrate with a boost::asio::io_service, see Sharing a Thread with Another Main Loop. - To interface smoothly with an arbitrary Asio async I/O operation, see Then There's Boost.Asio. + To interface smoothly with an arbitrary Asio async I/O operation, see Then There’s Boost.Asio.

@@ -147,11 +147,11 @@



-

[11] +

-

[12] +

[11] David R. Butenhof Programming with POSIX Threads

diff --git a/doc/html/fiber/scheduling.html b/doc/html/fiber/scheduling.html index 57ad239a..ea6ffb27 100644 --- a/doc/html/fiber/scheduling.html +++ b/doc/html/fiber/scheduling.html @@ -40,10 +40,10 @@

Each thread has its own scheduler. Different threads in a process may use different schedulers. By default, Boost.Fiber implicitly - instantiates round_robin as the scheduler for each thread. + instantiates round_robin as the scheduler for each thread.

- You are explicitly permitted to code your own sched_algorithm subclass. + You are explicitly permitted to code your own sched_algorithm subclass. For the most part, your sched_algorithm subclass need not defend against cross-thread calls: the fiber manager intercepts and defers such calls. Most sched_algorithm @@ -52,7 +52,7 @@

Your sched_algorithm subclass - is engaged on a particular thread by calling use_scheduling_algorithm(): + is engaged on a particular thread by calling use_scheduling_algorithm():

void thread_fn() {
     boost::fibers::use_scheduling_algorithm< my_fiber_scheduler >();
@@ -60,8 +60,8 @@
 }
 

- A scheduler class must implement interface sched_algorithm. - Boost.Fiber provides one scheduler: round_robin. + A scheduler class must implement interface sched_algorithm. + Boost.Fiber provides one scheduler: round_robin.

@@ -113,7 +113,7 @@ Informs the scheduler that fiber f is ready to run. Fiber f might be newly launched, or it might have been blocked but has just been - awakened, or it might have called this_fiber::yield(). + awakened, or it might have called this_fiber::yield().

Note:

@@ -123,7 +123,7 @@

See also:

- round_robin + round_robin

@@ -155,7 +155,7 @@

See also:

- round_robin + round_robin

@@ -206,12 +206,12 @@ environment in whatever way makes sense. The fiber manager is stating that suspend_until() need not return until abs_time - — or sched_algorithm::notify() is called — whichever + — or sched_algorithm::notify() is called — whichever comes first. The interaction with notify() means that, for instance, calling std::this_thread::sleep_until(abs_time) - would be too simplistic. round_robin::suspend_until() uses + would be too simplistic. round_robin::suspend_until() uses a std::condition_variable to coordinate - with round_robin::notify(). + with round_robin::notify().

Note:

@@ -239,7 +239,7 @@

Effects:

- Requests the scheduler to return from a pending call to sched_algorithm::suspend_until(). + Requests the scheduler to return from a pending call to sched_algorithm::suspend_until().

Note:

@@ -261,7 +261,7 @@

- This class implements sched_algorithm, scheduling fibers + This class implements sched_algorithm, scheduling fibers in round-robin fashion.

#include <boost/fiber/round_robin.hpp>
@@ -406,7 +406,7 @@
 
Effects:

- Wake up a pending call to round_robin::suspend_until(), + Wake up a pending call to round_robin::suspend_until(), some fibers might be ready. This implementation wakes suspend_until() via std::condition_variable::notify_all().

Throws:
@@ -421,8 +421,8 @@ Scheduler Fiber Properties

- A scheduler class directly derived from sched_algorithm can - use any information available from context to implement the sched_algorithm interface. But a custom scheduler + A scheduler class directly derived from sched_algorithm can + use any information available from context to implement the sched_algorithm interface. But a custom scheduler might need to track additional properties for a fiber. For instance, a priority-based scheduler would need to track a fiber's priority.

@@ -495,8 +495,8 @@
Effects:

- Pass control to the custom sched_algorithm_with_properties<> subclass's - sched_algorithm_with_properties::property_change() method. + Pass control to the custom sched_algorithm_with_properties<> subclass's + sched_algorithm_with_properties::property_change() method.

Throws:

@@ -504,8 +504,8 @@

Note:

- A custom scheduler's sched_algorithm_with_properties::pick_next() method - might dynamically select from the ready fibers, or sched_algorithm_with_properties::awakened() might + A custom scheduler's sched_algorithm_with_properties::pick_next() method + might dynamically select from the ready fibers, or sched_algorithm_with_properties::awakened() might instead insert each ready fiber into some form of ready queue for pick_next(). In the latter case, if application code modifies a fiber property (e.g. priority) that should affect that fiber's relationship to other ready @@ -534,7 +534,7 @@

A custom scheduler that depends on a custom properties class PROPS should be derived from sched_algorithm_with_properties<PROPS>. PROPS should be derived from - fiber_properties. + fiber_properties.

#include <boost/fiber/algorithm.hpp>
 
@@ -575,7 +575,7 @@
 
Effects:

Informs the scheduler that fiber f - is ready to run, like sched_algorithm::awakened(). + is ready to run, like sched_algorithm::awakened(). Passes the fiber's associated PROPS instance.

@@ -616,7 +616,7 @@

Note:

- same as sched_algorithm::pick_next() + same as sched_algorithm::pick_next()

@@ -646,7 +646,7 @@

Note:

- same as sched_algorithm::has_ready_fibers() + same as sched_algorithm::has_ready_fibers()

@@ -671,7 +671,7 @@

Note:

- same as sched_algorithm::suspend_until() + same as sched_algorithm::suspend_until()

@@ -692,11 +692,11 @@
Effects:

- Requests the scheduler to return from a pending call to sched_algorithm_with_properties::suspend_until(). + Requests the scheduler to return from a pending call to sched_algorithm_with_properties::suspend_until().

Note:

- same as sched_algorithm::notify() + same as sched_algorithm::notify()

@@ -727,11 +727,11 @@
Note:

The fiber's associated PROPS - instance is already passed to sched_algorithm_with_properties::awakened() and - sched_algorithm_with_properties::property_change(). - However, every sched_algorithm subclass is expected - to track a collection of ready context instances. This method - allows your custom scheduler to retrieve the fiber_properties subclass + instance is already passed to sched_algorithm_with_properties::awakened() and + sched_algorithm_with_properties::property_change(). + However, every sched_algorithm subclass is expected + to track a collection of ready context instances. This method + allows your custom scheduler to retrieve the fiber_properties subclass instance for any context in its collection.

@@ -765,8 +765,8 @@

Note:

- This method is only called when a custom fiber_properties subclass - explicitly calls fiber_properties::notify(). + This method is only called when a custom fiber_properties subclass + explicitly calls fiber_properties::notify().

@@ -787,7 +787,7 @@
Returns:

- A new instance of fiber_properties subclass PROPS. + A new instance of fiber_properties subclass PROPS.

Note:

@@ -819,10 +819,10 @@ Of particular note is the fact that context contains a hook to participate in a boost::intrusive::list typedef'ed as boost::fibers::scheduler::ready_queue_t. This hook is reserved for - use by sched_algorithm implementations. (For instance, - round_robin contains a ready_queue_t - instance to manage its ready fibers.) See context::ready_is_linked(), - context::ready_link(), context::ready_unlink(). + use by sched_algorithm implementations. (For instance, + round_robin contains a ready_queue_t + instance to manage its ready fibers.) See context::ready_is_linked(), + context::ready_link(), context::ready_unlink().

Your sched_algorithm implementation @@ -929,7 +929,7 @@

See also:

- fiber::get_id() + fiber::get_id()

@@ -995,7 +995,7 @@ fiber is an implementation detail of the fiber manager. The context of the main or dispatching fiber — any fiber for which is_context(pinned_context) is true - — must never be passed to context::migrate() for any other + — must never be passed to context::migrate() for any other thread.

@@ -1047,7 +1047,7 @@
Returns:

- true if *this is stored in a sched_algorithm + true if *this is stored in a sched_algorithm implementation's ready-queue.

@@ -1057,7 +1057,7 @@ implementation's

Note:

- Specifically, this method indicates whether context::ready_link() has + Specifically, this method indicates whether context::ready_link() has been called on *this. ready_is_linked() has no information about participation in any other containers. @@ -1092,8 +1092,8 @@ implementation's

A context signaled as ready by another thread is first stored in the fiber manager's remote-ready-queue. - This is the mechanism by which the fiber manager protects a sched_algorithm implementation - from cross-thread sched_algorithm::awakened() calls. + This is the mechanism by which the fiber manager protects a sched_algorithm implementation + from cross-thread sched_algorithm::awakened() calls.

@@ -1246,7 +1246,7 @@ implementation's
Effects:

Removes *this - from ready-queue: undoes the effect of context::ready_link(). + from ready-queue: undoes the effect of context::ready_link().

Throws:

@@ -1322,7 +1322,7 @@ implementation's

Effects:

- Suspends the running fiber (the fiber associated with *this) until some other fiber passes this to context::set_ready(). + Suspends the running fiber (the fiber associated with *this) until some other fiber passes this to context::set_ready(). *this is marked as not-ready, and control passes to the scheduler to select another fiber to run. @@ -1364,8 +1364,8 @@ implementation's Mark the fiber associated with context *ctx as being ready to run. This does not immediately resume that fiber; rather it passes the fiber to the scheduler for subsequent resumption. If the scheduler is idle (has not - returned from a call to sched_algorithm::suspend_until()), - sched_algorithm::notify() is called to wake it up. + returned from a call to sched_algorithm::suspend_until()), + sched_algorithm::notify() is called to wake it up.

Throws:

@@ -1386,7 +1386,7 @@ implementation's

Note:

- See context::migrate() for a way to migrate the suspended + See context::migrate() for a way to migrate the suspended thread to the thread calling set_ready().

diff --git a/doc/html/fiber/stack.html b/doc/html/fiber/stack.html index 38469548..6e766802 100644 --- a/doc/html/fiber/stack.html +++ b/doc/html/fiber/stack.html @@ -27,14 +27,14 @@ Stack allocation

- A fiber uses internally an execution_context + A fiber uses internally an execution_context which manages a set of registers and a stack. The memory used by the stack is allocated/deallocated via a stack_allocator which is required to model a stack-allocator concept.

- A stack_allocator can be passed to fiber::fiber() or to fibers::async(). + A stack_allocator can be passed to fiber::fiber() or to fibers::async().

@@ -170,7 +170,7 @@

- Boost.Fiber provides the class protected_fixedsize_stack which + Boost.Fiber provides the class protected_fixedsize_stack which models the stack-allocator concept. It appends a guard page at the end of each stack to protect against exceeding the stack. If the guard page is accessed (read @@ -183,10 +183,10 @@ Important

- Using protected_fixedsize_stack is expensive. + Using protected_fixedsize_stack is expensive. Launching a new fiber with a stack of this type incurs the overhead of setting the memory protection; once allocated, this stack is just as efficient to - use as fixedsize_stack. + use as fixedsize_stack.

@@ -280,9 +280,9 @@

- Boost.Fiber provides the class pooled_fixedsize_stack which + Boost.Fiber provides the class pooled_fixedsize_stack which models the stack-allocator - concept. In contrast to protected_fixedsize_stack it + concept. In contrast to protected_fixedsize_stack it does not append a guard page at the end of each stack. The memory is managed internally by boost::pool<>.

@@ -399,9 +399,9 @@

- Boost.Fiber provides the class fixedsize_stack which + Boost.Fiber provides the class fixedsize_stack which models the stack-allocator - concept. In contrast to protected_fixedsize_stack it + concept. In contrast to protected_fixedsize_stack it does not append a guard page at the end of each stack. The memory is simply managed by std::malloc() and std::free(). @@ -485,12 +485,12 @@

- Boost.Fiber supports usage of a segmented_stack, + Boost.Fiber supports usage of a segmented_stack, i.e. the stack grows on demand. The fiber is created with a minimal stack size - which will be increased as required. Class segmented_stack models + which will be increased as required. Class segmented_stack models the stack-allocator concept. - In contrast to protected_fixedsize_stack and - fixedsize_stack it creates a stack which grows on demand. + In contrast to protected_fixedsize_stack and + fixedsize_stack it creates a stack which grows on demand.

@@ -501,7 +501,7 @@ Segmented stacks are currently only supported by gcc from version 4.7 and clang from version 3.4 onwards. In order to use - a segmented_stack, Boost.Fiber + a segmented_stack, Boost.Fiber must be built with property segmented-stacks, e.g. toolset=gcc segmented-stacks=on at b2/bjam command line. @@ -581,7 +581,7 @@
Note

- If the library is compiled for segmented stacks, segmented_stack is + If the library is compiled for segmented stacks, segmented_stack is the only available stack allocator.

diff --git a/doc/html/fiber/synchronization/barriers.html b/doc/html/fiber/synchronization/barriers.html index 557cc14f..675b4f0b 100644 --- a/doc/html/fiber/synchronization/barriers.html +++ b/doc/html/fiber/synchronization/barriers.html @@ -39,7 +39,7 @@ The fact that the barrier automatically resets is significant. Consider a case in which you launch some number of fibers and want to wait only until the first of them has completed. You might be tempted to use a barrier(2) as the synchronization - mechanism, making each new fiber call its barrier::wait() method, + mechanism, making each new fiber call its barrier::wait() method, then calling wait() in the launching fiber to wait until the first other fiber completes.

@@ -83,7 +83,7 @@ It is unwise to tie the lifespan of a barrier to any one of its participating fibers. Although conceptually all waiting fibers awaken simultaneously, because of the nature of fibers, in practice they will awaken one by one - in indeterminate order.[2] The rest of the waiting fibers will still be blocked in wait(), + in indeterminate order.[1] The rest of the waiting fibers will still be blocked in wait(), which must, before returning, access data members in the barrier object.

@@ -109,7 +109,7 @@ };

- Instances of barrier are not copyable or movable. + Instances of barrier are not copyable or movable.

@@ -168,16 +168,11 @@

fiber_error

-
Notes:
-

- wait() - is one of the predefined interruption-points. -



-

[2] +

[1] The current implementation wakes fibers in FIFO order: the first to call wait() wakes first, and so forth. But it is perilous to rely on the order in diff --git a/doc/html/fiber/synchronization/channels.html b/doc/html/fiber/synchronization/channels.html index ab09e2f0..8ac84580 100644 --- a/doc/html/fiber/synchronization/channels.html +++ b/doc/html/fiber/synchronization/channels.html @@ -289,7 +289,7 @@

Throws:

- fiber_interrupted + Nothing

@@ -317,7 +317,7 @@
Throws:

fiber_error if *this - is closed or fiber_interrupted + is closed

Error conditions:

@@ -387,7 +387,6 @@

Throws:

- fiber_interrupted or timeout-related exceptions.

@@ -426,7 +425,6 @@

Throws:

- fiber_interrupted or timeout-related exceptions.

@@ -656,7 +654,6 @@

Throws:

- fiber_interrupted or exceptions thrown by memory allocation and copying or moving va.

@@ -699,7 +696,6 @@

Throws:

- fiber_interrupted, exceptions thrown by memory allocation and copying or moving va or timeout-related exceptions.

@@ -744,7 +740,6 @@

Throws:

- fiber_interrupted or exceptions thrown by memory allocation and copying or moving va or timeout-related exceptions.

@@ -809,7 +804,7 @@

Throws:

- fiber_interrupted + Nothing

@@ -839,7 +834,7 @@
Throws:

fiber_error if *this - is closed or fiber_interrupted + is closed

Error conditions:

@@ -915,7 +910,6 @@

Throws:

- fiber_interrupted or timeout-related exceptions.

@@ -957,7 +951,6 @@

Throws:

- fiber_interrupted or timeout-related exceptions.

diff --git a/doc/html/fiber/synchronization/conditions.html b/doc/html/fiber/synchronization/conditions.html index a4694300..210dbd87 100644 --- a/doc/html/fiber/synchronization/conditions.html +++ b/doc/html/fiber/synchronization/conditions.html @@ -39,7 +39,7 @@ class condition_variable_any;

- The class condition_variable provides a mechanism + The class condition_variable provides a mechanism for a fiber to wait for notification from another fiber. When the fiber awakens from the wait, then it checks to see if the appropriate condition is now true, and continues if so. If the condition is not true, then the fiber calls @@ -64,8 +64,8 @@

Notice that the lk is passed - to condition_variable::wait(): wait() will atomically add the fiber to the set - of fibers waiting on the condition variable, and unlock the mutex. + to condition_variable::wait(): wait() will atomically add the fiber to the set + of fibers waiting on the condition variable, and unlock the mutex. When the fiber is awakened, the mutex will be locked again before the call to wait() returns. This allows other fibers to acquire the mutex in order to update @@ -87,8 +87,8 @@

In the meantime, another fiber sets data_ready to true, and then calls either - condition_variable::notify_one() or condition_variable::notify_all() on - the condition_variable cond + condition_variable::notify_one() or condition_variable::notify_all() on + the condition_variable cond to wake one waiting fiber or all the waiting fibers respectively.

void retrieve_data();
@@ -105,18 +105,18 @@
 }
 

-            Note that the same mutex is locked before the shared data is updated,
+            Note that the same mutex is locked before the shared data is updated,
             but that the mutex does not
-            have to be locked across the call to condition_variable::notify_one().
+            have to be locked across the call to condition_variable::notify_one().

Locking is important because the synchronization objects provided by Boost.Fiber can be used to synchronize fibers running on different threads.

-            Boost.Fiber provides both condition_variable and
-            condition_variable_any. boost::fibers::condition_variable
-            can only wait on std::unique_lock< boost::fibers:: mutex >
+            Boost.Fiber provides both condition_variable and
+            condition_variable_any. boost::fibers::condition_variable
+            can only wait on std::unique_lock< boost::fibers::mutex >
             while boost::fibers::condition_variable_any can wait on user-defined lock types.

@@ -126,10 +126,10 @@ Wakeups

-            Neither condition_variable nor condition_variable_any are
-            subject to spurious wakeup: condition_variable::wait() can
-            only wake up when condition_variable::notify_one() or
-            condition_variable::notify_all() is called. Even
+            Neither condition_variable nor condition_variable_any are
+            subject to spurious wakeup: condition_variable::wait() can
+            only wake up when condition_variable::notify_one() or
+            condition_variable::notify_all() is called. Even
             so, it is prudent to use one of the wait( lock, predicate ) overloads.

@@ -142,11 +142,11 @@

             Because producer fibers might push()
-            items to the queue in bursts, they call condition_variable::notify_all() rather
-            than condition_variable::notify_one().
+            items to the queue in bursts, they call condition_variable::notify_all() rather
+            than condition_variable::notify_one().

-            But a given consumer fiber might well wake up from condition_variable::wait() and
+            But a given consumer fiber might well wake up from condition_variable::wait() and
             find the queue empty(), because other consumer fibers might already have processed
             all pending items.

@@ -408,9 +408,7 @@
Throws:

             fiber_error if an error
-            occurs, fiber_interrupted
-            if the wait was interrupted by a call to fiber::interrupt() on
-            the fiber object associated with the current fiber of execution.
+            occurs.

Note:

@@ -418,7 +416,7 @@
             concurrently calling wait on *this must wait on lk objects
-            governing the same mutex. Three distinct
+            governing the same mutex. Three distinct
             objects are involved in any condition_variable_any::wait() call: the
             condition_variable_any itself, the mutex coordinating access between fibers
             and a local lock object (e.g. std::unique_lock). In general,
@@ -495,10 +493,7 @@

Throws:

             fiber_error if an error
-            occurs, fiber_interrupted
-            if the wait was interrupted by a call to fiber::interrupt() on
-            the fiber object associated with the current fiber of execution
-            or timeout-related exceptions.
+            occurs or timeout-related exceptions.

Returns:

@@ -518,7 +513,7 @@

Note:

-            See Note for condition_variable_any::wait().
+            See Note for condition_variable_any::wait().

@@ -588,10 +583,7 @@
Throws:

             fiber_error if an error
-            occurs, fiber_interrupted
-            if the wait was interrupted by a call to fiber::interrupt() on
-            the fiber object associated with the current fiber of execution
-            or timeout-related exceptions.
+            occurs or timeout-related exceptions.

Returns:

@@ -611,7 +603,7 @@

Note:

-            See Note for condition_variable_any::wait().
+            See Note for condition_variable_any::wait().

@@ -828,9 +820,7 @@
Throws:

             fiber_error if an error
-            occurs, fiber_interrupted
-            if the wait was interrupted by a call to fiber::interrupt() on
-            the fiber object associated with the current fiber of execution.
+            occurs.

Note:

@@ -838,7 +828,7 @@
             concurrently calling wait on *this must wait on lk objects
-            governing the same mutex. Three distinct
+            governing the same mutex. Three distinct
             objects are involved in any condition_variable::wait() call: the
             condition_variable itself, the mutex coordinating access between fibers
             and a local lock object (e.g. std::unique_lock). In general,
@@ -915,10 +905,7 @@

Throws:

             fiber_error if an error
-            occurs, fiber_interrupted
-            if the wait was interrupted by a call to fiber::interrupt() on
-            the fiber object associated with the current fiber of execution
-            or timeout-related exceptions.
+            occurs or timeout-related exceptions.

Returns:

@@ -938,7 +925,7 @@

Note:

-            See Note for condition_variable::wait().
+            See Note for condition_variable::wait().

@@ -1008,10 +995,7 @@
Throws:

             fiber_error if an error
-            occurs, fiber_interrupted
-            if the wait was interrupted by a call to fiber::interrupt() on
-            the fiber object associated with the current fiber of execution
-            or timeout-related exceptions.
+            occurs or timeout-related exceptions.

Returns:

@@ -1031,7 +1015,7 @@

Note:

-            See Note for condition_variable::wait().
+            See Note for condition_variable::wait().

diff --git a/doc/html/fiber/synchronization/futures.html b/doc/html/fiber/synchronization/futures.html index cfdee8b4..319d4b99 100644 --- a/doc/html/fiber/synchronization/futures.html +++ b/doc/html/fiber/synchronization/futures.html @@ -43,34 +43,34 @@ in response to external stimuli, or on-demand.

- This is done through the provision of four class templates: future<> and - shared_future<> which are used to retrieve the asynchronous - results, and promise<> and packaged_task<> which + This is done through the provision of four class templates: future<> and + shared_future<> which are used to retrieve the asynchronous + results, and promise<> and packaged_task<> which are used to generate the asynchronous results.

- An instance of future<> holds the one and only reference + An instance of future<> holds the one and only reference to a result. Ownership can be transferred between instances using the move constructor or move-assignment operator, but at most one instance holds a reference to a given asynchronous result. When the result is ready, it is - returned from future::get() by rvalue-reference to allow the result + returned from future::get() by rvalue-reference to allow the result to be moved or copied as appropriate for the type.

- On the other hand, many instances of shared_future<> may + On the other hand, many instances of shared_future<> may reference the same result. Instances can be freely copied and assigned, and - shared_future::get() + shared_future::get() returns a const - reference so that multiple calls to shared_future::get() + reference so that multiple calls to shared_future::get() are - safe. You can move an instance of future<> into an instance - of shared_future<>, thus transferring ownership + safe. You can move an instance of future<> into an instance + of shared_future<>, thus transferring ownership of the associated asynchronous result, but not vice-versa.

- fibers::async() is a simple way of running asynchronous tasks. + fibers::async() is a simple way of running asynchronous tasks. A call to async() - spawns a fiber and returns a future<> that will deliver + spawns a fiber and returns a future<> that will deliver the result of the fiber function.

@@ -79,15 +79,15 @@ are asynchronous values

- You can set the value in a future with either a promise<> or - a packaged_task<>. A packaged_task<> is + You can set the value in a future with either a promise<> or + a packaged_task<>. A packaged_task<> is a callable object with void return that wraps a function or callable object returning the specified type. - When the packaged_task<> is invoked, it invokes the + When the packaged_task<> is invoked, it invokes the contained function in turn, and populates a future with the contained function's return value. This is an answer to the perennial question: How do I return a value from a fiber? Package the function you wish to run - as a packaged_task<> and pass the packaged task to + as a packaged_task<> and pass the packaged task to the fiber constructor. The future retrieved from the packaged task can then be used to obtain the return value. If the function throws an exception, that is stored in the future in place of the return value. @@ -108,7 +108,7 @@ are assert(fi.get()==42);

- A promise<> is a bit more low level: it just provides explicit + A promise<> is a bit more low level: it just provides explicit functions to store a value or an exception in the associated future. A promise can therefore be used where the value might come from more than one possible source. diff --git a/doc/html/fiber/synchronization/futures/future.html b/doc/html/fiber/synchronization/futures/future.html index 14f74fd8..e4962106 100644 --- a/doc/html/fiber/synchronization/futures/future.html +++ b/doc/html/fiber/synchronization/futures/future.html @@ -35,21 +35,21 @@ state

- Behind a promise<> and its future<> lies + Behind a promise<> and its future<> lies an unspecified object called their shared state. The shared state is what will actually hold the async result (or the exception).

- The shared state is instantiated along with the promise<>. + The shared state is instantiated along with the promise<>.

- Aside from its originating promise<>, a future<> holds - a unique reference to a particular shared state. However, multiple shared_future<> instances + Aside from its originating promise<>, a future<> holds + a unique reference to a particular shared state. However, multiple shared_future<> instances can reference the same underlying shared state.

- As packaged_task<> and fibers::async() are - implemented using promise<>, discussions of shared state + As packaged_task<> and fibers::async() are + implemented using promise<>, discussions of shared state apply to them as well.

@@ -58,7 +58,7 @@ future_status

-            Timed wait-operations ( future::wait_for() and future::wait_until())
+            Timed wait-operations (future::wait_for() and future::wait_until())
             return the state of the future.

enum class future_status {
@@ -113,7 +113,7 @@
 

- A future<> contains a shared + A future<> contains a shared state which is not shared with any other future.

template< typename R >
@@ -218,20 +218,20 @@
         

  1. - instantiate promise<> + instantiate promise<>
  2. obtain its future<> - via promise::get_future() + via promise::get_future()
  3. - launch fiber, capturing promise<> + launch fiber, capturing promise<>
  4. destroy future<>
  5. - call promise::set_value() + call promise::set_value()

@@ -308,11 +308,11 @@

Effects:

- Move the state to a shared_future<>. + Move the state to a shared_future<>.

Returns:

- a shared_future<> containing the shared + a shared_future<> containing the shared state formerly belonging to *this.

Postcondition:
@@ -350,9 +350,9 @@

Returns:

- Waits until promise::set_value() or promise::set_exception() is - called. If promise::set_value() is called, returns - the value. If promise::set_exception() is called, + Waits until promise::set_value() or promise::set_exception() is + called. If promise::set_value() is called, returns + the value. If promise::set_exception() is called, throws the indicated exception.

Postcondition:
@@ -364,7 +364,6 @@

             future_error with error condition future_errc::no_state,
-            fiber_interrupted,
             future_errc::broken_promise. Any exception passed to promise::set_exception().

@@ -392,7 +391,7 @@

Returns:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called. If set_value() is called, returns a default-constructed std::exception_ptr. If set_exception() is called, returns the passed std::exception_ptr. @@ -400,13 +399,12 @@

Throws:

             future_error with
-            error condition future_errc::no_state
-            or fiber_interrupted.
+            error condition future_errc::no_state.

Note:

get_exception_ptr() does not invalidate - the future. After calling get_exception_ptr(), you may still call future::get(). + the future. After calling get_exception_ptr(), you may still call future::get().

@@ -426,14 +424,13 @@
Effects:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called.

Throws:

             future_error with
-            error condition future_errc::no_state
-            or fiber_interrupted.
+            error condition future_errc::no_state.

@@ -455,7 +452,7 @@
Effects:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_duration has passed.

@@ -468,7 +465,6 @@

             future_error with error condition future_errc::no_state
-            or fiber_interrupted
             or timeout-related exceptions.

@@ -491,7 +487,7 @@
Effects:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_time has passed.

@@ -504,7 +500,6 @@

             future_error with error condition future_errc::no_state
-            or fiber_interrupted
             or timeout-related exceptions.

@@ -520,8 +515,8 @@

- A shared_future<> contains a shared - state which might be shared with other shared_future<> instances. + A shared_future<> contains a shared + state which might be shared with other shared_future<> instances.

template< typename R >
 class shared_future {
@@ -726,9 +721,9 @@
               

Returns:

- Waits until promise::set_value() or promise::set_exception() is - called. If promise::set_value() is called, returns - the value. If promise::set_exception() is called, + Waits until promise::set_value() or promise::set_exception() is + called. If promise::set_value() is called, returns + the value. If promise::set_exception() is called, throws the indicated exception.

Postcondition:
@@ -740,7 +735,6 @@

             future_error with error condition future_errc::no_state,
-            fiber_interrupted,
             future_errc::broken_promise. Any exception passed to promise::set_exception().

@@ -768,7 +762,7 @@

Returns:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called. If set_value() is called, returns a default-constructed std::exception_ptr. If set_exception() is called, returns the passed std::exception_ptr. @@ -776,13 +770,12 @@

Throws:

             future_error with
-            error condition future_errc::no_state
-            or fiber_interrupted.
+            error condition future_errc::no_state.

Note:

get_exception_ptr() does not invalidate - the shared_future. After calling get_exception_ptr(), you may still call shared_future::get(). + the shared_future. After calling get_exception_ptr(), you may still call shared_future::get().

@@ -803,14 +796,13 @@
Effects:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called.

Throws:

             future_error with
-            error condition future_errc::no_state
-            or fiber_interrupted.
+            error condition future_errc::no_state.

@@ -832,7 +824,7 @@
Effects:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_duration has passed.

@@ -845,7 +837,6 @@

             future_error with error condition future_errc::no_state
-            or fiber_interrupted
             or timeout-related exceptions.

@@ -868,7 +859,7 @@
Effects:

- Waits until promise::set_value() or promise::set_exception() is + Waits until promise::set_value() or promise::set_exception() is called, or timeout_time has passed.

@@ -881,7 +872,6 @@

             future_error with error condition future_errc::no_state
-            or fiber_interrupted
             or timeout-related exceptions.

@@ -905,6 +895,14 @@
 >
 async( Function && fn, Args && ... args);
+
+template< class Function, class ... Args >
+future<
+    std::result_of_t<
+        std::decay_t< Function >( std::decay_t< Args > ... )
+    >
+>
+async( launch_policy lpol, Function && fn, Args && ... args);
 
 template< typename StackAllocator, class Function, class ... Args >
 future<
     std::result_of_t<
@@ -912,6 +910,15 @@
 >
 >
 async( std::allocator_arg_t, StackAllocator salloc, Function && fn, Args && ... args);
+
+template< typename StackAllocator, class Function, class ... Args >
+future<
+    std::result_of_t<
+        std::decay_t< Function >( std::decay_t< Args > ... )
+    >
+>
+async( launch_policy lpol, std::allocator_arg_t, StackAllocator salloc,
+       Function && fn, Args && ... args);

@@ -919,7 +926,7 @@
Effects:

Executes fn in a - fiber and returns an associated future<>. + fiber and returns an associated future<>.

Result:
@@ -941,9 +948,15 @@

Notes:

-            The overload accepting std::allocator_arg_t uses the
+            The overloads accepting std::allocator_arg_t use the
             passed StackAllocator when constructing the launched fiber.
+            The overloads accepting launch_policy use the passed
+            launch_policy when
+            constructing the launched fiber.
+            The default launch_policy
+            is post, as for the
+            fiber constructor.

diff --git a/doc/html/fiber/synchronization/futures/packaged_task.html b/doc/html/fiber/synchronization/futures/packaged_task.html index f11d4455..aa276ae5 100644 --- a/doc/html/fiber/synchronization/futures/packaged_task.html +++ b/doc/html/fiber/synchronization/futures/packaged_task.html @@ -28,7 +28,7 @@ packaged_task<>

- A packaged_task<> wraps a callable target that + A packaged_task<> wraps a callable target that returns a value so that the return value can be computed asynchronously.

@@ -40,22 +40,22 @@ the signature of the callable. Pass the callable to the constructor.

  • - Call packaged_task::get_future() and capture - the returned future<> instance. + Call packaged_task::get_future() and capture + the returned future<> instance.
  • - Launch a fiber to run the new packaged_task<>, passing any arguments required + Launch a fiber to run the new packaged_task<>, passing any arguments required by the original callable.
  • - Call fiber::detach() on the newly-launched fiber. + Call fiber::detach() on the newly-launched fiber.
  • At some later point, retrieve the result from the future<>.
  • - This is, in fact, pretty much what fibers::async() + This is, in fact, pretty much what fibers::async() encapsulates.

    template< class R, typename ... Args >
    @@ -191,7 +191,7 @@ encapsulates.
                     and abandons the shared state
                     if shared state is ready; otherwise stores future_error
                     with error condition future_errc::broken_promise
    -                as if by  promise::set_exception(): the shared
    +                as if by promise::set_exception(): the shared
                     state is set ready.
                   

    @@ -296,7 +296,7 @@ encapsulates.
    Returns:

    - A future<> with the same shared + A future<> with the same shared state.

    Throws:
    @@ -326,9 +326,9 @@ encapsulates. Invokes the stored callable target. Any exception thrown by the callable target fn is stored in the shared state as if by - promise::set_exception(). Otherwise, the value + promise::set_exception(). Otherwise, the value returned by fn is - stored in the shared state as if by promise::set_value(). + stored in the shared state as if by promise::set_value().

    Throws:

    diff --git a/doc/html/fiber/synchronization/futures/promise.html b/doc/html/fiber/synchronization/futures/promise.html index 0174b1f8..bf2768f5 100644 --- a/doc/html/fiber/synchronization/futures/promise.html +++ b/doc/html/fiber/synchronization/futures/promise.html @@ -28,8 +28,8 @@ promise<>

    - A promise<> provides a mechanism to store a value (or - exception) that can later be retrieved from the corresponding future<> object. + A promise<> provides a mechanism to store a value (or + exception) that can later be retrieved from the corresponding future<> object. promise<> and future<> communicate via their underlying shared state. @@ -153,7 +153,7 @@ and abandons the shared state if shared state is ready; otherwise stores future_error with error condition future_errc::broken_promise - as if by promise::set_exception(): the shared + as if by promise::set_exception(): the shared state is set ready.

    @@ -231,7 +231,7 @@
    Returns:

    - A future<> with the same shared + A future<> with the same shared state.

    Throws:
    diff --git a/doc/html/fiber/synchronization/mutex_types.html b/doc/html/fiber/synchronization/mutex_types.html index 7a2dedef..8397dfe8 100644 --- a/doc/html/fiber/synchronization/mutex_types.html +++ b/doc/html/fiber/synchronization/mutex_types.html @@ -51,8 +51,8 @@ };

    - mutex provides an exclusive-ownership mutex. At most one fiber - can own the lock on a given instance of mutex at any time. Multiple + mutex provides an exclusive-ownership mutex. At most one fiber + can own the lock on a given instance of mutex at any time. Multiple concurrent calls to lock(), try_lock() and unlock() shall be permitted.

    @@ -197,8 +197,8 @@ };

    - timed_mutex provides an exclusive-ownership mutex. At most - one fiber can own the lock on a given instance of timed_mutex at + timed_mutex provides an exclusive-ownership mutex. At most + one fiber can own the lock on a given instance of timed_mutex at any time. Multiple concurrent calls to lock(), try_lock(), try_lock_until(), try_lock_for() and unlock() shall be permitted.

    @@ -336,7 +336,7 @@

    Attempt to obtain ownership for the current fiber. Blocks until ownership can be obtained, or the specified time is reached. If the specified - time has already passed, behaves as timed_mutex::try_lock(). + time has already passed, behaves as timed_mutex::try_lock().

    Returns:

    @@ -381,7 +381,7 @@

    Attempt to obtain ownership for the current fiber. Blocks until ownership can be obtained, or the specified time is reached. If the specified - time has already passed, behaves as timed_mutex::try_lock(). + time has already passed, behaves as timed_mutex::try_lock().

    Returns:

    @@ -428,10 +428,10 @@ };

    - recursive_mutex provides an exclusive-ownership recursive - mutex. At most one fiber can own the lock on a given instance of recursive_mutex at + recursive_mutex provides an exclusive-ownership recursive + mutex. At most one fiber can own the lock on a given instance of recursive_mutex at any time. Multiple concurrent calls to lock(), try_lock() and unlock() shall be permitted. A fiber that already - has exclusive ownership of a given recursive_mutex instance + has exclusive ownership of a given recursive_mutex instance can call lock() or try_lock() to acquire an additional level of ownership of the mutex. unlock() must be called once for each level of ownership @@ -458,7 +458,7 @@

    Throws:

-            fiber_interrupted
+            Nothing

    @@ -555,16 +555,16 @@ };

    - recursive_timed_mutex provides an exclusive-ownership + recursive_timed_mutex provides an exclusive-ownership recursive mutex. At most one fiber can own the lock on a given instance of - recursive_timed_mutex at any time. Multiple concurrent + recursive_timed_mutex at any time. Multiple concurrent calls to lock(), try_lock(), try_lock_for(), try_lock_until() and unlock() shall be permitted. A fiber that already has exclusive ownership of a given - recursive_timed_mutex instance can call lock(), + recursive_timed_mutex instance can call lock(), try_lock(), try_lock_for() or try_lock_until() @@ -592,7 +592,7 @@

    Throws:

-            fiber_interrupted
+            Nothing

    @@ -678,7 +678,7 @@

    Attempt to obtain ownership for the current fiber. Blocks until ownership can be obtained, or the specified time is reached. If the specified - time has already passed, behaves as recursive_timed_mutex::try_lock(). + time has already passed, behaves as recursive_timed_mutex::try_lock().

    Returns:

    @@ -712,7 +712,7 @@

    Attempt to obtain ownership for the current fiber. Blocks until ownership can be obtained, or the specified time is reached. If the specified - time has already passed, behaves as recursive_timed_mutex::try_lock(). + time has already passed, behaves as recursive_timed_mutex::try_lock().

    Returns:

    diff --git a/doc/html/fiber/when_any/when_all_functionality/when_all__heterogeneous_types.html b/doc/html/fiber/when_any/when_all_functionality/when_all__heterogeneous_types.html index 417fc8f3..d81cb9a7 100644 --- a/doc/html/fiber/when_any/when_all_functionality/when_all__heterogeneous_types.html +++ b/doc/html/fiber/when_any/when_all_functionality/when_all__heterogeneous_types.html @@ -106,8 +106,8 @@

             The trouble with this tactic is that it would serialize all the task functions.
             The runtime makes a single pass through functions,
-            calling fibers::async() for each and then immediately calling
-            future::get() on its returned future<>. That blocks the implicit loop.
+            calling fibers::async() for each and then immediately calling
+            future::get() on its returned future<>. That blocks the implicit loop.
             The above is almost equivalent to writing:

    return Result{ functions()... };
    diff --git a/doc/html/fiber/when_any/when_all_functionality/when_all__return_values.html b/doc/html/fiber/when_any/when_all_functionality/when_all__return_values.html
    index b2cd4a1b..8b5100fa 100644
    --- a/doc/html/fiber/when_any/when_all_functionality/when_all__return_values.html
    +++ b/doc/html/fiber/when_any/when_all_functionality/when_all__return_values.html
    @@ -40,7 +40,7 @@
               available?
             

-            Fortunately we can present both APIs. Let's define wait_all_values_source() to return shared_ptr<unbounded_channel<T>>.[7]
+            Fortunately we can present both APIs. Let's define wait_all_values_source() to return shared_ptr<unbounded_channel<T>>.[6]

    Given wait_all_values_source(), it's straightforward to implement wait_all_values(): @@ -86,9 +86,9 @@

             As you can see from the loop in wait_all_values(), instead of requiring its caller to count
-            values, we define wait_all_values_source() to unbounded_channel::close() the
+            values, we define wait_all_values_source() to unbounded_channel::close() the
             channel when done. But how do we do that? Each producer fiber is independent.
-            It has no idea whether it is the last one to unbounded_channel::push() a
+            It has no idea whether it is the last one to unbounded_channel::push() a
             value.

    @@ -203,9 +203,9 @@



    -

-          [7]
-          We could have used either bounded_channel<> or
-          unbounded_channel<>. We chose unbounded_channel<>
+          [6]
+          We could have used either bounded_channel<> or
+          unbounded_channel<>. We chose unbounded_channel<>
           on the assumption that its simpler semantics imply a cheaper implementation.

    diff --git a/doc/html/fiber/when_any/when_all_functionality/when_all__simple_completion.html b/doc/html/fiber/when_any/when_all_functionality/when_all__simple_completion.html index 575eba72..01a89b7f 100644 --- a/doc/html/fiber/when_any/when_all_functionality/when_all__simple_completion.html +++ b/doc/html/fiber/when_any/when_all_functionality/when_all__simple_completion.html @@ -31,8 +31,8 @@ For the case in which we must wait for all task functions to complete — but we don't need results (or expect exceptions) from any of them — we can write wait_all_simple() that looks remarkably like wait_first_simple(). - The difference is that instead of our Done class, we instantiate a barrier and - call its barrier::wait(). + The difference is that instead of our Done class, we instantiate a barrier and + call its barrier::wait().

    We initialize the barrier diff --git a/doc/html/fiber/when_any/when_all_functionality/when_all_until_first_exception.html b/doc/html/fiber/when_any/when_all_functionality/when_all_until_first_exception.html index 6a1f46f2..45d45e31 100644 --- a/doc/html/fiber/when_any/when_all_functionality/when_all_until_first_exception.html +++ b/doc/html/fiber/when_any/when_all_functionality/when_all_until_first_exception.html @@ -35,7 +35,7 @@ instead of plain T.

    - wait_all_until_error() pops that future< T > and calls its future::get(): + wait_all_until_error() pops that future< T > and calls its future::get():

diff --git a/doc/html/fiber/when_any/when_any/when_any__a_dubious_alternative.html b/doc/html/fiber/when_any/when_any/when_any__a_dubious_alternative.html
index 6e7c17fd..cbf128d6 100644
--- a/doc/html/fiber/when_any/when_any/when_any__a_dubious_alternative.html
+++ b/doc/html/fiber/when_any/when_any/when_any__a_dubious_alternative.html
@@ -31,13 +31,12 @@
             Certain topics in C++ can arouse strong passions, and exceptions are no
             exception. We cannot resist mentioning — for purely informational purposes
             — that when you need only the first result from some
-            number of concurrently-running fibers, it would be possible to pass a shared_ptr<
-            promise<>> to the participating fibers, then cause
-            the initiating fiber to call future::get() on its future<>.
-            The first fiber to call promise::set_value() on that shared
-            promise will succeed; subsequent
-            set_value()
-            calls on the same promise
+            number of concurrently-running fibers, it would be possible to pass a
+            shared_ptr<promise<>> to the
+            participating fibers, then cause the initiating fiber to call future::get() on
+            its future<>. The first fiber to call promise::set_value() on
+            that shared promise will
+            succeed; subsequent set_value() calls on the same promise
             instance will throw future_error.

    diff --git a/doc/html/fiber/when_any/when_any/when_any__produce_first_outcome__whether_result_or_exception.html b/doc/html/fiber/when_any/when_any/when_any__produce_first_outcome__whether_result_or_exception.html index cd09aa77..3db7db5b 100644 --- a/doc/html/fiber/when_any/when_any/when_any__produce_first_outcome__whether_result_or_exception.html +++ b/doc/html/fiber/when_any/when_any/when_any__produce_first_outcome__whether_result_or_exception.html @@ -35,15 +35,15 @@

    Let's at least ensure that such an exception would propagate to the fiber - awaiting the first result. We can use future<> to transport - either a return value or an exception. Therefore, we will change wait_first_value()'s unbounded_channel<> to + awaiting the first result. We can use future<> to transport + either a return value or an exception. Therefore, we will change wait_first_value()'s unbounded_channel<> to hold future< T > items instead of simply T.

    Once we have a future<> - in hand, all we need do is call future::get(), which will either + in hand, all we need do is call future::get(), which will either return the value or rethrow the exception.

    @@ -75,10 +75,10 @@

    So far so good — but there's a timing issue. How should we obtain the future<> - to unbounded_channel::push() on the channel? + to unbounded_channel::push() on the channel?

    - We could call fibers::async(). That would certainly produce + We could call fibers::async(). That would certainly produce a future<> for the task function. The trouble is that it would return too quickly! We only want future<> @@ -90,19 +90,19 @@ completes most quickly.

    - Calling future::get() on the future returned by async() + Calling future::get() on the future returned by async() wouldn't be right. You can only call get() once per future<> instance! And if there were an exception, it would be rethrown inside the helper fiber at the producer end of the channel, rather than propagated to the consumer end.

    - We could call future::wait(). That would block the helper fiber + We could call future::wait(). That would block the helper fiber until the future<> became ready, at which point we could push() it to be retrieved by wait_first_outcome().

    That would work — but there's a simpler tactic that avoids creating an extra - fiber. We can wrap the task function in a packaged_task<>. + fiber. We can wrap the task function in a packaged_task<>. While one naturally thinks of passing a packaged_task<> to a new fiber — that is, in fact, what async() does — in this case, we're already running in the helper fiber at the producer diff --git a/doc/html/fiber/when_any/when_any/when_any__produce_first_success.html b/doc/html/fiber/when_any/when_any/when_any__produce_first_success.html index 1e48139a..4eff7844 100644 --- a/doc/html/fiber/when_any/when_any/when_any__produce_first_success.html +++ b/doc/html/fiber/when_any/when_any/when_any__produce_first_success.html @@ -97,16 +97,16 @@

     Instead of retrieving only the first future<> from the channel, we must now loop over future<>
-    items. Of course we must limit that iteration! If we launch only count producer fibers, the (count+1)
-    st
-    unbounded_channel::pop() call would block forever.
+    items. Of course we must limit that iteration! If we launch only count producer fibers, the (count+1)st
+    unbounded_channel::pop() call
+    would block forever.

     Given a ready future<>,
-    we can distinguish failure by calling future::get_exception_ptr().
+    we can distinguish failure by calling future::get_exception_ptr().
     If the future<> in fact contains a result rather than an exception,
     get_exception_ptr() returns nullptr.
-    In that case, we can confidently call future::get() to return
+    In that case, we can confidently call future::get() to return
     that result to our caller.

diff --git a/doc/html/fiber/when_any/when_any/when_any__return_value.html b/doc/html/fiber/when_any/when_any/when_any__return_value.html
index 7734baec..5f237130 100644
--- a/doc/html/fiber/when_any/when_any/when_any__return_value.html
+++ b/doc/html/fiber/when_any/when_any/when_any__return_value.html
@@ -35,8 +35,8 @@

     One tactic would be to adapt our Done class to store the first of the
     return values, rather than a simple bool.
-    However, we choose instead to use a unbounded_channel<>.
-    We'll only need to enqueue the first value, so we'll unbounded_channel::close() it
+    However, we choose instead to use a unbounded_channel<>.
+    We'll only need to enqueue the first value, so we'll unbounded_channel::close() it
     once we've retrieved that value. Subsequent push() calls will return closed.

diff --git a/doc/html/fiber/when_any/when_any/when_any__simple_completion.html b/doc/html/fiber/when_any/when_any/when_any__simple_completion.html
index cd7bbdfe..f89127f2 100644
--- a/doc/html/fiber/when_any/when_any/when_any__simple_completion.html
+++ b/doc/html/fiber/when_any/when_any/when_any__simple_completion.html
@@ -35,7 +35,7 @@

     For this we introduce a Done class to wrap a bool variable
-    with a condition_variable and a mutex:
+    with a condition_variable and a mutex:

diff --git a/doc/html/fiber_HTML.manifest b/doc/html/fiber_HTML.manifest
index 5dbecb89..3553621a 100644
--- a/doc/html/fiber_HTML.manifest
+++ b/doc/html/fiber_HTML.manifest
@@ -18,6 +18,13 @@
 fiber/synchronization/futures/packaged_task.html
 fiber/fls.html
 fiber/migration.html
 fiber/callbacks.html
+fiber/callbacks/overview.html
+fiber/callbacks/return_errorcode.html
+fiber/callbacks/success_or_exception.html
+fiber/callbacks/return_errorcode_or_data.html
+fiber/callbacks/data_or_exception.html
+fiber/callbacks/success_error_virtual_methods.html
+fiber/callbacks/then_there_s____boost_asio__.html
 fiber/nonblocking.html
 fiber/when_any.html
 fiber/when_any/when_any.html
@@ -34,6 +41,10 @@
 fiber/when_any/when_all_functionality/when_all_until_first_exception.html
 fiber/when_any/when_all_functionality/wait_all__collecting_all_exceptions.html
 fiber/when_any/when_all_functionality/when_all__heterogeneous_types.html
 fiber/integration.html
+fiber/integration/overview.html
+fiber/integration/event_driven_program.html
+fiber/integration/embedded_main_loop.html
+fiber/integration/deeper_dive_into___boost_asio__.html
 fiber/performance.html
 fiber/custom.html
 fiber/rationale.html
diff --git a/doc/html/index.html b/doc/html/index.html
index 488ae77b..d9320ef2 100644
--- a/doc/html/index.html
+++ b/doc/html/index.html
@@ -66,6 +66,19 @@
 between threads
 Integrating Fibers with Asynchronous Callbacks
+Overview
+Return Errorcode
+Success or Exception
+Return Errorcode or Data
+Data or Exception
+Success/Error Virtual Methods
+Then There’s Boost.Asio
 Integrating Fibers with Nonblocking I/O
 when_any / when_all
@@ -102,6 +115,15 @@
 Sharing a Thread with Another Main Loop
+Overview
+Event-Driven Program
+Embedded Main Loop
+Deeper Dive into Boost.Asio
 Performance
 Customization
 Rationale
@@ -111,7 +133,7 @@
-Last revised: March 28, 2016 at 18:28:49 GMT
+Last revised: May 01, 2016 at 07:21:27 GMT
diff --git a/doc/integration.qbk b/doc/integration.qbk
index 55272c55..f10bf727 100644
--- a/doc/integration.qbk
+++ b/doc/integration.qbk
@@ -25,31 +25,31 @@
 __boost_fiber__ because certain asynchronous I/O sequences are logically
 sequential, and for those you want to write and maintain code that looks and
 acts sequential.

-You are launching fibers on the application's main thread because certain of
-their actions will affect its user interface, and the application's UI
+You are launching fibers on the application[']s main thread because certain of
+their actions will affect its user interface, and the application[']s UI
 framework permits UI operations only on the main thread. Or perhaps those
 fibers need access to main-thread data, and it would be too expensive in
 runtime (or development time) to robustly defend every such data item with
 thread synchronization primitives.

-You must ensure that the application's main loop ['itself] doesn't monopolize
+You must ensure that the application[']s main loop ['itself] doesn[']t monopolize
 the processor: that the fibers it launches will get the CPU cycles they need.
 The solution is the same as for any fiber that might claim the CPU for an
 extended time: introduce calls to [ns_function_link this_fiber..yield]. The
 most straightforward approach is to call `yield()` on every iteration of your
-existing main loop. In effect, this unifies the application's main loop with
-__boost_fiber__'s internal main loop. `yield()` allows the fiber manager to
+existing main loop. In effect, this unifies the application[']s main loop with
+__boost_fiber__[']s internal main loop. `yield()` allows the fiber manager to
 run any fibers that have become ready since the previous iteration of the
-application's main loop. When these fibers have had a turn, control passes to
-the thread's main fiber, which returns from `yield()` and resumes the
-application's main loop.
+application[']s main loop. When these fibers have had a turn, control passes to
+the thread[']s main fiber, which returns from `yield()` and resumes the
+application[']s main loop.

 [endsect]

 [#embedded_main_loop]
 [section Embedded Main Loop]

-More challenging is when the application's main loop is embedded in some other
+More challenging is when the application[']s main loop is embedded in some other
 library or framework. Such an application will typically, after performing all
 necessary setup, pass control to some form of `run()` function from which
 control does not return until application shutdown.
diff --git a/doc/rationale.qbk b/doc/rationale.qbk
index f37f9b4e..f0639028 100644
--- a/doc/rationale.qbk
+++ b/doc/rationale.qbk
@@ -77,11 +77,10 @@

 See also [link migration Migrating fibers between threads].

 [heading support for Boost.Asio]

-Support for __boost_asio__'s __async_result__ is not part of the official API.
-However, to integrate with a `boost::asio::io_service`, see [link integration
-Sharing a Thread with Another Main Loop]. To interface smoothly with an
-arbitrary Asio async I/O operation, see [link callbacks_asio Then There's
-__boost_asio__].
+Support for __boost_asio__[']s __async_result__ is not part of the official API.
+However, to integrate with a __io_service__, see [link integration Sharing a
+Thread with Another Main Loop]. To interface smoothly with an arbitrary Asio
+async I/O operation, see [link callbacks_asio Then There[']s __boost_asio__].

 [heading tested compilers]
diff --git a/doc/when_any.qbk b/doc/when_any.qbk
index 11a928c4..6785994a 100644
--- a/doc/when_any.qbk
+++ b/doc/when_any.qbk
@@ -302,7 +302,7 @@
 Certain topics in C++ can arouse strong passions, and exceptions are no
 exception.
 We cannot resist mentioning [mdash] for purely informational purposes [mdash]
 that when you need only the ['first] result from some number of
 concurrently-running fibers, it would be possible to pass a
-[`shared_ptr<[template_link promise]>] to the participating fibers, then cause
+[^shared_ptr<[template_link promise]>] to the participating fibers, then cause
 the initiating fiber to call [member_link future..get] on its [template_link
 future]. The first fiber to call [member_link promise..set_value] on that
 shared `promise` will succeed; subsequent `set_value()` calls on the same