mirror of https://github.com/boostorg/fiber.git synced 2026-02-20 02:32:19 +00:00

Merge pull request #85 from nat-goodspeed/attach-detach-doc

Attach / detach doc
Oliver Kowalke
2016-05-03 06:17:19 +02:00
4 changed files with 73 additions and 24 deletions

View File

@@ -32,12 +32,29 @@ another logical CPU ['cpuX] which is part of a different NUMA node ['nodeX]
might reduce the performance of the application due to increased latency of
memory access.
Only fibers that are contained in __algo__'s ready queue can migrate between
threads. You cannot migrate a running fiber, nor one that is __blocked__.
Only fibers that are contained in __algo__'s ready queue can migrate between
threads. You cannot migrate a running fiber, nor one that is __blocked__. You
cannot migrate a fiber if its [member_link context..is_context] method returns
`true` for `pinned_context`.
In__boost_fiber__ a fiber is migrated by invoking __context_detach__ within the
thread from which the fiber migrates from and __context_attach__ within the the
thread the fiber migrates to.
In __boost_fiber__ a fiber is migrated by invoking __context_detach__ on the
thread from which the fiber migrates and __context_attach__ on the thread to
which the fiber migrates.
Thus, fiber migration is accomplished by sharing state between instances of a
user-coded __algo__ implementation running on different threads. The fiber's
original thread calls [member_link sched_algorithm..awakened], passing the
fiber's [class_link context][^*]. The `awakened()` implementation calls
__context_detach__.
At some later point, when the same or a different thread calls [member_link
sched_algorithm..pick_next], the `pick_next()` implementation selects a ready
fiber and calls __context_attach__ on it before returning it.
As stated above, a `context` for which `is_context(pinned_context) == true`
must never be passed to either __context_detach__ or __context_attach__. It
may only be returned from `pick_next()` called by the ['same] thread that
passed that context to `awakened()`.
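The following condensed sketch pulls these pieces together. It is illustrative
only: the class name `shared_queue_algo` and the members `rqueue_`,
`rqueue_mtx_` and `local_queue_` are assumptions made for this sketch, not
names defined by the library (a complete, working version is the work-sharing
example below).

    #include <chrono>
    #include <mutex>
    #include <boost/fiber/all.hpp>

    // Illustrative sketch of fiber migration via detach()/attach().
    class shared_queue_algo : public boost::fibers::sched_algorithm {
        typedef boost::fibers::scheduler::ready_queue_t rqueue_t;

        static rqueue_t   rqueue_;        // ready queue shared by all threads
        static std::mutex rqueue_mtx_;    // guards rqueue_
        rqueue_t          local_queue_{}; // per-thread queue: pinned contexts only

    public:
        virtual void awakened( boost::fibers::context * ctx) noexcept {
            if ( ctx->is_context( boost::fibers::type::pinned_context) ) {
                local_queue_.push_back( * ctx);   // main/dispatcher fiber stays here
            } else {
                ctx->detach();                    // called on the fiber's original thread
                std::unique_lock< std::mutex > lk( rqueue_mtx_);
                rqueue_.push_back( * ctx);        // now visible to every thread
            }
        }

        virtual boost::fibers::context * pick_next() noexcept {
            boost::fibers::context * ctx = nullptr;
            std::unique_lock< std::mutex > lk( rqueue_mtx_);
            if ( ! rqueue_.empty() ) {
                ctx = & rqueue_.front();
                rqueue_.pop_front();
                lk.unlock();
                // attach on the thread that is about to resume the fiber
                boost::fibers::context::active()->attach( ctx);
            } else {
                lk.unlock();
                if ( ! local_queue_.empty() ) {
                    ctx = & local_queue_.front(); // pinned: never detached or attached
                    local_queue_.pop_front();
                }
            }
            return ctx;
        }

        virtual bool has_ready_fibers() const noexcept {
            std::unique_lock< std::mutex > lk( rqueue_mtx_);
            return ! rqueue_.empty() || ! local_queue_.empty();
        }

        // suspend_until()/notify() should block/wake an idle thread; trivial
        // versions are shown only to keep this sketch short
        virtual void suspend_until( std::chrono::steady_clock::time_point const&) noexcept {}
        virtual void notify() noexcept {}
    };

    boost::fibers::scheduler::ready_queue_t shared_queue_algo::rqueue_{};
    std::mutex shared_queue_algo::rqueue_mtx_;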
[heading Example of work sharing]
@@ -83,15 +100,15 @@ shares a common ready queue among all participating threads. A thread
participates in this pool by executing [function_link use_scheduling_algorithm]
before any other __boost_fiber__ operation.
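For instance (a schematic sketch: `thread_fn` is a placeholder name, and
`shared_ready_queue` is the scheduler class described just below), each
additional thread joining the pool might look like:

    void thread_fn() {
        // must run before any other Boost.Fiber operation on this thread
        boost::fibers::use_scheduling_algorithm< shared_ready_queue >();
        // ... then keep the thread alive, e.g. by blocking on a fiber
        // condition_variable, so its scheduler can resume shared fibers
    }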
The important point about the ready queue is that it's a class static, common
The important point about the ready queue is that it's a class static, common
to all instances of shared_ready_queue.
Fibers that are enqueued via __algo_awakened__ (fibers that are ready to be
resumed) are thus available to all threads.
It is required to reserve a separate, scheduler-specific queue for the thread's
It is required to reserve a separate, scheduler-specific queue for the thread's
main fiber and dispatcher fibers: these may ['not] be shared between threads!
When we're passed either of these fibers, push it there instead of in the
When we're passed either of these fibers, push it there instead of in the
shared queue: it would be Bad News for thread B to retrieve and attempt to
execute thread A's main fiber.
execute thread A's main fiber.
[awakened_ws]

View File

@@ -418,11 +418,11 @@ of typical STL containers.
#include <boost/fiber/context.hpp>
enum class type {
none,
main_context, // fiber associated with thread's stack
dispatcher_context, // special fiber for maintenance operations
worker_context, // fiber not special to the library
pinned_context // fiber must not be migrated to another thread
none = ``['unspecified]``,
main_context = ``['unspecified]``, // fiber associated with thread's stack
dispatcher_context = ``['unspecified]``, // special fiber for maintenance operations
worker_context = ``['unspecified]``, // fiber not special to the library
pinned_context = ``['unspecified]`` // fiber must not be migrated to another thread
};
class context {
@@ -490,8 +490,22 @@ default-constructed __fiber_id__.]]
void attach( context * f) noexcept;
[variablelist
[[Effects:] [Atach fiber `f` to scheduler running `*this`.]]
[[Precondition:] [`this->get_scheduler() == nullptr`]]
[[Effects:] [Attach fiber `f` to scheduler running `*this`.]]
[[Postcondition:] [`this->get_scheduler() != nullptr`]]
[[Throws:] [Nothing]]
[[Note:] [A typical call: `boost::fibers::context::active()->attach(f);`]]
[[Note:] [`f` must not be the running fiber's context. It must not be
__blocked__ or terminated. It must not be a `pinned_context`. It must be
currently detached. It must not currently be linked into a [class_link
sched_algorithm] implementation's ready queue. Most of these conditions are
implied by `f` being owned by a `sched_algorithm` implementation: that is, it
has been passed to [member_link sched_algorithm..awakened] but has not yet
been returned by [member_link sched_algorithm..pick_next]. Typically a
`pick_next()` implementation would call `attach()` with the `context*` it is
about to return. It must first remove `f` from its ready queue. You should
never pass a `pinned_context` to `attach()` because you should never have
called its `detach()` method in the first place.]]
]
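As an illustration only (assuming a shared intrusive ready queue `rqueue_`
guarded by a mutex `rqueue_mtx_`, as in the work-sharing example), the relevant
part of a `pick_next()` member of a [class_link sched_algorithm] implementation
might read:

    boost::fibers::context * pick_next() noexcept {
        std::unique_lock< std::mutex > lk( rqueue_mtx_);
        if ( rqueue_.empty() ) {
            return nullptr;
        }
        boost::fibers::context * ctx = & rqueue_.front();
        rqueue_.pop_front();      // remove the fiber from the ready queue first
        lk.unlock();
        // attach to the scheduler of the thread that will resume the fiber
        boost::fibers::context::active()->attach( ctx);
        return ctx;
    }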
[member_heading context..detach]
@@ -499,8 +513,27 @@ default-constructed __fiber_id__.]]
void detach() noexcept;
[variablelist
[[Precondition:] [`(this->get_scheduler() != nullptr) && ! this->is_context(pinned_context)`]]
[[Effects:] [Detach fiber `*this` from its scheduler running `*this`.]]
[[Throws:] [Nothing]]
[[Postcondition:] [`this->get_scheduler() == nullptr`]]
[[Note:] [This method must be called on the thread with which the fiber is
currently associated. `*this` must not be the running fiber's context. It
must not be __blocked__ or terminated. It must not be a `pinned_context`. It
must not be detached already. It must not already be linked into a [class_link
sched_algorithm] implementation's ready queue. Most of these conditions are
implied by `*this` being passed to [member_link sched_algorithm..awakened]; an
`awakened()` implementation must, however, test for `pinned_context`. It must
call `detach()` ['before] linking `*this` into its ready queue.]]
[[Note:] [In particular, it is erroneous to attempt to migrate a fiber from
one thread to another by calling both `detach()` and `attach()` in the
[member_link sched_algorithm..pick_next] method. `pick_next()` is called on
the intended destination thread. `detach()` must be called on the fiber's
original thread. You must call `detach()` in the corresponding `awakened()`
method.]]
[[Note:] [Unless you intend to make a fiber available for potential migration to
a different thread, you should call neither `detach()` nor `attach()` with its
`context`.]]
]
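A corresponding sketch of the `awakened()` side (again, `rqueue_`,
`rqueue_mtx_` and the thread-local `local_queue_` are assumed names borrowed
from the work-sharing example):

    void awakened( boost::fibers::context * ctx) noexcept {
        if ( ctx->is_context( boost::fibers::type::pinned_context) ) {
            // main or dispatcher fiber: keep it on this thread, never detach
            local_queue_.push_back( * ctx);
        } else {
            ctx->detach();        // detach before linking into the shared queue
            std::unique_lock< std::mutex > lk( rqueue_mtx_);
            rqueue_.push_back( * ctx);
        }
    }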
[member_heading context..is_context]
@@ -519,7 +552,7 @@ dispatching awakened fibers to a scheduler's ready-queue. The ["dispatching]
fiber is an implementation detail of the fiber manager. The context of the
["main] or ["dispatching] fiber [mdash] any fiber for which
`is_context(pinned_context)` is `true` [mdash] must never be passed to
[member_link context..migrate] for any other thread.]]
[member_link context..detach].]]
]
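For example (illustrative fragment), code can ask whether the currently running
fiber is such a pinned fiber:

    boost::fibers::context * ctx = boost::fibers::context::active();
    if ( ctx->is_context( boost::fibers::type::pinned_context) ) {
        // main or dispatcher fiber of this thread:
        // this context must never be passed to detach()
    }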
[member_heading context..is_terminated]
@@ -675,8 +708,8 @@ application program.]]
[[Note:] [It is explicitly supported to call `set_ready(ctx)` from a thread
other than the one on which `*ctx` is currently suspended. The corresponding
fiber will be resumed on its original thread in due course.]]
[[Note:] [See [member_link context..migrate] for a way to migrate the
suspended thread to the thread calling `set_ready()`.]]
[/[[Note:] [See [member_link context..migrate] for a way to migrate the
suspended thread to the thread calling `set_ready()`.]]]
]
[hding context_less..Non-member function [`operator<()]]

View File

@@ -73,8 +73,7 @@ public:
BOOST_ASSERT( nullptr != ctx);
boost::fibers::context::active()->attach( ctx); /*<
attach context to current scheduler via the active fiber
of this thread; benign if the fiber already belongs to this
thread
of this thread
>*/
} else {
lk.unlock();
@@ -208,7 +207,7 @@ int main( int argc, char *argv[]) {
if all worker fibers are complete.
>*/
} /*<
Releasing lock of mtx_count is required before joining the threads, othwerwise
Releasing lock of mtx_count is required before joining the threads, otherwise
the other threads would be blocked inside condition_variable::wait() and
would never return (deadlock).
>*/
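The pattern, sketched with hypothetical stand-ins for the example's globals
(`mtx_count` appears in the example; `cnd_count`, `fiber_count` and `threads`
are assumed names here):

    #include <condition_variable>
    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex              mtx_count;
    std::condition_variable cnd_count;
    int                     fiber_count = 0;   // decremented as workers finish

    void join_all( std::vector< std::thread > & threads) {
        {
            std::unique_lock< std::mutex > lk( mtx_count);
            cnd_count.wait( lk, [](){ return 0 == fiber_count; });
        }   // lk released here, before join(): threads still blocked inside
            // condition_variable::wait() must be able to re-acquire mtx_count,
            // otherwise join() would deadlock
        for ( std::thread & t : threads) {
            t.join();
        }
    }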

View File

@@ -139,7 +139,7 @@ public:
}
};
class tief_algo : public boost::fibers::sched_algorithm {
class thief_algo : public boost::fibers::sched_algorithm {
private:
typedef boost::fibers::scheduler::ready_queue_t rqueue_t;
typedef work_stealing_queue ws_rqueue_t;
@@ -151,7 +151,7 @@ private:
bool flag_{ false };
public:
tief_algo( std::shared_ptr< ws_rqueue_t > ws_rqueue) :
thief_algo( std::shared_ptr< ws_rqueue_t > ws_rqueue) :
ws_rqueue_( ws_rqueue) {
}
@@ -228,7 +228,7 @@ boost::fibers::future< int > fibonacci( int n) {
}
void thread( std::shared_ptr< work_stealing_queue > ws_queue) {
boost::fibers::use_scheduling_algorithm< tief_algo >( ws_queue);
boost::fibers::use_scheduling_algorithm< thief_algo >( ws_queue);
while ( ! fini) {
// To guarantee progress, we must ensure that