[/
Copyright Oliver Kowalke 2016.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt
]
[/ import path is relative to this .qbk file]
[import ../examples/work_sharing.cpp]
[#migration]
[section:migration Migrating fibers between threads]
[heading Overview]
Each fiber owns a stack and manages its execution state, including all
registers and CPU flags, the instruction pointer and the stack pointer. That
means, in general, a fiber is not bound to a specific thread.[footnote The
["main] fiber on each thread, that is, the fiber on which the thread is
launched, cannot migrate to any other thread. Also __boost_fiber__ implicitly
creates a dispatcher fiber for each thread [mdash] this cannot migrate
either.][superscript,][footnote Of course it would be problematic to migrate a
fiber that relies on [link thread_local_storage thread-local storage].]
Migrating a fiber from a logical CPU with heavy workload to another
logical CPU with a lighter workload might speed up the overall execution.
Note that on NUMA architectures it is not always advisable to migrate a fiber,
and hence access to its data, between threads. Suppose fiber ['f] is running on
logical CPU ['cpu0], which belongs to NUMA node ['node0]. The data of ['f] are
allocated in the physical memory attached to ['node0]. Migrating the fiber from
['cpu0] to another logical CPU ['cpuX] that is part of a different NUMA node
['nodeX] might reduce the performance of the application due to the increased
latency of memory access.
Only fibers that are contained in __algo__'s ready queue can migrate between
threads. You cannot migrate a running fiber, nor one that is __blocked__.
In __boost_fiber__ a fiber is migrated by invoking __context_migrate__ on the
[class_link context] instance for a fiber already associated with the
destination thread, passing the `context` of the fiber to be migrated.
[heading Example of work sharing]
In the example [@../../examples/work_sharing.cpp work_sharing.cpp]
multiple worker fibers are created on the main thread. Each fiber is passed a
character as a parameter at construction and prints that character ten times.
Between iterations the fiber calls __yield__, which puts it into the ready
queue of the fiber scheduler ['shared_ready_queue] running in the current
thread.
The next fiber ready to be executed is dequeued from the shared ready queue
and resumed by ['shared_ready_queue] running on ['any participating thread].
All instances of ['shared_ready_queue] share one global concurrent queue, used
as the ready queue. This mechanism shares all worker fibers between all
instances of ['shared_ready_queue], and thus between all participating
threads.
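A rough sketch of that shared state (the member names here are illustrative;
the real declarations appear in the scheduler snippets below) is a class-static
queue of `context` pointers plus a mutex guarding it:

    #include <boost/fiber/all.hpp>
    #include <mutex>
    #include <queue>

    // sketch: because these members are static, every instance of the
    // scheduler, and therefore every participating thread, pushes to and pops
    // from the same ready queue
    class shared_queue_sketch {
        static std::queue< boost::fibers::context * >  rqueue_;     // shared ready queue
        static std::mutex                               rqueue_mtx_; // guards rqueue_
    };

    std::queue< boost::fibers::context * > shared_queue_sketch::rqueue_{};
    std::mutex                             shared_queue_sketch::rqueue_mtx_{};
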
[heading Setup of threads and fibers]
In `main()` the fiber-scheduler is installed and the worker fibers and the
threads are launched.
[main_ws]
The start of the threads is synchronized with a barrier. The main fiber of
each thread (including the main thread) is suspended until all worker fibers
are complete. When the main fiber returns from __cond_wait__, the thread
terminates: the main thread joins all other threads.
[thread_fn_ws]
Each worker fiber executes the function `whatevah()` with the character `me`
as its parameter. The fiber yields in a loop and prints a message if it has
been migrated to another thread.
[fiber_fn_ws]
[heading Scheduling fibers]
The fiber scheduler `shared_ready_queue` is like `round_robin`, except that it
shares a common ready queue among all participating threads. A thread
participates in this pool by executing [function_link use_scheduling_algorithm]
before any other __boost_fiber__ operation.
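For instance (a sketch rather than the example's exact code), a participating
thread's entry function would start out along these lines, where
`shared_ready_queue` is the example's scheduler class from work_sharing.cpp:

    #include <boost/fiber/all.hpp>

    // sketch: the first Boost.Fiber operation performed by a participating
    // thread installs the shared scheduler for that thread
    void participate() {
        boost::fibers::use_scheduling_algorithm< shared_ready_queue >();
        // from here on, this thread's scheduler takes ready fibers from the
        // queue it shares with every other participating thread
    }
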
The important point about the ready queue is that it's a class static, common
to all instances of `shared_ready_queue`.
Fibers that are enqueued via __algo_awakened__ (fibers that are ready to be
resumed) are thus available to all threads.
We must reserve a separate, scheduler-specific queue for the thread's main
fiber and dispatcher fiber: these may ['not] be shared between threads! When
we're passed either of these fibers, we push it onto that thread-local queue
instead of onto the shared queue: it would be Bad News for thread B to retrieve
and attempt to execute thread A's main fiber.
[awakened_ws]
When __algo_pick_next__ gets called inside one thread, a fiber is dequeued from
['rqueue_] and will be resumed in that thread.
[pick_next_ws]
The source code above is found in
[@../../examples/work_sharing.cpp work_sharing.cpp].
[endsect]