@@ -0,0 +1,956 @@
|
||||
[/
|
||||
/ Copyright (c) 2014 Vicente J. Botet Escriba
|
||||
/
|
||||
/ Distributed under the Boost Software License, Version 1.0. (See accompanying
|
||||
/ file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
|
||||
/]
|
||||
|
||||
[//////////////////////////////////////////////////////////]
|
||||
[section:executors Executors and Schedulers -- EXPERIMENTAL]
|
||||
|
||||
[warning These features are experimental and subject to change in future versions. There are not many tests yet, so you may well find some trivial bugs :(]
|
||||
|
||||
[note These features are based on the [@http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3785.pdf [*N3785 - Executors and Schedulers revision 3]] C++1y proposal from Chris Mysen, Niklas Gustafsson, Matt Austern, Jeffrey Yasskin. The text that follows has been adapted from this paper to show the differences.]
|
||||
|
||||
Executors are objects that can execute units of work packaged as function objects. Boost.Thread differs from N3785 mainly in that an executor doesn't need to inherit from an abstract class Executor. Static polymorphism is used instead, and type erasure is used internally.
|
||||
|
||||
[////////////////////]
|
||||
[section Introduction]
|
||||
|
||||
Multithreaded programs often involve discrete (sometimes small) units of work that are executed asynchronously. This often involves passing work units to some component that manages execution. We already have boost::async, which potentially executes a function asynchronously and eventually returns its result in a future. (“As if” by launching a new thread.)
|
||||
|
||||
If there is a regular stream of small work items then we almost certainly don't want to launch a new thread for each, and it's likely that we want at least some control over which thread(s) execute which items. It is often convenient to represent that control as multiple executor objects. This allows programs to start executors when necessary, switch from one executor to another to control execution policy, and use multiple executors to prevent interference and thread exhaustion. Several possible implementations of the executor concept exist, and in practice there are a number of main groups of executors which have been found to be useful in real-world code (more implementations exist; this is simply a high-level classification of them). These differ along a few main dimensions: how many execution contexts will be used, how they are selected, and how they are prioritized.
|
||||
|
||||
# Thread Pools
|
||||
# Simple unbounded thread pool, which can queue up an unbounded amount of work and maintains a dedicated set of threads (up to some maximum) which dequeue and execute work as available.
|
||||
# Bounded thread pools, which can be implemented as a specialization of the previous ones with a bounded queue or semaphore, which limits the amount of queuing in an attempt to bound the time spent waiting to execute and/or limit resource utilization for work tasks which hold state which is expensive to hold.
|
||||
# Thread-spawning executors, in which each work unit always executes in a new thread.
|
||||
# Prioritized thread pools, in which work items are not equally prioritized, so that work can move to the front of the execution queue when necessary. This requires a special comparator or prioritization function to allow for work ordering, and it is normally implemented as a blocking priority queue in front of the pool instead of a blocking queue. This has many uses but is somewhat specialized in nature and would unnecessarily clutter the initial interface.
|
||||
# Work-stealing thread pools. This is a specialized use case, encapsulated in Java's ForkJoinPool, which allows lightweight work to be created by tasks in the pool and either run by the same thread for invocation efficiency or stolen by another thread without additional work. These have been left out until there is a more concrete fork-join proposal, or until there is a clearer need, as they can be complicated to implement.
|
||||
|
||||
# Mutual exclusion executors
|
||||
# Serial executors, which guarantee that no two work units execute concurrently. This allows a sequence of operations to be queued and executed in that same order; work can be queued onto a separate thread with no additional mutual exclusion required.
|
||||
# Loop executor, in which one thread donates itself to the executor to execute all queued work. This is related to the serial executor in that it guarantees mutual exclusion, but instead guarantees a particular thread will execute the work. These are particularly useful for testing purposes where code assumes an executor but testing code desires control over execution.
|
||||
# GUI thread executor, where a GUI framework can expose an executor interface to allow other threads to queue up work to be executed as part of the GUI thread. This behaves similarly to a loop executor, but must be implemented as a custom interface as part of the framework.
|
||||
|
||||
# Inline executors, which execute inline in the thread that calls submit(). There is no queuing and the caller's thread is always used to execute the work, though work submitted from different threads can still run in parallel. This type of executor is often useful when an interface requires an executor but, for performance reasons, it is better not to queue work or switch threads. It is often very useful as an optimization for work continuations which should execute immediately or quickly, and can also be useful when an interface requires an executor but the work tasks are too small to justify the overhead of a full thread pool.
|
||||
|
||||
A question arises of which of these executors (or others) should be included in this library. There are use cases for these and many other executors. Often it is useful to have more than one executor implementation (e.g. the thread pool) to have more precise control over where the work is executed, for example because of the existence of a GUI thread, or for testing purposes. A few core executors are frequently useful, and these have been outlined here as the core of what should be in this library; if common use cases arise for alternative executor implementations, they can be added in the future. The current set provided here is: a basic thread pool `basic_thread_pool`, a serial executor `serial_executor`, a loop executor `loop_executor`, an inline executor `inline_executor` and a thread-spawning executor `thread_executor`. A minimal usage sketch follows.
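
The following minimal sketch (not part of the library documentation) shows the common usage pattern with the default executor implementation, `basic_thread_pool`; it only uses operations described in the reference section below:

    #include <boost/thread/executors/basic_thread_pool.hpp>
    #include <iostream>

    int main()
    {
      // By default the pool has thread::hardware_concurrency() threads.
      boost::basic_thread_pool pool;

      // Any callable with void() signature can be submitted as a closure.
      pool.submit([]{ std::cout << "hello from the pool\n"; });

      // Closing stops new submissions; queued closures are still executed
      // before the destructor completes.
      pool.close();
      return 0;
    }
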
|
||||
[endsect]
|
||||
|
||||
[
|
||||
[/////////////////////////]
|
||||
[section:tutorial Tutorial]
|
||||
|
||||
|
||||
[endsect]
|
||||
]
|
||||
[////////////////]
|
||||
[section:examples Examples]
|
||||
|
||||
[section:quick_sort Parallel Quick Sort]
|
||||
|
||||
|
||||
    #include <boost/thread/executors/basic_thread_pool.hpp>
    #include <boost/thread/future.hpp>
    #include <numeric>
    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <list>

    template<typename T>
    struct sorter
    {
      boost::basic_thread_pool pool;
      typedef std::list<T> return_type;

      std::list<T> do_sort(std::list<T> chunk_data)
      {
        if(chunk_data.empty()) {
          return chunk_data;
        }

        // Take the first element as the partition value.
        std::list<T> result;
        result.splice(result.begin(), chunk_data, chunk_data.begin());
        T const& partition_val = *result.begin();

        typename std::list<T>::iterator divide_point =
            std::partition(chunk_data.begin(), chunk_data.end(),
                [&](T const& val){ return val < partition_val; });

        // Sort the lower chunk asynchronously on the pool and the higher
        // chunk recursively on this thread.
        std::list<T> new_lower_chunk;
        new_lower_chunk.splice(new_lower_chunk.end(), chunk_data,
            chunk_data.begin(), divide_point);
        boost::future<std::list<T> > new_lower =
            boost::async(pool, &sorter::do_sort, this, std::move(new_lower_chunk));
        std::list<T> new_higher(do_sort(chunk_data));
        result.splice(result.end(), new_higher);

        // While waiting for the asynchronous part, help the pool make progress.
        while(!new_lower.is_ready()) {
          pool.schedule_one_or_yield();
        }
        result.splice(result.begin(), new_lower.get());
        return result;
      }
    };

    template<typename T>
    std::list<T> parallel_quick_sort(std::list<T>& input) {
      if(input.empty()) {
        return input;
      }
      sorter<T> s;
      return s.do_sort(input);
    }
|
||||
|
||||
|
||||
[endsect]
|
||||
[endsect]
|
||||
|
||||
|
||||
[////////////////////////]
|
||||
[section:rationale Design Rationale]
|
||||
|
||||
The authors of Boost.Thread have taken a different approach with respect to N3785. Instead of basing the whole design on an abstract executor class, we define executor concepts. We believe that this is the right direction, as a static polymorphic executor can be seen as a dynamic polymorphic executor by using a simple adaptor. We also believe that this makes the library more usable and more convenient for users.
|
||||
|
||||
The major design decisions concern deciding what a unit of work is, and how to manage units of work and time-related functions in a polymorphic way.
|
||||
|
||||
An Executor is an object that schedules the closures that have been submitted to it, usually asynchronously. There can be multiple models of the Executor concept. Some specific design notes:
|
||||
|
||||
* Thread pools are well-known models of the Executor concept, and this library does indeed include a basic_thread_pool class, but other implementations also exist, including the ability to schedule work on GUI threads, scheduling work on a donor thread, as well as several specializations of thread pools.
|
||||
|
||||
* The choice of which executor to use is explicit. This is important for reasons described in the Motivation section. In particular, consider the common case of an asynchronous operation that itself spawns asynchronous operations. If both operations ran on the same executor, and if that executor had a bounded number of worker threads, then we could get deadlock. Programs often deal with such issues by splitting different kinds of work between different executors.
|
||||
|
||||
* Even if there could be strong value in having a default executor that can be used when detailed control is unnecessary, the authors don't know how to implement it in a portable and robust way.
|
||||
|
||||
* The library provides executors based on static and dynamic polymorphism. The static polymorphism interface is intended to be used in contexts that need the best performance. The dynamic polymorphism interface has the advantage of being able to change the executor a function is using without making it a template, and it makes it possible to pass executors across a binary interface. For some applications, the cost of an additional virtual dispatch is almost certainly negligible compared to the other operations involved.
|
||||
|
||||
* Conceptually, an executor puts closures on a queue and at some point executes them. The queue is always unbounded, so adding a closure to an executor never blocks. (Defining “never blocks” formally is challenging, but informally we just mean that submit() is an ordinary function that executes something and returns, rather than waiting for the completion of some potentially long running operation in another thread.)
|
||||
|
||||
[heading Closure]
|
||||
|
||||
One important question is just what a closure is. This library has a very simple answer: a closure is a `Callable` with no parameters and returning `void`.
|
||||
|
||||
N3785 chose the more specific `std::function<void()>`, as it provides only dynamic polymorphism, and states that in practice the implementation of a template-based or any other approach is impractical. The authors of this library think that the template-based approach is compatible with a dynamic approach. N3785 gives some arguments:
|
||||
|
||||
The first one is that a virtual function cannot be a template. This is true, but it is also true that the executor interface can provide template functions that call the virtual public functions. Another reason they give is that "a template parameter would complicate the interface without adding any real generality. In the end an executor class is going to need some kind of type erasure to handle all the different kinds of function objects with `void()` signature, and that's exactly what std::function already does". We think that it is up to the executor to manage these implementation details, not the user.
|
||||
|
||||
We share the arguments they give related to the `void()` interface of the work unit. A work unit is a closure that takes no arguments and returns no value. This is indeed a limitation on user code, but combined with `boost::async` taking executors as parameters, the user has everything that is needed.
|
||||
|
||||
The third one is related to performance. They assert that "any mechanism for storing closures on an executor's queue will have to use some form of type erasure. There's no reason to believe that a custom closure mechanism, written just for std::executor and used nowhere else within the standard library, would be better in that respect than `std::function<void()>`". We believe that the implementation can do better than storing the closure in a `std::function<void()>`; e.g. the implementation can use intrusive data to store the closure and the pointers to other nodes needed to store the closures in a given order.
|
||||
|
||||
In addition, `std::function<void()>` cannot be constructed by moving the closure, so e.g. `std::packaged_task` could not be a Closure.
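
As an illustration of this point, the following sketch (assuming the Boost.Thread v4 interface and the template `submit()` described in the reference below) submits both a copyable lambda and a move-only `packaged_task`, something a `std::function<void()>`-based queue could not store:

    #include <boost/thread/executors/basic_thread_pool.hpp>
    #include <boost/thread/future.hpp>

    int main()
    {
      boost::basic_thread_pool pool;

      // A copyable closure: a lambda with void() signature.
      pool.submit([]{ /* some work */ });

      // A move-only closure: operator() returns void and stores the result.
      boost::packaged_task<int()> task([]{ return 42; });
      boost::future<int> f = task.get_future();
      pool.submit(boost::move(task));

      int r = f.get(); // waits until the task has run on the pool
      (void)r;
      return 0;
    }
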
|
||||
|
||||
[/
|
||||
[heading Scheduled work]
|
||||
|
||||
The approach of this library with respect to the scheduled work of the N3785 proposal is quite different. Instead of adding the scheduled operations to a specific scheduled_executor polymorphic interface, we opt for adding two member template functions to a class scheduled_executor that wraps an existing executor. This has several advantages:
|
||||
|
||||
* The scheduled operations are available for all the executors.
|
||||
* The template functions could accept any chrono `time_point` and `duration` respectively as we are not working with virtual functions.
|
||||
|
||||
In order to manage with all the clocks, there are two alternatives:
|
||||
|
||||
* transform the submit_at operation into a `submit_after` operation and let a single `scheduled_executor` manage a single clock.
|
||||
* have a single instance of a `scheduled_executor<Clock>` for each `Clock`.
|
||||
|
||||
The library chose the first of those options, largely for simplicity.
|
||||
]
|
||||
|
||||
[heading Unhandled Exceptions]
|
||||
As in N3785, and based on the same design decision as `std`/`boost::thread`, if a user closure throws an exception the executor must call the `std::terminate` function.
|
||||
Note that when we combine `boost::async` and executors, the exception will be caught by the closure associated with the returned future, so the exception is stored in the returned future, as for the other `async` overloads.
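
For example (a sketch, using the executor-aware `boost::async` overload shown in the Parallel Quick Sort example):

    #include <boost/thread/executors/basic_thread_pool.hpp>
    #include <boost/thread/future.hpp>
    #include <iostream>
    #include <stdexcept>

    int work_that_throws()
    {
      throw std::runtime_error("problem while executing");
    }

    int main()
    {
      boost::basic_thread_pool pool;

      // The exception doesn't reach std::terminate: boost::async wraps the
      // call and stores the exception in the shared state of the future.
      boost::future<int> f = boost::async(pool, &work_that_throws);

      try {
        f.get(); // the stored exception is rethrown here
      } catch (std::runtime_error const& e) {
        std::cout << e.what() << std::endl;
      }
      return 0;
    }
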
|
||||
|
||||
[heading At thread entry]
|
||||
|
||||
It is a common idiom to set some thread-local variable at the beginning of a thread. As executors could instantiate threads internally, these executors shall have the ability to call a user-specific function at thread entry, passed via the executor constructor.
|
||||
|
||||
For executors that don't instantiate any thread and that use the current thread, this function shall be called only for the thread calling the `at_thread_entry` member function.
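
For example, a sketch using the two-argument `basic_thread_pool` constructor shown in the reference below (the thread-local flag is hypothetical):

    thread_local bool configured_for_pool = false;   // hypothetical per-thread data

    // The second argument is called at thread entry, once in each of the
    // pool's worker threads, before any closure runs there.
    boost::basic_thread_pool pool(4,
        [](boost::basic_thread_pool&) {
          configured_for_pool = true;
        });
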
|
||||
|
||||
[heading Cancelation]
|
||||
|
||||
The library does not yet provide for the ability to cancel/interrupt work, though this is a commonly requested feature.
|
||||
|
||||
This could be managed externally by an additional cancelation object that can be shared between the creator of the unit of work and the unit of work.
|
||||
|
||||
We can think also of a cancelable closure that could be used in a more transparent way.
|
||||
|
||||
An alternative is to make async return a cancelable_task, but this would also need a cancelable closure.
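
A minimal sketch of the external approach (the names are hypothetical, not part of the library): the creator and the unit of work share a flag, and the closure checks it at convenient cancelation points.

    #include <boost/thread/executors/basic_thread_pool.hpp>
    #include <boost/atomic.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/make_shared.hpp>

    int main()
    {
      boost::basic_thread_pool pool;

      // Shared cancelation object: here just an atomic flag.
      boost::shared_ptr<boost::atomic<bool> > canceled =
          boost::make_shared<boost::atomic<bool> >(false);

      // The closure polls the flag and gives up cooperatively.
      pool.submit([canceled] {
        for (int i = 0; i < 1000; ++i) {
          if (canceled->load()) return;   // cancelation point
          // ... do one slice of work ...
        }
      });

      canceled->store(true);   // the creator requests cancelation
      return 0;
    }
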
|
||||
|
||||
|
||||
[/
|
||||
The library would provide in the future a cancelable_task that could support cancelation.
|
||||
|
||||
    class cancelation_state {
      std::atomic<bool> requested;
      std::atomic<bool> enabled;
      std::condition_variable* cond;
      std::mutex cond_mutex;
    public:
      cancelation_state() : cond(0) {}
      void cancel() {
        requested.store(true, std::memory_order_relaxed);
        std::lock_guard<std::mutex> lk(cond_mutex);
        if (cond) { cond->notify_all(); }
      }
      bool cancellation_requested() const { return requested.load(std::memory_order_relaxed); }
      void enable() { enabled.store(true, std::memory_order_relaxed); }
      void disable() { enabled.store(false, std::memory_order_relaxed); }
      bool cancellation_enabled() const { return enabled.load(std::memory_order_relaxed); }
      void set_condition_variable(std::condition_variable& cv) {
        std::lock_guard<std::mutex> lk(cond_mutex);
        cond = &cv;
      }
      void clear_condition_variable() {
        std::lock_guard<std::mutex> lk(cond_mutex);
        cond = 0;
      }
      struct clear_cv_on_destruct {
        ~clear_cv_on_destruct() { this_thread_interrupt_flag.clear_condition_variable(); }
      };
      void cancelation_point();
      void cancelable_wait(std::condition_variable& cv, std::unique_lock<std::mutex>& lk) {
        cancelation_point();
        this_cancelable_state.set_condition_variable(cv);
        this_cancelable_state::clear_cv_on_destruct guard;
        interruption_point();
        cv.wait_for(lk, std::chrono::milliseconds(1));
        this_cancelable_state.clear_condition_variable();
        cancelation_point();
      }
|
||||
class disable_cancelation
|
||||
{
|
||||
public:
|
||||
disable_cancelation(const disable_cancelation&) = delete;
|
||||
disable_cancelation& operator=(const disable_cancelation&) = delete;
|
||||
disable_cancelation(cancelable_closure& closure) noexcept;
|
||||
~disable_cancelation() noexcept;
|
||||
};
|
||||
class restore_cancelation
|
||||
{
|
||||
public:
|
||||
restore_cancelation(const restore_cancelation&) = delete;
|
||||
restore_cancelation& operator=(const restore_cancelation&) = delete;
|
||||
explicit restore_cancelation(cancelable_closure& closure, disable_cancelation& disabler) noexcept;
|
||||
~restore_cancelation() noexcept;
|
||||
};
|
||||
};
|
||||
|
||||
template <class Closure>
|
||||
struct cancelable_closure_mixin : cancelable_closure {
|
||||
void operator() {
|
||||
cancel_point();
|
||||
this->Closure::run();
|
||||
}
|
||||
};
|
||||
|
||||
struct my_closure : cancelable_closure_mixin<my_closure>
|
||||
{
|
||||
void run() {
|
||||
while () {
|
||||
cancel_point();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
]
|
||||
|
||||
[heading Current executor]
|
||||
|
||||
The library does not provide for the ability to get the current executor, though having access to it could simplify the user code a lot.
|
||||
|
||||
The reason is that the user can always use a thread_local variable and reset it using the `at_thread_entry` member function.
|
||||
|
||||
    thread_local current_executor_state_type current_executor_state;
    executor* current_executor() { return current_executor_state.current_executor(); }

    basic_thread_pool pool(
        // at_thread_entry
        [](basic_thread_pool& pool) {
          current_executor_state.set_current_executor(pool);
        }
    );
|
||||
|
||||
[heading Default executor]
|
||||
|
||||
The library authors share some of the concerns of the C++ standard committee (the introduction of a new single shared resource, a singleton, could make it difficult to port to all environments), and think that this library doesn't need to provide a default executor for the time being.
|
||||
|
||||
The user can always define his default executor himself and use the `at_thread_entry` member function to set the default executor.
|
||||
|
||||
    thread_local default_executor_state_type default_executor_state;
    executor* default_executor() { return default_executor_state.default_executor(); }

    // in main
    MyDefaultExecutor myDefaultExecutor(
        // at_thread_entry
        [](MyDefaultExecutor& ex) {
          default_executor_state.set_default_executor(ex);
        }
    );

    basic_thread_pool pool(
        // at_thread_entry
        [&myDefaultExecutor](basic_thread_pool& pool) {
          default_executor_state.set_default_executor(myDefaultExecutor);
        }
    );
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
[/////////////////////]
|
||||
[section:ref Reference]
|
||||
|
||||
|
||||
[////////////////////////////////]
|
||||
[section:concept_closure Concept `Closure`]
|
||||
|
||||
|
||||
A type `E` meets the `Closure` requirements if it is a model of `Callable(void())` and a model of `CopyConstructible`/`MoveConstructible`.
|
||||
|
||||
[endsect]
|
||||
[////////////////////////////////]
|
||||
[section:concept_executor Concept `Executor`]
|
||||
|
||||
The `Executor` concept models the common operations of all the executors.
|
||||
|
||||
A type `E` meets the `Executor` requirements if the following expressions are well-formed and have the specified semantics
|
||||
|
||||
* `E::work`
|
||||
* `e.submit(lw);`
|
||||
* `e.submit(rw);`
|
||||
* `e.submit(lc);`
|
||||
* `e.submit(rc);`
|
||||
* `e.close();`
|
||||
* `b = e.closed();`
|
||||
* `e.try_executing_one();`
|
||||
* `e.reschedule_until(p);`
|
||||
|
||||
where
|
||||
|
||||
* `e` denotes a value of type `E`,
* `lw` denotes an lvalue reference of type `E::work`,
* `rw` denotes an rvalue reference of type `E::work`,
* `lc` denotes an lvalue reference of type `Closure`,
* `rc` denotes an rvalue reference of type `Closure`,
* `p` denotes a value of type `Predicate`.
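
As an illustration of the static-polymorphism side of the design, generic code can be written directly against this concept (a sketch; `submit_unless_closed` is a hypothetical helper, not part of the library):

    // Works with any model of Executor: basic_thread_pool, serial_executor,
    // inline_executor, thread_executor, or a user-defined type.
    template <class Executor, class Closure>
    bool submit_unless_closed(Executor& ex, Closure closure)
    {
      if (ex.closed()) return false;   // the executor no longer accepts work
      ex.submit(closure);              // lvalue closure; an rvalue would also do
      return true;
    }
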
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:submitlw `e.submit(lw);`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [The specified closure will be scheduled for execution at some point in the future.
If the invoked closure throws an exception, the executor calls `std::terminate`, as is the case with threads.]]

[[Synchronization:] [Completion of the closure on a particular thread happens before destruction of that thread's thread-local variables.]]

[[Return type:] [`void`.]]

[[Throws:] [`sync_queue_is_closed` if the thread pool is closed. Whatever exception can be thrown while storing the closure.]]
|
||||
|
||||
[[Exception safety:] [If an exception is thrown then the executor state is unmodified.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:submitrw `e.submit(rw);`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [The specified closure will be scheduled for execution at some point in the future.
If the invoked closure throws an exception, the executor calls `std::terminate`, as is the case with threads.]]

[[Synchronization:] [Completion of the closure on a particular thread happens before destruction of that thread's thread-local variables.]]

[[Return type:] [`void`.]]

[[Throws:] [`sync_queue_is_closed` if the thread pool is closed. Whatever exception can be thrown while storing the closure.]]
|
||||
|
||||
[[Exception safety:] [If an exception is thrown then the executor state is unmodified.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:submitlc `e.submit(lc);`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [The specified closure will be scheduled for execution at some point in the future.
If the invoked closure throws an exception, the executor calls `std::terminate`, as is the case with threads.]]

[[Synchronization:] [Completion of the closure on a particular thread happens before destruction of that thread's thread-local variables.]]

[[Return type:] [`void`.]]

[[Throws:] [`sync_queue_is_closed` if the thread pool is closed. Whatever exception can be thrown while storing the closure.]]
|
||||
|
||||
[[Exception safety:] [If an exception is thrown then the executor state is unmodified.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:submitrc `e.submit(rc);`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [The specified closure will be scheduled for execution at some point in the future.
If the invoked closure throws an exception, the executor calls `std::terminate`, as is the case with threads.]]

[[Synchronization:] [Completion of the closure on a particular thread happens before destruction of that thread's thread-local variables.]]

[[Return type:] [`void`.]]

[[Throws:] [`sync_queue_is_closed` if the thread pool is closed. Whatever exception can be thrown while storing the closure.]]
|
||||
|
||||
[[Exception safety:] [If an exception is thrown then the executor state is unmodified.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:close `e.close();`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Closes the executor `e` for submissions.]]

[[Remark:] [The worker threads will continue executing closures until there are no more closures to run.]]

[[Return type:] [`void`.]]

[[Throws:] [Whatever exception can be thrown while ensuring thread safety.]]
|
||||
|
||||
[[Exception safety:] [If an exception is thrown then the executor state is unmodified.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:closed `b = e.closed();`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Return type:] [`bool`.]]

[[Return:] [Whether the executor is closed for submissions.]]

[[Throws:] [Whatever exception can be thrown while ensuring thread safety.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:try_executing_one `e.try_executing_one();`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Tries to execute one work unit.]]

[[Return type:] [`bool`.]]

[[Return:] [Whether a work unit has been executed.]]

[[Throws:] [Whatever the current work constructor throws or the `work()` invocation throws.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:reschedule_until `e.reschedule_until(p);`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Requires:] [This must be called from a scheduled work unit.]]

[[Effects:] [Reschedules work units until `p()` returns `true`.]]

[[Return type:] [`bool`.]]

[[Return:] [Whether a work unit has been executed.]]

[[Throws:] [Whatever the current work constructor throws or the `work()` invocation throws.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
|
||||
[/////////////////////////]
|
||||
[section:work Class `work`]
|
||||
|
||||
#include <boost/thread/work.hpp>
|
||||
namespace boost {
|
||||
typedef 'implementation_defined' work;
|
||||
}
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Requires:] [work is a model of 'Closure']]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
|
||||
[/////////////////////////////////]
|
||||
[section:executor Class `executor`]
|
||||
|
||||
Executor abstract base class.
|
||||
|
||||
#include <boost/thread/executor.hpp>
|
||||
namespace boost {
|
||||
class executor
|
||||
{
|
||||
public:
|
||||
typedef boost::work work;
|
||||
|
||||
executor(executor const&) = delete;
|
||||
executor& operator=(executor const&) = delete;
|
||||
|
||||
executor();
|
||||
virtual ~executor() {};
|
||||
|
||||
virtual void close() = 0;
|
||||
virtual bool closed() = 0;
|
||||
|
||||
virtual void submit(work&& closure) = 0;
|
||||
template <typename Closure>
|
||||
void submit(Closure&& closure);
|
||||
|
||||
virtual bool try_executing_one() = 0;
|
||||
template <typename Pred>
|
||||
bool reschedule_until(Pred const& pred);
|
||||
};
|
||||
}
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:constructor Constructor `executor()`]
|
||||
|
||||
executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Constructs an executor. ]]
|
||||
|
||||
[[Throws:] [Nothing. ]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:destructor Destructor `~executor()`]
|
||||
|
||||
virtual ~executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Destroys the executor.]]
|
||||
|
||||
[[Synchronization:] [The completion of all the closures happens before the completion of the executor destructor.]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
|
||||
[//////////////////////////////////////////////////////////]
|
||||
[section:executor_adaptor Template Class `executor_adaptor`]
|
||||
|
||||
Polymorphic adaptor of a model of Executor to an executor.
|
||||
|
||||
#include <boost/thread/executor.hpp>
|
||||
namespace boost {
|
||||
template <typename Executor>
|
||||
class executor_adaptor : public executor
|
||||
{
|
||||
Executor ex; // for exposition only
|
||||
public:
|
||||
typedef executor::work work;
|
||||
|
||||
executor_adaptor(executor_adaptor const&) = delete;
|
||||
executor_adaptor& operator=(executor_adaptor const&) = delete;
|
||||
|
||||
template <typename ...Args>
|
||||
executor_adaptor(Args&& ... args);
|
||||
|
||||
Executor& underlying_executor();
|
||||
|
||||
void close();
|
||||
bool closed();
|
||||
|
||||
void submit(work&& closure);
|
||||
|
||||
bool try_executing_one();
|
||||
|
||||
};
|
||||
}
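
For example, a minimal sketch of adapting a concrete executor to the dynamic interface (assuming the variadic constructor forwards its arguments to the wrapped `Executor`):

    #include <boost/thread/executor.hpp>
    #include <boost/thread/executors/basic_thread_pool.hpp>

    // Code written against the abstract base class only sees boost::executor.
    void producer(boost::executor& ex)
    {
      ex.submit([]{ /* some unit of work */ });
    }

    int main()
    {
      // Wrap a static-polymorphism executor so it can be passed as executor&.
      boost::executor_adaptor<boost::basic_thread_pool> ex(4);
      producer(ex);
      return 0;
    }
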
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:constructor Constructor `executor_adaptor(Args&& ...)`]
|
||||
|
||||
template <typename ...Args>
|
||||
executor_adaptor(Args&& ... args);
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Constructs an executor_adaptor. ]]
|
||||
|
||||
[[Throws:] [Nothing. ]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:destructor Destructor `~executor_adaptor()`]
|
||||
|
||||
virtual ~executor_adaptor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Destroys the executor_adaptor.]]
|
||||
|
||||
[[Synchronization:] [The completion of all the closures happens before the completion of the executor destructor.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:underlying_executor Function member `underlying_executor()`]
|
||||
|
||||
Executor& underlying_executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Return:] [The underlying executor instance. ]]
|
||||
|
||||
[[Throws:] [Nothing.]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
[
|
||||
[//////////////////////////////////////////////////////////]
|
||||
[section:scheduled_executor Template Class `scheduled_executor`]
|
||||
|
||||
Executor providing time related functions.
|
||||
|
||||
#include <boost/thread/scheduled_executor.hpp>
|
||||
namespace boost {
|
||||
template <class Executor>
|
||||
class scheduled_executor
|
||||
{
|
||||
Executor& ex;
|
||||
public:
|
||||
typedef executor::work work;
|
||||
|
||||
scheduled_executor(scheduled_executor const&) = delete;
|
||||
scheduled_executor& operator=(scheduled_executor const&) = delete;
|
||||
|
||||
template <class Rep, class Period>
|
||||
scheduled_executor(Executor& ex, chrono::duration<Rep, Period> granularity=chrono::milliseconds(100));
|
||||
|
||||
Executor& underlying_executor();
|
||||
|
||||
void close();
|
||||
bool closed();
|
||||
|
||||
void submit(work&& closure);
|
||||
template <typename Closure>
|
||||
void submit(Closure&& closure);
|
||||
|
||||
bool try_executing_one();
|
||||
template <typename Pred>
|
||||
bool reschedule_until(Pred const& pred);
|
||||
|
||||
template <class Clock, class Duration>
|
||||
void submit_at(chrono::time_point<Clock,Duration> abs_time, work&& closure);
|
||||
template <class Rep, class Period>
|
||||
void submit_after(chrono::duration<Rep,Period> rel_time, work&& closure);
|
||||
template <class Clock, class Duration, typename Closure>
|
||||
void submit_at(chrono::time_point<Clock,Duration> abs_time, Closure&& closure);
|
||||
template <class Rep, class Period, typename Closure>
|
||||
void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);
|
||||
};
|
||||
}
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:constructor Constructor `scheduled_executor(Executor&, chrono::duration<Rep, Period>)`]
|
||||
|
||||
template <class Rep, class Period>
|
||||
scheduled_executor(Executor& ex, chrono::duration<Rep, Period> granularity=chrono::milliseconds(100));
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Constructs a scheduled_executor. ]]
|
||||
|
||||
[[Throws:] [Nothing. ]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:destructor Destructor `~scheduled_executor()`]
|
||||
|
||||
~scheduled_executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Destroys the scheduled_executor.]]

[[Synchronization:] [The completion of all the closures happens before the completion of the executor destructor.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:underlying_executor Function member `underlying_executor()`]
|
||||
|
||||
Executor& underlying_executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Return:] [The underlying executor instance. ]]
|
||||
|
||||
[[Throws:] [Nothing.]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
]
|
||||
|
||||
[//////////////////////////////////////////////////////////]
|
||||
[section:serial_executor Template Class `serial_executor`]
|
||||
|
||||
A serial executor ensures that no two work units execute concurrently.
|
||||
|
||||
#include <boost/thread/serial_executor.hpp>
|
||||
namespace boost {
|
||||
template <class Executor>
|
||||
class serial_executor
|
||||
{
|
||||
Executor& ex;
|
||||
public:
|
||||
typedef executors::work work;
|
||||
|
||||
serial_executor(serial_executor const&) = delete;
|
||||
serial_executor& operator=(serial_executor const&) = delete;
|
||||
|
||||
serial_executor(Executor& ex);
|
||||
|
||||
Executor& underlying_executor();
|
||||
|
||||
void close();
|
||||
bool closed();
|
||||
|
||||
void submit(work&& closure);
|
||||
template <typename Closure>
|
||||
void submit(Closure&& closure);
|
||||
|
||||
bool try_executing_one();
|
||||
template <typename Pred>
|
||||
bool reschedule_until(Pred const& pred);
|
||||
|
||||
};
|
||||
}
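
For example, a sketch following the synopsis above: closures submitted to the serial executor never run concurrently with each other, even though the underlying pool has several threads.

    #include <boost/thread/executors/basic_thread_pool.hpp>
    #include <boost/thread/serial_executor.hpp>

    int main()
    {
      boost::basic_thread_pool pool(4);
      boost::serial_executor<boost::basic_thread_pool> seq(pool);

      for (int i = 0; i < 10; ++i) {
        // Executed one after the other, in submission order, on pool threads.
        seq.submit([i]{ /* step i of a sequential pipeline */ });
      }
      return 0;
    }
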
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:constructor Constructor `serial_executor(Executor&)`]
|
||||
|
||||
serial_executor(Executor& ex);
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Constructs a serial_executor. ]]
|
||||
|
||||
[[Throws:] [Nothing. ]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:destructor Destructor `~serial_executor()`]
|
||||
|
||||
~serial_executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Destroys the serial_executor.]]
|
||||
|
||||
[[Synchronization:] [The completion of all the closures happens before the completion of the executor destructor.]]
|
||||
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:underlying_executor Function member `underlying_executor()`]
|
||||
|
||||
Executor& underlying_executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Return:] [The underlying executor instance. ]]
|
||||
|
||||
[[Throws:] [Nothing.]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
|
||||
|
||||
|
||||
|
||||
[///////////////////////////////////////]
|
||||
[section:basic_thread_pool Class `basic_thread_pool`]
|
||||
|
||||
A thread pool with up to a fixed number of threads.
|
||||
|
||||
#include <boost/thread/executors/basic_thread_pool.hpp>
|
||||
namespace boost {
|
||||
class basic_thread_pool
|
||||
{
|
||||
public:
|
||||
typedef boost::work work;
|
||||
|
||||
basic_thread_pool(basic_thread_pool const&) = delete;
|
||||
basic_thread_pool& operator=(basic_thread_pool const&) = delete;
|
||||
|
||||
basic_thread_pool(unsigned const thread_count = thread::hardware_concurrency());
|
||||
template <class AtThreadEntry>
|
||||
basic_thread_pool( unsigned const thread_count, AtThreadEntry at_thread_entry);
|
||||
~basic_thread_pool();
|
||||
|
||||
void close();
|
||||
bool closed();
|
||||
|
||||
template <typename Closure>
|
||||
void submit(Closure&& closure);
|
||||
|
||||
bool try_executing_one();
|
||||
|
||||
template <typename Pred>
|
||||
bool reschedule_until(Pred const& pred);
|
||||
|
||||
};
|
||||
}
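
Since `submit()` returns `void`, a caller that needs to wait for a particular closure can pair it with a promise/future; a minimal sketch:

    #include <boost/thread/executors/basic_thread_pool.hpp>
    #include <boost/thread/future.hpp>
    #include <boost/make_shared.hpp>

    int main()
    {
      boost::basic_thread_pool pool;

      // submit() gives no handle to the work, so signal completion explicitly.
      boost::shared_ptr<boost::promise<int> > p =
          boost::make_shared<boost::promise<int> >();
      boost::future<int> f = p->get_future();

      pool.submit([p] {
        p->set_value(6 * 7);   // publish the result of the closure
      });

      int r = f.get();         // waits until the closure has run on the pool
      (void)r;
      return 0;
    }
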
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:constructor Constructor `basic_thread_pool(unsigned const)`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [creates a thread pool that runs closures on `thread_count` threads. ]]
|
||||
|
||||
[[Throws:] [Whatever exception is thrown while initializing the needed resources. ]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:destructor Destructor `~basic_thread_pool()`]
|
||||
|
||||
virtual ~basic_thread_pool();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Destroys the thread pool.]]
|
||||
|
||||
[[Synchronization:] [The completion of all the closures happens before the completion of the executor destructor.]]
|
||||
|
||||
]
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
|
||||
[/////////////////////////////////]
|
||||
[section:loop_executor Class `loop_executor`]
|
||||
|
||||
A user scheduled executor.
|
||||
|
||||
#include <boost/thread/loop_executor.hpp>
|
||||
namespace boost {
|
||||
class loop_executor
|
||||
{
|
||||
public:
|
||||
typedef thread_detail::work work;
|
||||
|
||||
loop_executor(loop_executor const&) = delete;
|
||||
loop_executor& operator=(loop_executor const&) = delete;
|
||||
|
||||
loop_executor();
|
||||
~loop_executor();
|
||||
|
||||
void close();
|
||||
bool closed();
|
||||
|
||||
template <typename Closure>
|
||||
void submit(Closure&& closure);
|
||||
|
||||
bool try_executing_one();
|
||||
template <typename Pred>
|
||||
bool reschedule_until(Pred const& pred);
|
||||
|
||||
void loop();
|
||||
void run_queued_closures();
|
||||
};
|
||||
}
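
For example, a sketch of a thread donating itself to the executor (useful in tests, where the test decides exactly when queued closures run):

    #include <boost/thread/loop_executor.hpp>

    int main()
    {
      boost::loop_executor ex;

      // Work can be queued from this or from other threads.
      ex.submit([]{ /* work item 1 */ });
      ex.submit([]{ /* work item 2 */ });

      // The donating thread decides when the queued closures actually run;
      // loop() would instead keep executing work until the executor is closed.
      ex.run_queued_closures();

      ex.close();
      return 0;
    }
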
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:constructor Constructor `loop_executor()`]
|
||||
|
||||
loop_executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Creates an executor that runs closures using one of its closure-executing methods. ]]
|
||||
|
||||
[[Throws:] [Whatever exception is thrown while initializing the needed resources. ]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:destructor Destructor `~loop_executor()`]
|
||||
|
||||
virtual ~loop_executor();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Destroys the loop_executor.]]

[[Synchronization:] [The completion of all the closures happens before the completion of the executor destructor.]]
|
||||
|
||||
]
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:loop Function member `loop()`]
|
||||
|
||||
void loop();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Reschedules work units until the executor is `closed()` or there is no more work. ]]
|
||||
|
||||
[[Throws:] [whatever the current work constructor throws or the `work()` throws.]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:run_queued_closures Function member `run_queued_closures()`]
|
||||
|
||||
void run_queued_closures();
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [Reschedules the enqueued work units. ]]
|
||||
|
||||
[[Throws:] [whatever the current work constructor throws or the `work()` throws.]]
|
||||
|
||||
]
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
|
||||
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
|
||||
[endsect]
|
||||
|
||||
@@ -23,8 +23,6 @@
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9310 #9310] test_4648_lib fails on clang-darwin-asan11
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9311 #9311] ex_lambda_future fails on msvc-11.0
|
||||
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9333 #9333] ex_scoped_thread compile fails on msvc-12.0
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9404 #9404] ex_make_future regression error
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9425 #9425] Boost promise & future does not use supplied allocator for value storage
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9558 #9558] future continuations unit test hangs in get()/pthread_cond_wait() on Mac 10.7/32-bit/x86/darwin-4.2.1
|
||||
|
||||
@@ -52,8 +50,17 @@ There are some severe bugs that prevent the use of the library on concrete conte
|
||||
|
||||
[*Fixed Bugs:]
|
||||
|
||||
* [@http://svn.boost.org/trac/boost/ticket/8070 #8070] prefer GetTickCount64 over GetTickCount
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9333 #9333] ex_scoped_thread compile fails on msvc-12.0
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9366 #9366] async(Executor, ...) fails to compile with msvc-10,11,12
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9402 #9402] test_excutor regression on msvc-10,11,12
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9404 #9404] ex_make_future regression error
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9471 #9471] Synchronization documentation nits
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9535 #9535] Missing exception safety might result in crash
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9618 #9618] try_join_for problem: program is not terminate.
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9673 #9673] thread compilation with MingW/gcc on Windows gives errors
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9708 #9708] boost::condition_variable::timed_wait unexpectedly wakes up while should wait infinite
|
||||
* [@http://svn.boost.org/trac/boost/ticket/9711 #9711] future continuation called twice
|
||||
|
||||
[heading Version 4.2.0 - boost 1.55]
|
||||
|
||||
|
||||
@@ -87,8 +87,6 @@
|
||||
[[30.y] [Class thread_group] [-]]
|
||||
]
|
||||
[endsect]
|
||||
|
||||
|
||||
[section:cxx14 C++14 standard Thread library - accepted changes]
|
||||
|
||||
[note [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3797.html Working Draft, Standard for Programming Language C++]]
|
||||
@@ -131,16 +129,16 @@
|
||||
[[X.1.2.1] [try_push] [yes] [ renamed try_push_back ]]
|
||||
[[X.1.2.2] [try_pop] [yes] [ renamed try_pull_back ]]
|
||||
[[X.1.3] [Non-blocking operations] [ - ] [ - ]]
|
||||
[[X.1.3.1] [nonblocking_push] [Partial] [ renamed nonblocking_push_back ]]
|
||||
[[X.1.3.2] [nonblocking_pop] [Partial] [ renamed nonblocking_pull_front ]]
|
||||
[[X.1.3.1] [nonblocking_push] [Yes] [ renamed nonblocking_push_back ]]
|
||||
[[X.1.3.2] [nonblocking_pop] [Yes] [ renamed nonblocking_pull_front ]]
|
||||
[[X.1.4] [Push-front operations] [No] [ - ]]
|
||||
[[X.1.5] [Closed queues] [Partial] [ - ]]
|
||||
[[X.1.5.1] [close] [Yes] [ - ]]
|
||||
[[X.1.5.2] [is_closed] [Yes] [ renamed closed ]]
|
||||
[[X.1.5.3] [wait_push] [No] [ - ]]
|
||||
[[X.1.5.4] [wait_pop] [No] [ - ]]
|
||||
[[X.1.5.3] [wait_push] [Yes] [ renamed wait_push_back ]]
|
||||
[[X.1.5.4] [wait_pop] [Yes] [ renamed wait_pull_front ]]
|
||||
[[X.1.5.5] [wait_push_front] [no] [ - ]]
|
||||
[[X.1.5.6] [wait_pop] [Partial] [ - ]]
|
||||
[[X.1.5.6] [wait_pop_back] [no] [ - ]]
|
||||
[[X.1.5.6] [open] [no] [ - ]]
|
||||
[[X.1.6] [Empty and Full Queues] [Yes] [ - ]]
|
||||
[[X.1.6.1] [is_empty] [Yes] [ - ]]
|
||||
@@ -155,65 +153,61 @@
|
||||
[[X.2.1] [Lock-Free Buffer Queue] [No] [ waiting to stabilize the lock-based interface. Will use Boost.LockFree once it is Boost.Move aware. ]]
|
||||
[[X.3] [Additional Conceptual Tools] [No] [ - ]]
|
||||
[[X.3.1] [Fronts and Backs] [No] [ - ]]
|
||||
[[X.3.2] [Streaming Iterators] [No] [ - ]]
|
||||
[[X.3.2] [Streaming Iterators] [No] [ - ]]
|
||||
[[X.3.3] [Storage Iterators] [No] [ - ]]
|
||||
[[X.3.4] [Binary Interfaces] [No] [ - ]]
|
||||
[[X.3.4] [Managed Indirection] [No] [ - ]]
|
||||
]
|
||||
[endsect]
|
||||
|
||||
|
||||
[section:executors Asynchronous Executors]
|
||||
|
||||
While the Boost.Thread implementation of executors doesn't rely on dynamic polymorphism, it is worth comparing it with the current trend in the standard.
|
||||
|
||||
[note [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3785.pdf N3785 Executors and Schedulers]]
|
||||
|
||||
[note [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3785.pdf N3785 Executors and Schedulers]]
|
||||
|
||||
[table Asynchronous Executors
|
||||
[[Section] [Description] [Status] [Comments]]
|
||||
[[30.X.1] [Class executor] [Yes] [ - ]]
|
||||
[[30.X.1.1] [add] [Yes] [ renamed with a function template submit ]]
|
||||
[[30.X.1.1] [num_of_pendin_closures] [No] [ ]]
|
||||
[[30.X.2] [Class sceduled_executor] [No] [ - ]]
|
||||
[[30.X.2.1] [add_at] [No] [ renamed with a function template submit_at ]]
|
||||
[[30.X.2.2] [add_after] [No] [ renamed with a function template submit_after ]]
|
||||
[[30.X.3] [Executor utilities functions] [No] [ - ]]
|
||||
[[30.X.3.1] [default_executor] [No] [ - ]]
|
||||
[[30.X.3.2] [set_default_executor] [No] [ - ]]
|
||||
[[30.X.4] [Concrete executor classes] [No] [ - ]]
|
||||
[[30.X.4.1] [loop_executor] [Yes] [ static version user_scheduler, dynamic one execduler_adaptor<user_scheduler> ]]
|
||||
[[30.X.4.1] [serial_executor] [No] [ - ]]
|
||||
[[30.X.4.1] [thread_pool] [Yes] [ static version thread_pool, dynamic one execduler_adaptor<thread_pool> ]]
|
||||
[[V.1.1] [Class executor] [Yes] [ - ]]
[[V.1.1] [add] [Yes] [ renamed with a function template submit ]]
[[V.1.1] [num_of_pendin_closures] [No] [ ]]
[[V.1.2] [Class scheduled_executor] [No] [ - ]]
[[V.1.2] [add_at] [No] [ renamed with a function template submit_at ]]
[[V.1.2] [add_after] [No] [ renamed with a function template submit_after ]]
[[V.2] [Concrete executor classes] [No] [ - ]]
[[V.2.1] [thread_pool] [Yes] [ static version basic_thread_pool, dynamic one executor_adaptor<basic_thread_pool> ]]
[[V.2.2] [serial_executor] [Yes] [ - ]]
[[V.2.3] [loop_executor] [Yes] [ static version loop_scheduler, dynamic one executor_adaptor<loop_scheduler> ]]
[[V.2.4] [inline_executor] [Yes] [ static version inline_executor, dynamic one executor_adaptor<inline_executor> ]]
[[V.2.5] [thread_executor] [Yes] [ static version thread_executor, dynamic one executor_adaptor<thread_executor> ]]
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[section:async Improvements to std::future<T> and Related APIs]
|
||||
|
||||
|
||||
[section:async A Standardized Representation of Asynchronous Operations]
|
||||
|
||||
[note [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3558.pdf N3558 A Standardized Representation of Asynchronous Operations]]
|
||||
[note [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3857.pdf N3857-Improvements to std::future<T> and Related APIs]]
|
||||
|
||||
[note These functions are based on [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3634.pdf [*N3634 - Improvements to std::future<T> and related APIs]] C++1y proposal by N. Gustafsson, A. Laksberg, H. Sutter, S. Mithani.]
|
||||
|
||||
[table Improvements to std::future<T> and related APIs]
|
||||
[[Section] [Description] [Status] [Comments]]
|
||||
[[30.6.6] [Class template future] [Partial] [ - ]]
|
||||
[[30.6.6.1] [then] [No] [ executor interface missing #8516 ]]
|
||||
[[30.6.6.2] [unwrap] [Yes] [ - ]]
|
||||
[[30.6.6.3] [ready] [Partial] [ is_ready ]]
|
||||
[[30.6.6] [unwrap constructor] [Yes] [ - ]]
|
||||
[[30.6.6] [then] [Yes] [ - ]]
|
||||
[[30.6.6] [unwrap] [Yes] [ - ]]
|
||||
[[30.6.6] [ready] [Partial] [ is_ready ]]
|
||||
[[30.6.7] [Class template shared_future] [Partial] [ - ]]
|
||||
[[30.6.7.1] [then] [No] [ executor interface missing #8516 ]]
|
||||
[[30.6.7.2] [unwrap] [Yes] [ #XXXX ]]
|
||||
[[30.6.7.3] [ready] [Partial] [ is_ready ]]
|
||||
[[30.6.X] [Function template when_any] [Partial] [ interface not complete #7446 ]]
|
||||
[[30.6.6] [unwrap constructor] [Yes] [ - ]]
|
||||
[[30.6.7] [then] [Yes] [ - ]]
|
||||
[[30.6.7] [unwrap] [No] [ #XXXX ]]
|
||||
[[30.6.7] [ready] [Partial] [ is_ready ]]
|
||||
[[30.6.X] [Function template when_all] [Partial] [ interface not complete #7447 ]]
|
||||
[[30.6.X] [Function template when_any] [Partial] [ interface not complete #7446 ]]
|
||||
[[30.6.X] [Function template when_any_swaped] [No] [ #XXXX ]]
|
||||
[[30.6.X] [Function template make_ready_future] [Yes] [ - ]]
|
||||
[[30.6.8] [Function template async ] [Partial] [ executor interface not complete #7448 ]]
|
||||
[[30.6.8] [Function template async ] [Yes] [ - ]]
|
||||
]
|
||||
|
||||
[endsect]
|
||||
|
||||
[section:stream_mutex C++ Stream Mutexes - C++ Stream Guards]
|
||||
|
||||
While the Boost.Thread implementation of stream mutexes differs in its approach, it is worth comparing it with the current trend in the standard.
|
||||
@@ -222,7 +216,7 @@ While Boost.Thread implementation of stream mutexes differ in the approach, it i
|
||||
|
||||
[note This proposal has been replaced already by [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3678.html N3678 - C++ Stream Guards], which has been replaced by [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3678.html N3665 - Uninterleaved String Output Streaming] and [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3678.html N3750 - C++ Ostream Buffers]]
|
||||
|
||||
[table C++ C++ Stream Mutexes Conformance
|
||||
[table C++ Stream Mutexes Conformance
|
||||
[[Section] [Description] [Status] [Comments]]
|
||||
[[X.1] [Class template stream_mutex] [Partial] [ Renamed externally_locked_stream<> ]]
|
||||
[[X.2.1] [constructor] [Partial] [ externally_locked_stream needs a mutex in addition as argument. ]]
|
||||
|
||||
@@ -308,7 +308,7 @@ The second `get()` call in the following example is undefined.
|
||||
use3( ftr.get() ); // second use is undefined
|
||||
}
|
||||
|
||||
Using a `shared_mutex` solves the issue
|
||||
Using a `shared_future` solves the issue
|
||||
|
||||
void good_second_use( type arg ) {
|
||||
|
||||
|
||||
@@ -101,7 +101,7 @@ A type `Q` meets the BasicConcurrentQueue requirements if the following expressi
|
||||
* `q.push_back(rve);`
|
||||
* `q.pull_front(lre);`
|
||||
* `lre = q.pull_front();`
|
||||
* `spe = q.ptr_pull_front();`
|
||||
[/* `spe = q.ptr_pull_front();`]
|
||||
* `b = q.empty();`
|
||||
* `u = q.size();`
|
||||
|
||||
@@ -112,7 +112,7 @@ where
|
||||
* `u` denotes a value of type Q::size_type,
|
||||
* `lve` denotes an lvalue reference of type Q::value_type,
* `rve` denotes an rvalue reference of type Q::value_type,
* `spe` denotes a shared_ptr<Q::value_type>
[/* `spe` denotes a shared_ptr<Q::value_type>]
* `qs` denotes a variable of type `queue_op_status`,
|
||||
|
||||
|
||||
@@ -200,6 +200,7 @@ where
|
||||
]
|
||||
|
||||
[endsect]
|
||||
[/
|
||||
[/////////////////////////////////////]
|
||||
[section:ptr_pull_front `spe = q.ptr_pull_front()`]
|
||||
|
||||
@@ -224,7 +225,7 @@ where
|
||||
]
|
||||
|
||||
[endsect]
|
||||
|
||||
]
|
||||
[endsect]
|
||||
|
||||
[/////////////////////////////////////]
|
||||
@@ -235,34 +236,42 @@ The ConcurrentQueue concept models a queue with .
|
||||
|
||||
A type `Q` meets the ConcurrentQueue requirements if the following expressions are well-formed and have the specified semantics
|
||||
|
||||
* `b = q.try_push_back(e);`
|
||||
* `b = q.try_push_back(rve);`
|
||||
* `b = q.try_pull_front(lre);`
|
||||
* `s = q.try_push_back(e);`
|
||||
* `s = q.try_push_back(rve);`
|
||||
* `s = q.try_pull_front(lre);`
|
||||
|
||||
where
|
||||
|
||||
* `q` denotes a value of type `Q`,
|
||||
* `e` denotes a value of type Q::value_type,
|
||||
* `u` denotes a value of type Q::size_type,
|
||||
* `e` denotes a value of type `Q::value_type`,
|
||||
* `s` denotes a value of type `queue_op_status`,
|
||||
* `u` denotes a value of type `Q::size_type`,
|
||||
* `lve` denotes an lvalue reference of type Q::value_type,
* `rve` denotes an rvalue reference of type Q::value_type,
|
||||
* `spe` denotes a shared_ptr<Q::value_type>
|
||||
[/* `spe` denotes a shared_ptr<Q::value_type>]
|
||||
|
||||
|
||||
[/////////////////////////////////////]
|
||||
[section:try_push_back `q.try_push_back(e);`]
|
||||
[section:try_push_back `s = q.try_push_back(e);`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [If the queue `q` is not full, push back the `e` to the queue copying it.]]
|
||||
[[Effects:] [If the queue `q` is not full and not closed, push back the `e` to the queue copying it.]]
|
||||
|
||||
[[Synchronization:] [Prior pull-like operations on the same object synchronizes with this operation when the operation succeeds. ]]
|
||||
|
||||
[[Return type:] [`bool`.]]
|
||||
[[Return type:] [`queue_op_status`.]]
|
||||
|
||||
[[Return:] [If the queue `q` is full return `false`, otherwise return `true`;]]
|
||||
[[Return:] [
|
||||
|
||||
[[Postcondition:] [If the call returns `true`, `! q.empty()`.]]
|
||||
- If the queue is closed, returns `queue_op_status::closed`,
|
||||
|
||||
- otherwise if the queue `q` is full return `queue_op_status::full`,
|
||||
|
||||
- otherwise return `queue_op_status::success`;
|
||||
]]
|
||||
|
||||
[[Postcondition:] [If the call returns `queue_op_status::success`, `! q.empty()`.]]
|
||||
|
||||
[[Throws:] [If the queue is closed, throws sync_queue_is_closed. Any exception thrown by the copy of `e`.]]
|
||||
|
||||
@@ -272,21 +281,28 @@ where
|
||||
|
||||
[endsect]
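
As a usage sketch written against this concept (`Q` is any model of it; the `boost::queue_op_status` enumeration and `boost::this_thread::yield()` are assumed to be available, and the queue and thread headers are omitted): a producer can react to each status returned by `try_push_back`.

    // Try to store e, retrying while the queue is merely full.
    template <class Q>
    bool produce(Q& q, typename Q::value_type const& e)
    {
      for (;;) {
        boost::queue_op_status st = q.try_push_back(e);
        if (st == boost::queue_op_status::success) return true;   // stored
        if (st == boost::queue_op_status::closed)  return false;  // give up
        // queue_op_status::full: the consumers are behind; yield and retry.
        boost::this_thread::yield();
      }
    }
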
|
||||
[/////////////////////////////////////]
|
||||
[section:try_push_back_m `q.try_push_back(rve());`]
|
||||
[section:try_push_back_m `s = q.try_push_back(rve());`]
|
||||
|
||||
[variablelist
|
||||
|
||||
[[Effects:] [If the queue `q` is not full, push back the `e` onto the queue moving it.]]
|
||||
[[Effects:] [If the queue `q` is not full and not closed, push back the `e` onto the queue moving it.]]
|
||||
|
||||
[[Synchronization:] [Prior pull-like operations on the same object synchronizes with this operation.]]
|
||||
|
||||
[[Return type:] [`bool`.]]
|
||||
[[Return type:] [`queue_op_status`.]]
|
||||
|
||||
[[Return:] [If the queue `q` is full return `false`, otherwise return `true`;]]
|
||||
[[Return:] [
|
||||
|
||||
[[Postcondition:] [If the call returns `true`, `! q.empty()`.]]
|
||||
- If the queue is closed, returns `queue_op_status::closed`,
|
||||
|
||||
- otherwise if the queue `q` is full return `queue_op_status::full`,
|
||||
|
||||
[[Throws:] [If the queue is closed, throws sync_queue_is_closed. Any exception thrown by the copy of `e`.]]
|
||||
- otherwise return `queue_op_status::success`;
|
||||
]]
|
||||
|
||||
[[Postcondition:] [If the call returns `queue_op_status::success`, `! q.empty()`.]]
|
||||
|
||||
[[Throws:] [ Any exception thrown by the copy of `e`.]]
|
||||
|
||||
[[Exception safety:] [If an exception is thrown then the queue state is unmodified.]]
|
||||
|
||||
@@ -294,7 +310,7 @@ where
|
||||
|
||||
[endsect]
|
||||
[/////////////////////////////////////]
|
||||
[section:pull_front_lv `b = q.try_pull_front(lve)`]
|
||||
[section:try_pull_front_lv `s = q.try_pull_front(lve)`]
|
||||
|
||||
[variablelist
|
||||
|
||||
@@ -306,7 +322,13 @@ where
|
||||
|
||||
[[Return type:] [`bool`.]]
|
||||
|
||||
[[Return:] [If the queue `q` is full return `false`, otherwise return `true`;]]
|
||||
[[Return:] [
|
||||
|
||||
- If the queue `q` is empty return `queue_op_status::empty`,
|
||||
|
||||
- otherwise return `queue_op_status::success`;
|
||||
|
||||
]]
|
||||
|
||||
[[Throws:] [Any exception thrown by the move of `e`.]]
|
||||
|
||||
@@ -319,25 +341,115 @@ where
|
||||
[/////////////////////////////////////]
|
||||
[section:non_blocking Non-blocking Concurrent Queue Operations]

For cases when blocking for mutual exclusion is undesirable, we have non-blocking operations.
The interface is the same as the try operations but is allowed to also return `queue_op_status::busy`
in case the operation is unable to complete without blocking.

Non-blocking operations are provided only for BlockingQueues.

* `s = q.nonblocking_push_back(e);`
* `s = q.nonblocking_push_back(rve);`
* `s = q.nonblocking_pull_front(lve);`

where

* `q` denotes a value of type `Q`,
* `e` denotes a value of type `Q::value_type`,
* `u` denotes a value of type `Q::size_type`,
* `s` denotes a value of type `queue_op_status`,
* `lve` denotes an lvalue reference of type `Q::value_type`,
* `rve` denotes an rvalue reference of type `Q::value_type`,

[/* `spe` denotes a shared_ptr<Q::value_type>]
[/////////////////////////////////////]
[section:nonblocking_push_back `s = q.nonblocking_push_back(e);`]

[variablelist

[[Effects:] [If the queue `q` is not full and not closed, push back the `e` to the queue copying it.]]

[[Synchronization:] [Prior pull-like operations on the same object synchronize with this operation when the operation succeeds.]]

[[Return type:] [`queue_op_status`.]]

[[Return:] [

- If the operation would block, return `queue_op_status::busy`,

- otherwise, if the queue is closed, return `queue_op_status::closed`,

- otherwise, if the queue `q` is full return `queue_op_status::full`,

- otherwise return `queue_op_status::success`;]]

[[Postcondition:] [If the call returns `queue_op_status::success`, `! q.empty()`.]]

[[Throws:] [Any exception thrown by the copy of `e`.]]

[[Exception safety:] [If an exception is thrown then the queue state is unmodified.]]
]
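
As a sketch under the same assumptions as the previous examples, a caller can retry while the operation reports `queue_op_status::busy`; the `boost::this_thread::yield()` back-off is just one possible choice:

    #include <boost/thread/sync_queue.hpp>
    #include <boost/thread/thread.hpp>

    // Offers e without ever blocking on the queue's internal lock; yields while
    // the queue is busy and stops on success, closed or (for bounded queues) full.
    boost::queue_op_status offer(boost::sync_queue<int>& q, int e)
    {
      for (;;)
      {
        boost::queue_op_status st = q.nonblocking_push_back(e);
        if (st != boost::queue_op_status::busy)
          return st;
        boost::this_thread::yield(); // lock not available right now, try again
      }
    }
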
[endsect]
[/////////////////////////////////////]
[section:nonblocking_push_back_m `s = q.nonblocking_push_back(rve());`]

[variablelist

[[Effects:] [If the queue `q` is not full and not closed, push back the `e` onto the queue moving it.]]

[[Synchronization:] [Prior pull-like operations on the same object synchronize with this operation.]]

[[Return type:] [`queue_op_status`.]]

[[Return:] [

- If the operation would block, return `queue_op_status::busy`,

- otherwise if the queue is closed, return `queue_op_status::closed`,

- otherwise if the queue `q` is full return `queue_op_status::full`,

- otherwise return `queue_op_status::success`;]]

[[Postcondition:] [If the call returns `queue_op_status::success`, `! q.empty()`.]]

[[Throws:] [Any exception thrown by the move of `e`.]]

[[Exception safety:] [If an exception is thrown then the queue state is unmodified.]]
]
[endsect]
[/////////////////////////////////////]
[section:nonblocking_pull_front_lv `s = q.nonblocking_pull_front(lve)`]

[variablelist

[[Effects:] [If the queue `q` is not empty and the operation would not block, pulls the element from the queue `q` and moves the pulled element into `lve` (this could need an allocation for unbounded queues).]]

[[Synchronization:] [Prior pull-like operations on the same object synchronize with this operation.]]

[[Postcondition:] [`! q.full()`.]]

[[Return type:] [`queue_op_status`.]]

[[Return:] [

- If the operation would block, return `queue_op_status::busy`,

- otherwise if the queue `q` is empty return `queue_op_status::empty`,

- otherwise return `queue_op_status::success`;]]

[[Throws:] [Any exception thrown by the move of `e`.]]

[[Exception safety:] [If an exception is thrown then the queue state is unmodified.]]
]
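
A minimal sketch, with the same assumed `boost::sync_queue<int>`, of polling for an element without blocking:

    #include <boost/thread/sync_queue.hpp>

    // Returns true and fills e if an element could be taken right now; returns
    // false if the queue was empty, busy or closed at this instant.
    bool poll(boost::sync_queue<int>& q, int& e)
    {
      return q.nonblocking_pull_front(e) == boost::queue_op_status::success;
    }
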
[endsect]
[endsect]
[/////////////////////////////////////]
[section:bounded Bounded Concurrent Queue Operations]

Bounded queues add the following valid expressions

* `b = q.full();`
* `u = q.capacity();`

where

* `q` denotes a value of type `Q`,
* `b` denotes a value of type `bool`,
* `u` denotes a value of type `Q::size_type`,

[/////////////////////////////////////]
[section:full `b = q.full();`]

[variablelist

[[Return type:] [`bool`.]]

[[Return:] [Return `true` iff the queue is full.]]

[[Remark:] [Not all queues have a full state; such queues always return `false` when this function is provided.]]
]
[endsect]
[/////////////////////////////////////]

[section:capacity `u = q.capacity();`]

[variablelist

[[Return type:] [`Q::size_type`.]]

[[Return:] [Return the capacity of the queue.]]
]
[endsect]
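
To illustrate the two observers above, a hedged sketch using `boost::sync_bounded_queue` from `<boost/thread/sync_bounded_queue.hpp>` (its blocking `push_back` is assumed here):

    #include <boost/thread/sync_bounded_queue.hpp>
    #include <cassert>

    void bounded_observers()
    {
      boost::sync_bounded_queue<int> q(2); // capacity is fixed at construction
      assert(q.capacity() == 2u);
      assert(!q.full());
      q.push_back(1);
      q.push_back(2);
      assert(q.full()); // a further blocking push_back would have to wait
    }
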
[endsect]
[/////////////////////////////////////]
[section:closed_op Closed Concurrent Queue Operations]

Closed queues add the following valid expressions

* `q.close();`
* `b = q.closed();`

Basic expressions

* `s = q.wait_push_back(e);`
* `s = q.wait_push_back(rve);`
* `s = q.wait_pull_front(lve);`
[/////////////////////////////////////]

[section:close `q.close();`]

[variablelist

[[Effects:] [Close the queue.]]

]
[endsect]

[/////////////////////////////////////]
[section:closed `b = q.closed();`]

[variablelist

[[Return type:] [`bool`.]]

[[Return:] [Return `true` iff the queue is closed.]]

]
[endsect]
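
The closing protocol is typically combined with the blocking `wait_` operations described below. A hedged sketch, with thread creation left out and `boost::sync_queue<int>` assumed:

    #include <boost/thread/sync_queue.hpp>

    // Producer side: publish some work, then close so consumers can finish.
    void produce_and_close(boost::sync_queue<int>& q)
    {
      for (int i = 0; i < 10; ++i)
        q.push_back(i);
      q.close(); // wakes consumers blocked in wait_pull_front
    }

    // Consumer side: drain until the queue is empty and closed.
    void consume_until_closed(boost::sync_queue<int>& q)
    {
      int e;
      while (q.wait_pull_front(e) == boost::queue_op_status::success)
      {
        // process e ...
      }
    }
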
[/////////////////////////////////////]
[section:wait_push_back `s = q.wait_push_back(e);`]
[variablelist

[[Return type:] [`queue_op_status`.]]

[[Return:] [

- If the queue is closed return `queue_op_status::closed`,

- otherwise, return `queue_op_status::success` if no exception is thrown.

]]

[[Throws:] [Any exception thrown by the copy of `e`.]]

]
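
As a small illustrative sketch (same assumed `boost::sync_queue<int>`), the status makes it possible to detect a closed queue without catching an exception:

    #include <boost/thread/sync_queue.hpp>

    // Pushes e, waiting for room if the queue is bounded; returns false when
    // the queue has already been closed.
    bool publish(boost::sync_queue<int>& q, int e)
    {
      return q.wait_push_back(e) == boost::queue_op_status::success;
    }
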
[endsect]
[/////////////////////////////////////]
[section:wait_push_back_m `s = q.wait_push_back(rve);`]
[variablelist

[[Return type:] [`queue_op_status`.]]

[[Return:] [

- If the queue is closed return `queue_op_status::closed`,

- otherwise, return `queue_op_status::success` if no exception is thrown.

]]

[[Throws:] [Any exception thrown by the move of `e`.]]

]
[endsect]
[/////////////////////////////////////]
[section:wait_pull_front_lv `s = q.wait_pull_front(lve)`]
[variablelist

[[Return type:] [`queue_op_status`.]]

[[Return:] [

- If the queue is empty and closed, return `queue_op_status::closed`,

- otherwise, if the queue is empty, return `queue_op_status::empty`,

- otherwise, return `queue_op_status::success` if no exception is thrown.

]]
[[Throws:] [Any exception thrown by the move of `e`.]]

]

[endsect]
      void pull_front(value_type&);
      value_type pull_front();

      queue_op_status try_pull_front(value_type&);
      queue_op_status nonblocking_pull_front(value_type&);
    };
  }

[/ shared_ptr<ValueType> ptr_pull_front();]
[/////////////////////////////////////]
[section:constructor Constructor `sync_bounded_queue(size_type)`]
      void pull_front(value_type&);
      value_type pull_front();

      queue_op_status try_pull_front(value_type&);
      queue_op_status nonblocking_pull_front(value_type&);
      void close();
    };
  }

[/ shared_ptr<ValueType> ptr_pull_front();]
[/////////////////////////////////////]
[section:constructor Constructor `sync_bounded_queue(size_type)`]