diff --git a/doc/Jamfile.v2 b/doc/Jamfile.v2
index 42c88f1d..dc68fc39 100644
--- a/doc/Jamfile.v2
+++ b/doc/Jamfile.v2
@@ -1,11 +1,55 @@
-# Copyright (C) 2001-2003
-# William E. Kempf
+# (C) Copyright 2008 Anthony Williams
 #
 # Distributed under the Boost Software License, Version 1.0. (See accompanying
 # file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
-import toolset ;
-toolset.using doxygen ;
+path-constant boost-images : ../../../doc/src/images ;
-boostbook thread : thread.xml ;
+xml thread : thread.qbk ;
+
+boostbook standalone
+    :
+        thread
+    :
+        # HTML options first:
+        # Use graphics not text for navigation:
+        <xsl:param>navig.graphics=1
+        # How far down we chunk nested sections, basically all of them:
+        <xsl:param>chunk.section.depth=3
+        # Don't put the first section on the same page as the TOC:
+        <xsl:param>chunk.first.sections=1
+        # How far down sections get TOCs:
+        <xsl:param>toc.section.depth=10
+        # Max depth in each TOC:
+        <xsl:param>toc.max.depth=3
+        # How far down we go with TOCs:
+        <xsl:param>generate.section.toc.level=10
+        # Path for links to Boost:
+        <xsl:param>boost.root=../../../..
+        # Path for libraries index:
+        <xsl:param>boost.libraries=../../../../libs/libraries.htm
+        # Use the main Boost stylesheet:
+        <xsl:param>html.stylesheet=../../../../doc/html/boostbook.css
+
+        # PDF Options:
+        # TOC Generation: this is needed for FOP-0.9 and later:
+        #fop1.extensions=1
+        # Or enable this if you're using XEP:
+        <xsl:param>xep.extensions=1
+        # TOC generation: this is needed for FOP 0.2, but must not be set to zero for FOP-0.9!
+        <xsl:param>fop.extensions=0
+        # No indent on body text:
+        <xsl:param>body.start.indent=0pt
+        # Margin size:
+        <xsl:param>page.margin.inner=0.5in
+        # Margin size:
+        <xsl:param>page.margin.outer=0.5in
+        # Yes, we want graphics for admonishments:
+        <xsl:param>admon.graphics=1
+        # Set this one for PDF generation *only*:
+        # default png graphics are awful in PDF form,
+        # better use SVGs instead:
+        <format>pdf:<xsl:param>admon.graphics.extension=".svg"
+        <format>pdf:<xsl:param>admon.graphics.path=$(boost-images)/
+    ;
diff --git a/doc/acknowledgements.qbk b/doc/acknowledgements.qbk
new file mode 100644
index 00000000..ec008cf3
--- /dev/null
+++ b/doc/acknowledgements.qbk
@@ -0,0 +1,16 @@
+[section:acknowledgements Acknowledgments]
+
+The original implementation of __boost_thread__ was written by William Kempf, with contributions from numerous others. This new
+version initially grew out of an attempt to rewrite __boost_thread__ to William Kempf's design with fresh code that could be
+released under the Boost Software License. However, as the C++ Standards committee have been actively discussing standardizing a
+thread library for C++, this library has evolved to reflect the proposals, whilst retaining as much backwards-compatibility as
+possible.
+
+Particular thanks must be given to Roland Schwarz, who contributed a lot of time and code to the original __boost_thread__ library,
+and who has been actively involved with the rewrite. The scheme for dividing the platform-specific implementations into separate
+directories was devised by Roland, and his input has contributed greatly to improving the quality of the current implementation.
+
+Thanks also must go to Peter Dimov, Howard Hinnant, Alexander Terekhov, Chris Thomasson and others for their comments on the
+implementation details of the code.
+
+[endsect]
diff --git a/doc/acknowledgements.xml b/doc/acknowledgements.xml deleted file mode 100644 index 60ad9198..00000000 --- a/doc/acknowledgements.xml +++ /dev/null @@ -1,73 +0,0 @@ - - - %thread.entities; -]> - -
- Acknowledgements - William E. Kempf was the architect, designer, and implementor of - &Boost.Thread;. - Mac OS Carbon implementation written by Mac Murrett. - Dave Moore provided initial submissions and further comments on the - barrier - , - thread_pool - , - read_write_mutex - , - read_write_try_mutex - and - read_write_timed_mutex - classes. - Important contributions were also made by Jeremy Siek (lots of input - on the design and on the implementation), Alexander Terekhov (lots of input - on the Win32 implementation, especially in regards to boost::condition, as - well as a lot of explanation of POSIX behavior), Greg Colvin (lots of input - on the design), Paul Mclachlan, Thomas Matelich and Iain Hanson (for help - in trying to get the build to work on other platforms), and Kevin S. Van - Horn (for several updates/corrections to the documentation). - Mike Glassford finished changes to &Boost.Thread; that were begun - by William Kempf and moved them into the main CVS branch. - He also addressed a number of issues that were brought up on the Boost - developer's mailing list and provided some additions and changes to the - read_write_mutex and related classes. - The documentation was written by William E. Kempf. Beman Dawes - provided additional documentation material and editing. - Mike Glassford finished William Kempf's conversion of the documentation to - BoostBook format and added a number of new sections. - Discussions on the boost.org mailing list were essential in the - development of &Boost.Thread; - . As of August 1, 2001, participants included Alan Griffiths, Albrecht - Fritzsche, Aleksey Gurtovoy, Alexander Terekhov, Andrew Green, Andy Sawyer, - Asger Alstrup Nielsen, Beman Dawes, Bill Klein, Bill Rutiser, Bill Wade, - Branko èibej, Brent Verner, Craig Henderson, Csaba Szepesvari, - Dale Peakall, Damian Dixon, Dan Nuffer, Darryl Green, Daryle Walker, David - Abrahams, David Allan Finch, Dejan Jelovic, Dietmar Kuehl, Douglas Gregor, - Duncan Harris, Ed Brey, Eric Swanson, Eugene Karpachov, Fabrice Truillot, - Frank Gerlach, Gary Powell, Gernot Neppert, Geurt Vos, Ghazi Ramadan, Greg - Colvin, Gregory Seidman, HYS, Iain Hanson, Ian Bruntlett, J Panzer, Jeff - Garland, Jeff Paquette, Jens Maurer, Jeremy Siek, Jesse Jones, Joe Gottman, - John (EBo) David, John Bandela, John Maddock, John Max Skaller, John - Panzer, Jon Jagger , Karl Nelson, Kevlin Henney, KG Chandrasekhar, Levente - Farkas, Lie-Quan Lee, Lois Goldthwaite, Luis Pedro Coelho, Marc Girod, Mark - A. Borgerding, Mark Rodgers, Marshall Clow, Matthew Austern, Matthew Hurd, - Michael D. Crawford, Michael H. Cox , Mike Haller, Miki Jovanovic, Nathan - Myers, Paul Moore, Pavel Cisler, Peter Dimov, Petr Kocmid, Philip Nash, - Rainer Deyke, Reid Sweatman, Ross Smith, Scott McCaskill, Shalom Reich, - Steve Cleary, Steven Kirk, Thomas Holenstein, Thomas Matelich, Trevor - Perrin, Valentin Bonnard, Vesa Karvonen, Wayne Miller, and William - Kempf. - - As of February 2006 Anthony Williams and Roland Schwarz took over maintainance - and further development of the library after it has been in an orphaned state - for a rather long period of time. - - Apologies for anyone inadvertently missed. -
- diff --git a/doc/barrier-ref.xml b/doc/barrier-ref.xml deleted file mode 100644 index 10b5519b..00000000 --- a/doc/barrier-ref.xml +++ /dev/null @@ -1,82 +0,0 @@ - - - %thread.entities; -]> - -
- - - - boost::noncopyable - Exposition only - - - - An object of class barrier is a synchronization - primitive used to cause a set of threads to wait until they each perform a - certain function or each reach a particular point in their execution. - - - - When a barrier is created, it is initialized with a thread count N. - The first N-1 calls to wait() will all cause their threads to be blocked. - The Nth call to wait() will allow all of the waiting threads, including - the Nth thread, to be placed in a ready state. The Nth call will also "reset" - the barrier such that, if an additional N+1th call is made to wait(), - it will be as though this were the first call to wait(); in other - words, the N+1th to 2N-1th calls to wait() will cause their - threads to be blocked, and the 2Nth call to wait() will allow all of - the waiting threads, including the 2Nth thread, to be placed in a ready state - and reset the barrier. This functionality allows the same set of N threads to re-use - a barrier object to synchronize their execution at multiple points during their - execution. - See for definitions of thread - states blocked - and ready. - Note that "waiting" is a synonym for blocked. - - - - - size_t - - - Constructs a barrier object that - will cause count threads to block on a call to wait(). - - - - - Destroys *this. If threads are still executing - their wait() operations, the behavior for these threads is undefined. - - - - - - bool - - Wait until N threads call wait(), where - N equals the count provided to the constructor for the - barrier object. - Note that if the barrier is - destroyed before wait() can return, the behavior is - undefined. - - Exactly one of the N threads will receive a return value - of true, the others will receive a value of false. - Precisely which thread receives the return value of true will - be implementation-defined. Applications can use this value to designate one - thread as a leader that will take a certain action, and the other threads - emerging from the barrier can wait for that action to take place. - - - - -
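Editorial note (not part of the diff): the reference text above, and the Quickbook page that follows below, describe the `wait()` semantics only in prose. Here is a minimal usage sketch, assuming the documented `boost::barrier` interface and the `boost/thread/barrier.hpp` header; the `worker` function and the thread count of three are hypothetical.

    #include <boost/thread/thread.hpp>
    #include <boost/thread/barrier.hpp>
    #include <boost/bind.hpp>
    #include <iostream>

    // Three workers rendezvous at the barrier before continuing.
    // Exactly one wait() call per cycle returns true.
    void worker(boost::barrier& b)
    {
        // ... per-thread setup work would go here ...
        if (b.wait())   // blocks until all three threads have arrived
            std::cout << "this thread was chosen as the leader\n";
    }

    int main()
    {
        boost::barrier rendezvous(3);   // configured for three threads
        boost::thread t1(boost::bind(&worker, boost::ref(rendezvous)));
        boost::thread t2(boost::bind(&worker, boost::ref(rendezvous)));
        boost::thread t3(boost::bind(&worker, boost::ref(rendezvous)));
        t1.join(); t2.join(); t3.join();
    }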
diff --git a/doc/barrier.qbk b/doc/barrier.qbk
new file mode 100644
index 00000000..c5df571d
--- /dev/null
+++ b/doc/barrier.qbk
@@ -0,0 +1,65 @@
+[section:barriers Barriers]
+
+A barrier is a simple concept. Also known as a ['rendezvous], it is a synchronization point between multiple threads. The barrier is
+configured for a particular number of threads (`n`), and as threads reach the barrier they must wait until all `n` threads have
+arrived. Once the `n`-th thread has reached the barrier, all the waiting threads can proceed, and the barrier is reset.
+
+[section:barrier Class `barrier`]
+
+    #include <boost/thread/barrier.hpp>
+
+    class barrier
+    {
+    public:
+        barrier(unsigned int count);
+        ~barrier();
+
+        bool wait();
+    };
+
+Instances of __barrier__ are not copyable or movable.
+
+[heading Constructor]
+
+    barrier(unsigned int count);
+
+[variablelist
+
+[[Effects:] [Construct a barrier for `count` threads.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+
+[heading Destructor]
+
+    ~barrier();
+
+[variablelist
+
+[[Precondition:] [No threads are waiting on `*this`.]]
+
+[[Effects:] [Destroys `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[heading Member function `wait`]
+
+    bool wait();
+
+[variablelist
+
+[[Effects:] [Block until `count` threads have called `wait` on `*this`. When the `count`-th thread calls `wait`, all waiting threads
+are unblocked, and the barrier is reset.]]
+
+[[Returns:] [`true` for exactly one thread from each batch of waiting threads, `false` otherwise.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+
+[endsect]
+
+[endsect]
diff --git a/doc/bibliography.xml b/doc/bibliography.xml deleted file mode 100644 index 458d6392..00000000 --- a/doc/bibliography.xml +++ /dev/null @@ -1,234 +0,0 @@ - - - %thread.entities; -]> - - - Bibliography - - AndrewsSchnieder83 - - ACM Computing Surveys - Vol. 15 - No. 1 - March, 1983 - - - - - Gregory - R. - Andrews - - - Fred - B. - Schneider - - - - <ulink - url="http://www.acm.org/pubs/citations/journals/surveys/1983-15-1/p3-andrews/" - >Concepts and Notations for Concurrent Programming</ulink> - - - Good general background reading. Includes descriptions of Path - Expressions, Message Passing, and Remote Procedure Call in addition to the - basics - - - Boost - The Boost world wide web site. - http://www.boost.org - &Boost.Thread; is one of many Boost libraries. The Boost web - site includes a great deal of documentation and general information which - applies to all Boost libraries. Current copies of the libraries including - documentation and test programs may be downloaded from the web - site. - - - Hansen73 - - ACM Computing Surveys - Vol. 5 - No. 4 - December, 1973 - - - 0-201-63392-2 - Per Brinch - Hansen - - - <ulink - url="http://www.acm.org/pubs/articles/journals/surveys/1973-5-4/p223-hansen/" - >Concurrent Programming Concepts</ulink> - - - "This paper describes the evolution of language features for - multiprogramming from event queues and semaphores to critical regions and - monitors." Includes analysis of why events are considered error-prone. Also - noteworthy because of an introductory quotation from Christopher Alexander; - Brinch Hansen was years ahead of others in recognizing pattern concepts - applied to software, too. - - - Butenhof97 - - <ulink url="http://cseng.aw.com/book/0,3828,0201633922,00.html" - >Programming with POSIX Threads </ulink> - - - David - R. - Butenhof - - Addison-Wesley - 1997 - ISNB: 0-201-63392-2 - This is a very readable explanation of threads and how to use - them.
Many of the insights given apply to all multithreaded programming, not - just POSIX Threads - - - Hoare74 - - Communications of the ACM - Vol. 17 - No. 10 - October, 1974 - - - - <ulink url=" http://www.acm.org/classics/feb96/" - >Monitors: An Operating System Structuring Concept</ulink> - - - C.A.R. - Hoare - - 549-557 - - Hoare and Brinch Hansen's work on Monitors is the basis for reliable - multithreading patterns. This is one of the most often referenced papers in - all of computer science, and with good reason. - - - ISO98 - - <ulink url="http://www.ansi.org">Programming Language C++</ulink> - - ISO/IEC - 14882:1998(E) - This is the official C++ Standards document. Available from the ANSI - (American National Standards Institute) Electronic Standards Store. - - - McDowellHelmbold89 - - Communications of the ACM - Vol. 21 - No. 2 - December, 1989 - - - - Charles - E. - McDowell - - - David - P. - Helmbold - - - <ulink - url="http://www.acm.org/pubs/citations/journals/surveys/1989-21-4/p593-mcdowell/" - >Debugging Concurrent Programs</ulink> - - - Identifies many of the unique failure modes and debugging difficulties - associated with concurrent programs. - - - SchmidtPyarali - - <ulink url="http://www.cs.wustl.edu/~schmidt/win32-cv-1.html8" - >Strategies for Implementing POSIX Condition Variables on Win32</ulink> - - - - Douglas - C. - Schmidt - - - Irfan - Pyarali - - - Department of Computer Science, Washington University, St. Louis, - Missouri - Rationale for understanding &Boost.Thread; condition - variables. Note that Alexander Terekhov found some bugs in the - implementation given in this article, so pthreads-win32 and &Boost.Thread; - are even more complicated yet. - - - SchmidtStalRohnertBuschmann - - <ulink - url="http://www.wiley.com/Corporate/Website/Objects/Products/0,9049,104671,00.html" - >Pattern-Oriented Architecture Volume 2</ulink> - - Patterns for Concurrent and Networked Objects - POSA2 - - - Douglas - C. - Schmidt - - - Michael - Stal - - - Hans - Rohnert - - - Frank - Buschmann - - - Wiley - 2000 - This is a very good explanation of how to apply several patterns - useful for concurrent programming. Among the patterns documented is the - Monitor Pattern mentioned frequently in the &Boost.Thread; - documentation. - - - Stroustrup - - <ulink url="http://cseng.aw.com/book/0,3828,0201700735,00.html" - >The C++ Programming Language</ulink> - - Special Edition - Addison-Wesley - 2000 - ISBN: 0-201-70073-5 - The first book a C++ programmer should own. Note that the 3rd edition - (and subsequent editions like the Special Edition) has been rewritten to - cover the ISO standard language and library. - - diff --git a/doc/build.xml b/doc/build.xml deleted file mode 100644 index 7f426771..00000000 --- a/doc/build.xml +++ /dev/null @@ -1,137 +0,0 @@ - - - %thread.entities; -]> - -
- Build - - How you build the &Boost.Thread; libraries, and how you build your own applications - that use those libraries, are some of the most frequently asked questions. Build - processes are difficult to deal with in a portable manner. That's one reason - why &Boost.Thread; makes use of &Boost.Build;. - In general you should refer to the documentation for &Boost.Build;. - This document will only supply you with some simple usage examples for how to - use bjam to build and test &Boost.Thread;. In addition, this document - will try to explain the build requirements so that users may create their own - build processes (for instance, create an IDE specific project), both for building - and testing &Boost.Thread;, as well as for building their own projects using - &Boost.Thread;. - -
- Building the &Boost.Thread; Libraries - - Building the &Boost.Thread; Library depends on how you intend to use it. You have several options: - - - Using as a precompiled library, possibly - with auto-linking, or for use from within an IDE. - - - Use from a &Boost.Build; project. - - - Using in source form. - - - -
- Precompiled - - Using the &Boost.Thread; library in precompiled form is the way to go if you want to - install the library to a standard place, from where your linker is able to resolve code - in binary form. You also will want this option if compile time is a concern. Multiple - variants are available, for different toolsets and build variants (debug/release). - The library files are named {lead}boost_thread{build-specific-tags}.{extension}, - where the build-specific-tags indicate the toolset used to build the library, whether it's - a debug or release build, what version of &Boost; was used, etc.; and the lead and extension - are the appropriate extensions for a dynamic link library or static library for the platform - for which &Boost.Thread; is being built. - For instance, a debug build of the dynamic library built for Win32 with VC++ 7.1 using Boost 1.34 would - be named boost_thread-vc71-mt-gd-1_34.dll. - More information on this should be available from the &Boost.Build; documentation. - - - Building should be possible with the default configuration. If you are running into problems, - it might be wise to adjust your local settings of &Boost.Build; though. Typically you will - need to get your user-config.jam file to reflect your environment, i.e. used toolsets. Please - refer to the &Boost.Build; documentation to learn how to do this. - - - To create the libraries you need to open a command shell and change to the - boost_root directory. From there you give the command - bjam --toolset=mytoolset stage --with-thread - Replace mytoolset with the name of your toolset, e.g. msvc-7.1 . - This will compile and put the libraries into the stage directory which is just below the - boost_root directory. &Boost.Build; by default will generate static and - dynamic variants for debug and release. - - - Invoking the above command without the --with-thread switch &Boost.Build; will build all of - the Boost distribution, including &Boost.Thread;. - - - The next step is to copy your libraries to a place where your linker is able to pick them up. - It is also quite possible to leave them in the stage directory and instruct your IDE to take them - from there. - - - In your IDE you then need to add boost_root/boost to the paths where the compiler - expects to find files to be included. For toolsets that support auto-linking - it is not necessary to explicitly specify the name of the library to link against, it is sufficient - to specify the path of the stage directory. Typically this is true on Windows. For gcc you need - to specify the exact library name (including all the tags). Please don't forget that threading - support must be turned on to be able to use the library. You should be able now to build your - project from the IDE. - -
-
- &Boost.Build; Project - - If you have decided to use &Boost.Build; as a build environment for your application, you simply - need to add a single line to your Jamroot file: - use-project /boost : {path-to-boost-root} ; - where {path-to-boost-root} needs to be replaced with the location of - your copy of the boost tree. - Later when you specify a component that needs to link against &Boost.Thread; you specify this - as e.g.: - exe myapp : {myappsources} /boost//thread ; - and you are done. - -
-
- Source Form - - Of course it is also possible to use the &Boost.Thread; library in source form. - First you need to specify the boost_root/boost directory as - a path where your compiler expects to find files to include. It is not easy - to isolate the &Boost.Thread; include files from the rest of the boost - library though. You would also need to isolate every include file that the thread - library depends on. Next you need to copy the files from - boost_root/libs/thread/src to your project and instruct your - build system to compile them together with your project. Please look into the - Jamfile in boost_root/libs/thread/build - to find out which compiler options and defines you will need to get a clean compile. - Using the boost library in this way is the least recommended, and should only be - considered if avoiding dependency on &Boost.Build; is a requirement. Even if so - it might be a better option to use the library in it's precompiled form. - Precompiled downloads are available from the boost consulting web site, or as - part of most linux distributions. - -
-
-
- Testing the &Boost.Thread; Libraries - - To test the &Boost.Thread; libraries using &Boost.Build;, simply change to the - directory boost_root/libs/thread/test and execute the command: - bjam --toolset=mytoolset test - -
-
diff --git a/doc/changes.qbk b/doc/changes.qbk
new file mode 100644
index 00000000..030dbc3a
--- /dev/null
+++ b/doc/changes.qbk
@@ -0,0 +1,50 @@
+[section:changes Changes since boost 1.34]
+
+Almost every line of code in __boost_thread__ has been changed since the 1.34 release of boost. However, most of the interface
+changes have been extensions, so the new code is largely backwards-compatible with the old code. The new features and breaking
+changes are described below.
+
+[heading New Features]
+
+* Instances of __thread__ and of the various lock types are now movable.
+
+* Threads can be interrupted at __interruption_points__.
+
+* Condition variables can now be used with any type that implements the __lockable_concept__, through the use of
+`boost::condition_variable_any` (`boost::condition` is a `typedef` for `boost::condition_variable_any`, provided for backwards
+compatibility). `boost::condition_variable` is provided as an optimization, and will only work with
+`boost::unique_lock<boost::mutex>` (`boost::mutex::scoped_lock`).
+
+* Thread IDs are separated from __thread__, so a thread can obtain its own ID (using `boost::this_thread::get_id()`), and IDs can
+be used as keys in associative containers, as they have the full set of comparison operators.
+
+* Timeouts are now implemented using the Boost DateTime library, through a typedef `boost::system_time` for absolute timeouts, and
+with support for relative timeouts in many cases. `boost::xtime` is supported for backwards compatibility only.
+
+* Locks are implemented as publicly accessible templates `boost::lock_guard`, `boost::unique_lock`, `boost::shared_lock`, and
+`boost::upgrade_lock`, which are templated on the type of the mutex. The __lockable_concept__ has been extended to include publicly
+available __lock_ref__ and __unlock_ref__ member functions, which are used by the lock types.
+
+[heading Breaking Changes]
+
+The list below should cover all changes to the public interface which break backwards compatibility.
+
+* __try_mutex__ has been removed, and the functionality subsumed into __mutex__. __try_mutex__ is left as a `typedef`,
+but is no longer a separate class.
+
+* __recursive_try_mutex__ has been removed, and the functionality subsumed into
+__recursive_mutex__. __recursive_try_mutex__ is left as a `typedef`, but is no longer a separate class.
+
+* `boost::detail::thread::lock_ops` has been removed. Code that relies on the `lock_ops` implementation detail will no longer
+work; `lock_ops` is no longer necessary now that the mutex types have public __lock_ref__ and __unlock_ref__ member
+functions.
+
+* `scoped_lock` constructors with a second parameter of type `bool` are no longer provided. With previous boost releases,
+``boost::mutex::scoped_lock some_lock(some_mutex,false);`` could be used to create a lock object that was associated with a mutex,
+but did not lock it on construction. This facility has now been replaced with the constructor that takes a
+`boost::defer_lock_type` as the second parameter: ``boost::mutex::scoped_lock some_lock(some_mutex,boost::defer_lock);``
+
+* The broken `boost::read_write_mutex` has been replaced with __shared_mutex__.
+
+
+[endsect]
diff --git a/doc/concepts.xml b/doc/concepts.xml deleted file mode 100644 index 30386f70..00000000 --- a/doc/concepts.xml +++ /dev/null @@ -1,2305 +0,0 @@ - - - %thread.entities; -]> - -
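Editorial note (not part of the diff): the following sketch illustrates two of the changes listed in changes.qbk above, the deferred-locking constructor that replaces the old `bool` parameter and the use of thread IDs as container keys. The mutexes, the map, and the function names are hypothetical.

    #include <boost/thread/thread.hpp>
    #include <boost/thread/mutex.hpp>
    #include <boost/thread/locks.hpp>
    #include <map>
    #include <string>

    boost::mutex m;

    void deferred_locking()
    {
        // Old style (no longer compiles): boost::mutex::scoped_lock lk(m, false);
        // New style: associate the lock with the mutex but defer locking.
        boost::unique_lock<boost::mutex> lk(m, boost::defer_lock);
        // ... work that does not need the mutex ...
        lk.lock();                 // take the lock only when it is needed
        // ... protected work ...
    }                              // unlocked in the destructor

    // Thread IDs have a full set of comparison operators, so they can be
    // used as keys in associative containers:
    boost::mutex names_mutex;
    std::map<boost::thread::id, std::string> thread_names;

    void register_self(const std::string& name)
    {
        boost::lock_guard<boost::mutex> guard(names_mutex);
        thread_names[boost::this_thread::get_id()] = name;
    }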
- Concepts - - &Boost.Thread; currently supports two types of mutex concepts: - ordinary Mutexes, - which allow only one thread at a time to access a resource, and - Read/Write Mutexes, - which allow only one thread at a time to access a resource when it is - being modified (the "Write" part of Read/Write), but allows multiple threads - to access a resource when it is only being referenced (the "Read" part of - Read/Write). - Unfortunately it turned out that the current implementation of Read/Write Mutex has - some serious problems. So it was decided not to put this implementation into - release grade code. Also discussions on the mailing list led to the - conclusion that the current concepts need to be rethought. In particular - the schedulings - Inter-Class Scheduling Policies are deemed unnecessary. - There seems to be common belief that a fair scheme suffices. - The following documentation has been retained however, to give - readers of this document the opportunity to study the original design. - - -
- Mutexes - - Certain changes to the mutexes and lock concepts are - currently under discussion. In particular, the combination of - the multiple lock concepts into a single lock concept - is likely, and the combination of the multiple mutex - concepts into a single mutex concept is also possible. - - A mutex (short for mutual-exclusion) object is used to serialize - access to a resource shared between multiple threads. The - Mutex concept, with - TryMutex and - TimedMutex refinements, - formalize the requirements. A model that implements Mutex and its - refinements has two states: locked and - unlocked. Before using a shared resource, a - thread locks a &Boost.Thread; mutex object - (an object whose type is a model of - Mutex or one of it's - refinements), ensuring - thread-safe access to - the shared resource. When use of the shared resource is complete, the thread - unlocks the mutex object, allowing another thread to acquire the lock and - use the shared resource. - Traditional C thread APIs, like POSIX threads or the Windows thread - APIs, expose functions to lock and unlock a mutex object. This is dangerous - since it's easy to forget to unlock a locked mutex. When the flow of control - is complex, with multiple return points, the likelihood of forgetting to - unlock a mutex object becomes even greater. When exceptions are thrown, - it becomes nearly impossible to ensure that the mutex object is unlocked - properly when using these traditional API's. The result is - deadlock. - Many C++ threading libraries use a pattern known as Scoped - Locking &cite.SchmidtStalRohnertBuschmann; to free the programmer from - the need to explicitly lock and unlock mutex objects. With this pattern, a - Lock concept is employed where - the lock object's constructor locks the associated mutex object and the - destructor automatically does the unlocking. The - &Boost.Thread; library takes this pattern to - the extreme in that Lock concepts are the only way to lock and unlock a - mutex object: lock and unlock functions are not exposed by any - &Boost.Thread; mutex objects. This helps to - ensure safe usage patterns, especially when code throws exceptions. - -
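Editorial note (not part of the diff): the deleted text above describes the Scoped Locking pattern on which both the old and the rewritten interfaces rely. A minimal sketch of the pattern with the 1.34-era API described here; the shared counter is hypothetical.

    #include <boost/thread/mutex.hpp>

    boost::mutex guard;        // protects the shared counter below
    int shared_counter = 0;    // hypothetical shared resource

    void increment()
    {
        // The lock object's constructor locks the mutex ...
        boost::mutex::scoped_lock lock(guard);
        ++shared_counter;
    }   // ... and its destructor unlocks it, even if an exception is thrown.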
- Locking Strategies - - Every mutex object follows one of several locking strategies. These - strategies define the semantics for the locking operation when the calling - thread already owns a lock on the mutex object. - -
- Recursive Locking Strategy - - With a recursive locking strategy, when a thread attempts to acquire - a lock on the mutex object for which it already owns a lock, the operation - is successful. Note the distinction between a thread, which may have - multiple locks outstanding on a recursive mutex object, and a lock object, - which even for a recursive mutex object cannot have any of its lock - functions called multiple times without first calling unlock. - - Internally a lock count is maintained and the owning thread must - unlock the mutex object the same number of times that it locked it before - the mutex object's state returns to unlocked. Since mutex objects in - &Boost.Thread; expose locking - functionality only through lock concepts, a thread will always unlock a - mutex object the same number of times that it locked it. This helps to - eliminate a whole set of errors typically found in traditional C style - thread APIs. - - Classes boost::recursive_mutex, - boost::recursive_try_mutex and - boost::recursive_timed_mutex use this locking - strategy. -
- -
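Editorial note (not part of the diff): a short sketch of the recursive locking strategy described above, using `boost::recursive_mutex`. As the text says, each nested acquisition uses its own lock object; the two functions are hypothetical.

    #include <boost/thread/recursive_mutex.hpp>

    boost::recursive_mutex rm;

    void low_level()
    {
        // Same thread, second lock on the same recursive mutex: succeeds,
        // and the internal lock count goes to 2.
        boost::recursive_mutex::scoped_lock lock(rm);
        // ... touch shared state ...
    }

    void high_level()
    {
        boost::recursive_mutex::scoped_lock lock(rm);   // lock count 1
        low_level();                                    // count 2, then back to 1
    }   // count drops to 0 here and the mutex is unlocked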
- Checked Locking Strategy - - With a checked locking strategy, when a thread attempts to acquire a - lock on the mutex object for which the thread already owns a lock, the - operation will fail with some sort of error indication. Further, attempts - by a thread to unlock a mutex object that was not locked by the thread - will also return some sort of error indication. In - &Boost.Thread;, an exception of type - boost::lock_error - would be thrown in these cases. - - &Boost.Thread; does not currently - provide any mutex objects that use this strategy. -
- -
- Unchecked Locking Strategy - - With an unchecked locking strategy, when a thread attempts to acquire - a lock on a mutex object for which the thread already owns a lock the - operation will - deadlock. In general - this locking strategy is less safe than a checked or recursive strategy, - but it's also a faster strategy and so is employed by many libraries. - - &Boost.Thread; does not currently - provide any mutex objects that use this strategy. -
- -
- Unspecified Locking Strategy - - With an unspecified locking strategy, when a thread attempts to - acquire a lock on a mutex object for which the thread already owns a lock - the operation results in - undefined behavior. - - - In general a mutex object with an unspecified locking strategy is - unsafe, and it requires programmer discipline to use the mutex object - properly. However, this strategy allows an implementation to be as fast as - possible with no restrictions on its implementation. This is especially - true for portable implementations that wrap the native threading support - of a platform. For this reason, the classes - boost::mutex, - boost::try_mutex and - boost::timed_mutex use this locking strategy - despite the lack of safety. -
-
- -
- Scheduling Policies - - Every mutex object follows one of several scheduling policies. These - policies define the semantics when the mutex object is unlocked and there is - more than one thread waiting to acquire a lock. In other words, the policy - defines which waiting thread shall acquire the lock. - -
- FIFO Scheduling Policy - - With a FIFO ("First In First Out") scheduling policy, threads waiting - for the lock will acquire it in a first-come-first-served order. - This can help prevent a high priority thread from starving lower priority - threads that are also waiting on the mutex object's lock. -
- -
- Priority Driven Policy - - With a Priority Driven scheduling policy, the thread with the - highest priority acquires the lock. Note that this means that low-priority - threads may never acquire the lock if the mutex object has high contention - and there is always at least one high-priority thread waiting. This is - known as thread starvation. When multiple threads of the same priority are - waiting on the mutex object's lock one of the other scheduling priorities - will determine which thread shall acquire the lock. -
- -
- Unspecified Policy - - The mutex object does not specify a scheduling policy. In order to - ensure portability, all &Boost.Thread; - mutex objects use an unspecified scheduling policy. -
-
- -
- Mutex Concepts - -
- Mutex Concept - - A Mutex object has two states: locked and unlocked. Mutex object - state can only be determined by a lock object meeting the - appropriate lock concept requirements - and constructed for the Mutex object. - - A Mutex is - - NonCopyable. - For a Mutex type M - and an object m of that type, - the following expressions must be well-formed - and have the indicated effects. - - - Mutex Expressions - - - - - Expression - Effects - - - - - - M m; - Constructs a mutex object m. - Postcondition: m is unlocked. - - - (&m)->~M(); - Precondition: m is unlocked. Destroys a mutex object - m. - - - M::scoped_lock - A model of - ScopedLock - - - - -
-
- -
- TryMutex Concept - - A TryMutex is a refinement of - Mutex. - For a TryMutex type M - and an object m of that type, - the following expressions must be well-formed - and have the indicated effects. - - - TryMutex Expressions - - - - - Expression - Effects - - - - - - M::scoped_try_lock - A model of - ScopedTryLock - - - - -
-
- -
- TimedMutex Concept - - A TimedMutex is a refinement of - TryMutex. - For a TimedMutex type M - and an object m of that type, - the following expressions must be well-formed - and have the indicated effects. - - - TimedMutex Expressions - - - - - Expression - Effects - - - - - - M::scoped_timed_lock - A model of - ScopedTimedLock - - - - -
-
-
- -
- Mutex Models - - &Boost.Thread; currently supplies six models of - Mutex - and its refinements. - - - Mutex Models - - - - - Concept - Refines - Models - - - - - - Mutex - - - boost::mutex - boost::recursive_mutex - - - - TryMutex - Mutex - - boost::try_mutex - boost::recursive_try_mutex - - - - TimedMutex - TryMutex - - boost::timed_mutex - boost::recursive_timed_mutex - - - - -
-
- -
- Lock Concepts - - A lock object provides a safe means for locking and unlocking a mutex - object (an object whose type is a model of Mutex or one of its refinements). In - other words they are an implementation of the Scoped - Locking &cite.SchmidtStalRohnertBuschmann; pattern. The ScopedLock, - ScopedTryLock, and - ScopedTimedLock - concepts formalize the requirements. - Lock objects are constructed with a reference to a mutex object and - typically acquire ownership of the mutex object by setting its state to - locked. They also ensure ownership is relinquished in the destructor. Lock - objects also expose functions to query the lock status and to manually lock - and unlock the mutex object. - Lock objects are meant to be short lived, expected to be used at block - scope only. The lock objects are not thread-safe. Lock objects must - maintain state to indicate whether or not they've been locked and this state - is not protected by any synchronization concepts. For this reason a lock - object should never be shared between multiple threads. - -
- Lock Concept - - For a Lock type L - and an object lk - and const object clk of that type, - the following expressions must be well-formed - and have the indicated effects. - - - Lock Expressions - - - - - Expression - Effects - - - - - - (&lk)->~L(); - if (locked()) unlock(); - - - (&clk)->operator const void*() - Returns type void*, non-zero if the associated mutex - object has been locked by clk, otherwise 0. - - - clk.locked() - Returns a bool, (&clk)->operator - const void*() != 0 - - - lk.lock() - - Throws boost::lock_error - if locked(). - - If the associated mutex object is - already locked by some other thread, places the current thread in the - Blocked state until - the associated mutex is unlocked, after which the current thread - is placed in the Ready state, - eventually to be returned to the Running state. If - the associated mutex object is already locked by the same thread - the behavior is dependent on the locking - strategy of the associated mutex object. - - Postcondition: locked() == true - - - - lk.unlock() - - Throws boost::lock_error - if !locked(). - - Unlocks the associated mutex. - - Postcondition: !locked() - - - -
-
- -
- ScopedLock Concept - - A ScopedLock is a refinement of Lock. - For a ScopedLock type L - and an object lk of that type, - and an object m of a type meeting the - Mutex requirements, - and an object b of type bool, - the following expressions must be well-formed - and have the indicated effects. - - - ScopedLock Expressions - - - - - Expression - Effects - - - - - - L lk(m); - Constructs an object lk, and associates mutex - object m with it, then calls - lock() - - - L lk(m,b); - Constructs an object lk, and associates mutex - object m with it, then if b, calls - lock() - - - -
-
- -
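Editorial note (not part of the diff): a sketch of the Lock and ScopedLock expressions tabulated above, in terms of the 1.34-era API (the `L lk(m,b)` form with `b == false` defers locking). The variable names are hypothetical.

    #include <boost/thread/mutex.hpp>

    boost::mutex mx;

    void example()
    {
        boost::mutex::scoped_lock lk(mx, false);  // associated, but not locked
        // lk.locked() == false here
        lk.lock();        // blocks if another thread holds mx
        if (lk)           // operator const void*: non-zero while lk holds the lock
        {
            // ... use the protected resource ...
        }
        lk.unlock();      // explicit unlock; calling it again would throw lock_error
    }   // the destructor has nothing left to do, the lock is already released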
- TryLock Concept - - A TryLock is a refinement of Lock. - For a TryLock type L - and an object lk of that type, - the following expressions must be well-formed - and have the indicated effects. - - - TryLock Expressions - - - - - Expression - Effects - - - - - - lk.try_lock() - - Throws boost::lock_error - if locked(). - - Makes a - non-blocking attempt to lock the associated mutex object, - returning true if the lock attempt is successful, - otherwise false. If the associated mutex object is - already locked by the same thread the behavior is dependent on the - locking - strategy of the associated mutex object. - - - - -
-
- -
- ScopedTryLock Concept - - A ScopedTryLock is a refinement of TryLock. - For a ScopedTryLock type L - and an object lk of that type, - and an object m of a type meeting the - TryMutex requirements, - and an object b of type bool, - the following expressions must be well-formed - and have the indicated effects. - - - ScopedTryLock Expressions - - - - - Expression - Effects - - - - - - L lk(m); - Constructs an object lk, and associates mutex - object m with it, then calls - try_lock() - - - L lk(m,b); - Constructs an object lk, and associates mutex - object m with it, then if b, calls - lock() - - - -
-
- -
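Editorial note (not part of the diff): a sketch of the TryLock/ScopedTryLock expressions above, again using the 1.34-era API; the fallback work is hypothetical.

    #include <boost/thread/mutex.hpp>

    boost::try_mutex tm;

    void example()
    {
        // The scoped_try_lock constructor taking just the mutex calls try_lock().
        boost::try_mutex::scoped_try_lock lk(tm);
        if (lk.locked())
        {
            // ... got the lock without blocking, use the resource ...
        }
        else
        {
            // ... do something else rather than waiting ...
        }
    }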
- TimedLock Concept - - A TimedLock is a refinement of TryLock. - For a TimedLock type L - and an object lk of that type, - and an object t of type boost::xtime, - the following expressions must be well-formed - and have the indicated effects. - - - TimedLock Expressions - - - - Expression - Effects - - - - - - lk.timed_lock(t) - - Throws boost::lock_error - if locked(). - - Makes a blocking attempt - to lock the associated mutex object, and returns true - if successful within the specified time t, otherwise - false. If the associated mutex object is already - locked by the same thread the behavior is dependent on the locking - strategy of the associated mutex object. - - - - -
-
- -
- ScopedTimedLock Concept - - A ScopedTimedLock is a refinement of TimedLock. - For a ScopedTimedLock type L - and an object lk of that type, - and an object m of a type meeting the - TimedMutex requirements, - and an object b of type bool, - and an object t of type boost::xtime, - the following expressions must be well-formed - and have the indicated effects. - - - ScopedTimedLock Expressions - - - - Expression - Effects - - - - - - L lk(m,t); - Constructs an object lk, and associates mutex - object m with it, then calls - timed_lock(t) - - - L lk(m,b); - Constructs an object lk, and associates mutex - object m with it, then if b, calls - lock() - - - -
-
-
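Editorial note (not part of the diff): a sketch of the TimedLock/ScopedTimedLock expressions above using `boost::xtime`, which, as changes.qbk notes, is now kept only for backwards compatibility. The one-second timeout is hypothetical.

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/xtime.hpp>

    boost::timed_mutex tmx;

    void example()
    {
        boost::xtime deadline;
        boost::xtime_get(&deadline, boost::TIME_UTC);
        deadline.sec += 1;    // give up after roughly one second

        // The scoped_timed_lock constructor calls timed_lock(deadline).
        boost::timed_mutex::scoped_timed_lock lk(tmx, deadline);
        if (lk.locked())
        {
            // ... acquired within the timeout ...
        }
    }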
- -
- Lock Models - - &Boost.Thread; currently supplies twelve models of - Lock - and its refinements. - - - Lock Models - - - - - Concept - Refines - Models - - - - - - Lock - - - - - ScopedLock - Lock - - boost::mutex::scoped_lock - boost::recursive_mutex::scoped_lock - - boost::try_mutex::scoped_lock - boost::recursive_try_mutex::scoped_lock - - boost::timed_mutex::scoped_lock - boost::recursive_timed_mutex::scoped_lock - - - - TryLock - Lock - - - - ScopedTryLock - TryLock - - boost::try_mutex::scoped_try_lock - boost::recursive_try_mutex::scoped_try_lock - - boost::timed_mutex::scoped_try_lock - boost::recursive_timed_mutex::scoped_try_lock - - - - TimedLock - TryLock - - - - ScopedTimedLock - TimedLock - - boost::timed_mutex::scoped_timed_lock - boost::recursive_timed_mutex::scoped_timed_lock - - - - -
-
-
- -
- Read/Write Mutexes - Unfortunately it turned out that the current implementation has - some serious problems. So it was decided not to put this implementation into - release grade code. Also discussions on the mailing list led to the - conclusion that the current concepts need to be rethought. In particular - the schedulings - Inter-Class Scheduling Policies are deemed unnecessary. - There seems to be common belief that a fair scheme suffices. - The following documentation has been retained however, to give - readers of this document the opportunity to study the original design. - - - A read/write mutex (short for reader/writer mutual-exclusion) object - is used to serialize access to a resource shared between multiple - threads, where multiple "readers" can share simultaneous access, but - "writers" require exclusive access. The - ReadWriteMutex concept, with - TryReadWriteMutex and - TimedReadWriteMutex - refinements formalize the requirements. A model that implements - ReadWriteMutex and its refinements has three states: - read-locked, - write-locked, and - unlocked. - Before reading from a shared resource, a thread - read-locks - a &Boost.Thread; read/write mutex object - (an object whose type is a model of - ReadWriteMutex - or one of it's refinements), ensuring - thread-safe - access for reading from the shared resource. Before writing - to a shared resource, a thread - write-locks a &Boost.Thread; - read/write mutex object - (an object whose type is a model of - ReadWriteMutex - or one of it's refinements), ensuring - thread-safe - access for altering the shared resource. When use of the shared - resource is complete, the thread unlocks the mutex object, - allowing another thread to acquire the lock and use the shared - resource. - - Traditional C thread APIs that provide read/write mutex - primitives (like POSIX threads) expose functions to lock and unlock a - mutex object. This is dangerous since it's easy to forget to unlock a - locked mutex. When the flow of control is complex, with multiple - return points, the likelihood of forgetting to unlock a mutex object - becomes even greater. When exceptions are thrown, it becomes nearly - impossible to ensure that the mutex object is unlocked - properly when using these traditional API's. The result is - deadlock. - - Many C++ threading libraries use a pattern known as Scoped - Locking &cite.SchmidtStalRohnertBuschmann; to free the - programmer from the need to explicitly lock and unlock - read/write mutex objects. With this pattern, a - Read/Write Lock - concept is employed where the lock object's constructor locks - the associated read/write mutex object - and the destructor automatically does the unlocking. The - &Boost.Thread; library takes this pattern to - the extreme in that - Read/Write Lock - concepts are the only way to lock and unlock a read/write mutex - object: lock and unlock functions are not exposed by any - &Boost.Thread; read/write mutex objects. This helps to - ensure safe usage patterns, especially when code throws exceptions. - -
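Editorial note (not part of the diff): the read/write mutex described here was withdrawn; as changes.qbk states, the rewritten library provides `boost::shared_mutex` instead. A minimal sketch of the reader/writer idea using the new types; the protected table is hypothetical.

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>
    #include <map>
    #include <string>

    boost::shared_mutex table_mutex;
    std::map<int, std::string> table;   // hypothetical shared data

    std::string read_entry(int key)
    {
        // Many readers may hold shared locks at the same time.
        boost::shared_lock<boost::shared_mutex> read_lock(table_mutex);
        std::map<int, std::string>::const_iterator it = table.find(key);
        return it != table.end() ? it->second : std::string();
    }

    void write_entry(int key, const std::string& value)
    {
        // A writer needs exclusive access.
        boost::unique_lock<boost::shared_mutex> write_lock(table_mutex);
        table[key] = value;
    }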
- Locking Strategies - - Every read/write mutex object follows one of several locking - strategies. These strategies define the semantics for the locking - operation when the calling thread already owns a lock on the - read/write mutex object. - -
- Recursive Locking Strategy - - With a recursive locking strategy, when a thread attempts - to acquire a lock on a read/write mutex object - for which it already owns a lock, the operation is successful, - except in the case where a thread holding a read-lock - attempts to obtain a write lock, in which case a - boost::lock_error exception will - be thrown. Note the distinction between a thread, which may have - multiple locks outstanding on a recursive read/write mutex object, - and a lock object, which even for a recursive read/write mutex - object cannot have any of its lock functions called multiple - times without first calling unlock. - - - - - - Lock Type Held - Lock Type Requested - Action - - - - - - read-lock - read-lock - Grant the read-lock immediately - - - read-lock - write-lock - If this thread is the only holder of the read-lock, - grants the write lock immediately. Otherwise throws a - boost::lock_error exception. - - - write-locked - read-lock - Grants the (additional) read-lock immediately. - - - write-locked - write-lock - Grant the write-lock immediately - - - - - - Internally a lock count is maintained and the owning - thread must unlock the mutex object the same number of times - that it locked it before the mutex object's state returns - to unlocked. Since mutex objects in &Boost.Thread; expose - locking functionality only through lock concepts, a thread - will always unlock a mutex object the same number of times - that it locked it. This helps to eliminate a whole set of - errors typically found in traditional C style thread APIs. - - - &Boost.Thread; does not currently provide any read/write mutex objects - that use this strategy. A successful implementation of this locking strategy - may require - thread identification. - -
- -
- Checked Locking Strategy - - With a checked locking strategy, when a thread attempts - to acquire a lock on the mutex object for which the thread - already owns a lock, the operation will fail with some sort of - error indication, except in the case of multiple read-lock - acquisition which is a normal operation for ANY ReadWriteMutex. - Further, attempts by a thread to unlock a mutex that was not - locked by the thread will also return some sort of error - indication. In &Boost.Thread;, an exception of type - boost::lock_error would be thrown in - these cases. - - - - - - Lock Type Held - Lock Type Requested - Action - - - - - - read-lock - read-lock - Grant the read-lock immediately - - - read-lock - write-lock - Throw boost::lock_error - - - write-locked - read-lock - Throw boost::lock_error - - - write-locked - write-lock - Throw boost::lock_error - - - - - - &Boost.Thread; does not currently provide any read/write mutex objects - that use this strategy. A successful implementation of this locking strategy - may require - thread identification. - -
- -
- Unchecked Locking Strategy - - With an unchecked locking strategy, when a thread - attempts to acquire a lock on the read/write mutex object - for which the thread already owns a lock, the operation - will deadlock. - In general this locking strategy is less safe than a checked - or recursive strategy, but it can be a faster strategy and so - is employed by many libraries. - - - - - - Lock Type Held - Lock Type Requested - Action - - - - - - read-lock - read-lock - Grant the read-lock immediately - - - read-lock - write-lock - Deadlock - - - write-locked - read-lock - Deadlock - - - write-locked - write-lock - Deadlock - - - - - - &Boost.Thread; does not currently provide any mutex - objects that use this strategy. For ReadWriteMutexes on - platforms that contain natively recursive synchronization - primitives, implementing a guaranteed-deadlock can actually - involve extra work, and would likely require - thread identification. - -
- -
- Unspecified Locking Strategy - - With an unspecified locking strategy, when a thread - attempts to acquire a lock on a read/write mutex object for - which the thread already owns a lock, the operation results - in undefined behavior. - When a read/write mutex object has an unspecified locking - strategy the programmer must assume that the read/write mutex - object instead uses an unchecked strategy as the worse case, - although some platforms may exhibit a mix of unchecked and - recursive behavior. - - - - - - Lock Type Held - Lock Type Requested - Action - - - - - - read-lock - read-lock - Grant the read-lock immediately - - - read-lock - write-lock - - Undefined, but generally deadlock - - - - write-locked - read-lock - Undefined, but generally deadlock - - - write-locked - write-lock - Undefined, but generally deadlock - - - - - - In general a read/write mutex object with an unspecified - locking strategy is unsafe, and it requires programmer discipline - to use the read/write mutex object properly. However, this strategy - allows an implementation to be as fast as possible with no restrictions - on its implementation. This is especially true for portable implementations - that wrap the native threading support of a platform. For this reason, the - classes - read_write_mutex, - try_read_write_mutex, and - timed_read_write_mutex - use this locking strategy despite the lack of safety. -
- -
- Thread Identification - - ReadWriteMutexes can support specific Locking Strategies - (recursive and checked) which help to detect and protect against - self-deadlock. Self-deadlock can occur when a holder of a locked - ReadWriteMutex attempts to obtain another lock. Given an - implemention I which is susceptible to - self-deadlock but otherwise correct and efficient, a recursive or - checked implementation Ir or - Ic can use the same basic implementation, - but make special checks against self-deadlock by tracking the - identities of thread(s) currently holding locks. This approach - makes deadlock detection othrogonal to the basic ReadWriteMutex - implementaion. - - Alternatively, a different basic implementation for - ReadWriteMutex concepts, - I' (I-Prime) may exist which uses recursive - or checked versions of synchronization primitives to produce - a recursive or checked ReadWriteMutex while still providing - flexibility in terms of Scheduling Policies. - - Please refer to the &Boost.Thread; - read/write mutex concept - documentation for a discussion of locking strategies. - The read/write mutex supports only the - unspecified - locking strategy. ReadWriteMutexes are parameterized on a - Mutex type which they use to control write-locking - and access to internal state. -
- -
- Lock Promotion - - ReadWriteMutexes can support lock promotion, where a - mutex which is in the read-locked state transitions to a - write-locked state without releasing the lock. Lock - promotion can be tricky to implement; for instance, - extra care must be taken to ensure that only one thread holding a - read-lock can block awaiting promotion at any given time. If - more than one read-lock holder is allowed to enter a blocked - state while waiting to be promoted, deadlock will result since - both threads will be waiting for the other to release their read-lock. - - - Currently, &Boost.Thread; supports lock promotion - through promote(), try_promote(), - and timed_promote() operations. -
- -
- Lock Demotion - - ReadWriteMutexes can support lock demotion, where a - mutex which is in the write-locked state transitions to a - read-locked state without releasing the lock. - Since by definition only one thread at a time may hold - a write-lock, the problem with deadlock that can occur - during lock promotion is not a problem for lock - demotion. - - Currently, &Boost.Thread; supports lock demotion - through demote(), try_demote(), - and timed_demote() operations. -
-
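Editorial note (not part of the diff): the `promote()`/`try_promote()` operations described above belong to the withdrawn read/write mutex. The assumption in the sketch below is that the rewritten library covers the same need with `boost::upgrade_lock` and `boost::upgrade_to_unique_lock`; the cache refresh scenario is hypothetical.

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::shared_mutex cache_mutex;
    bool cache_stale = true;   // hypothetical flag protected by cache_mutex

    void refresh_if_stale()
    {
        // An upgrade lock shares access with plain readers...
        boost::upgrade_lock<boost::shared_mutex> read_lock(cache_mutex);
        if (cache_stale)
        {
            // ...and can be promoted to exclusive access without being released.
            boost::upgrade_to_unique_lock<boost::shared_mutex> write_lock(read_lock);
            // ... rebuild the cache ...
            cache_stale = false;
        }
    }   // write_lock reverts to the upgrade lock, then everything unlocks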
- -
- Scheduling Policies - - Every read/write mutex object follows one of several scheduling - policies. These policies define the semantics when the mutex object - is unlocked and there is more than one thread waiting to acquire a - lock. In other words, the policy defines which waiting thread shall - acquire the lock. For a read/write mutex, it is particularly important - to define the behavior when threads are requesting both read and - write access simultaneously. This will be referred to as "inter-class - scheduling" because it describes the scheduling between two - classes of threads (those waiting for a read lock and those - waiting for a write lock). - - For some types of inter-class scheduling, an "intra-class" - scheduling policy can also be defined that will describe the order - in which waiting threads of the same class (i.e., those - waiting for the same type of lock) will acquire the thread. - - -
- Inter-Class Scheduling Policies - -
- ReaderPriority - - With ReaderPriority scheduling, any pending request for - a read-lock will have priority over a pending request for a - write-lock, irrespective of the current lock state of the - read/write mutex, and irrespective of the relative order - that the pending requests arrive. - - - - - - Current mutex state - Request Type - Action - - - - - - unlocked - read-lock - Grant the read-lock immediately - - - read-locked - read-lock - Grant the additional read-lock immediately. - - - write-locked - read-lock - Wait to acquire the lock until the thread - holding the write-lock releases its lock (or until - the specified time, if any). A - read-lock will be granted to all pending readers - before any other thread can acquire a write-lock. - TODO: try-lock, timed-lock. - - - - unlocked - write-lock - Grant the write-lock immediately, if and - only if there are no pending read-lock requests. - TODO: try-lock, timed-lock. - - - - read-locked - write-lock - Wait to acquire the lock until all - threads holding read-locks release their locks - AND no requests - for read-locks exist. If other write-lock - requests exist, the lock is granted in accordance - with the intra-class scheduling policy. - TODO: try-lock, timed-lock. - - - - write-locked - write-lock - Wait to acquire the lock until the thread - holding the write-lock releases its lock - AND no requests - for read-locks exist. If other write-lock - requests exist, the lock is granted in accordance - with the intra-class scheduling policy. - TODO: try-lock, timed-lock. - - - - read-locked - promote - TODO - - - write-locked - demote - TODO - - - - -
- -
- WriterPriority - - With WriterPriority scheduling, any pending request - for a write-lock will have priority over a pending request - for a read-lock, irrespective of the current lock state - of the read/write mutex, and irrespective of the relative - order that the pending requests arrive. - - - - - - Current mutex state - Request Type - Action - - - - - - unlocked - read-lock - Grant the read-lock immediately. - - - read-locked - read-lock - Grant the additional read-lock immediately, - IF no outstanding - requests for a write-lock exist; otherwise TODO. - TODO: try-lock, timed-lock. - - - - write-locked - read-lock - Wait to acquire the lock until the - thread holding the write-lock - releases its lock. The read lock will be granted - once no other outstanding write-lock requests - exist. - TODO: try-lock, timed-lock. - - - - unlocked - write-lock - Grant the write-lock immediately. - - - read-locked - write-lock - Wait to acquire the lock until all - threads holding read-locks release their locks. - If other write-lock requests exist, the lock - is granted in accordance with the intra-class - scheduling policy. This request will be granted - before any new read-lock requests are granted. - TODO: try-lock, timed-lock. - - - - write-locked - write-lock - Wait to acquire the lock until the thread - holding the write-lock releases its lock. If - other write-lock requests exist, the lock is - granted in accordance with the intra-class - scheduling policy. This request will be granted - before any new read-lock requests are granted. - TODO: try-lock, timed-lock. - - - - read-locked - promote - TODO - - - write-locked - demote - TODO - - - - -
- -
- AlternatingPriority/ManyReads - - With AlternatingPriority/ManyReads scheduling, reader - or writer starvation is avoided by alternatively granting read - or write access when pending requests exist for both types of - locks. Outstanding read-lock requests are treated as a group - when it is the "readers' turn" - - - - - - Current mutex state - Request Type - Action - - - - - - unlocked - read-lock - Grant the read-lock immediately. - - - read-locked - read-lock - Grant the additional read-lock immediately, - IF no outstanding - requests for a write-lock exist. If outstanding - write-lock requests exist, this lock will not - be granted until at least one of the - write-locks is granted and released. If other - read-lock requests exist, all read-locks will be - granted as a group. - TODO: try-lock, timed-lock. - - - - write-locked - read-lock - Wait to acquire the lock until the thread - holding the write-lock releases its lock. If other - outstanding write-lock requests exist, they will - have to wait until all current read-lock requests - are serviced. - TODO: try-lock, timed-lock. - - - - unlocked - write-lock - Grant the write-lock immediately. - - - read-locked - write-lock - - Wait to acquire the lock until all threads - holding read-locks release their locks. - - If other write-lock requests exist, this - lock will be granted to one of them in accordance - with the intra-class scheduling policy. - - TODO: try-lock, timed-lock. - - - - write-locked - write-lock - Wait to acquire the lock until the thread - holding the write-lock releases its lock. If - other outstanding read-lock requests exist, this - lock will not be granted until all of the - currently waiting read-locks are granted and - released. If other write-lock requests exist, - this lock will be granted in accordance with the - intra-class scheduling policy. - TODO: try-lock, timed-lock. - - - - read-locked - promote - TODO - - - write-locked - demote - TODO - - - - -
- -
- AlternatingPriority/SingleRead - - With AlternatingPriority/SingleRead scheduling, reader - or writer starvation is avoided by alternatively granting read - or write access when pending requests exist for both types of - locks. Outstanding read-lock requests are services one at a - time when it is the "readers' turn" - - - - - - Current mutex state - Request Type - Action - - - - - - unlocked - read-lock - Grant the read-lock immediately. - - - read-locked - read-lock - Grant the additional read-lock immediately, - IF no outstanding - requests for a write-lock exist. If outstanding - write-lock requests exist, this lock will not - be granted until at least one of the write-locks - is granted and released. - TODO: try-lock, timed-lock. - - - - write-locked - read-lock - - Wait to acquire the lock until the thread - holding the write-lock releases its lock. - If other outstanding write-lock requests - exist, exactly one read-lock request will be - granted before the next write-lock is granted. - - TODO: try-lock, timed-lock. - - - - unlocked - write-lock - Grant the write-lock immediately. - - - read-locked - write-lock - - Wait to acquire the lock until all - threads holding read-locks release their - locks. - - If other write-lock requests exist, - this lock will be granted to one of them - in accordance with the intra-class - scheduling policy. - - TODO: try-lock, timed-lock. - - - write-locked - write-lock - Wait to acquire the lock until the - thread holding the write-lock releases its - lock. If other outstanding read-lock requests - exist, this lock can not be granted until - exactly one read-lock request is granted and - released. If other write-lock requests exist, - this lock will be granted in accordance with - the intra-class scheduling policy. - TODO: try-lock, timed-lock. - - - - read-locked - promote - TODO - - - write-locked - demote - TODO - - - - -
-
- -
- Intra-Class Scheduling Policies - - Please refer to - - for a discussion of mutex scheduling policies, which are identical to - read/write mutex intra-class scheduling policies. - - For threads waiting to obtain write-locks, the read/write mutex - supports only the - Unspecified - intra-class scheduling policy. That is, given a set of threads - waiting for write-locks, the order, relative to one another, in - which they receive the write-lock is unspecified. - - For threads waiting to obtain read-locks, the read/write mutex - supports only the - Unspecified - intra-class scheduling policy. That is, given a set of threads - waiting for read-locks, the order, relative to one another, in - which they receive the read-lock is unspecified. -
-
- -
- Mutex Concepts - -
- ReadWriteMutex Concept - - A ReadWriteMutex object has three states: read-locked, - write-locked, and unlocked. ReadWriteMutex object state can - only be determined by a lock object meeting the appropriate lock concept - requirements and constructed for the ReadWriteMutex object. - - A ReadWriteMutex is - NonCopyable. - - - For a ReadWriteMutex type M, - and an object m of that type, - the following expressions must be well-formed - and have the indicated effects. - - - ReadWriteMutex Expressions - - - - - Expression - Effects - - - - - - M m; - Constructs a read/write mutex object m. - Post-condition: m is unlocked. - - - (&m)->~M(); - Precondition: m is unlocked. - Destroys a read/write mutex object m. - - - - M::scoped_read_write_lock - A type meeting the - ScopedReadWriteLock - requirements. - - - M::scoped_read_lock - A type meeting the - ScopedLock - requirements. - - - M::scoped_write_lock - A type meeting the - ScopedLock - requirements. - - - -
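To make the requirements above concrete, here is a minimal editorial sketch (not part of the original reference text) of how a type M meeting the ReadWriteMutex requirements might be used; the protected data is left as a placeholder.

    // Illustrative only: M is any model of ReadWriteMutex, e.g. boost::read_write_mutex.
    template<typename M>
    void read_shared_state(M& m)
    {
        typename M::scoped_read_lock lock(m);   // shared (read) ownership for this block
        // ... read the protected data ...
    }

    template<typename M>
    void write_shared_state(M& m)
    {
        typename M::scoped_write_lock lock(m);  // exclusive (write) ownership for this block
        // ... modify the protected data ...
    }   // locks are released when the lock objects are destroyed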
-
- -
- TryReadWriteMutex Concept - - A TryReadWriteMutex is a refinement of - ReadWriteMutex. - For a TryReadWriteMutex type M - and an object m of that type, - the following expressions must be well-formed - and have the indicated effects. - - - TryReadWriteMutex Expressions - - - - - Expression - Effects - - - - - - M::scoped_try_read_write_lock - A type meeting the - ScopedTryReadWriteLock - requirements. - - - M::scoped_try_read_lock - A type meeting the - ScopedTryLock - requirements. - - - M::scoped_try_write_lock - A type meeting the - ScopedTryLock - requirements. - - - -
-
- -
- TimedReadWriteMutex Concept - - A TimedReadWriteMutex is a refinement of - TryReadWriteMutex. - For a TimedReadWriteMutex type M - and an object m of that type - the following expressions must be well-formed - and have the indicated effects. - - - TimedReadWriteMutex Expressions - - - - - Expression - Effects - - - - - - M::scoped_timed_read_write_lock - A type meeting the - ScopedTimedReadWriteLock - requirements. - - - M::scoped_timed_read_lock - A type meeting the - ScopedTimedLock - requirements. - - - M::scoped_timed_write_lock - A type meeting the - ScopedTimedLock - requirements. - - - -
-
-
- -
- Mutex Models - - &Boost.Thread; currently supplies three models of - ReadWriteMutex - and its refinements. - - - Mutex Models - - - - - Concept - Refines - Models - - - - - - ReadWriteMutex - - boost::read_write_mutex - - - TryReadWriteMutex - ReadWriteMutex - boost::try_read_write_mutex - - - TimedReadWriteMutex - TryReadWriteMutex - boost::timed_read_write_mutex - - - -
-
- -
- Lock Concepts - - A read/write lock object provides a safe means for locking - and unlocking a read/write mutex object (an object whose type is - a model of - ReadWriteMutex - or one of its refinements). In other words they are an - implementation of the Scoped Locking - &cite.SchmidtStalRohnertBuschmann; pattern. The - ScopedReadWriteLock, - ScopedTryReadWriteLock, and - ScopedTimedReadWriteLock - concepts formalize the requirements. - - Read/write lock objects are constructed with a reference to a - read/write mutex object and typically acquire ownership of the - read/write mutex object by setting its state to locked. They also - ensure ownership is relinquished in the destructor. Lock objects - also expose functions to query the lock status and to manually lock - and unlock the read/write mutex object. - - Read/write lock objects are meant to be short lived, expected - to be used at block scope only. The read/write lock objects are not - thread-safe. - Read/write lock objects must maintain state to indicate whether or - not they've been locked and this state is not protected by any - synchronization concepts. For this reason a read/write lock object - should never be shared between multiple threads. - -
- ReadWriteLock Concept - - For a read/write lock type L - and an object lk - and const object clk of that type, - the following expressions must be well-formed - and have the indicated effects. - - - ReadWriteLock Expressions - - - - - Expression - Effects - - - - - - (&lk)->~L(); - if (locked()) unlock(); - - - (&clk)->operator const void*() - Returns type void*, non-zero if the associated read/write - mutex object has been either read-locked or write-locked by - clk, otherwise 0. - - - clk.locked() - Returns a bool, (&clk)->operator - const void*() != 0 - - - clk.state() - Returns an enumeration constant of type read_write_lock_state: - read_write_lock_state::read_locked if the associated read/write mutex object has been - read-locked by clk, read_write_lock_state::write_locked if it - has been write-locked by clk, and read_write_lock_state::unlocked - if has not been locked by clk. - - - clk.read_locked() - Returns a bool, (&clk)->state() == read_write_lock_state::read_locked. - - - clk.write_locked() - Returns a bool, (&clk)->state() == read_write_lock_state::write_locked. - - - lk.read_lock() - - Throws boost::lock_error - if locked(). - - If the associated read/write mutex - object is already read-locked by some other - thread, the effect depends on the - inter-class scheduling policy - of the associated read/write mutex: - either immediately obtains an additional - read-lock on the associated read/write - mutex, or places the current thread in the - Blocked - state until the associated read/write mutex - is unlocked, after which the current thread - is placed in the - Ready - state, eventually to be returned to the - Running - state. - - If the associated read/write mutex - object is already write-locked by some other - thread, places the current thread in the - Blocked - state until the associated read/write mutex - is unlocked, after which the current thread - is placed in the - Ready - state, eventually to be returned to the - Running - state. - - If the associated read/write mutex - object is already locked by the same thread - the behavior is dependent on the - locking strategy - of the associated read/write mutex object. - - - Postcondition: state() == read_write_lock_state::read_locked - - - - lk.write_lock() - - - Throws boost::lock_error - if locked(). - - If the associated read/write mutex - object is already locked by some other - thread, places the current thread in the - Blocked - state until the associated read/write mutex - is unlocked, after which the current thread - is placed in the - Ready - state, eventually to be returned to the - Running - state. - - If the associated read/write mutex - object is already locked by the same thread - the behavior is dependent on the - locking strategy - of the associated read/write mutex object. - - - Postcondition: state() == read_write_lock_state::write_locked - - - - lk.demote() - - Throws boost::lock_error - if state() != read_write_lock_state::write_locked. - - Converts the lock held on the associated read/write mutex - object from a write-lock to a read-lock without releasing - the lock. - - Postcondition: state() == read_write_lock_state::read_locked - - - - lk.promote() - - Throws boost::lock_error - if state() != read_write_lock_state::read_locked - or if the lock cannot be promoted because another lock - on the same mutex is already waiting to be promoted. - - Makes a blocking attempt to convert the lock held on the associated - read/write mutex object from a read-lock to a write-lock without releasing - the lock. 
- - - - lk.unlock() - - Throws boost::lock_error - if !locked(). - - Unlocks the associated read/write mutex. - - Postcondition: !locked() - - - - -
-
- -
- ScopedReadWriteLock Concept - - A ScopedReadWriteLock is a refinement of - ReadWriteLock. - For a ScopedReadWriteLock type L - and an object lk of that type, - and an object m of a type meeting the - ReadWriteMutex requirements, - and an object s of type read_write_lock_state, - the following expressions must be well-formed - and have the indicated effects. - - - ScopedReadWriteLock Expressions - - - - - Expression - Effects - - - - - - L lk(m,s); - Constructs an object lk and associates read/write mutex - object m with it, then: if s == read_write_lock_state::read_locked, calls - read_lock(); if s==read_write_lock_state::write_locked, - calls write_lock(). - - - -
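A short editorial sketch combining the constructor described above with promote() from the ReadWriteLock table. The mutex is assumed to be a boost::read_write_mutex as listed under Mutex Models, the enumeration is assumed to be reachable as boost::read_write_lock_state, and needs_update()/refresh() are hypothetical placeholders for work on the shared data.

    void refresh_if_stale(boost::read_write_mutex& m)
    {
        boost::read_write_mutex::scoped_read_write_lock
            lock(m, boost::read_write_lock_state::read_locked);  // constructed read-locked
        if (needs_update())            // inspect the shared data under the read-lock
        {
            lock.promote();            // upgrade to a write-lock; may throw boost::lock_error
            refresh();                 // modify the shared data under the write-lock
        }
    }                                  // whatever lock is held is released on destruction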
-
- -
- TryReadWriteLock Expressions - - A TryReadWriteLock is a refinement of - ReadWriteLock. - For a TryReadWriteLock type L - and an object lk of that type, - the following expressions must be well-formed - and have the indicated effects. - - - TryReadWriteLock Expressions - - - - - Expression - Effects - - - - - - lk.try_read_lock() - - Throws boost::lock_error - if locked(). - - Makes a non-blocking attempt to read-lock the associated read/write - mutex object, returning true if the attempt is successful, - otherwise false. If the associated read/write mutex object is - already locked by the same thread the behavior is dependent on the - locking - strategy of the associated read/write mutex object. - - - - lk.try_write_lock() - - Throws boost::lock_error - if locked(). - - Makes a non-blocking attempt to write-lock the associated read/write - mutex object, returning true if the attempt is successful, - otherwise false. If the associated read/write mutex object is - already locked by the same thread the behavior is dependent on the - locking - strategy of the associated read/write mutex object. - - - - lk.try_demote() - - Throws boost::lock_error - if state() != read_write_lock_state::write_locked. - - Makes a non-blocking attempt to convert the lock held on the associated - read/write mutex object from a write-lock to a read-lock without releasing - the lock, returning true if the attempt is successful, - otherwise false. - - - - lk.try_promote() - - Throws boost::lock_error - if state() != read_write_lock_state::read_locked. - - Makes a non-blocking attempt to convert the lock held on the associated - read/write mutex object from a read-lock to a write-lock without releasing - the lock, returning true if the attempt is successful, - otherwise false. - - - - -
-
- -
- ScopedTryReadWriteLock Expressions - - A ScopedTryReadWriteLock is a refinement of - TryReadWriteLock. - For a ScopedTryReadWriteLock type L - and an object lk of that type, - and an object m of a type meeting the - TryReadWriteMutex requirements, - and an object s of type read_write_lock_state, - and an object b of type blocking_mode, - the following expressions must be well-formed - and have the indicated effects. - - - ScopedTryReadWriteLock Expressions - - - - - Expression - Effects - - - - - - L lk(m,s,b); - Constructs an object lk and associates read/write mutex - object m with it, then: if s == read_write_lock_state::read_locked, calls - read_lock() if b, otherwise try_read_lock(); - if s==read_write_lock_state::write_locked, calls write_lock() if b, - otherwise try_write_lock. - - - -
-
- -
- TimedReadWriteLock Concept - - A TimedReadWriteLock is a refinement of - TryReadWriteLock. - For a TimedReadWriteLock type L - and an object lk of that type, - and an object t of type boost::xtime, - the following expressions must be well-formed - and have the indicated effects. - - - TimedReadWriteLock Expressions - - - - - Expression - Effects - - - - - - lk.timed_read_lock(t) - - Throws boost::lock_error - if locked(). - - Makes a blocking attempt to read-lock the associated read/write mutex object, - and returns true if successful within the specified time t, - otherwise false. If the associated read/write mutex object is already - locked by the same thread the behavior is dependent on the locking - strategy of the associated read/write mutex object. - - - - lk.timed_write_lock(t) - - Throws boost::lock_error - if locked(). - - Makes a blocking attempt to write-lock the associated read/write mutex object, - and returns true if successful within the specified time t, - otherwise false. If the associated read/write mutex object is already - locked by the same thread the behavior is dependent on the locking - strategy of the associated read/write mutex object. - - - - lk.timed_demote(t) - - Throws boost::lock_error - if state() != read_write_lock_state::write_locked. - - Makes a blocking attempt to convert the lock held on the associated - read/write mutex object from a write-lock to a read-lock without releasing - the lock, returning true if the attempt is successful - in the specified time t, otherwise false. - - - - lk.timed_promote(t) - - Throws boost::lock_error - if state() != read_write_lock_state::read_locked. - - Makes a blocking attempt to convert the lock held on the associated - read/write mutex object from a read-lock to a write-lock without releasing - the lock, returning true if the attempt is successful - in the specified time t, otherwise false. - - - - -
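As an illustration of the timed operations above (an editorial sketch only, using the boost::xtime facilities documented elsewhere in Boost.Thread), lk is assumed to be an unlocked lock object whose type meets the TimedReadWriteLock requirements:

    boost::xtime timeout;
    boost::xtime_get(&timeout, boost::TIME_UTC);
    timeout.sec += 2;                       // give up roughly two seconds from now
    if (lk.timed_write_lock(timeout))
    {
        // write-lock acquired within the timeout; modify the shared data here
    }
    else
    {
        // timed out: the mutex stayed read- or write-locked by other threads
    }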
-
- -
- ScopedTimedReadWriteLock Concept - - A ScopedTimedReadWriteLock is a refinement of - TimedReadWriteLock. - For a ScopedTimedReadWriteLock type L - and an object lk of that type, - and an object m of a type meeting the - TimedReadWriteMutex requirements, - and an object s of type read_write_lock_state, - and an object t of type boost::xtime, - and an object b of type blocking_mode, - the following expressions must be well-formed and have the - indicated effects. - - - ScopedTimedReadWriteLock Expressions - - - - - Expression - Effects - - - - - - L lk(m,s,b); - Constructs an object lk and associates read/write mutex - object m with it, then: if s == read_write_lock_state::read_locked, calls - read_lock() if b, otherwise try_read_lock(); - if s==read_write_lock_state::write_locked, calls write_lock() if b, - otherwise try_write_lock. - - - L lk(m,s,t); - Constructs an object lk and associates read/write mutex - object m with it, then: if s == read_write_lock_state::read_locked, calls - timed_read_lock(t); if s==read_write_lock_state::write_locked, - calls timed_write_lock(t). - - - -
-
-
- -
- Lock Models - - &Boost.Thread; currently supplies six models of - ReadWriteLock - and its refinements. - - - Lock Models - - - - - Concept - Refines - Models - - - - - - ReadWriteLock - - - - - ScopedReadWriteLock - ReadWriteLock - - boost::read_write_mutex::scoped_read_write_lock - boost::try_read_write_mutex::scoped_read_write_lock - boost::timed_read_write_mutex::scoped_read_write_lock - - - - TryReadWriteLock - ReadWriteLock - - - - ScopedTryReadWriteLock - TryReadWriteLock - - boost::try_read_write_mutex::scoped_try_read_write_lock - boost::timed_read_write_mutex::scoped_try_read_write_lock - - - - TimedReadWriteLock - TryReadWriteLock - - - - ScopedTimedReadWriteLock - TimedReadWriteLock - - boost::timed_read_write_mutex::scoped_timed_read_write_lock - - - - -
-
-
-
diff --git a/doc/condition-ref.xml b/doc/condition-ref.xml deleted file mode 100644 index 51b3b1d0..00000000 --- a/doc/condition-ref.xml +++ /dev/null @@ -1,196 +0,0 @@ - - - %thread.entities; -]> - -
- - - - boost::noncopyable - Exposition only - - - - An object of class condition is a - synchronization primitive used to cause a thread to wait until a - particular shared-data condition (or time) is met. - - - - A condition object is always used in - conjunction with a mutex - object (an object whose type is a model of a Mutex or one of its - refinements). The mutex object must be locked prior to waiting on the - condition, which is verified by passing a lock object (an object whose - type is a model of Lock or - one of its refinements) to the condition object's - wait functions. Upon blocking on the condition - object, the thread unlocks the mutex object. When the thread returns - from a call to one of the condition object's wait - functions the mutex object is again locked. The tricky unlock/lock - sequence is performed automatically by the - condition object's wait functions. - The condition type is often used to - implement the Monitor Object and other important patterns (see - &cite.SchmidtStalRohnertBuschmann; and &cite.Hoare74;). Monitors are one - of the most important patterns for creating reliable multithreaded - programs. - See for definitions of thread states - blocked and ready. Note that "waiting" is a synonym for blocked. - - - - Constructs a condition - object. - - - - Destroys *this. - - - - - void - If there is a thread waiting on *this, - change that thread's state to ready. Otherwise there is no - effect. - If more than one thread is waiting on *this, - it is unspecified which is made ready. After returning to a ready - state the notified thread must still acquire the mutex again (which - occurs within the call to one of the condition - object's wait functions.) - - - - void - Change the state of all threads waiting on - *this to ready. If there are no waiting threads, - notify_all() has no effect. - - - - - - - - void - - - ScopedLock& - - - ScopedLock meets the ScopedLock - requirements. - Releases the lock on the mutex object - associated with lock, blocks the current thread of execution - until readied by a call to this->notify_one() - or this->notify_all(), and then reacquires the - lock. - lock_error if - !lock.locked() - - - - - - void - - - ScopedLock& - - - - Pred - - - ScopedLock meets the ScopedLock - requirements and the return from pred() is - convertible to bool. - As if: while (!pred()) - wait(lock) - lock_error if - !lock.locked() - - - - - - bool - - - ScopedLock& - - - - const boost::xtime& - - - ScopedLock meets the ScopedLock - requirements. - Releases the lock on the mutex object - associated with lock, blocks the current thread of execution - until readied by a call to this->notify_one() - or this->notify_all(), or until time xt - is reached, and then reacquires the lock. - false if time xt is reached, - otherwise true. - lock_error if - !lock.locked() - - - - - - bool - - - ScopedLock& - - - - const boost::xtime& - - - - Pred - - - ScopedLock meets the ScopedLock - requirements and the return from pred() is - convertible to bool. - As if: while (!pred()) { if (!timed_wait(lock, - xt)) return false; } return true; - false if xt is reached, - otherwise true. - lock_error if - !lock.locked() - - - - -
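A minimal editorial sketch of the interface documented above, using the pre-rewrite boost::condition and boost::mutex::scoped_lock names; the shared flag and the two functions are illustrative only.

    boost::mutex guard;
    boost::condition cond;
    bool ready = false;

    void consumer()
    {
        boost::mutex::scoped_lock lock(guard);
        while (!ready)
            cond.wait(lock);        // unlocks guard while waiting, re-locks on return
        // ... use the shared state ...
    }

    void producer()
    {
        {
            boost::mutex::scoped_lock lock(guard);
            ready = true;
        }
        cond.notify_one();          // wake one waiting thread
    }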
diff --git a/doc/condition_variables.qbk b/doc/condition_variables.qbk
new file mode 100644
index 00000000..e793a777
--- /dev/null
+++ b/doc/condition_variables.qbk
@@ -0,0 +1,494 @@
+[section:condvar_ref Condition Variables]
+
+[heading Synopsis]
+
+The classes `condition_variable` and `condition_variable_any` provide a
+mechanism for one thread to wait for notification from another thread that a
+particular condition has become true. The general usage pattern is that one
+thread locks a mutex and then calls `wait` on an instance of
+`condition_variable` or `condition_variable_any`. When the thread is woken from
+the wait, then it checks to see if the appropriate condition is now true, and
+continues if so. If the condition is not true, then the thread calls `wait`
+again to resume waiting. In the simplest case, this condition is just a boolean
+variable:
+
+    boost::condition_variable cond;
+    boost::mutex mut;
+    bool data_ready;
+
+    void process_data();
+
+    void wait_for_data_to_process()
+    {
+        boost::unique_lock<boost::mutex> lock(mut);
+        while(!data_ready)
+        {
+            cond.wait(lock);
+        }
+        process_data();
+    }
+
+Notice that the `lock` is passed to `wait`: `wait` will atomically add the
+thread to the set of threads waiting on the condition variable, and unlock the
+mutex. When the thread is woken, the mutex will be locked again before the call
+to `wait` returns. This allows other threads to acquire the mutex in order to
+update the shared data, and ensures that the data associated with the condition
+is correctly synchronized.
+
+In the meantime, another thread sets the condition to `true`, and then calls
+either `notify_one` or `notify_all` on the condition variable to wake one
+waiting thread or all the waiting threads respectively.
+
+    void retrieve_data();
+    void prepare_data();
+
+    void prepare_data_for_processing()
+    {
+        retrieve_data();
+        prepare_data();
+        {
+            boost::lock_guard<boost::mutex> lock(mut);
+            data_ready=true;
+        }
+        cond.notify_one();
+    }
+
+Note that the same mutex is locked before the shared data is updated, but that
+the mutex does not have to be locked across the call to `notify_one`.
+
+This example uses an object of type `condition_variable`, but would work just as
+well with an object of type `condition_variable_any`: `condition_variable_any`
+is more general, and will work with any kind of lock or mutex, whereas
+`condition_variable` requires that the lock passed to `wait` is an instance of
+`boost::unique_lock<boost::mutex>`. This enables `condition_variable` to make
+optimizations in some cases, based on the knowledge of the mutex type;
+`condition_variable_any` typically has a more complex implementation than
+`condition_variable`.
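As an editorial illustration of the last point (not part of the file being added), the same wait loop works unchanged with condition_variable_any and a different mutex type; data_ready and process_data are as in the example above, and the choice of boost::recursive_mutex is only for demonstration:

    boost::condition_variable_any cond_any;
    boost::recursive_mutex rec_mut;

    void wait_for_data_to_process_any()
    {
        boost::unique_lock<boost::recursive_mutex> lock(rec_mut);
        while(!data_ready)
        {
            cond_any.wait(lock);    // any lock type is accepted here
        }
        process_data();
    }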
+
+[section:condition_variable Class `condition_variable`]
+
+    namespace boost
+    {
+        class condition_variable
+        {
+        public:
+            condition_variable();
+            ~condition_variable();
+
+            void wait(boost::unique_lock<boost::mutex>& lock);
+
+            template<typename predicate_type>
+            void wait(boost::unique_lock<boost::mutex>& lock,predicate_type predicate);
+
+            bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time);
+
+            template<typename duration_type>
+            bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time);
+
+            template<typename predicate_type>
+            bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time,predicate_type predicate);
+
+            template<typename duration_type,typename predicate_type>
+            bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time,predicate_type predicate);
+
+            // backwards compatibility
+
+            bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::xtime const& abs_time);
+
+            template<typename predicate_type>
+            bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::xtime const& abs_time,predicate_type predicate);
+        };
+    }
+
+[section:constructor `condition_variable()`]
+
+[variablelist
+
+[[Effects:] [Constructs an object of class `condition_variable`.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+
+[endsect]
+
+[section:destructor `~condition_variable()`]
+
+[variablelist
+
+[[Precondition:] [All threads waiting on `*this` have been notified by a call to
+`notify_one` or `notify_all` (though the respective calls to `wait` or
+`timed_wait` need not have returned).]]
+
+[[Effects:] [Destroys the object.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:notify_one `void notify_one()`]
+
+[variablelist
+
+[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
+to `wait` or `timed_wait`, unblocks one of those threads.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:notify_all `void notify_all()`]
+
+[variablelist
+
+[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
+to `wait` or `timed_wait`, unblocks all of those threads.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:wait `void wait(boost::unique_lock<boost::mutex>& lock)`]
+
+[variablelist
+
+[[Precondition:] [`lock` is locked by the current thread, and either no other
+thread is currently waiting on `*this`, or the execution of the `mutex()` member
+function on the `lock` objects supplied in the calls to `wait` or `timed_wait`
+in all the threads currently waiting on `*this` would return the same value as
+`lock->mutex()` for this call to `wait`.]]
+
+[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
+thread will unblock when notified by a call to `this->notify_one()` or
+`this->notify_all()`, or spuriously. When the thread is unblocked (for whatever
+reason), the lock is reacquired by invoking `lock.lock()` before the call to
+`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
+function exits with an exception.]]
+
+[[Postcondition:] [`lock` is locked by the current thread.]]
+
+[[Throws:] [__thread_resource_error__ if an error
+occurs. __thread_interrupted__ if the wait was interrupted by a call to
+__interrupt__ on the __thread__ object associated with the current thread of execution.]]
+
+]
+
+[endsect]
+
+[section:wait_predicate `template<typename predicate_type> void wait(boost::unique_lock<boost::mutex>& lock, predicate_type pred)`]
+
+[variablelist
+
+[[Effects:] [As-if ``
+while(!pred())
+{
+    wait(lock);
+}
+``]]
+
+]
+
+[endsect]
+
+[section:timed_wait `bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time)`]
+
+[variablelist
+
+[[Precondition:] [`lock` is locked by the current thread, and either no other
+thread is currently waiting on `*this`, or the execution of the `mutex()` member
+function on the `lock` objects supplied in the calls to `wait` or `timed_wait`
+in all the threads currently waiting on `*this` would return the same value as
+`lock->mutex()` for this call to `wait`.]]
+
+[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
+thread will unblock when notified by a call to `this->notify_one()` or
+`this->notify_all()`, when the time as reported by `boost::get_system_time()`
+would be equal to or later than the specified `abs_time`, or spuriously. When
+the thread is unblocked (for whatever reason), the lock is reacquired by
+invoking `lock.lock()` before the call to `wait` returns. The lock is also
+reacquired by invoking `lock.lock()` if the function exits with an exception.]]
+
+[[Returns:] [`false` if the call is returning because the time specified by
+`abs_time` was reached, `true` otherwise.]]
+
+[[Postcondition:] [`lock` is locked by the current thread.]]
+
+[[Throws:] [__thread_resource_error__ if an error
+occurs. __thread_interrupted__ if the wait was interrupted by a call to
+__interrupt__ on the __thread__ object associated with the current thread of execution.]]
+
+]
+
+[endsect]
+
+[section:timed_wait_rel `template<typename duration_type> bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time)`]
+
+[variablelist
+
+[[Precondition:] [`lock` is locked by the current thread, and either no other
+thread is currently waiting on `*this`, or the execution of the `mutex()` member
+function on the `lock` objects supplied in the calls to `wait` or `timed_wait`
+in all the threads currently waiting on `*this` would return the same value as
+`lock->mutex()` for this call to `wait`.]]
+
+[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
+thread will unblock when notified by a call to `this->notify_one()` or
+`this->notify_all()`, after the period of time indicated by the `rel_time`
+argument has elapsed, or spuriously. When the thread is unblocked (for whatever
+reason), the lock is reacquired by invoking `lock.lock()` before the call to
+`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
+function exits with an exception.]]
+
+[[Returns:] [`false` if the call is returning because the time period specified
+by `rel_time` has elapsed, `true` otherwise.]]
+
+[[Postcondition:] [`lock` is locked by the current thread.]]
+
+[[Throws:] [__thread_resource_error__ if an error
+occurs. __thread_interrupted__ if the wait was interrupted by a call to
+__interrupt__ on the __thread__ object associated with the current thread of execution.]]
+
+]
+
+[note The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.]
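For illustration (an editorial sketch, reusing cond, mut and data_ready from the synopsis example earlier in this file), the predicate form recommended by the note can be combined with an absolute timeout like this:

    bool data_is_ready()            // predicate over the shared flag
    {
        return data_ready;
    }

    bool wait_for_data_with_timeout()
    {
        boost::system_time const timeout =
            boost::get_system_time() + boost::posix_time::milliseconds(500);
        boost::unique_lock<boost::mutex> lock(mut);
        // Returns the final value of the predicate: true if the data became
        // ready before the timeout expired, false otherwise.
        return cond.timed_wait(lock, timeout, data_is_ready);
    }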
+
+[endsect]
+
+[section:timed_wait_predicate `template<typename predicate_type> bool timed_wait(boost::unique_lock<boost::mutex>& lock, boost::system_time const& abs_time, predicate_type pred)`]
+
+[variablelist
+
+[[Effects:] [As-if ``
+while(!pred())
+{
+    if(!timed_wait(lock,abs_time))
+    {
+        return pred();
+    }
+}
+return true;
+``]]
+
+]
+
+[endsect]
+
+
+[endsect]
+
+[section:condition_variable_any Class `condition_variable_any`]
+
+    namespace boost
+    {
+        class condition_variable_any
+        {
+        public:
+            condition_variable_any();
+            ~condition_variable_any();
+
+            template<typename lock_type>
+            void wait(lock_type& lock);
+
+            template<typename lock_type,typename predicate_type>
+            void wait(lock_type& lock,predicate_type predicate);
+
+            template<typename lock_type>
+            bool timed_wait(lock_type& lock,boost::system_time const& abs_time);
+
+            template<typename lock_type,typename duration_type>
+            bool timed_wait(lock_type& lock,duration_type const& rel_time);
+
+            template<typename lock_type,typename predicate_type>
+            bool timed_wait(lock_type& lock,boost::system_time const& abs_time,predicate_type predicate);
+
+            template<typename lock_type,typename duration_type,typename predicate_type>
+            bool timed_wait(lock_type& lock,duration_type const& rel_time,predicate_type predicate);
+
+            // backwards compatibility
+
+            template<typename lock_type>
+            bool timed_wait(lock_type& lock,boost::xtime const& abs_time);
+
+            template<typename lock_type,typename predicate_type>
+            bool timed_wait(lock_type& lock,boost::xtime const& abs_time,predicate_type predicate);
+        };
+    }
+
+[section:constructor `condition_variable_any()`]
+
+[variablelist
+
+[[Effects:] [Constructs an object of class `condition_variable_any`.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+
+[endsect]
+
+[section:destructor `~condition_variable_any()`]
+
+[variablelist
+
+[[Precondition:] [All threads waiting on `*this` have been notified by a call to
+`notify_one` or `notify_all` (though the respective calls to `wait` or
+`timed_wait` need not have returned).]]
+
+[[Effects:] [Destroys the object.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:notify_one `void notify_one()`]
+
+[variablelist
+
+[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
+to `wait` or `timed_wait`, unblocks one of those threads.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:notify_all `void notify_all()`]
+
+[variablelist
+
+[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
+to `wait` or `timed_wait`, unblocks all of those threads.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:wait `template<typename lock_type> void wait(lock_type& lock)`]
+
+[variablelist
+
+[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
+thread will unblock when notified by a call to `this->notify_one()` or
+`this->notify_all()`, or spuriously. When the thread is unblocked (for whatever
+reason), the lock is reacquired by invoking `lock.lock()` before the call to
+`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
+function exits with an exception.]]
+
+[[Postcondition:] [`lock` is locked by the current thread.]]
+
+[[Throws:] [__thread_resource_error__ if an error
+occurs. __thread_interrupted__ if the wait was interrupted by a call to
+__interrupt__ on the __thread__ object associated with the current thread of execution.]]
+
+]
+
+[endsect]
+
+[section:wait_predicate `template<typename lock_type,typename predicate_type> void wait(lock_type& lock, predicate_type pred)`]
+
+[variablelist
+
+[[Effects:] [As-if ``
+while(!pred())
+{
+    wait(lock);
+}
+``]]
+
+]
+
+[endsect]
+
+[section:timed_wait `template<typename lock_type> bool timed_wait(lock_type& lock,boost::system_time const& abs_time)`]
+
+[variablelist
+
+[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
+thread will unblock when notified by a call to `this->notify_one()` or
+`this->notify_all()`, when the time as reported by `boost::get_system_time()`
+would be equal to or later than the specified `abs_time`, or spuriously. When
+the thread is unblocked (for whatever reason), the lock is reacquired by
+invoking `lock.lock()` before the call to `wait` returns. The lock is also
+reacquired by invoking `lock.lock()` if the function exits with an exception.]]
+
+[[Returns:] [`false` if the call is returning because the time specified by
+`abs_time` was reached, `true` otherwise.]]
+
+[[Postcondition:] [`lock` is locked by the current thread.]]
+
+[[Throws:] [__thread_resource_error__ if an error
+occurs. __thread_interrupted__ if the wait was interrupted by a call to
+__interrupt__ on the __thread__ object associated with the current thread of execution.]]
+
+]
+
+[endsect]
+
+[section:timed_wait_rel `template<typename lock_type,typename duration_type> bool timed_wait(lock_type& lock,duration_type const& rel_time)`]
+
+[variablelist
+
+[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
+thread will unblock when notified by a call to `this->notify_one()` or
+`this->notify_all()`, after the period of time indicated by the `rel_time`
+argument has elapsed, or spuriously. When the thread is unblocked (for whatever
+reason), the lock is reacquired by invoking `lock.lock()` before the call to
+`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
+function exits with an exception.]]
+
+[[Returns:] [`false` if the call is returning because the time period specified
+by `rel_time` has elapsed, `true` otherwise.]]
+
+[[Postcondition:] [`lock` is locked by the current thread.]]
+
+[[Throws:] [__thread_resource_error__ if an error
+occurs. __thread_interrupted__ if the wait was interrupted by a call to
+__interrupt__ on the __thread__ object associated with the current thread of execution.]]
+
+]
+
+[note The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.]
+
+[endsect]
+
+[section:timed_wait_predicate `template<typename lock_type,typename predicate_type> bool timed_wait(lock_type& lock, boost::system_time const& abs_time, predicate_type pred)`]
+
+[variablelist
+
+[[Effects:] [As-if ``
+while(!pred())
+{
+    if(!timed_wait(lock,abs_time))
+    {
+        return pred();
+    }
+}
+return true;
+``]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:condition Typedef `condition`]
+
+    typedef condition_variable_any condition;
+
+The typedef `condition` is provided for backwards compatibility with previous boost releases.
+
+[endsect]
+
+[endsect]
diff --git a/doc/configuration.xml b/doc/configuration.xml
deleted file mode 100644
index bb42d665..00000000
--- a/doc/configuration.xml
+++ /dev/null
@@ -1,96 +0,0 @@
- - - %thread.entities; -]> - -
- Configuration - &Boost.Thread; uses several configuration macros in <boost/config.hpp>, - as well as configuration macros meant to be supplied by the application. These - macros are documented here. - -
- Library Defined Public Macros - - These macros are defined by &Boost.Thread; but are expected to be used - by application code. - - - - - - Macro - Meaning - - - - - BOOST_HAS_THREADS - - Indicates that threading support is available. This means both that there - is a platform specific implementation for &Boost.Thread; and that - threading support has been enabled in a platform specific manner. For instance, - on the Win32 platform there's an implementation for &Boost.Thread; - but unless the program is compiled against one of the multithreading runtimes - (often determined by the compiler predefining the macro _MT) the BOOST_HAS_THREADS - macro remains undefined. - - - - - -
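For example (an illustrative fragment, not from the original table), application code can key optional threading support off this macro:

    #include <boost/config.hpp>

    #ifdef BOOST_HAS_THREADS
    // Threading support is present and enabled: Boost.Thread may be used here.
    #else
    // Single-threaded build: fall back to a non-threaded code path.
    #endif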
-
- Library Defined Implementation Macros - - These macros are defined by &Boost.Thread; and are implementation details - of interest only to implementors. - - - - - - Macro - Meaning - - - - - BOOST_HAS_WINTHREADS - - Indicates that the platform has the Microsoft Win32 threading libraries, - and that they should be used to implement &Boost.Thread;. - - - - BOOST_HAS_PTHREADS - - Indicates that the platform has the POSIX pthreads libraries, and that - they should be used to implement &Boost.Thread;. - - - - BOOST_HAS_FTIME - - Indicates that the implementation should use GetSystemTimeAsFileTime() - and the FILETIME type to calculate the current time. This is an implementation - detail used by boost::detail::getcurtime(). - - - - BOOST_HAS_GETTTIMEOFDAY - - Indicates that the implementation should use gettimeofday() to calculate - the current time. This is an implementation detail used by boost::detail::getcurtime(). - - - - - -
-
diff --git a/doc/design.xml b/doc/design.xml deleted file mode 100644 index c9b58480..00000000 --- a/doc/design.xml +++ /dev/null @@ -1,159 +0,0 @@ - - - %thread.entities; -]> - -
- Design - With client/server and three-tier architectures becoming common place - in today's world, it's becoming increasingly important for programs to be - able to handle parallel processing. Modern day operating systems usually - provide some support for this through native thread APIs. Unfortunately, - writing portable code that makes use of parallel processing in C++ is made - very difficult by a lack of a standard interface for these native APIs. - Further, these APIs are almost universally C APIs and fail to take - advantage of C++'s strengths, or to address concepts unique to C++, such as - exceptions. - The &Boost.Thread; library is an attempt to define a portable interface - for writing parallel processes in C++. -
- Goals - The &Boost.Thread; library has several goals that should help to set - it apart from other solutions. These goals are listed in order of precedence - with full descriptions below. - - - Portability - - &Boost.Thread; was designed to be highly portable. The goal is - for the interface to be easily implemented on any platform that - supports threads, and possibly even on platforms without native thread - support. - - - - Safety - - &Boost.Thread; was designed to be as safe as possible. Writing - thread-safe - code is very difficult and successful libraries must strive to - insulate the programmer from dangerous constructs as much as - possible. This is accomplished in several ways: - - - C++ language features are used to make correct usage easy - (if possible) and error-prone usage impossible or at least more - difficult. For example, see the Mutex and Lock designs, and note - how they interact. - - - Certain traditional concurrent programming features are - considered so error-prone that they are not provided at all. For - example, see . - - - Dangerous features, or features which may be misused, are - identified as such in the documentation to make users aware of - potential pitfalls. - - - - - - Flexibility - - &Boost.Thread; was designed to be flexible. This goal is often - at odds with safety. When functionality might be - compromised by the desire to keep the interface safe, &Boost.Thread; - has been designed to provide the functionality, but to make it's use - prohibitive for general use. In other words, the interfaces have been - designed such that it's usually obvious when something is unsafe, and - the documentation is written to explain why. - - - - Efficiency - - &Boost.Thread; was designed to be as efficient as - possible. When building a library on top of another library there is - always a danger that the result will be so much slower than the - "native" API that programmers are inclined to ignore the higher level - API. &Boost.Thread; was designed to minimize the chances of this - occurring. The interfaces have been crafted to allow an implementation - the greatest chance of being as efficient as possible. This goal is - often at odds with the goal for safety. Every - effort was made to ensure efficient implementations, but when in - conflict safety has always taken - precedence. - - - -
-
- Iterative Phases - Another goal of &Boost.Thread; was to take a dynamic, iterative - approach in its development. The computing industry is still exploring the - concepts of parallel programming. Most thread libraries supply only simple - primitive concepts for thread synchronization. These concepts are very - simple, but it is very difficult to use them safely or to provide formal - proofs for constructs built on top of them. There has been a lot of research - into other concepts, such as in "Communicating Sequential Processes." - &Boost.Thread; was designed in iterative steps, with each step providing - the building blocks necessary for the next step and giving the researcher - the tools necessary to explore new concepts in a portable manner. - Given the goal of following a dynamic, iterative approach - &Boost.Thread; shall go through several growth cycles. Each phase in its - development shall be roughly documented here. -
-
- Phase 1, Synchronization Primitives - Boost is all about providing high quality libraries with - implementations for many platforms. Unfortunately, there's a big problem - faced by developers wishing to supply such high quality libraries, namely - thread-safety. The C++ standard doesn't address threads at all, but real - world programs often make use of native threading support. A portable - library that doesn't address the issue of thread-safety is therefore not - much help to a programmer who wants to use the library in his multithreaded - application. So there's a very great need for portable primitives that will - allow the library developer to create thread-safe - implementations. This need far out weighs the need for portable methods to - create and manage threads. - Because of this need, the first phase of &Boost.Thread; focuses - solely on providing portable primitive concepts for thread - synchronization. Types provided in this phase include the - boost::mutex, - boost::try_mutex, - boost::timed_mutex, - boost::recursive_mutex, - boost::recursive_try_mutex, - boost::recursive_timed_mutex, and - boost::lock_error. These are considered the "core" - synchronization primitives, though there are others that will be added in - later phases. -
-
- Phase 2, Thread Management and Thread Specific Storage - This phase addresses the creation and management of threads and - provides a mechanism for thread specific storage (data associated with a - thread instance). Thread management is a tricky issue in C++, so this - phase addresses only the basic needs of multithreaded program. Later - phases are likely to add additional functionality in this area. This - phase of &Boost.Thread; adds the boost::thread and - boost::thread_specific_ptr types. With these - additions the &Boost.Thread; library can be considered minimal but - complete. -
-
- The Next Phase - The next phase will address more advanced synchronization concepts, - such as read/write mutexes and barriers. -
-
diff --git a/doc/entities.xml b/doc/entities.xml deleted file mode 100644 index fbfa6d21..00000000 --- a/doc/entities.xml +++ /dev/null @@ -1,31 +0,0 @@ - -Boost"> -Boost.Thread"> -Boost.Build"> -"> -"> -"> -"> -"> -"> -"> -"> -"> -"> diff --git a/doc/exceptions-ref.xml b/doc/exceptions-ref.xml deleted file mode 100644 index 0ae42e8b..00000000 --- a/doc/exceptions-ref.xml +++ /dev/null @@ -1,62 +0,0 @@ - - - %thread.entities; -]> - -
- - - - The lock_error class defines an exception type thrown - to indicate a locking related error has been detected. - - - - Examples of errors indicated by a lock_error exception - include a lock operation which can be determined to result in a - deadlock, or unlock operations attempted by a thread that does - not own the lock. - - - - std::logic_error - - - - Constructs a lock_error object. - - - - - - - The thread_resource_error class - defines an exception type that is thrown by constructors in the - &Boost.Thread; library when thread-related resources cannot be - acquired. - - - - thread_resource_error is used - only when thread-related resources cannot be acquired; memory - allocation failures are indicated by - std::bad_alloc. - - - - std::runtime_error - - - - Constructs a thread_resource_error - object. - - - -
diff --git a/doc/faq.xml b/doc/faq.xml deleted file mode 100644 index 482d6811..00000000 --- a/doc/faq.xml +++ /dev/null @@ -1,235 +0,0 @@ - - - %thread.entities; -]> - -
- Frequently Asked Questions - - - - Are lock objects thread safe? - - - No! Lock objects are not meant to - be shared between threads. They are meant to be short-lived objects - created on automatic storage within a code block. Any other usage is - just likely to lead to errors and won't really be of actual benefit anyway. - Share Mutexes, not - Locks. For more information see the rationale behind the - design for lock objects. - - - - - Why was &Boost.Thread; modeled after (specific library - name)? - - - It wasn't. &Boost.Thread; was designed from scratch. Extensive - design discussions involved numerous people representing a wide range of - experience across many platforms. To ensure portability, the initial - implements were done in parallel using POSIX Threads and the Win32 - threading API. But the &Boost.Thread; design is very much in the spirit - of C++, and thus doesn't model such C based APIs. - - - - - Why wasn't &Boost.Thread; modeled after (specific library - name)? - - - Existing C++ libraries either seemed dangerous (often failing to - take advantage of prior art to reduce errors) or had excessive - dependencies on library components unrelated to threading. Existing C - libraries couldn't meet our C++ requirements, and were also missing - certain features. For instance, the WIN32 thread API lacks condition - variables, even though these are critical for the important Monitor - pattern &cite.SchmidtStalRohnertBuschmann;. - - - - - Why do Mutexes - have noncopyable semantics? - - - To ensure that deadlocks don't occur. The - only logical form of copy would be to use some sort of shallow copy - semantics in which multiple mutex objects could refer to the same mutex - state. This means that if ObjA has a mutex object as part of its state - and ObjB is copy constructed from it, then when ObjB::foo() locks the - mutex it has effectively locked ObjA as well. This behavior can result - in deadlock. Other copy semantics result in similar problems (if you - think you can prove this to be wrong then supply us with an alternative - and we'll reconsider). - - - - - How can you prevent deadlock from occurring when - a thread must lock multiple mutexes? - - - Always lock them in the same order. One easy way of doing this is - to use each mutex's address to determine the order in which they are - locked. A future &Boost.Thread; concept may wrap this pattern up in a - reusable class. - - - - - Don't noncopyable Mutex semantics mean that a - class with a mutex member will be noncopyable as well? - - - No, but what it does mean is that the compiler can't generate a - copy constructor and assignment operator, so they will have to be coded - explicitly. This is a good thing, - however, since the compiler generated operations would not be thread-safe. The following - is a simple example of a class with copyable semantics and internal - synchronization through a mutex member. - -class counter -{ -public: - // Doesn't need synchronization since there can be no references to *this - // until after it's constructed! - explicit counter(int initial_value) - : m_value(initial_value) - { - } - // We only need to synchronize other for the same reason we don't have to - // synchronize on construction! - counter(const counter& other) - { - boost::mutex::scoped_lock scoped_lock(other.m_mutex); - m_value = other.m_value; - } - // For assignment we need to synchronize both objects! 
- const counter& operator=(const counter& other) - { - if (this == &other) - return *this; - boost::mutex::scoped_lock lock1(&m_mutex < &other.m_mutex ? m_mutex : other.m_mutex); - boost::mutex::scoped_lock lock2(&m_mutex > &other.m_mutex ? m_mutex : other.m_mutex); - m_value = other.m_value; - return *this; - } - int value() const - { - boost::mutex::scoped_lock scoped_lock(m_mutex); - return m_value; - } - int increment() - { - boost::mutex::scoped_lock scoped_lock(m_mutex); - return ++m_value; - } -private: - mutable boost::mutex m_mutex; - int m_value; -}; - - - - - - How can you lock a Mutex member in a const member - function, in order to implement the Monitor Pattern? - - - The Monitor Pattern &cite.SchmidtStalRohnertBuschmann; mutex - should simply be declared as mutable. See the example code above. The - internal state of mutex types could have been made mutable, with all - lock calls made via const functions, but this does a poor job of - documenting the actual semantics (and in fact would be incorrect since - the logical state of a locked mutex clearly differs from the logical - state of an unlocked mutex). Declaring a mutex member as mutable clearly - documents the intended semantics. - - - - - Why supply boost::condition variables rather than - event variables? - - - Condition variables result in user code much less prone to race conditions than - event variables. See - for analysis. Also see &cite.Hoare74; and &cite.SchmidtStalRohnertBuschmann;. - - - - - - Why isn't thread cancellation or termination provided? - - - There's a valid need for thread termination, so at some point - &Boost.Thread; probably will include it, but only after we can find a - truly safe (and portable) mechanism for this concept. - - - - - Is it safe for threads to share automatic storage duration (stack) - objects via pointers or references? - - - Only if you can guarantee that the lifetime of the stack object - will not end while other threads might still access the object. Thus the - safest practice is to avoid sharing stack objects, particularly in - designs where threads are created and destroyed dynamically. Restrict - sharing of stack objects to simple designs with very clear and - unchanging function and thread lifetimes. (Suggested by Darryl - Green). - - - - - Why has class semaphore disappeared? - - - Semaphore was removed as too error prone. The same effect can be - achieved with greater safety by the combination of a mutex and a - condition variable. - - - - - Why doesn't the thread's ctor take at least a void* to pass any - information along with the function? All other threading libs support - that and it makes Boost.Threads inferiour. - - - There is no need, because Boost.Threads are superiour! First - thing is that its ctor doesn't take a function but a functor. That - means that you can pass an object with an overloaded operator() and - include additional data as members in that object. Beware though that - this object is copied, use boost::ref to prevent that. Secondly, even - a boost::function<void (void)> can carry parameters, you only have to - use boost::bind() to create it from any function and bind its - parameters. - That is also why Boost.Threads are superiour, because they - don't require you to pass a type-unsafe void pointer. Rather, you can - use the flexible Boost.Functions to create a thread entry out of - anything that can be called. - - - -
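A small editorial sketch of the approach described in the answer above; the worker function and its arguments are placeholders invented for the example:

    #include <boost/thread/thread.hpp>
    #include <boost/bind.hpp>
    #include <string>

    void process(int id, const std::string& name);   // hypothetical worker function

    void start_worker()
    {
        // bind the arguments up front; no type-unsafe void* is involved
        boost::thread worker(boost::bind(&process, 42, std::string("example")));
        worker.join();
    }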
diff --git a/doc/glossary.xml b/doc/glossary.xml deleted file mode 100644 index ad1c8442..00000000 --- a/doc/glossary.xml +++ /dev/null @@ -1,304 +0,0 @@ - - - %thread.entities; -]> - - - Glossary - Definitions are given in terms of the C++ Standard - &cite.ISO98;. References to the standard are in the form [1.2.3/4], which - represents the section number, with the paragraph number following the - "/". - Because the definitions are written in something akin to "standardese", - they can be difficult to understand. The intent isn't to confuse, but rather - to clarify the additional requirements &Boost.Thread; places on a C++ - implementation as defined by the C++ Standard. - - Thread - - Thread is short for "thread of execution". A thread of execution is - an execution environment [1.9/7] within the execution environment of a C++ - program [1.9]. The main() function [3.6.1] of the program is the initial - function of the initial thread. A program in a multithreading environment - always has an initial thread even if the program explicitly creates no - additional threads. - Unless otherwise specified, each thread shares all aspects of its - execution environment with other threads in the program. Shared aspects of - the execution environment include, but are not limited to, the - following: - - Static storage duration (static, extern) objects - [3.7.1]. - Dynamic storage duration (heap) objects [3.7.3]. Thus - each memory allocation will return a unique addresses, regardless of the - thread making the allocation request. - Automatic storage duration (stack) objects [3.7.2] - accessed via pointer or reference from another thread. - Resources provided by the operating system. For example, - files. - The program itself. In other words, each thread is - executing some function of the same program, not a totally different - program. - - Each thread has its own: - - Registers and current execution sequence (program - counter) [1.9/5]. - Automatic storage duration (stack) objects - [3.7.2]. - - - - - Thread-safe - - A program is thread-safe if it has no race conditions, does - not deadlock, and has - no priority - failures. - Note that thread-safety does not necessarily imply efficiency, and - than while some thread-safety violations can be determined statically at - compile time, many thread-safety errors can only only be detected at - runtime. - - - - Thread State - - During the lifetime of a thread, it shall be in one of the following - states: - - Thread States - - - - State - Description - - - - - Ready - Ready to run, but waiting for a processor. - - - Running - Currently executing on a processor. Zero or more threads - may be running at any time, with a maximum equal to the number of - processors. - - - Blocked - Waiting for some resource other than a processor which is - not currently available, or for the completion of calls to library - functions [1.9/6]. The term "waiting" is synonymous with - "blocked" - - - Terminated - Finished execution but not yet detached or joined. - - - -
- Thread state transitions shall occur only as specified: - - Thread States Transitions - - - - From - To - Cause - - - - - [none] - Ready - Thread is created by a call to a library function. - In the case of the initial thread, creation is implicit and - occurs during the startup of the main() function [3.6.1]. - - - Ready - Running - Processor becomes available. - - - Running - Ready - Thread preempted. - - - Running - Blocked - Thread calls a library function which waits for a resource or - for the completion of I/O. - - - Running - Terminated - Thread returns from its initial function, calls a thread - termination library function, or is canceled by some other thread - calling a thread termination library function. - - - Blocked - Ready - The resource being waited for becomes available, or the - blocking library function completes. - - - Terminated - [none] - Thread is detached or joined by some other thread calling the - appropriate library function, or by program termination - [3.6.3]. - - - -
- [Note: if a suspend() function is added to the threading library, - additional transitions to the blocked state will have to be added to the - above table.] -
-
- - Race Condition - - A race condition is what occurs when multiple threads read from and write - to the same memory without proper synchronization, resulting in an incorrect - value being read or written. The result of a race condition may be a bit - pattern which isn't even a valid value for the data type. A race condition - results in undefined behavior [1.3.12]. - Race conditions can be prevented by serializing memory access using - the tools provided by &Boost.Thread;. - - - - Deadlock - - Deadlock is an execution state where for some set of threads, each - thread in the set is blocked waiting for some action by one of the other - threads in the set. Since each is waiting on the others, none will ever - become ready again. - - - - Starvation - - The condition in which a thread is not making sufficient progress in - its work during a given time interval. - - - - Priority Failure - - A priority failure (such as priority inversion or infinite overtaking) - occurs when threads are executed in such a sequence that required work is not - performed in time to be useful. - - - - Undefined Behavior - - The result of certain operations in &Boost.Thread; is undefined; - this means that those operations can invoke almost any behavior when - they are executed. - - An operation whose behavior is undefined can work "correctly" - in some implementations (i.e., do what the programmer thought it - would do), while in other implementations it may exhibit almost - any "incorrect" behavior--such as returning an invalid value, - throwing an exception, generating an access violation, or terminating - the process. - - Executing a statement whose behavior is undefined is a - programming error. - - - - Memory Visibility - - An address [1.7] shall always point to the same memory byte, - regardless of the thread or processor dereferencing the address. - An object [1.8, 1.9] is accessible from multiple threads if it is of - static storage duration (static, extern) [3.7.1], or if a pointer or - reference to it is explicitly or implicitly dereferenced in multiple - threads. - For an object accessible from multiple threads, the value of the - object accessed from one thread may be indeterminate or different from the - value accessed from another thread, except under the conditions specified in - the following table. For the same row of the table, the value of an object - accessible at the indicated sequence point in thread A will be determinate - and the same if accessed at or after the indicated sequence point in thread - B, provided the object is not otherwise modified. In the table, the - "sequence point at a call" is the sequence point after the evaluation of all - function arguments [1.9/17], while the "sequence point after a call" is the - sequence point after the copying of the returned value... [1.9/17]. - - Memory Visibility - - - - Thread A - Thread B - - - - - The sequence point at a call to a library thread-creation - function. - The first sequence point of the initial function in the new - thread created by the Thread A call. - - - The sequence point at a call to a library function which - locks a mutex, directly or by waiting for a condition - variable. - The sequence point after a call to a library function which - unlocks the same mutex. - - - The last sequence point before thread termination. - The sequence point after a call to a library function which - joins the terminated thread. - - - The sequence point at a call to a library function which - signals or broadcasts a condition variable. 
- The sequence point after the call to the library function - which was waiting on that same condition variable or signal. - - - -
- The architecture of the execution environment and the observable - behavior of the abstract machine [1.9] shall be the same on all - processors. - The latitude granted by the C++ standard for an implementation to - alter the definition of observable behavior of the abstract machine to - include additional library I/O functions [1.9/6] is extended to include - threading library functions. - When an exception is thrown and there is no matching exception handler - in the same thread, behavior is undefined. The preferred behavior is the - same as when there is no matching exception handler in a program - [15.3/9]. That is, terminate() is called, and it is implementation-defined - whether or not the stack is unwound. -
-
-
- Acknowledgements - This document was originally written by Beman Dawes, and then much - improved by the incorporation of comments from William Kempf, who now - maintains the contents. - The visibility rules are based on &cite.Butenhof97;. -
-
diff --git a/doc/implementation_notes.xml b/doc/implementation_notes.xml deleted file mode 100644 index c097b070..00000000 --- a/doc/implementation_notes.xml +++ /dev/null @@ -1,38 +0,0 @@ - - - %thread.entities; -]> - -
- Implementation Notes -
- Win32 - - In the current Win32 implementation, creating a boost::thread object - during dll initialization will result in deadlock because the thread - class constructor causes the current thread to wait on the thread that - is being created until it signals that it has finished its initialization, - and, as stated in the - MSDN Library, "DllMain" article, "Remarks" section, - "Because DLL notifications are serialized, entry-point functions should not - attempt to communicate with other threads or processes. Deadlocks may occur as a result." - (Also see "Under the Hood", January 1996 - for a more detailed discussion of this issue). - - - The following non-exhaustive list details some of the situations that - should be avoided until this issue can be addressed: - - Creating a boost::thread object in DllMain() or in any function called by it. - Creating a boost::thread object in the constructor of a global static object or in any function called by one. - Creating a boost::thread object in MFC's CWinApp::InitInstance() function or in any function called by it. - Creating a boost::thread object in the function pointed to by MFC's _pRawDllMain function pointer or in any function called by it. - - -
-
diff --git a/doc/mutex-ref.xml b/doc/mutex-ref.xml deleted file mode 100644 index 5faa2630..00000000 --- a/doc/mutex-ref.xml +++ /dev/null @@ -1,309 +0,0 @@ - - - %thread.entities; -]> - -
- - - - The mutex class is a model of the - Mutex concept. - - - - The mutex class is a model of the - Mutex concept. - It should be used to synchronize access to shared resources using - Unspecified - locking mechanics. - - For classes that model related mutex concepts, see - try_mutex and timed_mutex. - - For Recursive - locking mechanics, see recursive_mutex, - recursive_try_mutex, and recursive_timed_mutex. - - - The mutex class supplies the following typedef, - which models - the specified locking strategy: - - - - - - Lock Name - Lock Concept - - - - - scoped_lock - ScopedLock - - - - - - - The mutex class uses an - Unspecified - locking strategy, so attempts to recursively lock a mutex - object or attempts to unlock one by threads that don't own a lock on it result in - undefined behavior. - This strategy allows implementations to be as efficient as possible - on any given platform. It is, however, recommended that - implementations include debugging support to detect misuse when - NDEBUG is not defined. - - Like all - mutex models - in &Boost.Thread;, mutex leaves the - scheduling policy - as Unspecified. - Programmers should make no assumptions about the order in which - waiting threads acquire a lock. - - - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - Constructs a mutex object. - - - *this is in an unlocked state. - - - - - Destroys a mutex object. - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - - - - - - The try_mutex class is a model of the - TryMutex concept. - - - - The try_mutex class is a model of the - TryMutex concept. - It should be used to synchronize access to shared resources using - Unspecified - locking mechanics. - - For classes that model related mutex concepts, see - mutex and timed_mutex. - - For Recursive - locking mechanics, see recursive_mutex, - recursive_try_mutex, and recursive_timed_mutex. - - - The try_mutex class supplies the following typedefs, - which model - the specified locking strategies: - - - - - - Lock Name - Lock Concept - - - - - scoped_lock - ScopedLock - - - scoped_try_lock - ScopedTryLock - - - - - - - The try_mutex class uses an - Unspecified - locking strategy, so attempts to recursively lock a try_mutex - object or attempts to unlock one by threads that don't own a lock on it result in - undefined behavior. - This strategy allows implementations to be as efficient as possible - on any given platform. It is, however, recommended that - implementations include debugging support to detect misuse when - NDEBUG is not defined. - - Like all - mutex models - in &Boost.Thread;, try_mutex leaves the - scheduling policy - as Unspecified. - Programmers should make no assumptions about the order in which - waiting threads acquire a lock. - - - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - implementation-defined - - - - Constructs a try_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a try_mutex object. - - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - - - - - - The timed_mutex class is a model of the - TimedMutex concept. - - - - The timed_mutex class is a model of the - TimedMutex concept. - It should be used to synchronize access to shared resources using - Unspecified - locking mechanics. 
- - For classes that model related mutex concepts, see - mutex and try_mutex. - - For Recursive - locking mechanics, see recursive_mutex, - recursive_try_mutex, and recursive_timed_mutex. - - - The timed_mutex class supplies the following typedefs, - which model - the specified locking strategies: - - - - - - Lock Name - Lock Concept - - - - - scoped_lock - ScopedLock - - - scoped_try_lock - ScopedTryLock - - - scoped_timed_lock - ScopedTimedLock - - - - - - - The timed_mutex class uses an - Unspecified - locking strategy, so attempts to recursively lock a timed_mutex - object or attempts to unlock one by threads that don't own a lock on it result in - undefined behavior. - This strategy allows implementations to be as efficient as possible - on any given platform. It is, however, recommended that - implementations include debugging support to detect misuse when - NDEBUG is not defined. - - Like all - mutex models - in &Boost.Thread;, timed_mutex leaves the - scheduling policy - as Unspecified. - Programmers should make no assumptions about the order in which - waiting threads acquire a lock. - - - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - Constructs a timed_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a timed_mutex object. - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - - - -
diff --git a/doc/mutex_concepts.qbk b/doc/mutex_concepts.qbk new file mode 100644 index 00000000..f99acd18 --- /dev/null +++ b/doc/mutex_concepts.qbk @@ -0,0 +1,887 @@ +[section:mutex_concepts Mutex Concepts] + +A mutex object facilitates protection against data races and allows thread-safe synchronization of data between threads. A thread +obtains ownership of a mutex object by calling one of the lock functions and relinquishes ownership by calling the corresponding +unlock function. Mutexes may be either recursive or non-recursive, and may grant simultaneous ownership to one or many +threads. __boost_thread__ supplies recursive and non-recursive mutexes with exclusive ownership semantics, along with a shared +ownership (multiple-reader / single-writer) mutex. + +__boost_thread__ supports four basic concepts for lockable objects: __lockable_concept_type__, __timed_lockable_concept_type__, +__shared_lockable_concept_type__ and __upgrade_lockable_concept_type__. Each mutex type implements one or more of these concepts, as +do the various lock types. + +[section:lockable `Lockable` Concept] + +The __lockable_concept__ models exclusive ownership. A type that implements the __lockable_concept__ shall provide the following +member functions: + +* [lock_ref_link `void lock();`] +* [try_lock_ref_link `bool try_lock();`] +* [unlock_ref_link `void unlock();`] + +Lock ownership acquired through a call to __lock_ref__ or __try_lock_ref__ must be released through a call to __unlock_ref__. + +[section:lock `void lock()`] + +[variablelist + +[[Effects:] [The current thread blocks until ownership can be obtained for the current thread.]] + +[[Postcondition:] [The current thread owns `*this`.]] + +[[Throws:] [__thread_resource_error__ if an error occurs.]] + +] +[endsect] + +[section:try_lock `bool try_lock()`] + +[variablelist + +[[Effects:] [Attempt to obtain ownership for the current thread without blocking.]] + +[[Returns:] [`true` if ownership was obtained for the current thread, `false` otherwise.]] + +[[Postcondition:] [If the call returns `true`, the current thread owns the `*this`.]] + +[[Throws:] [__thread_resource_error__ if an error occurs.]] + +] +[endsect] + +[section:unlock `void unlock()`] + +[variablelist + +[[Precondition:] [The current thread owns `*this`.]] + +[[Effects:] [Releases ownership by the current thread.]] + +[[Postcondition:] [The current thread no longer owns `*this`.]] + +[[Throws:] [Nothing]] +] +[endsect] +[endsect] + +[section:timed_lockable `TimedLockable` Concept] + +The __timed_lockable_concept__ refines the __lockable_concept__ to add support for +timeouts when trying to acquire the lock. + +A type that implements the __timed_lockable_concept__ shall meet the requirements +of the __lockable_concept__. In addition, the following member functions must be +provided: + +* [timed_lock_ref_link `bool timed_lock(boost::system_time const& abs_time);`] +* [timed_lock_duration_ref_link `template bool timed_lock(DurationType const& rel_time);`] + +Lock ownership acquired through a call to __timed_lock_ref__ must be released through a call to __unlock_ref__. + +[section:timed_lock `bool timed_lock(boost::system_time const& abs_time)`] + +[variablelist + +[[Effects:] [Attempt to obtain ownership for the current thread. Blocks until ownership can be obtained, or the specified time is +reached. 
If the specified time has already passed, behaves as __try_lock_ref__.]] + +[[Returns:] [`true` if ownership was obtained for the current thread, `false` otherwise.]] + +[[Postcondition:] [If the call returns `true`, the current thread owns `*this`.]] + +[[Throws:] [__thread_resource_error__ if an error occurs.]] +] +[endsect] + +[section:timed_lock_duration `template bool +timed_lock(DurationType const& rel_time)`] + +[variablelist + +[[Effects:] [As-if [timed_lock_ref_link +`timed_lock(boost::get_system_time()+rel_time)`].]] + +] +[endsect] + +[endsect] + +[section:shared_lockable `SharedLockable` Concept] + +The __shared_lockable_concept__ is a refinement of the __timed_lockable_concept__ that +allows for ['shared ownership] as well as ['exclusive ownership]. This is the +standard multiple-reader / single-write model: at most one thread can have +exclusive ownership, and if any thread does have exclusive ownership, no other threads +can have shared or exclusive ownership. Alternatively, many threads may have +shared ownership. + +For a type to implement the __shared_lockable_concept__, as well as meeting the +requirements of the __timed_lockable_concept__, it must also provide the following +member functions: + +* [lock_shared_ref_link `void lock_shared();`] +* [try_lock_shared_ref_link `bool try_lock_shared();`] +* [unlock_shared_ref_link `bool unlock_shared();`] +* [timed_lock_shared_ref_link `bool timed_lock_shared(boost::system_time const& abs_time);`] + +Lock ownership acquired through a call to __lock_shared_ref__, __try_lock_shared_ref__ or __timed_lock_shared_ref__ must be released +through a call to __unlock_shared_ref__. + +[section:lock_shared `void lock_shared()`] + +[variablelist + +[[Effects:] [The current thread blocks until shared ownership can be obtained for the current thread.]] + +[[Postcondition:] [The current thread has shared ownership of `*this`.]] + +[[Throws:] [__thread_resource_error__ if an error occurs.]] + +] +[endsect] + +[section:try_lock_shared `bool try_lock_shared()`] + +[variablelist + +[[Effects:] [Attempt to obtain shared ownership for the current thread without blocking.]] + +[[Returns:] [`true` if shared ownership was obtained for the current thread, `false` otherwise.]] + +[[Postcondition:] [If the call returns `true`, the current thread has shared ownership of `*this`.]] + +[[Throws:] [__thread_resource_error__ if an error occurs.]] + +] +[endsect] + +[section:timed_lock_shared `bool timed_lock_shared(boost::system_time const& abs_time)`] + +[variablelist + +[[Effects:] [Attempt to obtain shared ownership for the current thread. Blocks until shared ownership can be obtained, or the +specified time is reached. 
If the specified time has already passed, behaves as __try_lock_shared_ref__.]]
+
+[[Returns:] [`true` if shared ownership was acquired for the current thread, `false` otherwise.]]
+
+[[Postcondition:] [If the call returns `true`, the current thread has shared
+ownership of `*this`.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+[endsect]
+
+[section:unlock_shared `void unlock_shared()`]
+
+[variablelist
+
+[[Precondition:] [The current thread has shared ownership of `*this`.]]
+
+[[Effects:] [Releases shared ownership of `*this` by the current thread.]]
+
+[[Postcondition:] [The current thread no longer has shared ownership of `*this`.]]
+
+[[Throws:] [Nothing]]
+
+]
+[endsect]
+
+
+[endsect]
+
+[section:upgrade_lockable `UpgradeLockable` Concept]
+
+The __upgrade_lockable_concept__ is a refinement of the __shared_lockable_concept__ that allows for ['upgradable ownership] as well
+as ['shared ownership] and ['exclusive ownership]. This is an extension to the multiple-reader / single-writer model provided by the
+__shared_lockable_concept__: a single thread may have ['upgradable ownership] at the same time as others have ['shared
+ownership]. The thread with ['upgradable ownership] may at any time attempt to upgrade that ownership to ['exclusive ownership]. If
+no other threads have shared ownership, the upgrade is completed immediately, and the thread now has ['exclusive ownership], which
+must be relinquished by a call to __unlock_ref__, just as if it had been acquired by a call to __lock_ref__.
+
+If a thread with ['upgradable ownership] tries to upgrade whilst other threads have ['shared ownership], the attempt will fail and
+the thread will block until ['exclusive ownership] can be acquired.
+
+Ownership can also be ['downgraded] as well as ['upgraded]: exclusive ownership of an implementation of the
+__upgrade_lockable_concept__ can be downgraded to upgradable ownership or shared ownership, and upgradable ownership can be
+downgraded to plain shared ownership.
+
+For a type to implement the __upgrade_lockable_concept__, as well as meeting the
+requirements of the __shared_lockable_concept__, it must also provide the following
+member functions:
+
+* [lock_upgrade_ref_link `void lock_upgrade();`]
+* [unlock_upgrade_ref_link `void unlock_upgrade();`]
+* [unlock_upgrade_and_lock_ref_link `void unlock_upgrade_and_lock();`]
+* [unlock_and_lock_upgrade_ref_link `void unlock_and_lock_upgrade();`]
+* [unlock_upgrade_and_lock_shared_ref_link `void unlock_upgrade_and_lock_shared();`]
+
+Lock ownership acquired through a call to __lock_upgrade_ref__ must be released through a call to __unlock_upgrade_ref__. If the
+ownership type is changed through a call to one of the `unlock_xxx_and_lock_yyy()` functions, ownership must be released through a
+call to the unlock function corresponding to the new level of ownership.
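+
+The following sketch is illustrative only (it is not part of the reference, and `UpgradeMutex` is a hypothetical
+type assumed to implement the __upgrade_lockable_concept__); it shows one permitted sequence of ownership
+transitions and the unlock call that corresponds to the final level of ownership:
+
+    UpgradeMutex m;                 // hypothetical UpgradeLockable type
+
+    m.lock_upgrade();               // acquire upgrade ownership; other threads may
+                                    // still acquire shared ownership
+    // ... read the protected data ...
+    m.unlock_upgrade_and_lock();    // atomically exchange upgrade ownership for
+                                    // exclusive ownership, blocking until no other
+                                    // thread holds shared ownership
+    // ... modify the protected data ...
+    m.unlock();                     // exclusive ownership must now be released with
+                                    // unlock(), as if it had been acquired by lock()
+
+In practice the lock types described below, such as __upgrade_lock__ and __upgrade_to_unique_lock__, can be used to
+manage these transitions in RAII style rather than calling the member functions directly.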
+ + +[section:lock_upgrade `void lock_upgrade()`] + +[variablelist + +[[Effects:] [The current thread blocks until upgrade ownership can be obtained for the current thread.]] + +[[Postcondition:] [The current thread has upgrade ownership of `*this`.]] + +[[Throws:] [__thread_resource_error__ if an error occurs.]] + +] +[endsect] + +[section:unlock_upgrade `void unlock_upgrade()`] + +[variablelist + +[[Precondition:] [The current thread has upgrade ownership of `*this`.]] + +[[Effects:] [Releases upgrade ownership of `*this` by the current thread.]] + +[[Postcondition:] [The current thread no longer has upgrade ownership of `*this`.]] + +[[Throws:] [Nothing]] + +] +[endsect] + +[section:unlock_upgrade_and_lock `void unlock_upgrade_and_lock()`] + +[variablelist + +[[Precondition:] [The current thread has upgrade ownership of `*this`.]] + +[[Effects:] [Atomically releases upgrade ownership of `*this` by the current thread and acquires exclusive ownership of `*this`. If +any other threads have shared ownership, blocks until exclusive ownership can be acquired.]] + +[[Postcondition:] [The current thread has exclusive ownership of `*this`.]] + +[[Throws:] [Nothing]] + +] +[endsect] + +[section:unlock_upgrade_and_lock_shared `void unlock_upgrade_and_lock_shared()`] + +[variablelist + +[[Precondition:] [The current thread has upgrade ownership of `*this`.]] + +[[Effects:] [Atomically releases upgrade ownership of `*this` by the current thread and acquires shared ownership of `*this` without +blocking.]] + +[[Postcondition:] [The current thread has shared ownership of `*this`.]] + +[[Throws:] [Nothing]] + +] +[endsect] + +[section:unlock_and_lock_upgrade `void unlock_and_lock_upgrade()`] + +[variablelist + +[[Precondition:] [The current thread has exclusive ownership of `*this`.]] + +[[Effects:] [Atomically releases exclusive ownership of `*this` by the current thread and acquires upgrade ownership of `*this` +without blocking.]] + +[[Postcondition:] [The current thread has upgrade ownership of `*this`.]] + +[[Throws:] [Nothing]] + +] +[endsect] + +[endsect] + +[endsect] + +[section:locks Lock Types] + +[section:lock_guard Class template `lock_guard`] + + template + class lock_guard + { + public: + explicit lock_guard(Lockable& m_); + lock_guard(Lockable& m_,boost::adopt_lock_t); + + ~lock_guard(); + }; + +__lock_guard__ is very simple: on construction it +acquires ownership of the implementation of the __lockable_concept__ supplied as +the constructor parameter. On destruction, the ownership is released. This +provides simple RAII-style locking of a __lockable_concept_type__ object, to facilitate exception-safe +locking and unlocking. In addition, the [link +thread.synchronization.locks.lock_guard.constructor_adopt `lock_guard(Lockable & +m,boost::adopt_lock_t)` constructor] allows the __lock_guard__ object to +take ownership of a lock already held by the current thread. + +[section:constructor `lock_guard(Lockable & m)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`. Invokes [lock_ref_link `m.lock()`].]] + +[[Throws:] [Any exception thrown by the call to [lock_ref_link `m.lock()`].]] + +] + +[endsect] + +[section:constructor_adopt `lock_guard(Lockable & m,boost::adopt_lock_t)`] + +[variablelist + +[[Precondition:] [The current thread owns a lock on `m` equivalent to one +obtained by a call to [lock_ref_link `m.lock()`].]] + +[[Effects:] [Stores a reference to `m`. 
Takes ownership of the lock state of +`m`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:destructor `~lock_guard()`] + +[variablelist + +[[Effects:] [Invokes [unlock_ref_link `m.unlock()`] on the __lockable_concept_type__ +object passed to the constructor.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[endsect] + +[section:unique_lock Class template `unique_lock`] + + template + class unique_lock + { + public: + explicit unique_lock(Lockable& m_); + unique_lock(Lockable& m_,adopt_lock_t); + unique_lock(Lockable& m_,defer_lock_t); + unique_lock(Lockable& m_,try_to_lock_t); + unique_lock(Lockable& m_,system_time const& target_time); + + ~unique_lock(); + + unique_lock(detail::thread_move_t > other); + unique_lock(detail::thread_move_t > other); + + operator detail::thread_move_t >(); + detail::thread_move_t > move(); + unique_lock& operator=(detail::thread_move_t > other); + unique_lock& operator=(detail::thread_move_t > other); + + void swap(unique_lock& other); + void swap(detail::thread_move_t > other); + + void lock(); + bool try_lock(); + + template + bool timed_lock(TimeDuration const& relative_time); + bool timed_lock(::boost::system_time const& absolute_time); + + void unlock(); + + bool owns_lock() const; + operator ``['unspecified-bool-type]``() const; + bool operator!() const; + + Lockable* mutex() const; + Lockable* release(); + }; + +__unique_lock__ is more complex than __lock_guard__: not only does it provide for RAII-style locking, it also allows for deferring +acquiring the lock until the __lock_ref__ member function is called explicitly, or trying to acquire the lock in a non-blocking +fashion, or with a timeout. Consequently, __unlock_ref__ is only called in the destructor if the lock object has locked the +__lockable_concept_type__ object, or otherwise adopted a lock on the __lockable_concept_type__ object. + +Specializations of __unique_lock__ model the __timed_lockable_concept__ if the supplied __lockable_concept_type__ type itself models +__timed_lockable_concept__ (e.g. `boost::unique_lock`), or the __lockable_concept__ otherwise +(e.g. `boost::unique_lock`). + +An instance of __unique_lock__ is said to ['own] the lock state of a __lockable_concept_type__ `m` if __mutex_func_ref__ returns a +pointer to `m` and __owns_lock_ref__ returns `true`. If an object that ['owns] the lock state of a __lockable_concept_type__ object +is destroyed, then the destructor will invoke [unlock_ref_link `mutex()->unlock()`]. + +The member functions of __unique_lock__ are not thread-safe. In particular, __unique_lock__ is intended to model the ownership of a +__lockable_concept_type__ object by a particular thread, and the member functions that release ownership of the lock state +(including the destructor) must be called by the same thread that acquired ownership of the lock state. + +[section:constructor `unique_lock(Lockable & m)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`. Invokes [lock_ref_link `m.lock()`].]] + +[[Postcondition:] [__owns_lock_ref__ returns `true`. __mutex_func_ref__ returns `&m`.]] + +[[Throws:] [Any exception thrown by the call to [lock_ref_link `m.lock()`].]] + +] + +[endsect] + +[section:constructor_adopt `unique_lock(Lockable & m,boost::adopt_lock_t)`] + +[variablelist + +[[Precondition:] [The current thread owns an exclusive lock on `m`.]] + +[[Effects:] [Stores a reference to `m`. Takes ownership of the lock state of `m`.]] + +[[Postcondition:] [__owns_lock_ref__ returns `true`. 
__mutex_func_ref__ returns `&m`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:constructor_defer `unique_lock(Lockable & m,boost::defer_lock_t)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`.]] + +[[Postcondition:] [__owns_lock_ref__ returns `false`. __mutex_func_ref__ returns `&m`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:constructor_try `unique_lock(Lockable & m,boost::try_to_lock_t)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`. Invokes [try_lock_ref_link +`m.try_lock()`], and takes ownership of the lock state if the call returns +`true`.]] + +[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __try_lock_ref__ +returned `true`, then __owns_lock_ref__ returns `true`, otherwise __owns_lock_ref__ +returns `false`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:constructor_abs_time `unique_lock(Lockable & m,boost::system_time const& abs_time)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`. Invokes [timed_lock_ref_link +`m.timed_lock(abs_time)`], and takes ownership of the lock state if the call +returns `true`.]] + +[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __timed_lock_ref__ +returned `true`, then __owns_lock_ref__ returns `true`, otherwise __owns_lock_ref__ +returns `false`.]] + +[[Throws:] [Any exceptions thrown by the call to [timed_lock_ref_link `m.timed_lock(abs_time)`].]] + +] + +[endsect] + +[section:destructor `~unique_lock()`] + +[variablelist + +[[Effects:] [Invokes __mutex_func_ref__`->`[unlock_ref_link `unlock()`] if +__owns_lock_ref__ returns `true`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:owns_lock `bool owns_lock() const`] + +[variablelist + +[[Returns:] [`true` if the `*this` owns the lock on the __lockable_concept_type__ +object associated with `*this`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:mutex `Lockable* mutex() const`] + +[variablelist + +[[Returns:] [A pointer to the __lockable_concept_type__ object associated with +`*this`, or `NULL` if there is no such object.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:bool_conversion `operator unspecified-bool-type() const`] + +[variablelist + +[[Returns:] [If __owns_lock_ref__ would return `true`, a value that evaluates to +`true` in boolean contexts, otherwise a value that evaluates to `false` in +boolean contexts.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:operator_not `bool operator!() const`] + +[variablelist + +[[Returns:] [`!` __owns_lock_ref__.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:release `Lockable* release()`] + +[variablelist + +[[Effects:] [The association between `*this` and the __lockable_concept_type__ object is removed, without affecting the lock state +of the __lockable_concept_type__ object. If __owns_lock_ref__ would have returned `true`, it is the responsibility of the calling +code to ensure that the __lockable_concept_type__ is correctly unlocked.]] + +[[Returns:] [A pointer to the __lockable_concept_type__ object associated with `*this` at the point of the call, or `NULL` if there +is no such object.]] + +[[Throws:] [Nothing.]] + +[[Postcondition:] [`*this` is no longer associated with any __lockable_concept_type__ object. 
__mutex_func_ref__ returns `NULL` and +__owns_lock_ref__ returns `false`.]] + +] + +[endsect] + +[endsect] + +[section:shared_lock Class template `shared_lock`] + + template + class shared_lock + { + public: + explicit shared_lock(Lockable& m_); + shared_lock(Lockable& m_,adopt_lock_t); + shared_lock(Lockable& m_,defer_lock_t); + shared_lock(Lockable& m_,try_to_lock_t); + shared_lock(Lockable& m_,system_time const& target_time); + shared_lock(detail::thread_move_t > other); + shared_lock(detail::thread_move_t > other); + shared_lock(detail::thread_move_t > other); + + ~shared_lock(); + + operator detail::thread_move_t >(); + detail::thread_move_t > move(); + + shared_lock& operator=(detail::thread_move_t > other); + shared_lock& operator=(detail::thread_move_t > other); + shared_lock& operator=(detail::thread_move_t > other); + void swap(shared_lock& other); + + void lock(); + bool try_lock(); + bool timed_lock(boost::system_time const& target_time); + void unlock(); + + operator ``['unspecified-bool-type]``() const; + bool operator!() const; + bool owns_lock() const; + }; + +Like __unique_lock__, __shared_lock__ models the __lockable_concept__, but rather than acquiring unique ownership of the supplied +__lockable_concept_type__ object, locking an instance of __shared_lock__ acquires shared ownership. + +Like __unique_lock__, not only does it provide for RAII-style locking, it also allows for deferring acquiring the lock until the +__lock_ref__ member function is called explicitly, or trying to acquire the lock in a non-blocking fashion, or with a +timeout. Consequently, __unlock_ref__ is only called in the destructor if the lock object has locked the __lockable_concept_type__ +object, or otherwise adopted a lock on the __lockable_concept_type__ object. + +An instance of __shared_lock__ is said to ['own] the lock state of a __lockable_concept_type__ `m` if __mutex_func_ref__ returns a +pointer to `m` and __owns_lock_ref__ returns `true`. If an object that ['owns] the lock state of a __lockable_concept_type__ object +is destroyed, then the destructor will invoke [unlock_shared_ref_link `mutex()->unlock_shared()`]. + +The member functions of __shared_lock__ are not thread-safe. In particular, __shared_lock__ is intended to model the shared +ownership of a __lockable_concept_type__ object by a particular thread, and the member functions that release ownership of the lock +state (including the destructor) must be called by the same thread that acquired ownership of the lock state. + +[section:constructor `shared_lock(Lockable & m)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`. Invokes [lock_shared_ref_link `m.lock_shared()`].]] + +[[Postcondition:] [__owns_lock_shared_ref__ returns `true`. __mutex_func_ref__ returns `&m`.]] + +[[Throws:] [Any exception thrown by the call to [lock_shared_ref_link `m.lock_shared()`].]] + +] + +[endsect] + +[section:constructor_adopt `shared_lock(Lockable & m,boost::adopt_lock_t)`] + +[variablelist + +[[Precondition:] [The current thread owns an exclusive lock on `m`.]] + +[[Effects:] [Stores a reference to `m`. Takes ownership of the lock state of `m`.]] + +[[Postcondition:] [__owns_lock_shared_ref__ returns `true`. __mutex_func_ref__ returns `&m`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:constructor_defer `shared_lock(Lockable & m,boost::defer_lock_t)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`.]] + +[[Postcondition:] [__owns_lock_shared_ref__ returns `false`. 
__mutex_func_ref__ returns `&m`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:constructor_try `shared_lock(Lockable & m,boost::try_to_lock_t)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`. Invokes [try_lock_shared_ref_link +`m.try_lock_shared()`], and takes ownership of the lock state if the call returns +`true`.]] + +[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __try_lock_shared_ref__ +returned `true`, then __owns_lock_shared_ref__ returns `true`, otherwise __owns_lock_shared_ref__ +returns `false`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:constructor_abs_time `shared_lock(Lockable & m,boost::system_time const& abs_time)`] + +[variablelist + +[[Effects:] [Stores a reference to `m`. Invokes [timed_lock_shared_ref_link +`m.timed_lock(abs_time)`], and takes ownership of the lock state if the call +returns `true`.]] + +[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __timed_lock_shared_ref__ +returned `true`, then __owns_lock_shared_ref__ returns `true`, otherwise __owns_lock_shared_ref__ +returns `false`.]] + +[[Throws:] [Any exceptions thrown by the call to [timed_lock_shared_ref_link `m.timed_lock(abs_time)`].]] + +] + +[endsect] + +[section:destructor `~shared_lock()`] + +[variablelist + +[[Effects:] [Invokes __mutex_func_ref__`->`[unlock_shared_ref_link `unlock_shared()`] if +__owns_lock_shared_ref__ returns `true`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:owns_lock `bool owns_lock() const`] + +[variablelist + +[[Returns:] [`true` if the `*this` owns the lock on the __lockable_concept_type__ +object associated with `*this`.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:mutex `Lockable* mutex() const`] + +[variablelist + +[[Returns:] [A pointer to the __lockable_concept_type__ object associated with +`*this`, or `NULL` if there is no such object.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:bool_conversion `operator unspecified-bool-type() const`] + +[variablelist + +[[Returns:] [If __owns_lock_shared_ref__ would return `true`, a value that evaluates to +`true` in boolean contexts, otherwise a value that evaluates to `false` in +boolean contexts.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:operator_not `bool operator!() const`] + +[variablelist + +[[Returns:] [`!` __owns_lock_shared_ref__.]] + +[[Throws:] [Nothing.]] + +] + +[endsect] + +[section:release `Lockable* release()`] + +[variablelist + +[[Effects:] [The association between `*this` and the __lockable_concept_type__ object is removed, without affecting the lock state +of the __lockable_concept_type__ object. If __owns_lock_shared_ref__ would have returned `true`, it is the responsibility of the calling +code to ensure that the __lockable_concept_type__ is correctly unlocked.]] + +[[Returns:] [A pointer to the __lockable_concept_type__ object associated with `*this` at the point of the call, or `NULL` if there +is no such object.]] + +[[Throws:] [Nothing.]] + +[[Postcondition:] [`*this` is no longer associated with any __lockable_concept_type__ object. 
__mutex_func_ref__ returns `NULL` and +__owns_lock_shared_ref__ returns `false`.]] + +] + +[endsect] + +[endsect] + +[section:upgrade_lock Class template `upgrade_lock`] + + template + class upgrade_lock + { + public: + explicit upgrade_lock(Lockable& m_); + + upgrade_lock(detail::thread_move_t > other); + upgrade_lock(detail::thread_move_t > other); + + ~upgrade_lock(); + + operator detail::thread_move_t >(); + detail::thread_move_t > move(); + + upgrade_lock& operator=(detail::thread_move_t > other); + upgrade_lock& operator=(detail::thread_move_t > other); + + void swap(upgrade_lock& other); + + void lock(); + void unlock(); + + operator ``['unspecified-bool-type]``() const; + bool operator!() const; + bool owns_lock() const; + }; + +Like __unique_lock__, __upgrade_lock__ models the __lockable_concept__, but rather than acquiring unique ownership of the supplied +__lockable_concept_type__ object, locking an instance of __upgrade_lock__ acquires upgrade ownership. + +Like __unique_lock__, not only does it provide for RAII-style locking, it also allows for deferring acquiring the lock until the +__lock_ref__ member function is called explicitly, or trying to acquire the lock in a non-blocking fashion, or with a +timeout. Consequently, __unlock_ref__ is only called in the destructor if the lock object has locked the __lockable_concept_type__ +object, or otherwise adopted a lock on the __lockable_concept_type__ object. + +An instance of __upgrade_lock__ is said to ['own] the lock state of a __lockable_concept_type__ `m` if __mutex_func_ref__ returns a +pointer to `m` and __owns_lock_ref__ returns `true`. If an object that ['owns] the lock state of a __lockable_concept_type__ object +is destroyed, then the destructor will invoke [unlock_upgrade_ref_link `mutex()->unlock_upgrade()`]. + +The member functions of __upgrade_lock__ are not thread-safe. In particular, __upgrade_lock__ is intended to model the upgrade +ownership of a __lockable_concept_type__ object by a particular thread, and the member functions that release ownership of the lock +state (including the destructor) must be called by the same thread that acquired ownership of the lock state. + +[endsect] + +[section:upgrade_to_unique_lock Class template `upgrade_to_unique_lock`] + + template + class upgrade_to_unique_lock + { + public: + explicit upgrade_to_unique_lock(upgrade_lock& m_); + + ~upgrade_to_unique_lock(); + + upgrade_to_unique_lock(detail::thread_move_t > other); + upgrade_to_unique_lock& operator=(detail::thread_move_t > other); + void swap(upgrade_to_unique_lock& other); + + operator ``['unspecified-bool-type]``() const; + bool operator!() const; + bool owns_lock() const; + }; + +__upgrade_to_unique_lock__ allows for a temporary upgrade of an __upgrade_lock__ to exclusive ownership. When constructed with a +reference to an instance of __upgrade_lock__, if that instance has upgrade ownership on some __lockable_concept_type__ object, that +ownership is upgraded to exclusive ownership. When the __upgrade_to_unique_lock__ instance is destroyed, the ownership of the +__lockable_concept_type__ is downgraded back to ['upgrade ownership]. 
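+
+As a usage sketch (illustrative only: `UpgradeMutex` stands for any type implementing the
+__upgrade_lockable_concept__, and `needs_update()` is a hypothetical application predicate):
+
+    UpgradeMutex m;                 // hypothetical UpgradeLockable type
+
+    void read_mostly_operation()
+    {
+        boost::upgrade_lock<UpgradeMutex> read_lock(m);   // acquires upgrade ownership
+        // ... examine the protected data under upgrade ownership ...
+        if(needs_update())                                // hypothetical predicate
+        {
+            boost::upgrade_to_unique_lock<UpgradeMutex> write_lock(read_lock);
+            // ... modify the protected data under exclusive ownership ...
+        }   // exclusive ownership is downgraded back to upgrade ownership here
+    }       // upgrade ownership is released when read_lock is destroyed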
+ +[endsect] + +[endsect] diff --git a/doc/mutexes.qbk b/doc/mutexes.qbk new file mode 100644 index 00000000..020f767f --- /dev/null +++ b/doc/mutexes.qbk @@ -0,0 +1,128 @@ +[section:mutex_types Mutex Types] + +[section:mutex Class `mutex`] + + class mutex: + boost::noncopyable + { + public: + mutex(); + ~mutex(); + + void lock(); + bool try_lock(); + void unlock(); + + typedef unique_lock scoped_lock; + typedef scoped_lock scoped_try_lock; + }; + +__mutex__ implements the __lockable_concept__ to provide an exclusive-ownership mutex. At most one thread can own the lock on a given +instance of __mutex__ at any time. Multiple concurrent calls to __lock_ref__, __try_lock_ref__ and __unlock_ref__ shall be permitted. + +[endsect] + +[section:try_mutex Typedef `try_mutex`] + + typedef mutex try_mutex; + +__try_mutex__ is a `typedef` to __mutex__, provided for backwards compatibility with previous releases of boost. + +[endsect] + +[section:timed_mutex Class `timed_mutex`] + + class timed_mutex: + boost::noncopyable + { + public: + timed_mutex(); + ~timed_mutex(); + + void lock(); + void unlock(); + bool try_lock(); + bool timed_lock(system_time const & abs_time); + + template + bool timed_lock(TimeDuration const & relative_time); + + typedef unique_lock scoped_timed_lock; + typedef scoped_timed_lock scoped_try_lock; + typedef scoped_timed_lock scoped_lock; + }; + +__timed_mutex__ implements the __timed_lockable_concept__ to provide an exclusive-ownership mutex. At most one thread can own the +lock on a given instance of __timed_mutex__ at any time. Multiple concurrent calls to __lock_ref__, __try_lock_ref__, +__timed_lock_ref__, __timed_lock_duration_ref__ and __unlock_ref__ shall be permitted. + +[endsect] + +[section:recursive_mutex Class `recursive_mutex`] + + class recursive_mutex: + boost::noncopyable + { + public: + recursive_mutex(); + ~recursive_mutex(); + + void lock(); + bool try_lock(); + void unlock(); + + typedef unique_lock scoped_lock; + typedef scoped_lock scoped_try_lock; + }; + +__recursive_mutex__ implements the __lockable_concept__ to provide an exclusive-ownership recursive mutex. At most one thread can +own the lock on a given instance of __recursive_mutex__ at any time. Multiple concurrent calls to __lock_ref__, __try_lock_ref__ and +__unlock_ref__ shall be permitted. A thread that already has exclusive ownership of a given __recursive_mutex__ instance can call +__lock_ref__ or __try_lock_ref__ to acquire an additional level of ownership of the mutex. __unlock_ref__ must be called once for +each level of ownership acquired by a single thread before ownership can be acquired by another thread. + +[endsect] + +[section:recursive_try_mutex Typedef `recursive_try_mutex`] + + typedef recursive_mutex recursive_try_mutex; + +__recursive_try_mutex__ is a `typedef` to __recursive_mutex__, provided for backwards compatibility with previous releases of boost. + +[endsect] + +[section:recursive_timed_mutex Class `recursive_timed_mutex`] + + class recursive_timed_mutex: + boost::noncopyable + { + public: + recursive_timed_mutex(); + ~recursive_timed_mutex(); + + void lock(); + bool try_lock(); + void unlock(); + + bool timed_lock(system_time const & abs_time); + + template + bool timed_lock(TimeDuration const & relative_time); + + typedef unique_lock scoped_lock; + typedef scoped_lock scoped_try_lock; + typedef scoped_lock scoped_timed_lock; + }; + +__recursive_timed_mutex__ implements the __timed_lockable_concept__ to provide an exclusive-ownership recursive mutex. 
At most one +thread can own the lock on a given instance of __recursive_timed_mutex__ at any time. Multiple concurrent calls to __lock_ref__, +__try_lock_ref__, __timed_lock_ref__, __timed_lock_duration_ref__ and __unlock_ref__ shall be permitted. A thread that already has +exclusive ownership of a given __recursive_timed_mutex__ instance can call __lock_ref__, __timed_lock_ref__, +__timed_lock_duration_ref__ or __try_lock_ref__ to acquire an additional level of ownership of the mutex. __unlock_ref__ must be +called once for each level of ownership acquired by a single thread before ownership can be acquired by another thread. + +[endsect] + +[include shared_mutex_ref.qbk] + +[endsect] diff --git a/doc/once-ref.xml b/doc/once-ref.xml deleted file mode 100644 index d6520178..00000000 --- a/doc/once-ref.xml +++ /dev/null @@ -1,88 +0,0 @@ - - - %thread.entities; -]> - -
- - The call_once function and - once_flag type (statically initialized to - BOOST_ONCE_INIT) can be used to run a - routine exactly once. This can be used to initialize data in a - thread-safe - manner. - - The implementation-defined macro - BOOST_ONCE_INIT is a constant value used to - initialize once_flag instances to indicate that the - logically associated routine has not been run yet. See - call_once for more details. - - - - - The call_once function and - once_flag type (statically initialized to - BOOST_ONCE_INIT) can be used to run a - routine exactly once. This can be used to initialize data in a - thread-safe - manner. - - The implementation-defined type once_flag - is used as a flag to insure a routine is called only once. - Instances of this type should be statically initialized to - BOOST_ONCE_INIT. See - call_once for more details. - - - implementation-defined - - - - The call_once function and - once_flag type (statically initialized to - BOOST_ONCE_INIT) can be used to run a - routine exactly once. This can be used to initialize data in a - thread-safe - manner. - - - Example usage is as follows: - -//Example usage: -boost::once_flag once = BOOST_ONCE_INIT; - -void init() -{ - //... -} - -void thread_proc() -{ - boost::call_once(once, &init); -} - - - - once_flag& - - - - Function func - - - As if (in an atomic fashion): - if (flag == BOOST_ONCE_INIT) func();. If func() throws an exception, it shall be as if this -thread never invoked call_once - - flag != BOOST_ONCE_INIT unless func() throws an exception. - - - -
diff --git a/doc/once.qbk b/doc/once.qbk new file mode 100644 index 00000000..86b0ef2b --- /dev/null +++ b/doc/once.qbk @@ -0,0 +1,45 @@ +[section:once One-time Initialization] + +`boost::call_once` provides a mechanism for ensuring that an initialization routine is run exactly once without data races or deadlocks. + +[section:once_flag Typedef `once_flag`] + + typedef platform-specific-type once_flag; + #define BOOST_ONCE_INIT platform-specific-initializer + +Objects of type `boost::once_flag` shall be initialized with `BOOST_ONCE_INIT`: + + boost::once_flag f=BOOST_ONCE_INIT; + +[endsect] + +[section:call_once Non-member function `call_once`] + + template + void call_once(once_flag& flag,Callable func); + +[variablelist + +[[Requires:] [`Callable` is `CopyConstructible`. Copying `func` shall have no side effects, and the effect of calling the copy shall +be equivalent to calling the original. ]] + +[[Effects:] [Calls to `call_once` on the same `once_flag` object are serialized. If there has been no prior effective `call_once` on +the same `once_flag` object, the argument `func` (or a copy thereof) is called as-if by invoking `func(args)`, and the invocation of +`call_once` is effective if and only if `func(args)` returns without exception. If an exception is thrown, the exception is +propagated to the caller. If there has been a prior effective `call_once` on the same `once_flag` object, the `call_once` returns +without invoking `func`. ]] + +[[Synchronization:] [The completion of an effective `call_once` invocation on a `once_flag` object, synchronizes with +all subsequent `call_once` invocations on the same `once_flag` object. ]] + +[[Throws:] [`thread_resource_error` when the effects cannot be achieved. or any exception propagated from `func`.]] + +] + + void call_once(void (*func)(),once_flag& flag); + +This second overload is provided for backwards compatibility. The effects of `call_once(func,flag)` shall be the same as those of +`call_once(flag,func)`. + +[endsect] +[endsect] diff --git a/doc/overview.qbk b/doc/overview.qbk new file mode 100644 index 00000000..63edba0f --- /dev/null +++ b/doc/overview.qbk @@ -0,0 +1,15 @@ +[section:overview Overview] + +__boost_thread__ enables the use of multiple threads of execution with shared data in portable C++ code. It provides classes and +functions for managing the threads themselves, along with others for synchronizing data between the threads or providing separate +copies of data specific to individual threads. + +The __boost_thread__ library was originally written and designed by William E. Kempf. This version is a major rewrite designed to +closely follow the proposals presented to the C++ Standards Committee, in particular +[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2497.html N2497], +[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2320.html N2320], +[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2184.html N2184], +[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2139.html N2139], and +[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2094.html N2094] + +[endsect] diff --git a/doc/overview.xml b/doc/overview.xml deleted file mode 100644 index 1e639597..00000000 --- a/doc/overview.xml +++ /dev/null @@ -1,206 +0,0 @@ - - - %thread.entities; -]> - -
- Overview -
- Introduction - &Boost.Thread; allows C++ programs to execute as multiple, - asynchronous, independent threads-of-execution. Each thread has its own - machine state including program instruction counter and registers. Programs - which execute as multiple threads are called multithreaded programs to - distinguish them from traditional single-threaded programs. The glossary gives a more complete description - of the multithreading execution environment. - Multithreading provides several advantages: - - - Programs which would otherwise block waiting for some external - event can continue to respond if the blocking operation is placed in a - separate thread. Multithreading is usually an absolute requirement for - these programs. - - - Well-designed multithreaded programs may execute faster than - single-threaded programs, particularly on multiprocessor hardware. - Note, however, that poorly-designed multithreaded programs are often - slower than single-threaded programs. - - - Some program designs may be easier to formulate using a - multithreaded approach. After all, the real world is - asynchronous! - - -
-
- Dangers -
- General considerations - Beyond the errors which can occur in single-threaded programs, - multithreaded programs are subject to additional errors: - - - Race - conditions - - - Deadlock - (sometimes called "deadly embrace") - - - Priority - failures (priority inversion, infinite overtaking, starvation, - etc.) - - - Every multithreaded program must be designed carefully to avoid these - errors. These aren't rare or exotic failures - they are virtually guaranteed - to occur unless multithreaded code is designed to avoid them. Priority - failures are somewhat less common, but are nonetheless serious. - The &Boost.Thread; design - attempts to minimize these errors, but they will still occur unless the - programmer proactively designs to avoid them. - Please also see - for additional, implementation-specific considerations. -
-
- Testing and debugging considerations - Multithreaded programs are non-deterministic. In other words, the - same program with the same input data may follow different execution - paths each time it is invoked. That can make testing and debugging a - nightmare: - - - Failures are often not repeatable. - - - Probe effect causes debuggers to produce very different results - from non-debug uses. - - - Debuggers require special support to show thread state. - - - Tests on a single processor system may give no indication of - serious errors which would appear on multiprocessor systems, and visa - versa. Thus test cases should include a varying number of - processors. - - - For programs which create a varying number of threads according - to workload, tests which don't span the full range of possibilities - may miss serious errors. - - -
-
- Getting a head start - Although it might appear that multithreaded programs are inherently - unreliable, many reliable multithreaded programs do exist. Multithreading - techniques are known which lead to reliable programs. - Design patterns for reliable multithreaded programs, including the - important monitor pattern, are presented in - Pattern-Oriented Software Architecture Volume 2 - Patterns for - Concurrent and Networked Objects - &cite.SchmidtStalRohnertBuschmann;. Many important multithreading programming - considerations (independent of threading library) are discussed in - Programming with POSIX Threads &cite.Butenhof97;. - Doing some reading before attempting multithreaded designs will - give you a head start toward reliable multithreaded programs. -
-
-
- C++ Standard Library usage in multithreaded programs -
- Runtime libraries - - Warning: Multithreaded programs such as - those using &Boost.Thread; must link to thread-safe versions of - all runtime libraries used by the program, including the runtime library - for the C++ Standard Library. Failure to do so will cause race conditions to occur - when multiple threads simultaneously execute runtime library functions for - new, delete, or other language features which - imply shared state. -
-
- Potentially non-thread-safe functions - Certain C++ Standard Library functions inherited from C are - particular problems because they hold internal state between - calls: - - - rand - - - strtok - - - asctime - - - ctime - - - gmtime - - - localtime - - - It is possible to write thread-safe implementations of these by - using thread specific storage (see - boost::thread_specific_ptr), and several C++ - compiler vendors do just that. The technique is well-know and is explained - in &cite.Butenhof97;. - But at least one vendor (HP-UX) does not provide thread-safe - implementations of the above functions in their otherwise thread-safe - runtime library. Instead they provide replacement functions with - different names and arguments. - Recommendation: For the most - portable, yet thread-safe code, use Boost replacements for the problem - functions. See the Boost Random Number Library - and Boost Tokenizer Library. -
-
-
- Common guarantees for all &Boost.Thread; components -
- Exceptions - &Boost.Thread; destructors never - throw exceptions. Unless otherwise specified, other - &Boost.Thread; functions that do not have - an exception-specification may throw implementation-defined - exceptions. - In particular, &Boost.Thread; - reports failure to allocate storage by throwing an exception of type - std::bad_alloc or a class derived from - std::bad_alloc, failure to obtain thread resources other than - memory by throwing an exception of type - boost::thread_resource_error, and certain lock - related failures by throwing an exception of type - boost::lock_error. - Rationale: Follows the C++ Standard - Library practice of allowing all functions except destructors or other - specified functions to throw exceptions on errors. -
-
- NonCopyable requirement - &Boost.Thread; classes documented as - meeting the NonCopyable requirement disallow copy construction and copy - assignment. For the sake of exposition, the synopsis of such classes show - private derivation from boost::noncopyable. Users - should not depend on this derivation, however, as implementations are free - to meet the NonCopyable requirement in other ways. -
-
-
diff --git a/doc/rationale.xml b/doc/rationale.xml deleted file mode 100644 index aeaf8511..00000000 --- a/doc/rationale.xml +++ /dev/null @@ -1,438 +0,0 @@ - - - %thread.entities; -]> - -
- Rationale - This page explains the rationale behind various design decisions in the - &Boost.Thread; library. Having the rationale documented here should explain - how we arrived at the current design as well as prevent future rehashing of - discussions and thought processes that have already occurred. It can also give - users a lot of insight into the design process required for this - library. -
- Rationale for the Creation of &Boost.Thread; - Processes often have a degree of "potential parallelism" and it can - often be more intuitive to design systems with this in mind. Further, these - parallel processes can result in more responsive programs. The benefits for - multithreaded programming are quite well known to most modern programmers, - yet the C++ language doesn't directly support this concept. - Many platforms support multithreaded programming despite the fact that - the language doesn't support it. They do this through external libraries, - which are, unfortunately, platform specific. POSIX has tried to address this - problem through the standardization of a "pthread" library. However, this is - a standard only on POSIX platforms, so its portability is limited. - Another problem with POSIX and other platform specific thread - libraries is that they are almost universally C based libraries. This leaves - several C++ specific issues unresolved, such as what happens when an - exception is thrown in a thread. Further, there are some C++ concepts, such - as destructors, that can make usage much easier than what's available in a C - library. - What's truly needed is C++ language support for threads. However, the - C++ standards committee needs existing practice or a good proposal as a - starting point for adding this to the standard. - The &Boost.Thread; library was developed to provide a C++ developer - with a portable interface for writing multithreaded programs on numerous - platforms. There's a hope that the library can be the basis for a more - detailed proposal for the C++ standards committee to consider for inclusion - in the next C++ standard. -
-
- Rationale for the Low Level Primitives Supported in &Boost.Thread; - The &Boost.Thread; library supplies a set of low level primitives for - writing multithreaded programs, such as mutexes and condition variables. In - fact, the first release of &Boost.Thread; supports only these low level - primitives. However, computer science research has shown that use of these - primitives is difficult since it's difficult to mathematically prove that a - usage pattern is correct, meaning it doesn't result in race conditions or - deadlocks. There are several algebras (such as CSP, CCS and Join calculus) - that have been developed to help write provably correct parallel - processes. In order to prove the correctness these processes must be coded - using higher level abstractions. So why does &Boost.Thread; support the - lower level concepts? - The reason is simple: the higher level concepts need to be implemented - using at least some of the lower level concepts. So having portable lower - level concepts makes it easier to develop the higher level concepts and will - allow researchers to experiment with various techniques. - Beyond this theoretical application of higher level concepts, however, - the fact remains that many multithreaded programs are written using only the - lower level concepts, so they are useful in and of themselves, even if it's - hard to prove that their usage is correct. Since many users will be familiar - with these lower level concepts but unfamiliar with any of the higher - level concepts, supporting the lower level concepts provides - greater accessibility. -
-
- Rationale for the Lock Design - Programmers who are used to multithreaded programming issues will - quickly note that the &Boost.Thread; design for mutex lock concepts is not - thread-safe (this is - clearly documented as well). At first this may seem like a serious design - flaw. Why have a multithreading primitive that's not thread-safe - itself? - A lock object is not a synchronization primitive. A lock object's sole - responsibility is to ensure that a mutex is both locked and unlocked in a - manner that won't result in the common error of locking a mutex and then - forgetting to unlock it. This means that instances of a lock object are only - going to be created, at least in theory, within block scope and won't be - shared between threads. Only the mutex objects will be created outside of - block scope and/or shared between threads. Though it's possible to create a - lock object outside of block scope and to share it between threads, to do so - would not be a typical usage (in fact, to do so would likely be an - error). Nor are there any cases when such usage would be required. - Lock objects must maintain some state information. In order to allow a - program to determine if a try_lock or timed_lock was successful the lock - object must retain state indicating the success or failure of the call made - in its constructor. If a lock object were to have such state and remain - thread-safe it would need to synchronize access to the state information - which would result in roughly doubling the time of most operations. Worse, - since checking the state can occur only by a call after construction, we'd - have a race condition if the lock object were shared between threads. - So, to avoid the overhead of synchronizing access to the state - information and to avoid the race condition, the &Boost.Thread; library - simply does nothing to make lock objects thread-safe. Instead, sharing a - lock object between threads results in undefined behavior. Since the only - proper usage of lock objects is within block scope this isn't a problem, and - so long as the lock object is properly used there's no danger of any - multithreading issues. -
-
- Rationale for NonCopyable Thread Type - Programmers who are used to C libraries for multithreaded programming - are likely to wonder why &Boost.Thread; uses a noncopyable design for - boost::thread. After all, the C thread types are - copyable, and you often have a need for copying them within user - code. However, careful comparison of C designs to C++ designs shows a flaw - in this logic. - All C types are copyable. It is, in fact, not possible to make a - noncopyable type in C. For this reason types that represent system resources - in C are often designed to behave very similarly to a pointer to dynamic - memory. There's an API for acquiring the resource and an API for releasing - the resource. For memory we have pointers as the type and alloc/free for - the acquisition and release APIs. For files we have FILE* as the type and - fopen/fclose for the acquisition and release APIs. You can freely copy - instances of the types but must manually manage the lifetime of the actual - resource through the acquisition and release APIs. - C++ designs recognize that the acquisition and release APIs are error - prone and try to eliminate possible errors by acquiring the resource in the - constructor and releasing it in the destructor. The best example of such a - design is the std::iostream set of classes which can represent the same - resource as the FILE* type in C. A file is opened in the std::fstream's - constructor and closed in its destructor. However, if an iostream were - copyable it could lead to a file being closed twice, an obvious error, so - the std::iostream types are noncopyable by design. This is the same design - used by boost::thread, which is a simple and easy to understand design - that's consistent with other C++ standard types. - During the design of boost::thread it was pointed out that it would be - possible to allow it to be a copyable type if some form of "reference - management" were used, such as ref-counting or ref-lists, and many argued - for a boost::thread_ref design instead. The reasoning was that copying - "thread" objects was a typical need in the C libraries, and so presumably - would be in the C++ libraries as well. It was also thought that - implementations could provide more efficient reference management than - wrappers (such as boost::shared_ptr) around a noncopyable thread - concept. Analysis of whether or not these arguments would hold true doesn't - appear to bear them out. To illustrate the analysis we'll first provide - pseudo-code illustrating the six typical usage patterns of a thread - object. -
- 1. Use case: Simple creation of a thread. - - void foo() - { - create_thread(&bar); - } - -
-
- 2. Use case: Creation of a thread that's later joined. - - void foo() - { - thread = create_thread(&bar); - join(thread); - } - -
-
- 3. Use case: Simple creation of several threads in a loop. - - void foo() - { - for (int i=0; i<NUM_THREADS; ++i) - create_thread(&bar); - } - -
-
- 4. Use case: Creation of several threads in a loop which are later joined. - - void foo() - { - for (int i=0; i<NUM_THREADS; ++i) - threads[i] = create_thread(&bar); - for (int i=0; i<NUM_THREADS; ++i) - threads[i].join(); - } - -
-
- 5. Use case: Creation of a thread whose ownership is passed to another object/method. - - void foo() - { - thread = create_thread(&bar); - manager.owns(thread); - } - -
-
- 6. Use case: Creation of a thread whose ownership is shared between multiple - objects. - - void foo() - { - thread = create_thread(&bar); - manager1.add(thread); - manager2.add(thread); - } - -
- Of these usage patterns there's only one that requires reference - management (number 6). Hopefully it's fairly obvious that this usage pattern - simply won't occur as often as the other usage patterns. So there really - isn't a "typical need" for a thread concept, though there is some - need. - Since the need isn't typical we must use different criteria for - deciding on either a thread_ref or thread design. Possible criteria include - ease of use and performance. So let's analyze both of these - carefully. - With ease of use we can look at existing experience. The standard C++ - objects that represent a system resource, such as std::iostream, are - noncopyable, so we know that C++ programmers must at least be experienced - with this design. Most C++ developers are also used to smart pointers such - as boost::shared_ptr, so we know they can at least adapt to a thread_ref - concept with little effort. So existing experience isn't going to lead us to - a choice. - The other thing we can look at is how difficult it is to use both - types for the six usage patterns above. If we find it overly difficult to - use a concept for any of the usage patterns there would be a good argument - for choosing the other design. So we'll code all six usage patterns using - both designs. -
- 1. Comparison: simple creation of a thread. - - void foo() - { - thread thrd(&bar); - } - void foo() - { - thread_ref thrd = create_thread(&bar); - } - -
-
- 2. Comparison: creation of a thread that's later joined. - - void foo() - { - thread thrd(&bar); - thrd.join(); - } - void foo() - { - thread_ref thrd = - create_thread(&bar);thrd->join(); - } - -
-
- 3. Comparison: simple creation of several threads in a loop. - - void foo() - { - for (int i=0; i<NUM_THREADS; ++i) - thread thrd(&bar); - } - void foo() - { - for (int i=0; i<NUM_THREADS; ++i) - thread_ref thrd = create_thread(&bar); - } - -
-
- 4. Comparison: creation of several threads in a loop which are later joined. - - void foo() - { - std::auto_ptr<thread> threads[NUM_THREADS]; - for (int i=0; i<NUM_THREADS; ++i) - threads[i] = std::auto_ptr<thread>(new thread(&bar)); - for (int i= 0; i<NUM_THREADS; - ++i)threads[i]->join(); - } - void foo() - { - thread_ref threads[NUM_THREADS]; - for (int i=0; i<NUM_THREADS; ++i) - threads[i] = create_thread(&bar); - for (int i= 0; i<NUM_THREADS; - ++i)threads[i]->join(); - } - -
-
- 5. Comparison: creation of a thread whose ownership is passed to another object/method. - - void foo() - { - thread thrd* = new thread(&bar); - manager.owns(thread); - } - void foo() - { - thread_ref thrd = create_thread(&bar); - manager.owns(thrd); - } - -
-
- 6. Comparison: creation of a thread whose ownership is shared - between multiple objects. - - void foo() - { - boost::shared_ptr<thread> thrd(new thread(&bar)); - manager1.add(thrd); - manager2.add(thrd); - } - void foo() - { - thread_ref thrd = create_thread(&bar); - manager1.add(thrd); - manager2.add(thrd); - } - -
- This shows the usage patterns being nearly identical in complexity for - both designs. The only actual added complexity occurs because of the use of - operator new in - (4), - (5), and - (6); - and the use of std::auto_ptr and boost::shared_ptr in - (4) and - (6) - respectively. However, that's not really - much added complexity, and C++ programmers are used to using these idioms - anyway. Some may dislike the presence of operator new in user code, but - this can be eliminated by proper design of higher level concepts, such as - the boost::thread_group class that simplifies example - (4) - down to: - - void foo() - { - thread_group threads; - for (int i=0; i<NUM_THREADS; ++i) - threads.create_thread(&bar); - threads.join_all(); - } - - So ease of use is really a wash and not much help in picking a - design. - So what about performance? Looking at the above code examples, - we can analyze the theoretical impact to performance that both designs - have. For (1) - we can see that platforms that don't have a ref-counted native - thread type (POSIX, for instance) will be impacted by a thread_ref - design. Even if the native thread type is ref-counted there may be an impact - if more state information has to be maintained for concepts foreign to the - native API, such as clean up stacks for Win32 implementations. - For (2) - and (3) - the performance impact will be identical to - (1). - For (4) - things get a little more interesting and we find that theoretically at least - the thread_ref may perform faster since the thread design requires dynamic - memory allocation/deallocation. However, in practice there may be dynamic - allocation for the thread_ref design as well, it will just be hidden from - the user. As long as the implementation has to do dynamic allocations the - thread_ref loses again because of the reference management. For - (5) we see - the same impact as we do for - (4). - For (6) - we still have a possible impact to - the thread design because of dynamic allocation but thread_ref no longer - suffers because of its reference management, and in fact, theoretically at - least, the thread_ref may do a better job of managing the references. All of - this indicates that thread wins for - (1), - (2) and - (3); with - (4) - and (5) the - winner depending on the implementation and the platform but with the thread design - probably having a better chance; and with - (6) - it will again depend on the - implementation and platform but this time we favor thread_ref - slightly. Given all of this it's a narrow margin, but the thread design - prevails. - Given this analysis, and the fact that noncopyable objects for system - resources are the normal designs that C++ programmers are used to dealing - with, the &Boost.Thread; library has gone with a noncopyable design. -
-
- Rationale for not providing <emphasis>Event Variables</emphasis> - Event variables are simply far too - error-prone. boost::condition variables are a much safer - alternative. [Note that Graphical User Interface events are - a different concept, and are not what is being discussed here.] - Event variables were one of the first synchronization primitives. They - are still used today, for example, in the native Windows multithreading - API. Yet both respected computer science researchers and experienced - multithreading practitioners believe event variables are so inherently - error-prone that they should never be used, and thus should not be part of a - multithreading library. - Per Brinch Hansen &cite.Hansen73; analyzed event variables in some - detail, pointing out [emphasis his] that "event operations force - the programmer to be aware of the relative speeds of the sending and - receiving processes". His summary: -
- We must therefore conclude that event variables of the previous type - are impractical for system design. The effect of an interaction - between two processes must be independent of the speed at which it is - carried out. -
- Experienced programmers using the Windows platform today report that - event variables are a continuing source of errors, even after previous bad - experiences caused them to be very careful in their use of event - variables. Overt problems can be avoided, for example, by teaming the event - variable with a mutex, but that may just convert a race condition into another - problem, such as excessive resource use. One of the most distressing aspects - of the experience reports is the claim that many defects are latent. That - is, the programs appear to work correctly, but contain hidden timing - dependencies which will cause them to fail when environmental factors or - usage patterns change, altering relative thread timings. - The decision to exclude event variables from &Boost.Thread; has been - surprising to some Windows programmers. They have written programs which - work using event variables, and wonder what the problem is. It seems similar - to the "goto considered harmful" controversy of 30 years ago. It isn't that - events, like gotos, can't be made to work, but rather that virtually all - programs using alternatives will be easier to write, debug, read, maintain, - and will be less likely to contain latent defects. - [Rationale provided by Beman Dawes] -
-
diff --git a/doc/read_write_mutex-ref.xml b/doc/read_write_mutex-ref.xml deleted file mode 100644 index 747960fb..00000000 --- a/doc/read_write_mutex-ref.xml +++ /dev/null @@ -1,492 +0,0 @@ - - - %thread.entities; -]> - -
- - - - - - - - - - Specifies the - inter-class sheduling policy - to use when a set of threads try to obtain different types of - locks simultaneously. - - - - The only clock type supported by &Boost.Thread; is - TIME_UTC. The epoch for TIME_UTC - is 1970-01-01 00:00:00. - - - - - - - The read_write_mutex class is a model of the - ReadWriteMutex concept. - Unfortunately it turned out that the current implementation of Read/Write Mutex has - some serious problems. So it was decided not to put this implementation into - release grade code. Also discussions on the mailing list led to the - conclusion that the current concepts need to be rethought. In particular - the schedulings - Inter-Class Scheduling Policies are deemed unnecessary. - There seems to be common belief that a fair scheme suffices. - The following documentation has been retained however, to give - readers of this document the opportunity to study the original design. - - - - - The read_write_mutex class is a model of the - ReadWriteMutex concept. - It should be used to synchronize access to shared resources using - Unspecified - locking mechanics. - - For classes that model related mutex concepts, see - try_read_write_mutex and timed_read_write_mutex. - - The read_write_mutex class supplies the following typedefs, - which model - the specified locking strategies: - - - - - - Lock Name - Lock Concept - - - - - scoped_read_write_lock - ScopedReadWriteLock - - - scoped_read_lock - ScopedLock - - - scoped_write_lock - ScopedLock - - - - - - - The read_write_mutex class uses an - Unspecified - locking strategy, so attempts to recursively lock a read_write_mutex - object or attempts to unlock one by threads that don't own a lock on it result in - undefined behavior. - This strategy allows implementations to be as efficient as possible - on any given platform. It is, however, recommended that - implementations include debugging support to detect misuse when - NDEBUG is not defined. - - Like all - read/write mutex models - in &Boost.Thread;, read_write_mutex has two types of - scheduling policies, an - inter-class sheduling policy - between threads trying to obtain different types of locks and an - intra-class sheduling policy - between threads trying to obtain the same type of lock. - The read_write_mutex class allows the - programmer to choose what - inter-class sheduling policy - will be used; however, like all read/write mutex models, - read_write_mutex leaves the - intra-class sheduling policy as - Unspecified. - - - Self-deadlock is virtually guaranteed if a thread tries to - lock the same read_write_mutex multiple times - unless all locks are read-locks (but see below) - - - - boost::noncopyable - Exposition only - - - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - - boost::read_write_scheduling_policy - - - Constructs a read_write_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a read_write_mutex object. - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - - - - - - The try_read_write_mutex class is a model of the - TryReadWriteMutex concept. - Unfortunately it turned out that the current implementation of Read/Write Mutex has - some serious problems. So it was decided not to put this implementation into - release grade code. 
Also discussions on the mailing list led to the - conclusion that the current concepts need to be rethought. In particular - the schedulings - Inter-Class Scheduling Policies are deemed unnecessary. - There seems to be common belief that a fair scheme suffices. - The following documentation has been retained however, to give - readers of this document the opportunity to study the original design. - - - - - The try_read_write_mutex class is a model of the - TryReadWriteMutex concept. - It should be used to synchronize access to shared resources using - Unspecified - locking mechanics. - - For classes that model related mutex concepts, see - read_write_mutex and timed_read_write_mutex. - - The try_read_write_mutex class supplies the following typedefs, - which model - the specified locking strategies: - - - - - - Lock Name - Lock Concept - - - - - scoped_read_write_lock - ScopedReadWriteLock - - - scoped_try_read_write_lock - ScopedTryReadWriteLock - - - scoped_read_lock - ScopedLock - - - scoped_try_read_lock - ScopedTryLock - - - scoped_write_lock - ScopedLock - - - scoped_try_write_lock - ScopedTryLock - - - - - - - The try_read_write_mutex class uses an - Unspecified - locking strategy, so attempts to recursively lock a try_read_write_mutex - object or attempts to unlock one by threads that don't own a lock on it result in - undefined behavior. - This strategy allows implementations to be as efficient as possible - on any given platform. It is, however, recommended that - implementations include debugging support to detect misuse when - NDEBUG is not defined. - - Like all - read/write mutex models - in &Boost.Thread;, try_read_write_mutex has two types of - scheduling policies, an - inter-class sheduling policy - between threads trying to obtain different types of locks and an - intra-class sheduling policy - between threads trying to obtain the same type of lock. - The try_read_write_mutex class allows the - programmer to choose what - inter-class sheduling policy - will be used; however, like all read/write mutex models, - try_read_write_mutex leaves the - intra-class sheduling policy as - Unspecified. - - - Self-deadlock is virtually guaranteed if a thread tries to - lock the same try_read_write_mutex multiple times - unless all locks are read-locks (but see below) - - - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - - boost::read_write_scheduling_policy - - - Constructs a try_read_write_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a try_read_write_mutex object. - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - - - - - - The timed_read_write_mutex class is a model of the - TimedReadWriteMutex concept. - Unfortunately it turned out that the current implementation of Read/Write Mutex has - some serious problems. So it was decided not to put this implementation into - release grade code. Also discussions on the mailing list led to the - conclusion that the current concepts need to be rethought. In particular - the schedulings - Inter-Class Scheduling Policies are deemed unnecessary. - There seems to be common belief that a fair scheme suffices. 
- The following documentation has been retained however, to give - readers of this document the opportunity to study the original design. - - - - - The timed_read_write_mutex class is a model of the - TimedReadWriteMutex concept. - It should be used to synchronize access to shared resources using - Unspecified - locking mechanics. - - For classes that model related mutex concepts, see - read_write_mutex and try_read_write_mutex. - - The timed_read_write_mutex class supplies the following typedefs, - which model - the specified locking strategies: - - - - - - Lock Name - Lock Concept - - - - - scoped_read_write_lock - ScopedReadWriteLock - - - scoped_try_read_write_lock - ScopedTryReadWriteLock - - - scoped_timed_read_write_lock - ScopedTimedReadWriteLock - - - scoped_read_lock - ScopedLock - - - scoped_try_read_lock - ScopedTryLock - - - scoped_timed_read_lock - ScopedTimedLock - - - scoped_write_lock - ScopedLock - - - scoped_try_write_lock - ScopedTryLock - - - scoped_timed_write_lock - ScopedTimedLock - - - - - - - The timed_read_write_mutex class uses an - Unspecified - locking strategy, so attempts to recursively lock a timed_read_write_mutex - object or attempts to unlock one by threads that don't own a lock on it result in - undefined behavior. - This strategy allows implementations to be as efficient as possible - on any given platform. It is, however, recommended that - implementations include debugging support to detect misuse when - NDEBUG is not defined. - - Like all - read/write mutex models - in &Boost.Thread;, timed_read_write_mutex has two types of - scheduling policies, an - inter-class sheduling policy - between threads trying to obtain different types of locks and an - intra-class sheduling policy - between threads trying to obtain the same type of lock. - The timed_read_write_mutex class allows the - programmer to choose what - inter-class sheduling policy - will be used; however, like all read/write mutex models, - timed_read_write_mutex leaves the - intra-class sheduling policy as - Unspecified. - - - Self-deadlock is virtually guaranteed if a thread tries to - lock the same timed_read_write_mutex multiple times - unless all locks are read-locks (but see below) - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - - boost::read_write_scheduling_policy - - - Constructs a timed_read_write_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a timed_read_write_mutex object. - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - - - -
diff --git a/doc/recursive_mutex-ref.xml b/doc/recursive_mutex-ref.xml deleted file mode 100644 index de95494a..00000000 --- a/doc/recursive_mutex-ref.xml +++ /dev/null @@ -1,306 +0,0 @@ - - - %thread.entities; -]> - -
- - - - The recursive_mutex class is a model of the - Mutex concept. - - - - The recursive_mutex class is a model of the - Mutex concept. - It should be used to synchronize access to shared resources using - Recursive - locking mechanics. - - For classes that model related mutex concepts, see - recursive_try_mutex and recursive_timed_mutex. - - For Unspecified - locking mechanics, see mutex, - try_mutex, and timed_mutex. - - - The recursive_mutex class supplies the following typedef, - which models the specified locking strategy: - - - Supported Lock Types - - - - Lock Name - Lock Concept - - - - - scoped_lock - ScopedLock - - - -
-
- - The recursive_mutex class uses a - Recursive - locking strategy, so attempts to recursively lock a - recursive_mutex object - succeed and an internal "lock count" is maintained. - Attempts to unlock a recursive_mutex object - by threads that don't own a lock on it result in - undefined behavior. - - Like all - mutex models - in &Boost.Thread;, recursive_mutex leaves the - scheduling policy - as Unspecified. - Programmers should make no assumptions about the order in which - waiting threads acquire a lock. -
- - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - Constructs a recursive_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a recursive_mutex object. - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - -
- - - - The recursive_try_mutex class is a model of the - TryMutex concept. - - - - The recursive_try_mutex class is a model of the - TryMutex concept. - It should be used to synchronize access to shared resources using - Recursive - locking mechanics. - - For classes that model related mutex concepts, see - recursive_mutex and recursive_timed_mutex. - - For Unspecified - locking mechanics, see mutex, - try_mutex, and timed_mutex. - - - The recursive_try_mutex class supplies the following typedefs, - which model the specified locking strategies: - - - Supported Lock Types - - - - Lock Name - Lock Concept - - - - - scoped_lock - ScopedLock - - - scoped_try_lock - ScopedTryLock - - - -
-
- - The recursive_try_mutex class uses a - Recursive - locking strategy, so attempts to recursively lock a - recursive_try_mutex object - succeed and an internal "lock count" is maintained. - Attempts to unlock a recursive_mutex object - by threads that don't own a lock on it result in - undefined behavior. - - Like all - mutex models - in &Boost.Thread;, recursive_try_mutex leaves the - scheduling policy - as Unspecified. - Programmers should make no assumptions about the order in which - waiting threads acquire a lock. -
- - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - implementation-defined - - - - Constructs a recursive_try_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a recursive_try_mutex object. - - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - -
- - - - The recursive_timed_mutex class is a model of the - TimedMutex concept. - - - - The recursive_timed_mutex class is a model of the - TimedMutex concept. - It should be used to synchronize access to shared resources using - Recursive - locking mechanics. - - For classes that model related mutex concepts, see - recursive_mutex and recursive_try_mutex. - - For Unspecified - locking mechanics, see mutex, - try_mutex, and timed_mutex. - - - The recursive_timed_mutex class supplies the following typedefs, - which model the specified locking strategies: - - - Supported Lock Types - - - - Lock Name - Lock Concept - - - - - scoped_lock - ScopedLock - - - scoped_try_lock - ScopedTryLock - - - scoped_timed_lock - ScopedTimedLock - - - -
-
- - The recursive_timed_mutex class uses a - Recursive - locking strategy, so attempts to recursively lock a - recursive_timed_mutex object - succeed and an internal "lock count" is maintained. - Attempts to unlock a recursive_mutex object - by threads that don't own a lock on it result in - undefined behavior. - - Like all - mutex models - in &Boost.Thread;, recursive_timed_mutex leaves the - scheduling policy - as Unspecified. - Programmers should make no assumptions about the order in which - waiting threads acquire a lock. -
- - - boost::noncopyable - Exposition only - - - - implementation-defined - - - - implementation-defined - - - - implementation-defined - - - - Constructs a recursive_timed_mutex object. - - - *this is in an unlocked state. - - - - - Destroys a recursive_timed_mutex object. - - *this is in an unlocked state. - - Danger: Destruction of a - locked mutex is a serious programming error resulting in undefined - behavior such as a program crash. - -
-
-
diff --git a/doc/reference.xml b/doc/reference.xml deleted file mode 100644 index d6ea4d75..00000000 --- a/doc/reference.xml +++ /dev/null @@ -1,30 +0,0 @@ - - - %thread.entities; -]> - - - - - - - - - - - - - - diff --git a/doc/release_notes.xml b/doc/release_notes.xml deleted file mode 100644 index 792f9fc4..00000000 --- a/doc/release_notes.xml +++ /dev/null @@ -1,204 +0,0 @@ - - - %thread.entities; -]> - -
- Release Notes -
- Boost 1.34.0 - -
- New team of maintainers - - - Since the original author William E. Kempf no longer is available to - maintain the &Boost.Thread; library, a new team has been formed - in an attempt to continue the work on &Boost.Thread;. - Fortunately William E. Kempf has given - - permission - to use his work under the boost license. - - - The team currently consists of - - - Anthony Williams, for the Win32 platform, - - - Roland Schwarz, for the linux platform, and various "housekeeping" tasks. - - - Volunteers for other platforms are welcome! - - - As the &Boost.Thread; was kind of orphaned over the last release, this release - attempts to fix the known bugs. Upcoming releases will bring in new things. - -
- -
- read_write_mutex still broken - - - - It has been decided not to release the Read/Write Mutex, since the current - implementation suffers from a serious bug. The documentation of the concepts - has been included though, giving the interested reader an opportunity to study the - original concepts. Please refer to the following links if you are interested - which problems led to the decision to held back this mutex type.The issue - has been discovered before 1.33 was released and the code has - been omitted from that release. A reworked mutex is expected to appear in 1.35. - Also see: - - read_write_mutex bug - and - - read_write_mutex fundamentally broken in 1.33 - - -
- -
-
- Boost 1.32.0 - -
- Documentation converted to BoostBook - - The documentation was converted to BoostBook format, - and a number of errors and inconsistencies were - fixed in the process. - Since this was a fairly large task, there are likely to be - more errors and inconsistencies remaining. If you find any, - please report them! -
- - - -
- Barrier functionality added - - A new class, boost::barrier, was added. -
- -
- Read/write mutex functionality added - - New classes, - boost::read_write_mutex, - boost::try_read_write_mutex, and - boost::timed_read_write_mutex - were added. - - Since the read/write mutex and related classes are new, - both interface and implementation are liable to change - in future releases of &Boost.Thread;. - The lock concepts and lock promotion in particular are - still under discussion and very likely to change. - -
- -
- Thread-specific pointer functionality changed - - The boost::thread_specific_ptr - constructor now takes an optional pointer to a cleanup function that - is called to release the thread-specific data that is being pointed - to by boost::thread_specific_ptr objects. - - Fixed: the number of available thread-specific storage "slots" - is too small on some platforms. - - Fixed: thread_specific_ptr::reset() - doesn't check error returned by tss::set() - (the tss::set() function now throws - if it fails instead of returning an error code). - - Fixed: calling - boost::thread_specific_ptr::reset() or - boost::thread_specific_ptr::release() - causes double-delete: once when - boost::thread_specific_ptr::reset() or - boost::thread_specific_ptr::release() - is called and once when - boost::thread_specific_ptr::~thread_specific_ptr() - is called. -
- -
- Mutex implementation changed for Win32 - - On Win32, boost::mutex, - boost::try_mutex, boost::recursive_mutex, - and boost::recursive_try_mutex now use a Win32 critical section - whenever possible; otherwise they use a Win32 mutex. As before, - boost::timed_mutex and - boost::recursive_timed_mutex use a Win32 mutex. -
- -
- Windows CE support improved - - Minor changes were made to make Boost.Thread work on Windows CE. -
-
-
diff --git a/doc/shared_mutex_ref.qbk b/doc/shared_mutex_ref.qbk new file mode 100644 index 00000000..4b2c2ec4 --- /dev/null +++ b/doc/shared_mutex_ref.qbk @@ -0,0 +1,35 @@ +[section:shared_mutex Class `shared_mutex`] + + class shared_mutex + { + public: + shared_mutex(); + ~shared_mutex(); + + void lock_shared(); + bool try_lock_shared(); + bool timed_lock_shared(system_time const& timeout); + void unlock_shared(); + + void lock(); + bool try_lock(); + bool timed_lock(system_time const& timeout); + void unlock(); + + void lock_upgrade(); + void unlock_upgrade(); + + void unlock_upgrade_and_lock(); + void unlock_and_lock_upgrade(); + void unlock_and_lock_shared(); + void unlock_upgrade_and_lock_shared(); + }; + +The class `boost::shared_mutex` provides an implementation of a multiple-reader / single-writer mutex. It implements the +__upgrade_lockable_concept__. + +Multiple concurrent calls to __lock_ref__, __try_lock_ref__, __timed_lock_ref__, __lock_shared_ref__, __try_lock_shared_ref__ and +__timed_lock_shared_ref__ shall be permitted. + + +[endsect] diff --git a/doc/thread-ref.xml b/doc/thread-ref.xml deleted file mode 100644 index 270302ca..00000000 --- a/doc/thread-ref.xml +++ /dev/null @@ -1,270 +0,0 @@ - - - %thread.entities; -]> - -
- - - - The thread class represents threads of - execution, and provides the functionality to create and manage - threads within the &Boost.Thread; library. See - for a precise description of - thread of execution, - and for definitions of threading-related terms and of thread states such as - blocked. - - - - A thread of execution - has an initial function. For the program's initial thread, the initial - function is main(). For other threads, the initial - function is operator() of the function object passed to - the thread object's constructor. - - A thread of execution is said to be "finished" - or to have "finished execution" when its initial function returns or - is terminated. This includes completion of all thread cleanup - handlers, and completion of the normal C++ function return behaviors, - such as destruction of automatic storage (stack) objects and releasing - any associated implementation resources. - - A thread object has an associated state which is either - "joinable" or "non-joinable". - - Except as described below, the policy used by an implementation - of &Boost.Thread; to schedule transitions between thread states is - unspecified. - - Just as the lifetime of a file may be different from the - lifetime of an iostream object which represents the file, the lifetime - of a thread of execution may be different from the - thread object which represents the thread of - execution. In particular, after a call to join(), - the thread of execution will no longer exist even though the - thread object continues to exist until the - end of its normal lifetime. The converse is also possible; if - a thread object is destroyed without - join() first having been called, the thread of execution - continues until its initial function completes. - - - - boost::noncopyable - Exposition only - - - - Constructs a thread object - representing the current thread of execution. - - *this is non-joinable. - - Danger: - *this is valid only within the current thread. - - - - - const boost::function0<void>& - - - - Starts a new thread of execution and constructs a - thread object representing it. - Copies threadfunc (which in turn copies - the function object wrapped by threadfunc) - to an internal location which persists for the lifetime - of the new thread of execution. Calls operator() - on the copy of the threadfunc function object - in the new thread of execution. - - - *this is joinable. - - boost::thread_resource_error if a new thread - of execution cannot be started. - - - - Destroys *this. The actual thread of - execution may continue to execute after the - thread object has been destroyed. - - - If *this is joinable the actual thread - of execution becomes "detached". Any resources used - by the thread will be reclaimed when the thread of execution - completes. To ensure such a thread of execution runs to completion - before the thread object is destroyed, call - join(). - - - - - bool - - - const thread& - - - The thread is non-terminated or *this - is joinable. - - true if *this and - rhs represent the same thread of - execution. - - - - bool - - - const thread& - - - The thread is non-terminated or *this - is joinable. - - !(*this==rhs). - - - - - - void - - *this is joinable. - - The current thread of execution blocks until the - initial function of the thread of execution represented by - *this finishes and all resources are - reclaimed. - - *this is non-joinable. - - If *this == thread() the result is - implementation-defined. 
If the implementation doesn't - detect this the result will be - deadlock. - - - - - - - void - - - const xtime& - - - The current thread of execution blocks until - xt is reached. - - - - void - - The current thread of execution is placed in the - ready - state. - - - Allow the current thread to give up the rest of its - time slice (or other scheduling quota) to another thread. - Particularly useful in non-preemptive implementations. - - - - - - - - The thread_group class provides a container - for easy grouping of threads to simplify several common thread - creation and management idioms. - - - - boost::noncopyable - Exposition only - - - - - Constructs an empty thread_group - container. - - - - Destroys each contained thread object. Destroys *this. - - Behavior is undefined if another thread references - *this during the execution of the destructor. - - - - - - thread* - - - const boost::function0<void>& - - - Creates a new thread object - that executes threadfunc and adds it to the - thread_group container object's list of managed - thread objects. - - Pointer to the newly created - thread object. - - - - void - - - thread* - - - Adds thrd to the - thread_group object's list of managed - thread objects. The thrd - object must have been allocated via operator new and will - be deleted when the group is destroyed. - - - - void - - - thread* - - - Removes thread from *this's - list of managed thread objects. - - ??? if - thrd is not in *this's list - of managed thread objects. - - - - void - - Calls join() on each of the managed - thread objects. - - - - -
diff --git a/doc/thread.qbk b/doc/thread.qbk new file mode 100644 index 00000000..aaf975fd --- /dev/null +++ b/doc/thread.qbk @@ -0,0 +1,150 @@ +[article Thread + [quickbook 1.4] + [authors [Williams, Anthony]] + [copyright 2007-8 Anthony Williams] + [purpose C++ Library for launching threads and synchronizing data between them] + [category text] + [license + Distributed under the Boost Software License, Version 1.0. + (See accompanying file LICENSE_1_0.txt or copy at + [@http://www.boost.org/LICENSE_1_0.txt]) + ] +] + +[template lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.lockable [link_text]]] +[def __lockable_concept__ [lockable_concept_link `Lockable` concept]] +[def __lockable_concept_type__ [lockable_concept_link `Lockable`]] + +[template timed_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable [link_text]]] +[def __timed_lockable_concept__ [timed_lockable_concept_link `TimedLockable` concept]] +[def __timed_lockable_concept_type__ [timed_lockable_concept_link `TimedLockable`]] + +[template shared_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable [link_text]]] +[def __shared_lockable_concept__ [shared_lockable_concept_link `SharedLockable` concept]] +[def __shared_lockable_concept_type__ [shared_lockable_concept_link `SharedLockable`]] + +[template upgrade_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable [link_text]]] +[def __upgrade_lockable_concept__ [upgrade_lockable_concept_link `UpgradeLockable` concept]] +[def __upgrade_lockable_concept_type__ [upgrade_lockable_concept_link `UpgradeLockable`]] + + +[template lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.lock [link_text]]] +[def __lock_ref__ [lock_ref_link `lock()`]] + +[template unlock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.unlock [link_text]]] +[def __unlock_ref__ [unlock_ref_link `unlock()`]] + +[template try_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.try_lock [link_text]]] +[def __try_lock_ref__ [try_lock_ref_link `try_lock()`]] + +[template timed_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable.timed_lock [link_text]]] +[def __timed_lock_ref__ [timed_lock_ref_link `timed_lock()`]] + +[template timed_lock_duration_ref_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable.timed_lock_duration [link_text]]] +[def __timed_lock_duration_ref__ [timed_lock_duration_ref_link `timed_lock()`]] + +[template lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.lock_shared [link_text]]] +[def __lock_shared_ref__ [lock_shared_ref_link `lock_shared()`]] + +[template unlock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.unlock_shared [link_text]]] +[def __unlock_shared_ref__ [unlock_shared_ref_link `unlock_shared()`]] + +[template try_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.try_lock_shared [link_text]]] +[def __try_lock_shared_ref__ [try_lock_shared_ref_link `try_lock_shared()`]] + +[template timed_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.timed_lock_shared [link_text]]] +[def __timed_lock_shared_ref__ [timed_lock_shared_ref_link `timed_lock_shared()`]] + +[template timed_lock_shared_duration_ref_link[link_text] [link 
thread.synchronization.mutex_concepts.shared_lockable.timed_lock_shared_duration [link_text]]] +[def __timed_lock_shared_duration_ref__ [timed_lock_shared_duration_ref_link `timed_lock_shared()`]] + +[template lock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.lock_upgrade [link_text]]] +[def __lock_upgrade_ref__ [lock_upgrade_ref_link `lock_upgrade()`]] + +[template unlock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade [link_text]]] +[def __unlock_upgrade_ref__ [unlock_upgrade_ref_link `unlock_upgrade()`]] + +[template unlock_upgrade_and_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade_and_lock [link_text]]] +[def __unlock_upgrade_and_lock_ref__ [unlock_upgrade_and_lock_ref_link `unlock_upgrade_and_lock()`]] + +[template unlock_and_lock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_and_lock_upgrade [link_text]]] +[def __unlock_and_lock_upgrade_ref__ [unlock_and_lock_upgrade_ref_link `unlock_and_lock_upgrade()`]] + +[template unlock_upgrade_and_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade_and_lock_shared [link_text]]] +[def __unlock_upgrade_and_lock_shared_ref__ [unlock_upgrade_and_lock_shared_ref_link `unlock_upgrade_and_lock_shared()`]] + +[template owns_lock_ref_link[link_text] [link thread.synchronization.locks.unique_lock.owns_lock [link_text]]] +[def __owns_lock_ref__ [owns_lock_ref_link `owns_lock()`]] + +[template owns_lock_shared_ref_link[link_text] [link thread.synchronization.locks.shared_lock.owns_lock [link_text]]] +[def __owns_lock_shared_ref__ [owns_lock_shared_ref_link `owns_lock()`]] + +[template mutex_func_ref_link[link_text] [link thread.synchronization.locks.unique_lock.mutex [link_text]]] +[def __mutex_func_ref__ [mutex_func_ref_link `mutex()`]] + +[def __boost_thread__ [*Boost.Thread]] +[def __not_a_thread__ ['Not-a-Thread]] +[def __interruption_points__ [link interruption_points ['interruption points]]] + +[def __mutex__ [link thread.synchronization.mutex_types.mutex `boost::mutex`]] +[def __try_mutex__ [link thread.synchronization.mutex_types.try_mutex `boost::try_mutex`]] +[def __timed_mutex__ [link thread.synchronization.mutex_types.timed_mutex `boost::timed_mutex`]] +[def __recursive_mutex__ [link thread.synchronization.mutex_types.recursive_mutex `boost::recursive_mutex`]] +[def __recursive_try_mutex__ [link thread.synchronization.mutex_types.recursive_try_mutex `boost::recursive_try_mutex`]] +[def __recursive_timed_mutex__ [link thread.synchronization.mutex_types.recursive_timed_mutex `boost::recursive_timed_mutex`]] +[def __shared_mutex__ [link thread.synchronization.mutex_types.shared_mutex `boost::shared_mutex`]] + +[def __lock_guard__ [link thread.synchronization.locks.lock_guard `boost::lock_guard`]] +[def __unique_lock__ [link thread.synchronization.locks.unique_lock `boost::unique_lock`]] +[def __shared_lock__ [link thread.synchronization.locks.shared_lock `boost::shared_lock`]] +[def __upgrade_lock__ [link thread.synchronization.locks.upgrade_lock `boost::upgrade_lock`]] +[def __upgrade_to_unique_lock__ [link thread.synchronization.locks.upgrade_to_unique_lock `boost::upgrade_to_unique_lock`]] + + +[def __thread__ [link thread.thread_management.thread `boost::thread`]] +[def __thread_id__ [link thread.thread_management.thread.id `boost::thread::id`]] +[template join_link[link_text] [link 
thread.thread_management.thread.join [link_text]]] +[def __join__ [join_link `join()`]] +[template timed_join_link[link_text] [link thread.thread_management.thread.timed_join [link_text]]] +[def __timed_join__ [timed_join_link `timed_join()`]] +[def __detach__ [link thread.thread_management.thread.detach `detach()`]] +[def __interrupt__ [link thread.thread_management.thread.interrupt `interrupt()`]] +[def __sleep__ [link thread.thread_management.this_thread.sleep `boost::this_thread::sleep()`]] + +[def __interruption_enabled__ [link thread.thread_management.this_thread.interruption_enabled `boost::this_thread::interruption_enabled()`]] +[def __interruption_requested__ [link thread.thread_management.this_thread.interruption_requested `boost::this_thread::interruption_requested()`]] +[def __interruption_point__ [link thread.thread_management.this_thread.interruption_point `boost::this_thread::interruption_point()`]] +[def __disable_interruption__ [link thread.thread_management.this_thread.disable_interruption `boost::this_thread::disable_interruption`]] +[def __restore_interruption__ [link thread.thread_management.this_thread.restore_interruption `boost::this_thread::restore_interruption`]] + +[def __thread_resource_error__ `boost::thread_resource_error`] +[def __thread_interrupted__ `boost::thread_interrupted`] +[def __barrier__ [link thread.synchronization.barriers.barrier `boost::barrier`]] + +[template cond_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable.wait [link_text]]] +[def __cond_wait__ [cond_wait_link `wait()`]] +[template cond_timed_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable.timed_wait [link_text]]] +[def __cond_timed_wait__ [cond_timed_wait_link `timed_wait()`]] +[template cond_any_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable_any.wait [link_text]]] +[def __cond_any_wait__ [cond_any_wait_link `wait()`]] +[template cond_any_timed_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable_any.timed_wait [link_text]]] +[def __cond_any_timed_wait__ [cond_any_timed_wait_link `timed_wait()`]] + +[def __blocked__ ['blocked]] + +[include overview.qbk] +[include changes.qbk] + +[include thread_ref.qbk] + +[section:synchronization Synchronization] +[include mutex_concepts.qbk] +[include mutexes.qbk] +[include condition_variables.qbk] +[include once.qbk] +[include barrier.qbk] +[endsect] + +[include tss.qbk] + +[include acknowledgements.qbk] diff --git a/doc/thread.xml b/doc/thread.xml deleted file mode 100644 index f5c30883..00000000 --- a/doc/thread.xml +++ /dev/null @@ -1,48 +0,0 @@ - - - %thread.entities; -]> - - - - - William - E. - Kempf - - - 2001 - 2002 - 2003 - William E. Kempf - - - Subject to the Boost Software License, Version 1.0. - (See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt) - - Portable C++ multi-threading - - Boost.Thread - - Boost.Thread - - - - - - - - - - - - - - diff --git a/doc/thread_ref.qbk b/doc/thread_ref.qbk new file mode 100644 index 00000000..caadc362 --- /dev/null +++ b/doc/thread_ref.qbk @@ -0,0 +1,933 @@ +[section:thread_management Thread Management] + +[heading Synopsis] + +The __thread__ class is responsible for launching and managing threads. Each __thread__ object represents a single thread of execution, +or __not_a_thread__, and at most one __thread__ object represents a given thread of execution: objects of type __thread__ are not +copyable. 
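+
+As a minimal sketch (the function `hello` is illustrative only, not part of the library), a thread is launched by constructing a
+__thread__ object from a callable, and waited for with `join()`:
+
+    #include <boost/thread/thread.hpp>
+    #include <iostream>
+
+    void hello()
+    {
+        std::cout << "Hello from a new thread" << std::endl;
+    }
+
+    int main()
+    {
+        boost::thread t(&hello);  // copies the callable and invokes it on a new thread
+        t.join();                 // wait for the thread to finish before main() returns
+    }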
+ +Objects of type __thread__ are movable, however, so they can be stored in move-aware containers, and returned from functions. This +allows the details of thread creation to be wrapped in a function. + + boost::thread make_thread(); + + void f() + { + boost::thread some_thread=make_thread(); + some_thread.join(); + } + +[heading Launching threads] + +A new thread is launched by passing an object of a callable type that can be invoked with no parameters to the constructor. The +object is then copied into internal storage, and invoked on the newly-created thread of execution. If the object must not (or +cannot) be copied, then `boost::ref` can be used to pass in a reference to the function object. In this case, the user of +__boost_thread__ must ensure that the referred-to object outlives the newly-created thread of execution. + + struct callable + { + void operator()(); + }; + + boost::thread copies_are_safe() + { + callable x; + return boost::thread(x); + } // x is destroyed, but the newly-created thread has a copy, so this is OK + + boost::thread oops() + { + callable x; + return boost::thread(boost::ref(x)); + } // x is destroyed, but the newly-created thread still has a reference + // this leads to undefined behaviour + +If you wish to construct an instance of __thread__ with a function or callable object that requires arguments to be supplied, +this can be done using `boost::bind`: + + void find_the_question(int the_answer); + + boost::thread deep_thought_2(boost::bind(find_the_question,42)); + +[heading Joining and detaching] + +When the __thread__ object that represents a thread of execution is destroyed the thread becomes ['detached]. Once a thread is +detached, it will continue executing until the invocation of the function or callable object supplied on construction has completed, +or the program is terminated. A thread can also be detached by explicitly invoking the __detach__ member function on the __thread__ +object. In this case, the __thread__ object ceases to represent the now-detached thread, and instead represents __not_a_thread__. + +In order to wait for a thread of execution to finish, the __join__ or __timed_join__ member functions of the __thread__ object must be +used. __join__ will block the calling thread until the thread represented by the __thread__ object has completed. If the thread of +execution represented by the __thread__ object has already completed, or the __thread__ object represents __not_a_thread__, then __join__ +returns immediately. __timed_join__ is similar, except that a call to __timed_join__ will also return if the thread being waited for +does not complete when the specified time has elapsed. + +[heading Interruption] + +A running thread can be ['interrupted] by invoking the __interrupt__ member function of the corresponding __thread__ object. When the +interrupted thread next executes one of the specified __interruption_points__ (or if it is currently __blocked__ whilst executing one) +with interruption enabled, then a __thread_interrupted__ exception will be thrown in the interrupted thread. If not caught, +this will cause the execution of the interrupted thread to terminate. As with any other exception, the stack will be unwound, and +destructors for objects of automatic storage duration will be executed. + +If a thread wishes to avoid being interrupted, it can create an instance of __disable_interruption__. 
Objects of this class disable
+interruption for the thread that created them on construction, and restore the interruption state to whatever it was before on
+destruction:
+
+    void f()
+    {
+        // interruption enabled here
+        {
+            boost::this_thread::disable_interruption di;
+            // interruption disabled
+            {
+                boost::this_thread::disable_interruption di2;
+                // interruption still disabled
+            } // di2 destroyed, interruption state restored
+            // interruption still disabled
+        } // di destroyed, interruption state restored
+        // interruption now enabled
+    }
+
+The effects of an instance of __disable_interruption__ can be temporarily reversed by constructing an instance of
+__restore_interruption__, passing in the __disable_interruption__ object in question. This will
+restore the interruption state to what it was when the __disable_interruption__ object was constructed, and then
+disable interruption again when the __restore_interruption__ object is destroyed.
+
+    void g()
+    {
+        // interruption enabled here
+        {
+            boost::this_thread::disable_interruption di;
+            // interruption disabled
+            {
+                boost::this_thread::restore_interruption ri(di);
+                // interruption now enabled
+            } // ri destroyed, interruption disabled again
+        } // di destroyed, interruption state restored
+        // interruption now enabled
+    }
+
+At any point, the interruption state for the current thread can be queried by calling __interruption_enabled__.
+
+[#interruption_points]
+
+[heading Predefined Interruption Points]
+
+The following functions are ['interruption points], which will throw __thread_interrupted__ if interruption is enabled for the
+current thread, and interruption is requested for the current thread:
+
+* [join_link `boost::thread::join()`]
+* [timed_join_link `boost::thread::timed_join()`]
+* [cond_wait_link `boost::condition_variable::wait()`]
+* [cond_timed_wait_link `boost::condition_variable::timed_wait()`]
+* [cond_any_wait_link `boost::condition_variable_any::wait()`]
+* [cond_any_timed_wait_link `boost::condition_variable_any::timed_wait()`]
+* [link thread.thread_management.thread.sleep `boost::thread::sleep()`]
+* __sleep__
+* __interruption_point__
+
+[heading Thread IDs]
+
+Objects of class __thread_id__ can be used to identify threads. Each running thread of execution has a unique ID obtainable
+from the corresponding __thread__ by calling the `get_id()` member function, or by calling `boost::this_thread::get_id()` from
+within the thread. Objects of class __thread_id__ can be copied, and used as keys in associative containers: the full range of
+comparison operators is provided. Thread IDs can also be written to an output stream using the stream insertion operator, though the
+output format is unspecified.
+
+Each instance of __thread_id__ either refers to some thread, or __not_a_thread__. Instances that refer to __not_a_thread__
+compare equal to each other, but not equal to any instances that refer to an actual thread of execution. The comparison operators on
+__thread_id__ yield a total order over all __thread_id__ values, whether or not they refer to a thread of execution.
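+
+As a brief sketch (the names `work`, `t1` and `t2` are illustrative, not part of the library), thread IDs can be collected into an
+ordered container and looked up later:
+
+    #include <boost/thread/thread.hpp>
+    #include <iostream>
+    #include <set>
+
+    void work()
+    {}
+
+    int main()
+    {
+        boost::thread t1(&work);
+        boost::thread t2(&work);
+
+        std::set<boost::thread::id> ids;   // thread::id is totally ordered, so it can be a set/map key
+        ids.insert(t1.get_id());
+        ids.insert(t2.get_id());
+
+        if(ids.count(boost::this_thread::get_id())==0)
+            std::cout << "the main thread is not in the set\n";
+
+        t1.join();
+        t2.join();
+    }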
+
+[section:thread Class `thread`]
+
+    class thread
+    {
+    public:
+        thread();
+        ~thread();
+
+        template<typename F>
+        explicit thread(F f);
+
+        template<typename F>
+        thread(detail::thread_move_t<F> f);
+
+        // move support
+        thread(detail::thread_move_t<thread> x);
+        thread& operator=(detail::thread_move_t<thread> x);
+        operator detail::thread_move_t<thread>();
+        detail::thread_move_t<thread> move();
+
+        void swap(thread& x);
+
+        class id;
+        id get_id() const;
+
+        bool joinable() const;
+        void join();
+        bool timed_join(const system_time& wait_until);
+
+        template<typename TimeDuration>
+        bool timed_join(TimeDuration const& rel_time);
+
+        void detach();
+
+        static unsigned hardware_concurrency();
+
+        typedef platform-specific-type native_handle_type;
+        native_handle_type native_handle();
+
+        void interrupt();
+        bool interruption_requested() const;
+
+        // backwards compatibility
+        bool operator==(const thread& other) const;
+        bool operator!=(const thread& other) const;
+
+        static void yield();
+        static void sleep(const system_time& xt);
+    };
+
+[section:default_constructor Default Constructor]
+
+    thread();
+
+[variablelist
+
+[[Effects:] [Constructs a __thread__ instance that refers to __not_a_thread__.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:callable_constructor Thread Constructor]
+
+    template<typename Callable>
+    thread(Callable func);
+
+[variablelist
+
+[[Preconditions:] [`Callable` must be copyable.]]
+
+[[Effects:] [`func` is copied into storage managed internally by the thread library, and that copy is invoked on a newly-created thread of execution.]]
+
+[[Postconditions:] [`*this` refers to the newly created thread of execution.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+
+[endsect]
+
+[section:destructor Thread Destructor]
+
+    ~thread();
+
+[variablelist
+
+[[Effects:] [If `*this` has an associated thread of execution, calls __detach__. Destroys `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:joinable Member function `joinable()`]
+
+    bool joinable() const;
+
+[variablelist
+
+[[Returns:] [`true` if `*this` refers to a thread of execution, `false` otherwise.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+
+[endsect]
+
+[section:join Member function `join()`]
+
+    void join();
+
+[variablelist
+
+[[Preconditions:] [`this->get_id()!=boost::this_thread::get_id()`]]
+
+[[Effects:] [If `*this` refers to a thread of execution, waits for that thread of execution to complete.]]
+
+[[Postconditions:] [If `*this` refers to a thread of execution on entry, that thread of execution has completed. `*this` no longer refers to any thread of execution.]]
+
+[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
+
+[[Notes:] [`join()` is one of the predefined __interruption_points__.]]
+
+]
+
+[endsect]
+
+[section:timed_join Member function `timed_join()`]
+
+    bool timed_join(const system_time& wait_until);
+
+    template<typename TimeDuration>
+    bool timed_join(TimeDuration const& rel_time);
+
+[variablelist
+
+[[Preconditions:] [`this->get_id()!=boost::this_thread::get_id()`]]
+
+[[Effects:] [If `*this` refers to a thread of execution, waits for that thread of execution to complete, for the time `wait_until`
+to be reached, or for the specified duration `rel_time` to elapse, whichever comes first.
+
+[[Returns:] [`true` if `*this` refers to a thread of execution on entry, and that thread of execution has completed before the call
+times out, `false` otherwise.]]
+
+[[Postconditions:] [If `*this` refers to a thread of execution on entry, and `timed_join` returns `true`, that thread of execution
+has completed, and `*this` no longer refers to any thread of execution. If this call to `timed_join` returns `false`, `*this` is
+unchanged.]]
+
+[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
+
+[[Notes:] [`timed_join()` is one of the predefined __interruption_points__.]]
+
+]
+
+[endsect]
+
+[section:detach Member function `detach()`]
+
+    void detach();
+
+[variablelist
+
+[[Effects:] [If `*this` refers to a thread of execution, that thread of execution becomes detached, and no longer has an associated __thread__ object.]]
+
+[[Postconditions:] [`*this` no longer refers to any thread of execution.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+
+[section:get_id Member function `get_id()`]
+
+    thread::id get_id() const;
+
+[variablelist
+
+[[Returns:] [If `*this` refers to a thread of execution, an instance of __thread_id__ that represents that thread. Otherwise returns
+a default-constructed __thread_id__.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:interrupt Member function `interrupt()`]
+
+    void interrupt();
+
+[variablelist
+
+[[Effects:] [If `*this` refers to a thread of execution, requests that the thread will be interrupted the next time it enters one of
+the predefined __interruption_points__ with interruption enabled, or if it is currently __blocked__ in a call to one of the
+predefined __interruption_points__ with interruption enabled.]]
+
+[[Throws:] [Nothing]]
+
+]
+
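+As an illustrative sketch only (the function names and the five-second wait are arbitrary, and not part of the interface
+description), interruption is typically used like this: the worker blocks in `sleep()`, which is an interruption point, and
+handles __thread_interrupted__ itself:
+
+    void worker()
+    {
+        try
+        {
+            // sleep() is an interruption point, so interrupt() can wake this call early
+            boost::this_thread::sleep(boost::posix_time::seconds(5));
+        }
+        catch(boost::thread_interrupted const&)
+        {
+            // perform any cleanup required when the thread is interrupted
+        }
+    }
+
+    void run()
+    {
+        boost::thread t(worker);
+        t.interrupt();   // request interruption
+        t.join();        // wait for the worker to handle it and finish
+    }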
+
+[endsect]
+
+[section:hardware_concurrency Static member function `hardware_concurrency()`]
+
+    unsigned hardware_concurrency();
+
+[variablelist
+
+[[Returns:] [The number of hardware threads available on the current system (e.g. number of CPUs or cores or hyperthreading units),
+or 0 if this information is not available.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:equals `operator==`]
+
+    bool operator==(const thread& other) const;
+
+[variablelist
+
+[[Returns:] [`get_id()==other.get_id()`]]
+
+]
+
+[endsect]
+
+[section:not_equals `operator!=`]
+
+    bool operator!=(const thread& other) const;
+
+[variablelist
+
+[[Returns:] [`get_id()!=other.get_id()`]]
+
+]
+
+[endsect]
+
+[section:sleep Static member function `sleep()`]
+
+    void sleep(system_time const& abs_time);
+
+[variablelist
+
+[[Effects:] [Suspends the current thread until the specified time has been reached.]]
+
+[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
+
+[[Notes:] [`sleep()` is one of the predefined __interruption_points__.]]
+
+]
+
+[endsect]
+
+[section:yield Static member function `yield()`]
+
+    void yield();
+
+[variablelist
+
+[[Effects:] [See [link thread.thread_management.this_thread.yield `boost::this_thread::yield()`].]]
+
+]
+
+[endsect]
+
+
+[section:id Class `boost::thread::id`]
+
+    class thread::id
+    {
+    public:
+        id();
+
+        bool operator==(const id& y) const;
+        bool operator!=(const id& y) const;
+        bool operator<(const id& y) const;
+        bool operator>(const id& y) const;
+        bool operator<=(const id& y) const;
+        bool operator>=(const id& y) const;
+
+        template<class charT, class traits>
+        friend std::basic_ostream<charT, traits>&
+        operator<<(std::basic_ostream<charT, traits>& os, const id& x);
+    };
+
+[section:constructor Default constructor]
+
+    id();
+
+[variablelist
+
+[[Effects:] [Constructs a __thread_id__ instance that represents __not_a_thread__.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:is_equal `operator==`]
+
+    bool operator==(const id& y) const;
+
+[variablelist
+
+[[Returns:] [`true` if `*this` and `y` both represent the same thread of execution, or both represent __not_a_thread__, `false`
+otherwise.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:not_equal `operator!=`]
+
+    bool operator!=(const id& y) const;
+
+[variablelist
+
+[[Returns:] [`true` if `*this` and `y` represent different threads of execution, or one represents a thread of execution and
+the other represents __not_a_thread__, `false` otherwise.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:less_than `operator<`]
+
+    bool operator<(const id& y) const;
+
+[variablelist
+
+[[Returns:] [`true` if `*this!=y` is `true` and the implementation-defined total order of __thread_id__ values places `*this` before
+`y`, `false` otherwise.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [A __thread_id__ instance representing __not_a_thread__ will always compare less than an instance representing a thread of
+execution.]]
+
+]
+
+[endsect]
+
+
+[section:greater_than `operator>`]
+
+    bool operator>(const id& y) const;
+
+[variablelist
+
+[[Returns:] [`y<*this`]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:less_than_or_equal `operator<=`]
+
+    bool operator<=(const id& y) const;
+
+[variablelist
+
+[[Returns:] [`!(y<*this)`]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:greater_than_or_equal `operator>=`]
+
+    bool operator>=(const id& y) const;
+
+[variablelist
+
+[[Returns:] [`!(*this<y)`]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:streamout Friend `operator<<`]
+
+    template<class charT, class traits>
+    friend std::basic_ostream<charT, traits>&
+    operator<<(std::basic_ostream<charT, traits>& os, const id& x);
+
+[variablelist
+
+[[Effects:] [Writes a representation of the __thread_id__ instance `x` to the stream `os`, such that the representation of two
+instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `a!=b`.]]
+
+[[Returns:] [`os`]]
+
+]
+
+[endsect]
+
+
+[endsect]
+
+[endsect]
+
+[section:this_thread Namespace `this_thread`]
+
+[section:get_id Non-member function `get_id()`]
+
+    namespace this_thread
+    {
+        thread::id get_id();
+    }
+
+[variablelist
+
+[[Returns:] [An instance of __thread_id__ that represents the currently executing thread.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+
+[endsect]
+
+[section:interruption_point Non-member function `interruption_point()`]
+
+    namespace this_thread
+    {
+        void interruption_point();
+    }
+
+[variablelist
+
+[[Effects:] [Checks to see if the current thread has been interrupted.]]
+
+[[Throws:] [__thread_interrupted__ if __interruption_enabled__ and __interruption_requested__ both return `true`.]]
+
+]
+
+[endsect]
+
+[section:interruption_requested Non-member function `interruption_requested()`]
+
+    namespace this_thread
+    {
+        bool interruption_requested();
+    }
+
+[variablelist
+
+[[Returns:] [`true` if interruption has been requested for the current thread, `false` otherwise.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:interruption_enabled Non-member function `interruption_enabled()`]
+
+    namespace this_thread
+    {
+        bool interruption_enabled();
+    }
+
+[variablelist
+
+[[Returns:] [`true` if interruption has been enabled for the current thread, `false` otherwise.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:sleep Non-member function `sleep()`]
+
+    namespace this_thread
+    {
+        template<typename TimeDuration>
+        void sleep(TimeDuration const& rel_time);
+    }
+
+[variablelist
+
+[[Effects:] [Suspends the current thread until the specified duration `rel_time` has elapsed.]]
+
+[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
+
+[[Notes:] [`sleep()` is one of the predefined __interruption_points__.]]
+
+]
+
+[endsect]
+
+[section:yield Non-member function `yield()`]
+
+    namespace this_thread
+    {
+        void yield();
+    }
+
+[variablelist
+
+[[Effects:] [Gives up the remainder of the current thread's time slice, to allow other threads to run.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:disable_interruption Class `disable_interruption`]
+
+    namespace this_thread
+    {
+        class disable_interruption
+        {
+        public:
+            disable_interruption();
+            ~disable_interruption();
+        };
+    }
+
+`boost::this_thread::disable_interruption` disables interruption for the current thread on construction, and restores the prior
+interruption state on destruction. Instances of `disable_interruption` cannot be copied or moved.
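+
+As a minimal sketch only (the `flush_log()` call is a hypothetical stand-in for any operation that must not be abandoned part-way
+through), a scoped `disable_interruption` instance can protect such a region inside otherwise-interruptible code:
+
+    void flush_log();   // hypothetical helper
+
+    void process()
+    {
+        // interruptible work here
+        {
+            boost::this_thread::disable_interruption di;
+            flush_log();   // no interruption point will throw boost::thread_interrupted in this scope
+        }
+        // interruptible again once di has been destroyed
+    }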
+
+[section:constructor Constructor]
+
+    disable_interruption();
+
+[variablelist
+
+[[Effects:] [Stores the current state of __interruption_enabled__ and disables interruption for the current thread.]]
+
+[[Postconditions:] [__interruption_enabled__ returns `false` for the current thread.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:destructor Destructor]
+
+    ~disable_interruption();
+
+[variablelist
+
+[[Preconditions:] [Must be called from the same thread from which `*this` was constructed.]]
+
+[[Effects:] [Restores the current state of __interruption_enabled__ for the current thread to that prior to the construction of `*this`.]]
+
+[[Postconditions:] [__interruption_enabled__ for the current thread returns the value stored in the constructor of `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:restore_interruption Class `restore_interruption`]
+
+    namespace this_thread
+    {
+        class restore_interruption
+        {
+        public:
+            explicit restore_interruption(disable_interruption& disabler);
+            ~restore_interruption();
+        };
+    }
+
+On construction of an instance of `boost::this_thread::restore_interruption`, the interruption state for the current thread is
+restored to the interruption state stored by the constructor of the supplied instance of __disable_interruption__. When the instance
+is destroyed, interruption is again disabled. Instances of `restore_interruption` cannot be copied or moved.
+
+[section:constructor Constructor]
+
+    explicit restore_interruption(disable_interruption& disabler);
+
+[variablelist
+
+[[Preconditions:] [Must be called from the same thread from which `disabler` was constructed.]]
+
+[[Effects:] [Restores the current state of __interruption_enabled__ for the current thread to that prior to the construction of `disabler`.]]
+
+[[Postconditions:] [__interruption_enabled__ for the current thread returns the value stored in the constructor of `disabler`.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:destructor Destructor]
+
+    ~restore_interruption();
+
+[variablelist
+
+[[Preconditions:] [Must be called from the same thread from which `*this` was constructed.]]
+
+[[Effects:] [Disables interruption for the current thread.]]
+
+[[Postconditions:] [__interruption_enabled__ for the current thread returns `false`.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:atthreadexit Non-member function template `at_thread_exit()`]
+
+    template<typename Callable>
+    void at_thread_exit(Callable func);
+
+[variablelist
+
+[[Effects:] [A copy of `func` is taken and stored in thread-specific storage. This copy is invoked when the current thread exits.]]
+
+[[Postconditions:] [A copy of `func` has been saved for invocation on thread exit.]]
+
+[[Throws:] [`std::bad_alloc` if memory cannot be allocated for the copy of the function, __thread_resource_error__ if any other
+error occurs within the thread library. Any exception thrown whilst copying `func` into internal storage.]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:threadgroup Class `thread_group`]
+
+    class thread_group:
+        private noncopyable
+    {
+    public:
+        thread_group();
+        ~thread_group();
+
+        thread* create_thread(const function0<void>& threadfunc);
+        void add_thread(thread* thrd);
+        void remove_thread(thread* thrd);
+        void join_all();
+        void interrupt_all();
+        int size() const;
+    };
+
+`thread_group` provides for a collection of threads that are related in some fashion. New threads can be added to the group with
+`add_thread` and `create_thread` member functions. `thread_group` is not copyable or movable.
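+
+As a purely illustrative sketch (the worker function and the count of four threads are arbitrary and not part of the interface
+description), a `thread_group` can be used to launch and then wait for a batch of workers:
+
+    void worker();   // hypothetical task
+
+    void run_batch()
+    {
+        boost::thread_group workers;
+        for(int i = 0; i < 4; ++i)
+        {
+            workers.create_thread(worker);   // the group owns each new thread
+        }
+        workers.join_all();                  // wait for every worker to finish
+    }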
+
+[section:constructor Constructor]
+
+    thread_group();
+
+[variablelist
+
+[[Effects:] [Create a new thread group with no threads.]]
+
+]
+
+[endsect]
+
+[section:destructor Destructor]
+
+    ~thread_group();
+
+[variablelist
+
+[[Effects:] [Destroy `*this` and `delete` all __thread__ objects in the group.]]
+
+]
+
+[endsect]
+
+[section:create_thread Member function `create_thread()`]
+
+    thread* create_thread(const function0<void>& threadfunc);
+
+[variablelist
+
+[[Effects:] [Create a new __thread__ object as-if by `new thread(threadfunc)` and add it to the group.]]
+
+[[Postcondition:] [`this->size()` is increased by one, and the new thread is running.]]
+
+[[Returns:] [A pointer to the new __thread__ object.]]
+
+]
+
+[endsect]
+
+[section:add_thread Member function `add_thread()`]
+
+    void add_thread(thread* thrd);
+
+[variablelist
+
+[[Precondition:] [The expression `delete thrd` is well-formed and will not result in undefined behaviour.]]
+
+[[Effects:] [Take ownership of the __thread__ object pointed to by `thrd` and add it to the group.]]
+
+[[Postcondition:] [`this->size()` is increased by one.]]
+
+]
+
+[endsect]
+
+[section:remove_thread Member function `remove_thread()`]
+
+    void remove_thread(thread* thrd);
+
+[variablelist
+
+[[Effects:] [If `thrd` is a member of the group, remove it without calling `delete`.]]
+
+[[Postcondition:] [If `thrd` was a member of the group, `this->size()` is decreased by one.]]
+
+]
+
+[endsect]
+
+[section:join_all Member function `join_all()`]
+
+    void join_all();
+
+[variablelist
+
+[[Effects:] [Call `join()` on each __thread__ object in the group.]]
+
+[[Postcondition:] [Every thread in the group has terminated.]]
+
+[[Note:] [Since __join__ is one of the predefined __interruption_points__, `join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:interrupt_all Member function `interrupt_all()`]
+
+    void interrupt_all();
+
+[variablelist
+
+[[Effects:] [Call `interrupt()` on each __thread__ object in the group.]]
+
+]
+
+[endsect]
+
+[section:size Member function `size()`]
+
+    int size() const;
+
+[variablelist
+
+[[Returns:] [The number of threads in the group.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+
+[endsect]
+
+[endsect]
diff --git a/doc/tss-ref.xml b/doc/tss-ref.xml deleted file mode 100644 index faacad1f..00000000 --- a/doc/tss-ref.xml +++ /dev/null @@ -1,206 +0,0 @@ - - - %thread.entities; -]> - -
- - - - The thread_specific_ptr class defines - an interface for using thread specific storage. - - - - Thread specific storage is data associated with - individual threads and is often used to make operations - that rely on global data - thread-safe. - - - Template thread_specific_ptr - stores a pointer to an object obtained on a thread-by-thread - basis and calls a specified cleanup handler on the contained - pointer when the thread terminates. The cleanup handlers are - called in the reverse order of construction of the - thread_specific_ptrs, and for the - initial thread are called by the destructor, providing the - same ordering guarantees as for normal declarations. Each - thread initially stores the null pointer in each - thread_specific_ptr instance. - - The template thread_specific_ptr - is useful in the following cases: - - An interface was originally written assuming - a single thread of control and it is being ported to a - multithreaded environment. - - Each thread of control invokes sequences of - methods that share data that are physically unique - for each thread, but must be logically accessed - through a globally visible access point instead of - being explicitly passed. - - - - - - boost::noncopyable - Exposition only - - - - The expression delete get() is well - formed. - - A thread-specific data key is allocated and visible to - all threads in the process. Upon creation, the value - NULL will be associated with the new key in all - active threads. A cleanup method is registered with the key - that will call delete on the value associated - with the key for a thread when it exits. When a thread exits, - if a key has a registered cleanup method and the thread has a - non-NULL value associated with that key, the value - of the key is set to NULL and then the cleanup - method is called with the previously associated value as its - sole argument. The order in which registered cleanup methods - are called when a thread exits is undefined. If after all the - cleanup methods have been called for all non-NULL - values, there are still some non-NULL values - with associated cleanup handlers the result is undefined - behavior. - - boost::thread_resource_error if - the necessary resources can not be obtained. - - There may be an implementation specific limit to the - number of thread specific storage objects that can be created, - and this limit may be small. - - The most common need for cleanup will be to call - delete on the associated value. If other forms - of cleanup are required the overloaded constructor should be - called instead. - - - - - void (*cleanup)(void*) - - - A thread-specific data key is allocated and visible to - all threads in the process. Upon creation, the value - NULL will be associated with the new key in all - active threads. The cleanup method is registered - with the key and will be called for a thread with the value - associated with the key for that thread when it exits. When a - thread exits, if a key has a registered cleanup method and the - thread has a non-NULL value associated with that - key, the value of the key is set to NULL and then - the cleanup method is called with the previously associated - value as its sole argument. The order in which registered - cleanup methods are called when a thread exits is undefined. - If after all the cleanup methods have been called for all - non-NULL values, there are still some - non-NULL values with associated cleanup handlers - the result is undefined behavior. 
- - boost::thread_resource_error if - the necessary resources can not be obtained. - - There may be an implementation specific limit to the - number of thread specific storage objects that can be created, - and this limit may be small. - - There is the occasional need to register - specialized cleanup methods, or to register no cleanup method - at all (done by passing NULL to this constructor. - - - - - Deletes the thread-specific data key allocated by the - constructor. The thread-specific data values associated with - the key need not be NULL. It is the responsibility - of the application to perform any cleanup actions for data - associated with the key. - - Does not destroy any data that may be stored in any - thread's thread specific storage. For this reason you should - not destroy a thread_specific_ptr object - until you are certain there are no threads running that have - made use of its thread specific storage. - - Associated data is not cleaned up because registered - cleanup methods need to be run in the thread that allocated the - associated data to be guarranteed to work correctly. There's no - safe way to inject the call into another thread's execution - path, making it impossible to call the cleanup methods safely. - - - - - - T* - - *this holds the null pointer - for the current thread. - - this->get() prior to the call. - - This method provides a mechanism for the user to - relinquish control of the data associated with the - thread-specific key. - - - - void - - - T* - 0 - - - If this->get() != p && - this->get() != NULL then call the - associated cleanup function. - - *this holds the pointer - p for the current thread. - - - - - - T* - - The object stored in thread specific storage for - the current thread for *this. - - Each thread initially returns 0. - - - - T* - - this->get(). - - - - T& - - this->get() != 0 - - this->get(). - - - - -
diff --git a/doc/tss.qbk b/doc/tss.qbk new file mode 100644 index 00000000..b6e1ff69 --- /dev/null +++ b/doc/tss.qbk @@ -0,0 +1,175 @@
+[section Thread Local Storage]
+
+[heading Synopsis]
+
+Thread local storage allows multi-threaded applications to have a separate instance of a given data item for each thread. Where a
+single-threaded application would use static or global data, this could lead to contention, deadlock or data corruption in a
+multi-threaded application. One example is the C `errno` variable, used for storing the error code related to functions from the
+Standard C library. It is common practice (and required by POSIX) for compilers that support multi-threaded applications to provide
+a separate instance of `errno` for each thread, in order to avoid different threads competing to read or update the value.
+
+Though compilers often provide this facility in the form of extensions to the declaration syntax (such as `__declspec(thread)` or
+`__thread` annotations on `static` or namespace-scope variable declarations), such support is non-portable, and is often limited in
+some way, such as only supporting POD types.
+
+[heading Portable thread-local storage with `boost::thread_specific_ptr`]
+
+`boost::thread_specific_ptr` provides a portable mechanism for thread-local storage that works on all compilers supported by
+__boost_thread__. Each instance of `boost::thread_specific_ptr` represents a pointer to an object (such as `errno`) where each
+thread must have a distinct value. The value for the current thread can be obtained using the `get()` member function, or by using
+the `*` and `->` pointer dereference operators. Initially the pointer has a value of `NULL` in each thread, but the value for the
+current thread can be set using the `reset()` member function.
+
+If the value of the pointer for the current thread is changed using `reset()`, then the previous value is destroyed by calling the
+cleanup routine. Alternatively, the stored value can be reset to `NULL` and the prior value returned by calling the `release()`
+member function, allowing the application to take back responsibility for destroying the object.
+
+[heading Cleanup at thread exit]
+
+When a thread exits, the objects associated with each `boost::thread_specific_ptr` instance are destroyed. By default, the object
+pointed to by a pointer `p` is destroyed by invoking `delete p`, but this can be overridden for a specific instance of
+`boost::thread_specific_ptr` by providing a cleanup routine to the constructor. In this case, the object is destroyed by invoking
+`func(p)` where `func` is the cleanup routine supplied to the constructor. The cleanup functions are called in an unspecified
+order. If a cleanup routine sets the value associated with an instance of `boost::thread_specific_ptr` that has already been
+cleaned up, that value is added to the cleanup list. Cleanup finishes when there are no outstanding instances of
+`boost::thread_specific_ptr` with values.
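+
+As an illustrative sketch only (the per-thread counter is an invented example, not part of the interface description),
+`thread_specific_ptr` is typically used like this:
+
+    boost::thread_specific_ptr<int> per_thread_counter;
+
+    void increment()
+    {
+        if(per_thread_counter.get() == 0)
+        {
+            per_thread_counter.reset(new int(0));   // first use in this thread
+        }
+        ++(*per_thread_counter);                    // only affects the current thread's value
+    }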
+
+
+[section:thread_specific_ptr Class `thread_specific_ptr`]
+
+    template<typename T>
+    class thread_specific_ptr
+    {
+    public:
+        thread_specific_ptr();
+        explicit thread_specific_ptr(void (*cleanup_function)(T*));
+        ~thread_specific_ptr();
+
+        T* get() const;
+        T* operator->() const;
+        T& operator*() const;
+
+        T* release();
+        void reset(T* new_value=0);
+    };
+
+[section:default_constructor `thread_specific_ptr();`]
+
+[variablelist
+
+[[Requires:] [`delete this->get()` is well-formed.]]
+
+[[Effects:] [Construct a `thread_specific_ptr` object for storing a pointer to an object of type `T` specific to each thread. The
+default `delete`-based cleanup function will be used to destroy any thread-local objects when `reset()` is called, or the thread
+exits.]]
+
+[[Throws:] [`boost::thread_resource_error` if an error occurs.]]
+
+]
+
+[endsect]
+
+[section:constructor_with_custom_cleanup `explicit thread_specific_ptr(void (*cleanup_function)(T*));`]
+
+[variablelist
+
+[[Requires:] [`cleanup_function(this->get())` does not throw any exceptions.]]
+
+[[Effects:] [Construct a `thread_specific_ptr` object for storing a pointer to an object of type `T` specific to each thread. The
+supplied `cleanup_function` will be used to destroy any thread-local objects when `reset()` is called, or the thread exits.]]
+
+[[Throws:] [`boost::thread_resource_error` if an error occurs.]]
+
+]
+
+[endsect]
+
+[section:destructor `~thread_specific_ptr();`]
+
+[variablelist
+
+[[Effects:] [Calls `this->reset()` to clean up the associated value for the current thread, and destroys `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[note Care needs to be taken to ensure that any threads still running after an instance of `boost::thread_specific_ptr` has been
+destroyed do not call any member functions on that instance.]
+
+[endsect]
+
+[section:get `T* get() const;`]
+
+[variablelist
+
+[[Returns:] [The pointer associated with the current thread.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[note The initial value associated with an instance of `boost::thread_specific_ptr` is `NULL` for each thread.]
+
+[endsect]
+
+[section:operator_arrow `T* operator->() const;`]
+
+[variablelist
+
+[[Returns:] [`this->get()`]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:operator_star `T& operator*() const;`]
+
+[variablelist
+
+[[Requires:] [`this->get()` is not `NULL`.]]
+
+[[Returns:] [`*(this->get())`]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:reset `void reset(T* new_value=0);`]
+
+[variablelist
+
+[[Effects:] [If `this->get()!=new_value` and `this->get()` is non-`NULL`, invoke `delete this->get()` or
+`cleanup_function(this->get())` as appropriate. Store `new_value` as the pointer associated with the current thread.]]
+
+[[Postcondition:] [`this->get()==new_value`]]
+
+[[Throws:] [`boost::thread_resource_error` if an error occurs.]]
+
+]
+
+[endsect]
+
+[section:release `T* release();`]
+
+[variablelist
+
+[[Effects:] [Return `this->get()` and store `NULL` as the pointer associated with the current thread without invoking the cleanup
+function.]]
+
+[[Postcondition:] [`this->get()==0`]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+
+[endsect]
+
+[endsect]
diff --git a/doc/xtime-ref.xml b/doc/xtime-ref.xml deleted file mode 100644 index 566a28d1..00000000 --- a/doc/xtime-ref.xml +++ /dev/null @@ -1,82 +0,0 @@ - - - %thread.entities; -]> - -
- - - - - - Specifies the clock type to use when creating - an object of type xtime. - - - - The only clock type supported by &Boost.Thread; is - TIME_UTC. The epoch for TIME_UTC - is 1970-01-01 00:00:00. - - - - - - An object of type xtime - defines a time that is used to perform high-resolution time operations. - This is a temporary solution that will be replaced by a more robust time - library once available in Boost. - - - - The xtime type is used to represent a point on - some time scale or a duration in time. This type may be proposed for the C standard by - Markus Kuhn. &Boost.Thread; provides only a very minimal implementation of this - proposal; it is expected that a full implementation (or some other time - library) will be provided in Boost as a separate library, at which time &Boost.Thread; - will deprecate its own implementation. - - Note that the resolution is - implementation specific. For many implementations the best resolution - of time is far more than one nanosecond, and even when the resolution - is reasonably good, the latency of a call to xtime_get() - may be significant. For maximum portability, avoid durations of less than - one second. - - - - - int - - - xtime* - - - - int - - - - xtp represents the current point in - time as a duration since the epoch specified by - clock_type. - - - - clock_type if successful, otherwise 0. - - - - - - platform-specific-type - - - -