
New documentation for thread library imported from trunk revision 43671

[SVN r43674]
Anthony Williams
2008-03-17 13:59:17 +00:00
parent 30bb6143c1
commit 413c29a5e4
38 changed files with 3042 additions and 6636 deletions


@@ -1,11 +1,55 @@
# Copyright (C) 2001-2003
# William E. Kempf
# (C) Copyright 2008 Anthony Williams
#
# Distributed under the Boost Software License, Version 1.0. (See accompanying
# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
import toolset ;
toolset.using doxygen ;
path-constant boost-images : ../../../doc/src/images ;
boostbook thread : thread.xml ;
xml thread : thread.qbk ;
boostbook standalone
:
thread
:
# HTML options first:
# Use graphics not text for navigation:
<xsl:param>navig.graphics=1
# How far down we chunk nested sections, basically all of them:
<xsl:param>chunk.section.depth=3
# Don't put the first section on the same page as the TOC:
<xsl:param>chunk.first.sections=1
# How far down sections get TOC's
<xsl:param>toc.section.depth=10
# Max depth in each TOC:
<xsl:param>toc.max.depth=3
# How far down we go with TOC's
<xsl:param>generate.section.toc.level=10
# Path for links to Boost:
<xsl:param>boost.root=../../../..
# Path for libraries index:
<xsl:param>boost.libraries=../../../../libs/libraries.htm
# Use the main Boost stylesheet:
<xsl:param>html.stylesheet=../../../../doc/html/boostbook.css
# PDF Options:
# TOC Generation: this is needed for FOP-0.9 and later:
#<xsl:param>fop1.extensions=1
# Or enable this if you're using XEP:
<xsl:param>xep.extensions=1
# TOC generation: this is needed for FOP 0.2, but must not be set to zero for FOP-0.9!
<xsl:param>fop.extensions=0
# No indent on body text:
<xsl:param>body.start.indent=0pt
# Margin size:
<xsl:param>page.margin.inner=0.5in
# Margin size:
<xsl:param>page.margin.outer=0.5in
# Yes, we want graphics for admonishments:
<xsl:param>admon.graphics=1
# Set this one for PDF generation *only*:
# default png graphics are awful in PDF form,
# better use SVG's instead:
<format>pdf:<xsl:param>admon.graphics.extension=".svg"
<format>pdf:<xsl:param>admon.graphics.path=$(boost-images)/
;

doc/acknowledgements.qbk (new file, 16 lines)

@@ -0,0 +1,16 @@
[section:acknowledgements Acknowledgments]
The original implementation of __boost_thread__ was written by William Kempf, with contributions from numerous others. This new
version initially grew out of an attempt to rewrite __boost_thread__ to William Kempf's design with fresh code that could be
released under the Boost Software License. However, as the C++ Standards committee have been actively discussing standardizing a
thread library for C++, this library has evolved to reflect the proposals, whilst retaining as much backwards-compatibility as
possible.
Particular thanks must be given to Roland Schwarz, who contributed a lot of time and code to the original __boost_thread__ library,
and who has been actively involved with the rewrite. The scheme for dividing the platform-specific implementations into separate
directories was devised by Roland, and his input has contributed greatly to improving the quality of the current implementation.
Thanks also must go to Peter Dimov, Howard Hinnant, Alexander Terekhov, Chris Thomasson and others for their comments on the
implementation details of the code.
[endsect]


@@ -1,73 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.acknowledgements"
last-revision="$Date$">
<title>Acknowledgements</title>
<para>William E. Kempf was the architect, designer, and implementor of
&Boost.Thread;.</para>
<para>Mac OS Carbon implementation written by Mac Murrett.</para>
<para>Dave Moore provided initial submissions and further comments on the
<code>barrier</code>, <code>thread_pool</code>, <code>read_write_mutex</code>,
<code>read_write_try_mutex</code> and <code>read_write_timed_mutex</code>
classes.</para>
<para>Important contributions were also made by Jeremy Siek (lots of input
on the design and on the implementation), Alexander Terekhov (lots of input
on the Win32 implementation, especially in regards to boost::condition, as
well as a lot of explanation of POSIX behavior), Greg Colvin (lots of input
on the design), Paul Mclachlan, Thomas Matelich and Iain Hanson (for help
in trying to get the build to work on other platforms), and Kevin S. Van
Horn (for several updates/corrections to the documentation).</para>
<para>Mike Glassford finished changes to &Boost.Thread; that were begun
by William Kempf and moved them into the main CVS branch.
He also addressed a number of issues that were brought up on the Boost
developer's mailing list and provided some additions and changes to the
read_write_mutex and related classes.</para>
<para>The documentation was written by William E. Kempf. Beman Dawes
provided additional documentation material and editing.
Mike Glassford finished William Kempf's conversion of the documentation to
BoostBook format and added a number of new sections.</para>
<para>Discussions on the boost.org mailing list were essential in the
development of &Boost.Thread;. As of August 1, 2001, participants included
Alan Griffiths, Albrecht
Fritzsche, Aleksey Gurtovoy, Alexander Terekhov, Andrew Green, Andy Sawyer,
Asger Alstrup Nielsen, Beman Dawes, Bill Klein, Bill Rutiser, Bill Wade,
Branko Čibej, Brent Verner, Craig Henderson, Csaba Szepesvari,
Dale Peakall, Damian Dixon, Dan Nuffer, Darryl Green, Daryle Walker, David
Abrahams, David Allan Finch, Dejan Jelovic, Dietmar Kuehl, Douglas Gregor,
Duncan Harris, Ed Brey, Eric Swanson, Eugene Karpachov, Fabrice Truillot,
Frank Gerlach, Gary Powell, Gernot Neppert, Geurt Vos, Ghazi Ramadan, Greg
Colvin, Gregory Seidman, HYS, Iain Hanson, Ian Bruntlett, J Panzer, Jeff
Garland, Jeff Paquette, Jens Maurer, Jeremy Siek, Jesse Jones, Joe Gottman,
John (EBo) David, John Bandela, John Maddock, John Max Skaller, John
Panzer, Jon Jagger , Karl Nelson, Kevlin Henney, KG Chandrasekhar, Levente
Farkas, Lie-Quan Lee, Lois Goldthwaite, Luis Pedro Coelho, Marc Girod, Mark
A. Borgerding, Mark Rodgers, Marshall Clow, Matthew Austern, Matthew Hurd,
Michael D. Crawford, Michael H. Cox , Mike Haller, Miki Jovanovic, Nathan
Myers, Paul Moore, Pavel Cisler, Peter Dimov, Petr Kocmid, Philip Nash,
Rainer Deyke, Reid Sweatman, Ross Smith, Scott McCaskill, Shalom Reich,
Steve Cleary, Steven Kirk, Thomas Holenstein, Thomas Matelich, Trevor
Perrin, Valentin Bonnard, Vesa Karvonen, Wayne Miller, and William
Kempf.</para>
<para>
As of February 2006, Anthony Williams and Roland Schwarz took over maintenance
and further development of the library after it had been in an orphaned state
for a rather long period of time.
</para>
<para>Apologies to anyone inadvertently missed.</para>
</section>


@@ -1,82 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/barrier.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="barrier">
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<purpose>
<para>An object of class <classname>barrier</classname> is a synchronization
primitive used to cause a set of threads to wait until they each perform a
certain function or each reach a particular point in their execution.</para>
</purpose>
<description>
<para>When a barrier is created, it is initialized with a thread count N.
The first N-1 calls to <code>wait()</code> will all cause their threads to be blocked.
The Nth call to <code>wait()</code> will allow all of the waiting threads, including
the Nth thread, to be placed in a ready state. The Nth call will also "reset"
the barrier such that, if an additional N+1th call is made to <code>wait()</code>,
it will be as though this were the first call to <code>wait()</code>; in other
words, the N+1th to 2N-1th calls to <code>wait()</code> will cause their
threads to be blocked, and the 2Nth call to <code>wait()</code> will allow all of
the waiting threads, including the 2Nth thread, to be placed in a ready state
and reset the barrier. This functionality allows the same set of N threads to re-use
a barrier object to synchronize their execution at multiple points during their
execution.</para>
<para>See <xref linkend="thread.glossary"/> for definitions of thread
states <link linkend="thread.glossary.thread-state">blocked</link>
and <link linkend="thread.glossary.thread-state">ready</link>.
Note that "waiting" is a synonym for blocked.</para>
</description>
<constructor>
<parameter name="count">
<paramtype>size_t</paramtype>
</parameter>
<effects><simpara>Constructs a <classname>barrier</classname> object that
will cause <code>count</code> threads to block on a call to <code>wait()</code>.
</simpara></effects>
</constructor>
<destructor>
<effects><simpara>Destroys <code>*this</code>. If threads are still executing
their <code>wait()</code> operations, the behavior for these threads is undefined.
</simpara></effects>
</destructor>
<method-group name="waiting">
<method name="wait">
<type>bool</type>
<effects><simpara>Wait until N threads call <code>wait()</code>, where
N equals the <code>count</code> provided to the constructor for the
barrier object.</simpara>
<simpara><emphasis role="bold">Note</emphasis> that if the barrier is
destroyed before <code>wait()</code> can return, the behavior is
undefined.</simpara></effects>
<returns>Exactly one of the N threads will receive a return value
of <code>true</code>, the others will receive a value of <code>false</code>.
Precisely which thread receives the return value of <code>true</code> will
be implementation-defined. Applications can use this value to designate one
thread as a leader that will take a certain action, and the other threads
emerging from the barrier can wait for that action to take place.</returns>
</method>
</method-group>
</class>
</namespace>
</header>

doc/barrier.qbk (new file, 65 lines)

@@ -0,0 +1,65 @@
[section:barriers Barriers]
A barrier is a simple concept. Also known as a ['rendezvous], it is a synchronization point between multiple threads. The barrier is
configured for a particular number of threads (`n`), and as threads reach the barrier they must wait until all `n` threads have
arrived. Once the `n`-th thread has reached the barrier, all the waiting threads can proceed, and the barrier is reset.
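For illustration only, here is a minimal sketch of this usage pattern; the thread count of 3 and the `worker` function are hypothetical:

#include <boost/thread/thread.hpp>
#include <boost/thread/barrier.hpp>

boost::barrier rendezvous(3); // barrier configured for 3 threads

void worker()
{
    // ... per-thread set-up work ...
    rendezvous.wait(); // block until all 3 threads have reached this point
    // ... every thread is now known to have finished its set-up ...
}

int main()
{
    boost::thread t1(worker);
    boost::thread t2(worker);
    boost::thread t3(worker);
    t1.join();
    t2.join();
    t3.join();
}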
[section:barrier Class `barrier`]
#include <boost/thread/barrier.hpp>
class barrier
{
public:
barrier(unsigned int count);
~barrier();
bool wait();
};
Instances of __barrier__ are not copyable or movable.
[heading Constructor]
barrier(unsigned int count);
[variablelist
[[Effects:] [Construct a barrier for `count` threads.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[heading Destructor]
~barrier();
[variablelist
[[Precondition:] [No threads are waiting on `*this`.]]
[[Effects:] [Destroys `*this`.]]
[[Throws:] [Nothing.]]
]
[heading Member function `wait`]
bool wait();
[variablelist
[[Effects:] [Block until `count` threads have called `wait` on `*this`. When the `count`-th thread calls `wait`, all waiting threads
are unblocked, and the barrier is reset. ]]
[[Returns:] [`true` for exactly one thread from each batch of waiting threads, `false` otherwise.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[endsect]


@@ -1,234 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<bibliography id="thread.bibliography"
last-revision="$Date$">
<title>Bibliography</title>
<biblioentry id="thread.bib.AndrewsSchneider83">
<abbrev id="thread.bib.AndrewsSchneider83.abbrev">AndrewsSchneider83</abbrev>
<biblioset relation="journal">
<title>ACM Computing Surveys</title>
<volumenum>Vol. 15</volumenum>
<issuenum>No. 1</issuenum>
<date>March, 1983</date>
</biblioset>
<biblioset relation="article">
<authorgroup>
<author>
<firstname>Gregory</firstname>
<othername>R.</othername>
<surname>Andrews</surname>
</author>
<author>
<firstname>Fred</firstname>
<othername>B.</othername>
<surname>Schneider</surname>
</author>
</authorgroup>
<title>
<ulink
url="http://www.acm.org/pubs/citations/journals/surveys/1983-15-1/p3-andrews/"
>Concepts and Notations for Concurrent Programming</ulink>
</title>
</biblioset>
<para>Good general background reading. Includes descriptions of Path
Expressions, Message Passing, and Remote Procedure Call in addition to the
basics</para>
</biblioentry>
<biblioentry id="thread.bib.Boost">
<abbrev id="thread.bib.Boost.abbrev">Boost</abbrev>
<bibliomisc>The <emphasis>Boost</emphasis> world wide web site.
<ulink url="http://www.boost.org">http://www.boost.org</ulink></bibliomisc>
<para>&Boost.Thread; is one of many Boost libraries. The Boost web
site includes a great deal of documentation and general information which
applies to all Boost libraries. Current copies of the libraries including
documentation and test programs may be downloaded from the web
site.</para>
</biblioentry>
<biblioentry id="thread.bib.Hansen73">
<abbrev id="thread.bib.Hansen73.abbrev">Hansen73</abbrev>
<biblioset relation="journal">
<title>ACM Computing Surveys</title>
<volumenum>Vol. 5</volumenum>
<issuenum>No. 4</issuenum>
<date>December, 1973</date>
</biblioset>
<biblioset relation="article">
<author>
<firstname>Per Brinch</firstname>
<lastname>Hansen</lastname>
</author>
<title>
<ulink
url="http://www.acm.org/pubs/articles/journals/surveys/1973-5-4/p223-hansen/"
>Concurrent Programming Concepts</ulink>
</title>
</biblioset>
<para>"This paper describes the evolution of language features for
multiprogramming from event queues and semaphores to critical regions and
monitors." Includes analysis of why events are considered error-prone. Also
noteworthy because of an introductory quotation from Christopher Alexander;
Brinch Hansen was years ahead of others in recognizing pattern concepts
applied to software, too.</para>
</biblioentry>
<biblioentry id="thread.bib.Butenhof97">
<abbrev id="thread.bib.Butenhof97.abbrev">Butenhof97</abbrev>
<title>
<ulink url="http://cseng.aw.com/book/0,3828,0201633922,00.html"
>Programming with POSIX Threads </ulink>
</title>
<author>
<firstname>David</firstname>
<othername>R.</othername>
<surname>Butenhof</surname>
</author>
<publisher>Addison-Wesley</publisher>
<copyright><year>1997</year></copyright>
<isbn>ISBN: 0-201-63392-2</isbn>
<para>This is a very readable explanation of threads and how to use
them. Many of the insights given apply to all multithreaded programming, not
just POSIX Threads</para>
</biblioentry>
<biblioentry id="thread.bib.Hoare74">
<abbrev id="thread.bib.Hoare74.abbrev">Hoare74</abbrev>
<biblioset relation="journal">
<title>Communications of the ACM</title>
<volumenum>Vol. 17</volumenum>
<issuenum>No. 10</issuenum>
<date>October, 1974</date>
</biblioset>
<biblioset relation="article">
<title>
<ulink url="http://www.acm.org/classics/feb96/"
>Monitors: An Operating System Structuring Concept</ulink>
</title>
<author>
<firstname>C.A.R.</firstname>
<surname>Hoare</surname>
</author>
<pagenums>549-557</pagenums>
</biblioset>
<para>Hoare and Brinch Hansen's work on Monitors is the basis for reliable
multithreading patterns. This is one of the most often referenced papers in
all of computer science, and with good reason.</para>
</biblioentry>
<biblioentry id="thread.bib.ISO98">
<abbrev id="thread.bib.ISO98.abbrev">ISO98</abbrev>
<title>
<ulink url="http://www.ansi.org">Programming Language C++</ulink>
</title>
<orgname>ISO/IEC</orgname>
<releaseinfo>14882:1998(E)</releaseinfo>
<para>This is the official C++ Standards document. Available from the ANSI
(American National Standards Institute) Electronic Standards Store.</para>
</biblioentry>
<biblioentry id="thread.bib.McDowellHelmbold89">
<abbrev id="thread.bib.McDowellHelmbold89.abbrev">McDowellHelmbold89</abbrev>
<biblioset relation="journal">
<title>ACM Computing Surveys</title>
<volumenum>Vol. 21</volumenum>
<issuenum>No. 4</issuenum>
<date>December, 1989</date>
</biblioset>
<biblioset>
<author>
<firstname>Charles</firstname>
<othername>E.</othername>
<surname>McDowell</surname>
</author>
<author>
<firstname>David</firstname>
<othername>P.</othername>
<surname>Helmbold</surname>
</author>
<title>
<ulink
url="http://www.acm.org/pubs/citations/journals/surveys/1989-21-4/p593-mcdowell/"
>Debugging Concurrent Programs</ulink>
</title>
</biblioset>
<para>Identifies many of the unique failure modes and debugging difficulties
associated with concurrent programs.</para>
</biblioentry>
<biblioentry id="thread.bib.SchmidtPyarali">
<abbrev id="thread.bib.SchmidtPyarali.abbrev">SchmidtPyarali</abbrev>
<title>
<ulink url="http://www.cs.wustl.edu/~schmidt/win32-cv-1.html"
>Strategies for Implementing POSIX Condition Variables on Win32</ulink>
</title>
<authorgroup>
<author>
<firstname>Douglas</firstname>
<othername>C.</othername>
<surname>Schmidt</surname>
</author>
<author>
<firstname>Irfan</firstname>
<surname>Pyarali</surname>
</author>
</authorgroup>
<orgname>Department of Computer Science, Washington University, St. Louis,
Missouri</orgname>
<para>Rationale for understanding &Boost.Thread; condition
variables. Note that Alexander Terekhov found some bugs in the
implementation given in this article, so pthreads-win32 and &Boost.Thread;
are even more complicated yet.</para>
</biblioentry>
<biblioentry id="thread.bib.SchmidtStalRohnertBuschmann">
<abbrev
id="thread.bib.SchmidtStalRohnertBuschmann.abbrev">SchmidtStalRohnertBuschmann</abbrev>
<title>
<ulink
url="http://www.wiley.com/Corporate/Website/Objects/Products/0,9049,104671,00.html"
>Pattern-Oriented Software Architecture, Volume 2</ulink>
</title>
<subtitle>Patterns for Concurrent and Networked Objects</subtitle>
<titleabbrev>POSA2</titleabbrev>
<authorgroup>
<author>
<firstname>Douglas</firstname>
<othername>C.</othername>
<surname>Schmidt</surname>
</author>
<author>
<firstname>Michael</firstname>
<lastname>Stal</lastname>
</author>
<author>
<firstname>Hans</firstname>
<surname>Rohnert</surname>
</author>
<author>
<firstname>Frank</firstname>
<surname>Buschmann</surname>
</author>
</authorgroup>
<publisher>Wiley</publisher>
<copyright><year>2000</year></copyright>
<para>This is a very good explanation of how to apply several patterns
useful for concurrent programming. Among the patterns documented is the
Monitor Pattern mentioned frequently in the &Boost.Thread;
documentation.</para>
</biblioentry>
<biblioentry id="thread.bib.Stroustrup">
<abbrev id="thread.bib.Stroustrup.abbrev">Stroustrup</abbrev>
<title>
<ulink url="http://cseng.aw.com/book/0,3828,0201700735,00.html"
>The C++ Programming Language</ulink>
</title>
<edition>Special Edition</edition>
<publisher>Addison-Wesley</publisher>
<copyright><year>2000</year></copyright>
<isbn>ISBN: 0-201-70073-5</isbn>
<para>The first book a C++ programmer should own. Note that the 3rd edition
(and subsequent editions like the Special Edition) has been rewritten to
cover the ISO standard language and library.</para>
</biblioentry>
</bibliography>


@@ -1,137 +0,0 @@
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Copyright (c) 2007 Roland Schwarz
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.build" last-revision="$Date$">
<title>Build</title>
<para>
How you build the &Boost.Thread; libraries, and how you build your own applications
that use those libraries, are some of the most frequently asked questions. Build
processes are difficult to deal with in a portable manner. That's one reason
why &Boost.Thread; makes use of &Boost.Build;.
In general you should refer to the documentation for &Boost.Build;.
This document will only supply you with some simple usage examples for how to
use <emphasis>bjam</emphasis> to build and test &Boost.Thread;. In addition, this document
will try to explain the build requirements so that users may create their own
build processes (for instance, create an IDE specific project), both for building
and testing &Boost.Thread;, as well as for building their own projects using
&Boost.Thread;.
</para>
<section id="thread.build.building">
<title>Building the &Boost.Thread; Libraries</title>
<para>
How you build the &Boost.Thread; library depends on how you intend to use it. You have several options:
<itemizedlist>
<listitem>
Use it as a <link linkend="thread.build.precompiled">precompiled</link> library, possibly
with auto-linking, or from within an IDE.
</listitem>
<listitem>
Use it from a <link linkend="thread.build.bjam">&Boost.Build;</link> project.
</listitem>
<listitem>
Use it in <link linkend="thread.build.source">source</link> form.
</listitem>
</itemizedlist>
</para>
<section id="thread.build.precompiled">
<title>Precompiled</title>
<para>
Using the &Boost.Thread; library in precompiled form is the way to go if you want to
install the library to a standard place, from where your linker is able to resolve code
in binary form. You also will want this option if compile time is a concern. Multiple
variants are available, for different toolsets and build variants (debug/release).
The library files are named <emphasis>{lead}boost_thread{build-specific-tags}.{extension}</emphasis>,
where the build-specific-tags indicate the toolset used to build the library, whether it's
a debug or release build, what version of &Boost; was used, etc.; and the lead and extension
are the appropriate prefix and extension for a dynamic link library or static library on the platform
for which &Boost.Thread; is being built.
For instance, a debug build of the dynamic library built for Win32 with VC++ 7.1 using Boost 1.34 would
be named <emphasis>boost_thread-vc71-mt-gd-1_34.dll</emphasis>.
More information on this should be available from the &Boost.Build; documentation.
</para>
<para>
Building should be possible with the default configuration. If you run into problems,
however, you may need to adjust your local &Boost.Build; settings. Typically this means
updating your user-config.jam file to reflect your environment, i.e. the toolsets you use. Please
refer to the &Boost.Build; documentation to learn how to do this.
</para>
<para>
To create the libraries you need to open a command shell and change to the
<emphasis>boost_root</emphasis> directory. From there you give the command
<programlisting>bjam --toolset=<emphasis>mytoolset</emphasis> stage --with-thread</programlisting>
Replace <emphasis>mytoolset</emphasis> with the name of your toolset, e.g. msvc-7.1.
This will compile and put the libraries into the <emphasis>stage</emphasis> directory which is just below the
<emphasis>boost_root</emphasis> directory. &Boost.Build; by default will generate static and
dynamic variants for debug and release.
</para>
<note>
If you invoke the above command without the --with-thread switch, &Boost.Build; will build the
whole Boost distribution, including &Boost.Thread;.
</note>
<para>
The next step is to copy your libraries to a place where your linker is able to pick them up.
It is also quite possible to leave them in the stage directory and instruct your IDE to take them
from there.
</para>
<para>
In your IDE you then need to add <emphasis>boost_root</emphasis>/boost to the paths where the compiler
expects to find files to be included. For toolsets that support <emphasis>auto-linking</emphasis>
it is not necessary to explicitly specify the name of the library to link against; it is sufficient
to specify the path of the stage directory. Typically this is true on Windows. For gcc you need
to specify the exact library name (including all the tags). Please don't forget that threading
support must be turned on to be able to use the library. You should be able now to build your
project from the IDE.
</para>
</section>
<section id="thread.build.bjam">
<title>&Boost.Build; Project</title>
<para>
If you have decided to use &Boost.Build; as a build environment for your application, you simply
need to add a single line to your <emphasis>Jamroot</emphasis> file:
<programlisting>use-project /boost : {path-to-boost-root} ;</programlisting>
where <emphasis>{path-to-boost-root}</emphasis> needs to be replaced with the location of
your copy of the boost tree.
Later when you specify a component that needs to link against &Boost.Thread; you specify this
as e.g.:
<programlisting>exe myapp : {myappsources} /boost//thread ;</programlisting>
and you are done.
</para>
</section>
<section id="thread.build.source">
<title>Source Form</title>
<para>
Of course it is also possible to use the &Boost.Thread; library in source form.
First you need to specify the <emphasis>boost_root</emphasis>/boost directory as
a path where your compiler expects to find files to include. It is not easy
to isolate the &Boost.Thread; include files from the rest of the boost
library though. You would also need to isolate every include file that the thread
library depends on. Next you need to copy the files from
<emphasis>boost_root</emphasis>/libs/thread/src to your project and instruct your
build system to compile them together with your project. Please look into the
<emphasis>Jamfile</emphasis> in <emphasis>boost_root</emphasis>/libs/thread/build
to find out which compiler options and defines you will need to get a clean compile.
Using the boost library in this way is the least recommended option, and should only be
considered if avoiding a dependency on &Boost.Build; is a requirement. Even then,
it might be better to use the library in its precompiled form.
Precompiled downloads are available from the Boost Consulting web site, or as
part of most Linux distributions.
</para>
</section>
</section>
<section id="thread.build.testing">
<title>Testing the &Boost.Thread; Libraries</title>
<para>
To test the &Boost.Thread; libraries using &Boost.Build;, simply change to the
directory <emphasis>boost_root</emphasis>/libs/thread/test and execute the command:
<programlisting>bjam --toolset=<emphasis>mytoolset</emphasis> test</programlisting>
</para>
</section>
</section>

doc/changes.qbk (new file, 50 lines)

@@ -0,0 +1,50 @@
[section:changes Changes since boost 1.34]
Almost every line of code in __boost_thread__ has been changed since the 1.34 release of boost. However, most of the interface
changes have been extensions, so the new code is largely backwards-compatible with the old code. The new features and breaking
changes are described below.
[heading New Features]
* Instances of __thread__ and of the various lock types are now movable.
* Threads can be interrupted at __interruption_points__.
* Condition variables can now be used with any type that implements the __lockable_concept__, through the use of
`boost::condition_variable_any` (`boost::condition` is a `typedef` to `boost::condition_variable_any`, provided for backwards
compatibility). `boost::condition_variable` is provided as an optimization, and will only work with
`boost::unique_lock<boost::mutex>` (`boost::mutex::scoped_lock`).
* Thread IDs are separated from __thread__, so a thread can obtain its own ID (using `boost::this_thread::get_id()`), and IDs can
be used as keys in associative containers, as they have the full set of comparison operators.
* Timeouts are now implemented using the Boost DateTime library, through a typedef `boost::system_time` for absolute timeouts, and
with support for relative timeouts in many cases. `boost::xtime` is supported for backwards compatibility only.
* Locks are implemented as publicly accessible templates `boost::lock_guard`, `boost::unique_lock`, `boost::shared_lock`, and
`boost::upgrade_lock`, which are templated on the type of the mutex. The __lockable_concept__ has been extended to include publicly
available __lock_ref__ and __unlock_ref__ member functions, which are used by the lock types.
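The following minimal sketch illustrates a few of the features listed above, namely the public `boost::lock_guard` template and the new thread-ID facilities; the `greet` function and the messages are purely illustrative:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>
#include <iostream>

boost::mutex io_mutex;

void greet() // hypothetical thread function
{
    boost::lock_guard<boost::mutex> guard(io_mutex); // publicly available lock template
    std::cout << "hello from thread " << boost::this_thread::get_id() << std::endl;
}

int main()
{
    boost::thread worker(greet);
    std::cout << "spawned thread " << worker.get_id() << std::endl; // thread IDs are ordinary comparable values
    worker.join();
}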
[heading Breaking Changes]
The list below should cover all changes to the public interface which break backwards compatibility.
* __try_mutex__ has been removed, and the functionality subsumed into __mutex__. __try_mutex__ is left as a `typedef`,
but is no longer a separate class.
* __recursive_try_mutex__ has been removed, and the functionality subsumed into
__recursive_mutex__. __recursive_try_mutex__ is left as a `typedef`, but is no longer a separate class.
* `boost::detail::thread::lock_ops` has been removed. Code that relies on this implementation detail will no longer work;
`lock_ops` is no longer necessary now that mutex types have public __lock_ref__ and __unlock_ref__ member
functions.
* `scoped_lock` constructors with a second parameter of type `bool` are no longer provided. With previous boost releases,
``boost::mutex::scoped_lock some_lock(some_mutex,false);`` could be used to create a lock object that was associated with a mutex,
but did not lock it on construction. This facility has now been replaced with the constructor that takes a
`boost::defer_lock_type` as the second parameter: ``boost::mutex::scoped_lock some_lock(some_mutex,boost::defer_lock);``
* The broken `boost::read_write_mutex` has been replaced with __shared_mutex__.
[endsect]

File diff suppressed because it is too large


@@ -1,196 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/condition.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="condition">
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<purpose>
<para>An object of class <classname>condition</classname> is a
synchronization primitive used to cause a thread to wait until a
particular shared-data condition (or time) is met.</para>
</purpose>
<description>
<para>A <classname>condition</classname> object is always used in
conjunction with a <link linkend="thread.concepts.mutexes">mutex</link>
object (an object whose type is a model of a <link
linkend="thread.concepts.Mutex">Mutex</link> or one of its
refinements). The mutex object must be locked prior to waiting on the
condition, which is verified by passing a lock object (an object whose
type is a model of <link linkend="thread.concepts.Lock">Lock</link> or
one of its refinements) to the <classname>condition</classname> object's
wait functions. Upon blocking on the <classname>condition</classname>
object, the thread unlocks the mutex object. When the thread returns
from a call to one of the <classname>condition</classname> object's wait
functions the mutex object is again locked. The tricky unlock/lock
sequence is performed automatically by the
<classname>condition</classname> object's wait functions.</para>
<para>The <classname>condition</classname> type is often used to
implement the Monitor Object and other important patterns (see
&cite.SchmidtStalRohnertBuschmann; and &cite.Hoare74;). Monitors are one
of the most important patterns for creating reliable multithreaded
programs.</para>
<para>See <xref linkend="thread.glossary"/> for definitions of <link
linkend="thread.glossary.thread-state">thread states</link>
blocked and ready. Note that "waiting" is a synonym for blocked.</para>
</description>
<constructor>
<effects><simpara>Constructs a <classname>condition</classname>
object.</simpara></effects>
</constructor>
<destructor>
<effects><simpara>Destroys <code>*this</code>.</simpara></effects>
</destructor>
<method-group name="notification">
<method name="notify_one">
<type>void</type>
<effects><simpara>If there is a thread waiting on <code>*this</code>,
change that thread's state to ready. Otherwise there is no
effect.</simpara></effects>
<notes><simpara>If more than one thread is waiting on <code>*this</code>,
it is unspecified which is made ready. After returning to a ready
state the notified thread must still acquire the mutex again (which
occurs within the call to one of the <classname>condition</classname>
object's wait functions.)</simpara></notes>
</method>
<method name="notify_all">
<type>void</type>
<effects><simpara>Change the state of all threads waiting on
<code>*this</code> to ready. If there are no waiting threads,
<code>notify_all()</code> has no effect.</simpara></effects>
</method>
</method-group>
<method-group name="waiting">
<method name="wait">
<template>
<template-type-parameter name="ScopedLock"/>
</template>
<type>void</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements.</simpara></requires>
<effects><simpara>Releases the lock on the <link
linkend="thread.concepts.mutexes">mutex object</link>
associated with <code>lock</code>, blocks the current thread of execution
until readied by a call to <code>this->notify_one()</code>
or <code>this->notify_all()</code>, and then reacquires the
lock.</simpara></effects>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
<method name="wait">
<template>
<template-type-parameter name="ScopedLock"/>
<template-type-parameter name="Pred"/>
</template>
<type>void</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<parameter name="pred">
<paramtype>Pred</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements and the return from <code>pred()</code> is
convertible to <code>bool</code>.</simpara></requires>
<effects><simpara>As if: <code>while (!pred())
wait(lock)</code></simpara></effects>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
<method name="timed_wait">
<template>
<template-type-parameter name="ScopedLock"/>
</template>
<type>bool</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<parameter name="xt">
<paramtype>const <classname>boost::xtime</classname>&amp;</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements.</simpara></requires>
<effects><simpara>Releases the lock on the <link
linkend="thread.concepts.mutexes">mutex object</link>
associated with <code>lock</code>, blocks the current thread of execution
until readied by a call to <code>this->notify_one()</code>
or <code>this->notify_all()</code>, or until time <code>xt</code>
is reached, and then reacquires the lock.</simpara></effects>
<returns><simpara><code>false</code> if time <code>xt</code> is reached,
otherwise <code>true</code>.</simpara></returns>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
<method name="timed_wait">
<template>
<template-type-parameter name="ScopedLock"/>
<template-type-parameter name="Pred"/>
</template>
<type>bool</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<parameter name="xt">
<paramtype>const <classname>boost::xtime</classname>&amp;</paramtype>
</parameter>
<parameter name="pred">
<paramtype>Pred</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements and the return from <code>pred()</code> is
convertible to <code>bool</code>.</simpara></requires>
<effects><simpara>As if: <code>while (!pred()) { if (!timed_wait(lock,
xt)) return false; } return true;</code></simpara></effects>
<returns><simpara><code>false</code> if <code>xt</code> is reached,
otherwise <code>true</code>.</simpara></returns>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
</method-group>
</class>
</namespace>
</header>

doc/condition_variables.qbk (new file, 494 lines)

@@ -0,0 +1,494 @@
[section:condvar_ref Condition Variables]
[heading Synopsis]
The classes `condition_variable` and `condition_variable_any` provide a
mechanism for one thread to wait for notification from another thread that a
particular condition has become true. The general usage pattern is that one
thread locks a mutex and then calls `wait` on an instance of
`condition_variable` or `condition_variable_any`. When the thread is woken from
the wait, then it checks to see if the appropriate condition is now true, and
continues if so. If the condition is not true, the thread calls `wait`
again to resume waiting. In the simplest case, this condition is just a boolean
variable:
boost::condition_variable cond;
boost::mutex mut;
bool data_ready;
void process_data();
void wait_for_data_to_process()
{
boost::unique_lock<boost::mutex> lock(mut);
while(!data_ready)
{
cond.wait(lock);
}
process_data();
}
Notice that the `lock` is passed to `wait`: `wait` will atomically add the
thread to the set of threads waiting on the condition variable, and unlock the
mutex. When the thread is woken, the mutex will be locked again before the call
to `wait` returns. This allows other threads to acquire the mutex in order to
update the shared data, and ensures that the data associated with the condition
is correctly synchronized.
In the meantime, another thread sets the condition to `true`, and then calls
either `notify_one` or `notify_all` on the condition variable to wake one
waiting thread or all the waiting threads respectively.
void retrieve_data();
void prepare_data();
void prepare_data_for_processing()
{
retrieve_data();
prepare_data();
{
boost::lock_guard<boost::mutex> lock(mut);
data_ready=true;
}
cond.notify_one();
}
Note that the same mutex is locked before the shared data is updated, but that
the mutex does not have to be locked across the call to `notify_one`.
This example uses an object of type `condition_variable`, but would work just as
well with an object of type `condition_variable_any`: `condition_variable_any`
is more general, and will work with any kind of lock or mutex, whereas
`condition_variable` requires that the lock passed to `wait` is an instance of
`boost::unique_lock<boost::mutex>`. This enables `condition_variable` to make
optimizations in some cases, based on the knowledge of the mutex type;
`condition_variable_any` typically has a more complex implementation than
`condition_variable`.
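As an illustration of that extra generality, the following sketch waits on a `condition_variable_any` using a lock over a `boost::recursive_mutex` rather than a plain `boost::mutex`; the choice of mutex and the `ready` flag are just examples:

boost::condition_variable_any any_cond;
boost::recursive_mutex rec_mut;
bool ready;

void wait_until_ready()
{
    boost::unique_lock<boost::recursive_mutex> lock(rec_mut);
    while(!ready)
    {
        any_cond.wait(lock); // condition_variable_any accepts any lock type
    }
}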
[section:condition_variable Class `condition_variable`]
namespace boost
{
class condition_variable
{
public:
condition_variable();
~condition_variable();
void notify_one();
void notify_all();
void wait(boost::unique_lock<boost::mutex>& lock);
template<typename predicate_type>
void wait(boost::unique_lock<boost::mutex>& lock,predicate_type predicate);
bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time);
template<typename duration_type>
bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time);
template<typename predicate_type>
bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time,predicate_type predicate);
template<typename duration_type,typename predicate_type>
bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time,predicate_type predicate);
// backwards compatibility
bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::xtime const& abs_time);
template<typename predicate_type>
bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::xtime const& abs_time,predicate_type predicate);
};
}
[section:constructor `condition_variable()`]
[variablelist
[[Effects:] [Constructs an object of class `condition_variable`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:destructor `~condition_variable()`]
[variablelist
[[Precondition:] [All threads waiting on `*this` have been notified by a call to
`notify_one` or `notify_all` (though the respective calls to `wait` or
`timed_wait` need not have returned).]]
[[Effects:] [Destroys the object.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:notify_one `void notify_one()`]
[variablelist
[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
to `wait` or `timed_wait`, unblocks one of those threads.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:notify_all `void notify_all()`]
[variablelist
[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
to `wait` or `timed_wait`, unblocks all of those threads.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:wait `void wait(boost::unique_lock<boost::mutex>& lock)`]
[variablelist
[[Precondition:] [`lock` is locked by the current thread, and either no other
thread is currently waiting on `*this`, or the execution of the `mutex()` member
function on the `lock` objects supplied in the calls to `wait` or `timed_wait`
in all the threads currently waiting on `*this` would return the same value as
`lock->mutex()` for this call to `wait`.]]
[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
thread will unblock when notified by a call to `this->notify_one()` or
`this->notify_all()`, or spuriously. When the thread is unblocked (for whatever
reason), the lock is reacquired by invoking `lock.lock()` before the call to
`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
function exits with an exception.]]
[[Postcondition:] [`lock` is locked by the current thread.]]
[[Throws:] [__thread_resource_error__ if an error
occurs. __thread_interrupted__ if the wait was interrupted by a call to
__interrupt__ on the __thread__ object associated with the current thread of execution.]]
]
[endsect]
[section:wait_predicate `template<typename predicate_type> void wait(boost::unique_lock<boost::mutex>& lock, predicate_type pred)`]
[variablelist
[[Effects:] [As-if ``
while(!pred())
{
wait(lock);
}
``]]
]
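For illustration, the `wait_for_data_to_process` function from the example in the synopsis above could be written with this overload; `data_is_ready` is a hypothetical helper that returns the `data_ready` flag:

bool data_is_ready()
{
    return data_ready;
}

void wait_for_data_to_process()
{
    boost::unique_lock<boost::mutex> lock(mut);
    cond.wait(lock,&data_is_ready); // loops internally until the predicate returns true
    process_data();
}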
[endsect]
[section:timed_wait `bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time)`]
[variablelist
[[Precondition:] [`lock` is locked by the current thread, and either no other
thread is currently waiting on `*this`, or the execution of the `mutex()` member
function on the `lock` objects supplied in the calls to `wait` or `timed_wait`
in all the threads currently waiting on `*this` would return the same value as
`lock->mutex()` for this call to `wait`.]]
[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
thread will unblock when notified by a call to `this->notify_one()` or
`this->notify_all()`, when the time as reported by `boost::get_system_time()`
would be equal to or later than the specified `abs_time`, or spuriously. When
the thread is unblocked (for whatever reason), the lock is reacquired by
invoking `lock.lock()` before the call to `wait` returns. The lock is also
reacquired by invoking `lock.lock()` if the function exits with an exception.]]
[[Returns:] [`false` if the call is returning because the time specified by
`abs_time` was reached, `true` otherwise.]]
[[Postcondition:] [`lock` is locked by the current thread.]]
[[Throws:] [__thread_resource_error__ if an error
occurs. __thread_interrupted__ if the wait was interrupted by a call to
__interrupt__ on the __thread__ object associated with the current thread of execution.]]
]
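For illustration, a sketch of waiting with an absolute timeout built from `boost::get_system_time()`; the 500-millisecond figure is arbitrary, and `mut`, `cond`, `data_ready` and `process_data` are the names from the example in the synopsis above:

void wait_for_data_with_timeout()
{
    boost::system_time const timeout=boost::get_system_time()+boost::posix_time::milliseconds(500);
    boost::unique_lock<boost::mutex> lock(mut);
    while(!data_ready)
    {
        if(!cond.timed_wait(lock,timeout))
        {
            return; // timed out before data_ready became true
        }
    }
    process_data();
}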
[endsect]
[section:timed_wait_rel `template<typename duration_type> bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time)`]
[variablelist
[[Precondition:] [`lock` is locked by the current thread, and either no other
thread is currently waiting on `*this`, or the execution of the `mutex()` member
function on the `lock` objects supplied in the calls to `wait` or `timed_wait`
in all the threads currently waiting on `*this` would return the same value as
`lock->mutex()` for this call to `wait`.]]
[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
thread will unblock when notified by a call to `this->notify_one()` or
`this->notify_all()`, after the period of time indicated by the `rel_time`
argument has elapsed, or spuriously. When the thread is unblocked (for whatever
reason), the lock is reacquired by invoking `lock.lock()` before the call to
`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
function exits with an exception.]]
[[Returns:] [`false` if the call is returning because the time period specified
by `rel_time` has elapsed, `true` otherwise.]]
[[Postcondition:] [`lock` is locked by the current thread.]]
[[Throws:] [__thread_resource_error__ if an error
occurs. __thread_interrupted__ if the wait was interrupted by a call to
__interrupt__ on the __thread__ object associated with the current thread of execution.]]
]
[note The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.]
[endsect]
[section:timed_wait_predicate `template<typename predicate_type> bool timed_wait(boost::unique_lock<boost::mutex>& lock, boost::system_time const& abs_time, predicate_type pred)`]
[variablelist
[[Effects:] [As-if ``
while(!pred())
{
if(!timed_wait(lock,abs_time))
{
return pred();
}
}
return true;
``]]
]
[endsect]
[endsect]
[section:condition_variable_any Class `condition_variable_any`]
namespace boost
{
class condition_variable_any
{
public:
condition_variable_any();
~condition_variable_any();
void notify_one();
void notify_all();
template<typename lock_type>
void wait(lock_type& lock);
template<typename lock_type,typename predicate_type>
void wait(lock_type& lock,predicate_type predicate);
template<typename lock_type>
bool timed_wait(lock_type& lock,boost::system_time const& abs_time);
template<typename lock_type,typename duration_type>
bool timed_wait(lock_type& lock,duration_type const& rel_time);
template<typename lock_type,typename predicate_type>
bool timed_wait(lock_type& lock,boost::system_time const& abs_time,predicate_type predicate);
template<typename lock_type,typename duration_type,typename predicate_type>
bool timed_wait(lock_type& lock,duration_type const& rel_time,predicate_type predicate);
// backwards compatibility
template<typename lock_type>
bool timed_wait(lock_type& lock,boost::xtime const& abs_time);
template<typename lock_type,typename predicate_type>
bool timed_wait(lock_type& lock,boost::xtime const& abs_time,predicate_type predicate);
};
}
[section:constructor `condition_variable_any()`]
[variablelist
[[Effects:] [Constructs an object of class `condition_variable_any`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:destructor `~condition_variable_any()`]
[variablelist
[[Precondition:] [All threads waiting on `*this` have been notified by a call to
`notify_one` or `notify_all` (though the respective calls to `wait` or
`timed_wait` need not have returned).]]
[[Effects:] [Destroys the object.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:notify_one `void notify_one()`]
[variablelist
[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
to `wait` or `timed_wait`, unblocks one of those threads.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:notify_all `void notify_all()`]
[variablelist
[[Effects:] [If any threads are currently __blocked__ waiting on `*this` in a call
to `wait` or `timed_wait`, unblocks all of those threads.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:wait `template<typename lock_type> void wait(lock_type& lock)`]
[variablelist
[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
thread will unblock when notified by a call to `this->notify_one()` or
`this->notify_all()`, or spuriously. When the thread is unblocked (for whatever
reason), the lock is reacquired by invoking `lock.lock()` before the call to
`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
function exits with an exception.]]
[[Postcondition:] [`lock` is locked by the current thread.]]
[[Throws:] [__thread_resource_error__ if an error
occurs. __thread_interrupted__ if the wait was interrupted by a call to
__interrupt__ on the __thread__ object associated with the current thread of execution.]]
]
[endsect]
[section:wait_predicate `template<typename lock_type,typename predicate_type> void wait(lock_type& lock, predicate_type pred)`]
[variablelist
[[Effects:] [As-if ``
while(!pred())
{
wait(lock);
}
``]]
]
[endsect]
[section:timed_wait `template<typename lock_type> bool timed_wait(lock_type& lock,boost::system_time const& abs_time)`]
[variablelist
[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
thread will unblock when notified by a call to `this->notify_one()` or
`this->notify_all()`, when the time as reported by `boost::get_system_time()`
would be equal to or later than the specified `abs_time`, or spuriously. When
the thread is unblocked (for whatever reason), the lock is reacquired by
invoking `lock.lock()` before the call to `wait` returns. The lock is also
reacquired by invoking `lock.lock()` if the function exits with an exception.]]
[[Returns:] [`false` if the call is returning because the time specified by
`abs_time` was reached, `true` otherwise.]]
[[Postcondition:] [`lock` is locked by the current thread.]]
[[Throws:] [__thread_resource_error__ if an error
occurs. __thread_interrupted__ if the wait was interrupted by a call to
__interrupt__ on the __thread__ object associated with the current thread of execution.]]
]
[endsect]
[section:timed_wait_rel `template<typename lock_type,typename duration_type> bool timed_wait(lock_type& lock,duration_type const& rel_time)`]
[variablelist
[[Effects:] [Atomically calls `lock.unlock()` and blocks the current thread. The
thread will unblock when notified by a call to `this->notify_one()` or
`this->notify_all()`, after the period of time indicated by the `rel_time`
argument has elapsed, or spuriously. When the thread is unblocked (for whatever
reason), the lock is reacquired by invoking `lock.lock()` before the call to
`wait` returns. The lock is also reacquired by invoking `lock.lock()` if the
function exits with an exception.]]
[[Returns:] [`false` if the call is returning because the time period specified
by `rel_time` has elapsed, `true` otherwise.]]
[[Postcondition:] [`lock` is locked by the current thread.]]
[[Throws:] [__thread_resource_error__ if an error
occurs. __thread_interrupted__ if the wait was interrupted by a call to
__interrupt__ on the __thread__ object associated with the current thread of execution.]]
]
[note The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.]
[endsect]
[section:timed_wait_predicate `template<typename lock_type,typename predicate_type> bool timed_wait(lock_type& lock, boost::system_time const& abs_time, predicate_type pred)`]
[variablelist
[[Effects:] [As-if ``
while(!pred())
{
if(!timed_wait(lock,abs_time))
{
return pred();
}
}
return true;
``]]
]
[endsect]
[endsect]
[section:condition Typedef `condition`]
typedef condition_variable_any condition;
The typedef `condition` is provided for backwards compatibility with previous boost releases.
[endsect]
[endsect]


@@ -1,96 +0,0 @@
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.configuration" last-revision="$Date$">
<title>Configuration</title>
<para>&Boost.Thread; uses several configuration macros in &lt;boost/config.hpp&gt;,
as well as configuration macros meant to be supplied by the application. These
macros are documented here.
</para>
<section id="thread.configuration.public">
<title>Library Defined Public Macros</title>
<para>
These macros are defined by &Boost.Thread; but are expected to be used
by application code.
</para>
<informaltable>
<tgroup cols="2">
<thead>
<row>
<entry>Macro</entry>
<entry>Meaning</entry>
</row>
</thead>
<tbody>
<row>
<entry>BOOST_HAS_THREADS</entry>
<entry>
Indicates that threading support is available. This means both that there
is a platform specific implementation for &Boost.Thread; and that
threading support has been enabled in a platform specific manner. For instance,
on the Win32 platform there&#39;s an implementation for &Boost.Thread;
but unless the program is compiled against one of the multithreading runtimes
(often determined by the compiler predefining the macro _MT) the BOOST_HAS_THREADS
macro remains undefined.
</entry>
</row>
</tbody>
</tgroup>
</informaltable>
</section>
<section id="thread.configuration.implementation">
<title>Library Defined Implementation Macros</title>
<para>
These macros are defined by &Boost.Thread; and are implementation details
of interest only to implementors.
</para>
<informaltable>
<tgroup cols="2">
<thead>
<row>
<entry>Macro</entry>
<entry>Meaning</entry>
</row>
</thead>
<tbody>
<row>
<entry>BOOST_HAS_WINTHREADS</entry>
<entry>
Indicates that the platform has the Microsoft Win32 threading libraries,
and that they should be used to implement &Boost.Thread;.
</entry>
</row>
<row>
<entry>BOOST_HAS_PTHREADS</entry>
<entry>
Indicates that the platform has the POSIX pthreads libraries, and that
they should be used to implement &Boost.Thread;.
</entry>
</row>
<row>
<entry>BOOST_HAS_FTIME</entry>
<entry>
Indicates that the implementation should use GetSystemTimeAsFileTime()
and the FILETIME type to calculate the current time. This is an implementation
detail used by boost::detail::getcurtime().
</entry>
</row>
<row>
<entry>BOOST_HAS_GETTIMEOFDAY</entry>
<entry>
Indicates that the implementation should use gettimeofday() to calculate
the current time. This is an implementation detail used by boost::detail::getcurtime().
</entry>
</row>
</tbody>
</tgroup>
</informaltable>
</section>
</section>


@@ -1,159 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.design" last-revision="$Date$">
<title>Design</title>
<para>With client/server and three-tier architectures becoming commonplace
in today's world, it's becoming increasingly important for programs to be
able to handle parallel processing. Modern day operating systems usually
provide some support for this through native thread APIs. Unfortunately,
writing portable code that makes use of parallel processing in C++ is made
very difficult by a lack of a standard interface for these native APIs.
Further, these APIs are almost universally C APIs and fail to take
advantage of C++'s strengths, or to address concepts unique to C++, such as
exceptions.</para>
<para>The &Boost.Thread; library is an attempt to define a portable interface
for writing parallel processes in C++.</para>
<section id="thread.design.goals">
<title>Goals</title>
<para>The &Boost.Thread; library has several goals that should help to set
it apart from other solutions. These goals are listed in order of precedence
with full descriptions below.
<variablelist>
<varlistentry>
<term>Portability</term>
<listitem>
<para>&Boost.Thread; was designed to be highly portable. The goal is
for the interface to be easily implemented on any platform that
supports threads, and possibly even on platforms without native thread
support.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Safety</term>
<listitem>
<para>&Boost.Thread; was designed to be as safe as possible. Writing
<link linkend="thread.glossary.thread-safe">thread-safe</link>
code is very difficult and successful libraries must strive to
insulate the programmer from dangerous constructs as much as
possible. This is accomplished in several ways:
<itemizedlist>
<listitem>
<para>C++ language features are used to make correct usage easy
(if possible) and error-prone usage impossible or at least more
difficult. For example, see the <link
linkend="thread.concepts.Mutex">Mutex</link> and <link
linkend="thread.concepts.Lock">Lock</link> designs, and note
how they interact.</para>
</listitem>
<listitem>
<para>Certain traditional concurrent programming features are
considered so error-prone that they are not provided at all. For
example, see <xref linkend="thread.rationale.events" />.</para>
</listitem>
<listitem>
<para>Dangerous features, or features which may be misused, are
identified as such in the documentation to make users aware of
potential pitfalls.</para>
</listitem>
</itemizedlist></para>
</listitem>
</varlistentry>
<varlistentry>
<term>Flexibility</term>
<listitem>
<para>&Boost.Thread; was designed to be flexible. This goal is often
at odds with <emphasis>safety</emphasis>. Where functionality might be
compromised by the desire to keep the interface safe, &Boost.Thread;
still provides the functionality, but makes its use deliberately
awkward for general code. In other words, the interfaces have been
designed so that it is usually obvious when something is unsafe, and
the documentation explains why.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Efficiency</term>
<listitem>
<para>&Boost.Thread; was designed to be as efficient as
possible. When building a library on top of another library there is
always a danger that the result will be so much slower than the
"native" API that programmers are inclined to ignore the higher level
API. &Boost.Thread; was designed to minimize the chances of this
occurring. The interfaces have been crafted to allow an implementation
the greatest chance of being as efficient as possible. This goal is
often at odds with the goal for <emphasis>safety</emphasis>. Every
effort was made to ensure efficient implementations, but when in
conflict <emphasis>safety</emphasis> has always taken
precedence.</para>
</listitem>
</varlistentry>
</variablelist></para>
</section>
<section>
<title>Iterative Phases</title>
<para>Another goal of &Boost.Thread; was to take a dynamic, iterative
approach in its development. The computing industry is still exploring the
concepts of parallel programming. Most thread libraries supply only simple
primitive concepts for thread synchronization. These concepts are very
simple, but it is very difficult to use them safely or to provide formal
proofs for constructs built on top of them. There has been a lot of research
into other concepts, such as in "Communicating Sequential Processes."
&Boost.Thread; was designed in iterative steps, with each step providing
the building blocks necessary for the next step and giving the researcher
the tools necessary to explore new concepts in a portable manner.</para>
<para>Given the goal of following a dynamic, iterative approach,
&Boost.Thread; shall go through several growth cycles. Each phase in its
development shall be roughly documented here.</para>
</section>
<section>
<title>Phase 1, Synchronization Primitives</title>
<para>Boost is all about providing high quality libraries with
implementations for many platforms. Unfortunately, there's a big problem
faced by developers wishing to supply such high quality libraries, namely
thread-safety. The C++ standard doesn't address threads at all, but real
world programs often make use of native threading support. A portable
library that doesn't address the issue of thread-safety is therefore not
much help to a programmer who wants to use the library in his multithreaded
application. So there's a very great need for portable primitives that will
allow the library developer to create <link
linkend="thread.glossary.thread-safe">thread-safe</link>
implementations. This need far outweighs the need for portable methods to
create and manage threads.</para>
<para>Because of this need, the first phase of &Boost.Thread; focuses
solely on providing portable primitive concepts for thread
synchronization. Types provided in this phase include the
<classname>boost::mutex</classname>,
<classname>boost::try_mutex</classname>,
<classname>boost::timed_mutex</classname>,
<classname>boost::recursive_mutex</classname>,
<classname>boost::recursive_try_mutex</classname>,
<classname>boost::recursive_timed_mutex</classname>, and
<classname>boost::lock_error</classname>. These are considered the "core"
synchronization primitives, though there are others that will be added in
later phases.</para>
</section>
<section id="thread.design.phase2">
<title>Phase 2, Thread Management and Thread Specific Storage</title>
<para>This phase addresses the creation and management of threads and
provides a mechanism for thread specific storage (data associated with a
thread instance). Thread management is a tricky issue in C++, so this
phase addresses only the basic needs of a multithreaded program. Later
phases are likely to add additional functionality in this area. This
phase of &Boost.Thread; adds the <classname>boost::thread</classname> and
<classname>boost::thread_specific_ptr</classname> types. With these
additions the &Boost.Thread; library can be considered minimal but
complete.</para>
</section>
<section>
<title>The Next Phase</title>
<para>The next phase will address more advanced synchronization concepts,
such as read/write mutexes and barriers.</para>
</section>
</section>

View File

@@ -1,31 +0,0 @@
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<!ENTITY Boost "<emphasis role='bold'>Boost</emphasis>">
<!ENTITY Boost.Thread "<emphasis role='bold'>Boost.Thread</emphasis>">
<!ENTITY Boost.Build "<emphasis role='bold'>Boost.Build</emphasis>">
<!ENTITY cite.AndrewsSchneider83 "<citation><xref
linkend='thread.bib.AndrewsSchneider83'
endterm='thread.bib.AndrewsSchneider83.abbrev'/></citation>">
<!ENTITY cite.Boost "<citation><xref linkend='thread.bib.Boost'
endterm='thread.bib.Boost.abbrev'/></citation>">
<!ENTITY cite.Hansen73 "<citation><xref linkend='thread.bib.Hansen73'
endterm='thread.bib.Hansen73.abbrev'/></citation>">
<!ENTITY cite.Butenhof97 "<citation><xref linkend='thread.bib.Butenhof97'
endterm='thread.bib.Butenhof97.abbrev'/></citation>">
<!ENTITY cite.Hoare74 "<citation><xref linkend='thread.bib.Hoare74'
endterm='thread.bib.Hoare74.abbrev'/></citation>">
<!ENTITY cite.ISO98 "<citation><xref linkend='thread.bib.ISO98'
endterm='thread.bib.ISO98.abbrev'/></citation>">
<!ENTITY cite.McDowellHelmbold89 "<citation><xref
linkend='thread.bib.McDowellHelmbold89'
endterm='thread.bib.McDowellHelmbold89.abbrev'/></citation>">
<!ENTITY cite.SchmidtPyarali "<citation><xref
linkend='thread.bib.SchmidtPyarali'
endterm='thread.bib.SchmidtPyarali.abbrev'/></citation>">
<!ENTITY cite.SchmidtStalRohnertBuschmann "<citation><xref
linkend='thread.bib.SchmidtStalRohnertBuschmann'
endterm='thread.bib.SchmidtStalRohnertBuschmann.abbrev'/></citation>">
<!ENTITY cite.Stroustrup "<citation><xref linkend='thread.bib.Stroustrup'
endterm='thread.bib.Stroustrup.abbrev'/></citation>">

View File

@@ -1,62 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/exceptions.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="lock_error">
<purpose>
<simpara>The lock_error class defines an exception type thrown
to indicate a locking related error has been detected.</simpara>
</purpose>
<description>
<simpara>Examples of errors indicated by a lock_error exception
include a lock operation which can be determined to result in a
deadlock, or unlock operations attempted by a thread that does
not own the lock.</simpara>
</description>
<inherit access="public">
<type><classname>std::logic_error</classname></type>
</inherit>
<constructor>
<effects><simpara>Constructs a <code>lock_error</code> object.
</simpara></effects>
</constructor>
</class>
<class name="thread_resource_error">
<purpose>
<simpara>The <classname>thread_resource_error</classname> class
defines an exception type that is thrown by constructors in the
&Boost.Thread; library when thread-related resources cannot be
acquired.</simpara>
</purpose>
<description>
<simpara><classname>thread_resource_error</classname> is used
only when thread-related resources cannot be acquired; memory
allocation failures are indicated by
<classname>std::bad_alloc</classname>.</simpara>
</description>
<inherit access="public">
<type><classname>std::runtime_error</classname></type>
</inherit>
<constructor>
<effects><simpara>Constructs a <code>thread_resource_error</code>
object.</simpara></effects>
</constructor>
</class>
</namespace>
</header>

View File

@@ -1,235 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.faq" last-revision="$Date$">
<title>Frequently Asked Questions</title>
<qandaset>
<qandaentry>
<question>
<para>Are lock objects <link
linkend="thread.glossary.thread-safe">thread safe</link>?</para>
</question>
<answer>
<para><emphasis role="bold">No!</emphasis> Lock objects are not meant to
be shared between threads. They are meant to be short-lived objects
created on automatic storage within a code block. Any other usage is
likely to lead to errors and provides no real benefit anyway.
Share <link linkend="thread.concepts.mutexes">Mutexes</link>, not
Locks. For more information see the <link
linkend="thread.rationale.locks">rationale</link> behind the
design for lock objects.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why was &Boost.Thread; modeled after (specific library
name)?</para>
</question>
<answer>
<para>It wasn't. &Boost.Thread; was designed from scratch. Extensive
design discussions involved numerous people representing a wide range of
experience across many platforms. To ensure portability, the initial
implementations were done in parallel using POSIX Threads and the Win32
threading API. But the &Boost.Thread; design is very much in the spirit
of C++, and thus doesn't model such C-based APIs.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why wasn't &Boost.Thread; modeled after (specific library
name)?</para>
</question>
<answer>
<para>Existing C++ libraries either seemed dangerous (often failing to
take advantage of prior art to reduce errors) or had excessive
dependencies on library components unrelated to threading. Existing C
libraries couldn't meet our C++ requirements, and were also missing
certain features. For instance, the Win32 thread API lacks condition
variables, even though these are critical for the important Monitor
pattern &cite.SchmidtStalRohnertBuschmann;.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why do <link linkend="thread.concepts.mutexes">Mutexes</link>
have noncopyable semantics?</para>
</question>
<answer>
<para>To ensure that <link
linkend="thread.glossary.deadlock">deadlocks</link> don't occur. The
only logical form of copy would be to use some sort of shallow copy
semantics in which multiple mutex objects could refer to the same mutex
state. This means that if ObjA has a mutex object as part of its state
and ObjB is copy constructed from it, then when ObjB::foo() locks the
mutex it has effectively locked ObjA as well. This behavior can result
in deadlock. Other copy semantics result in similar problems (if you
think you can prove this to be wrong then supply us with an alternative
and we'll reconsider).</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>How can you prevent <link
linkend="thread.glossary.deadlock">deadlock</link> from occurring when
a thread must lock multiple mutexes?</para>
</question>
<answer>
<para>Always lock them in the same order. One easy way of doing this is
to use each mutex's address to determine the order in which they are
locked. A future &Boost.Thread; concept may wrap this pattern up in a
reusable class.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Don't noncopyable <link
linkend="thread.concepts.mutexes">Mutex</link> semantics mean that a
class with a mutex member will be noncopyable as well?</para>
</question>
<answer>
<para>No, but what it does mean is that the compiler can't generate a
copy constructor and assignment operator, so they will have to be coded
explicitly. This is a <emphasis role="bold">good thing</emphasis>,
however, since the compiler generated operations would not be <link
linkend="thread.glossary.thread-safe">thread-safe</link>. The following
is a simple example of a class with copyable semantics and internal
synchronization through a mutex member.</para>
<programlisting>
class counter
{
public:
    // Doesn't need synchronization since there can be no references to *this
    // until after it's constructed!
    explicit counter(int initial_value)
        : m_value(initial_value)
    {
    }
    // We only need to synchronize other for the same reason we don't have to
    // synchronize on construction!
    counter(const counter&amp; other)
    {
        boost::mutex::scoped_lock scoped_lock(other.m_mutex);
        m_value = other.m_value;
    }
    // For assignment we need to synchronize both objects!
    const counter&amp; operator=(const counter&amp; other)
    {
        if (this == &amp;other)
            return *this;
        boost::mutex::scoped_lock lock1(&amp;m_mutex &lt; &amp;other.m_mutex ? m_mutex : other.m_mutex);
        boost::mutex::scoped_lock lock2(&amp;m_mutex &gt; &amp;other.m_mutex ? m_mutex : other.m_mutex);
        m_value = other.m_value;
        return *this;
    }
    int value() const
    {
        boost::mutex::scoped_lock scoped_lock(m_mutex);
        return m_value;
    }
    int increment()
    {
        boost::mutex::scoped_lock scoped_lock(m_mutex);
        return ++m_value;
    }
private:
    mutable boost::mutex m_mutex;
    int m_value;
};
</programlisting>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>How can you lock a <link
linkend="thread.concepts.mutexes">Mutex</link> member in a const member
function, in order to implement the Monitor Pattern?</para>
</question>
<answer>
<para>The Monitor Pattern &cite.SchmidtStalRohnertBuschmann; mutex
should simply be declared as mutable. See the example code above. The
internal state of mutex types could have been made mutable, with all
lock calls made via const functions, but this does a poor job of
documenting the actual semantics (and in fact would be incorrect since
the logical state of a locked mutex clearly differs from the logical
state of an unlocked mutex). Declaring a mutex member as mutable clearly
documents the intended semantics.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why supply <classname>boost::condition</classname> variables rather than
event variables?</para>
</question>
<answer>
<para>Condition variables result in user code much less prone to <link
linkend="thread.glossary.race-condition">race conditions</link> than
event variables. See <xref linkend="thread.rationale.events" />
for analysis. Also see &cite.Hoare74; and &cite.SchmidtStalRohnertBuschmann;.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why isn't thread cancellation or termination provided?</para>
</question>
<answer>
<para>There's a valid need for thread termination, so at some point
&Boost.Thread; probably will include it, but only after we can find a
truly safe (and portable) mechanism for this concept.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Is it safe for threads to share automatic storage duration (stack)
objects via pointers or references?</para>
</question>
<answer>
<para>Only if you can guarantee that the lifetime of the stack object
will not end while other threads might still access the object. Thus the
safest practice is to avoid sharing stack objects, particularly in
designs where threads are created and destroyed dynamically. Restrict
sharing of stack objects to simple designs with very clear and
unchanging function and thread lifetimes. (Suggested by Darryl
Green).</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why has class semaphore disappeared?</para>
</question>
<answer>
<para>Semaphore was removed as too error prone. The same effect can be
achieved with greater safety by the combination of a mutex and a
condition variable.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why doesn't the thread's constructor take at least a void* to pass
information along with the function? All other threading libraries support
that, which makes Boost.Threads inferior.</para>
</question>
<answer>
<para>There is no need, because Boost.Threads is superior! First, the
constructor doesn't take a plain function but a functor. That means you
can pass an object with an overloaded operator() and include additional
data as members of that object. Beware, though, that this object is
copied; use boost::ref to prevent that. Secondly, even a
boost::function&lt;void (void)&gt; can carry parameters; you only have to
use boost::bind() to create it from any function and bind its
parameters.</para>
<para>That is also why Boost.Threads is superior: it doesn't require you
to pass a type-unsafe void pointer. Rather, you can use the flexible
Boost.Function to create a thread entry out of anything that can be
called, as the sketch below illustrates.</para>
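<para>For illustration only (the <code>greeter</code> functor and
<code>store</code> function are hypothetical names, not part of the
library), the following sketch shows both approaches:</para>
<programlisting>
#include &lt;boost/thread/thread.hpp&gt;
#include &lt;boost/bind.hpp&gt;
#include &lt;boost/ref.hpp&gt;
#include &lt;iostream&gt;
#include &lt;string&gt;

// A functor that carries its data as members.
struct greeter
{
    std::string message;
    explicit greeter(std::string const&amp; m) : message(m) {}
    void operator()() const { std::cout &lt;&lt; message &lt;&lt; std::endl; }
};

// A free function whose parameters are bound with boost::bind.
void store(int value, int&amp; destination)
{
    destination = value;
}

int main()
{
    boost::thread t1(greeter("hello from a functor"));

    int result = 0;
    boost::thread t2(boost::bind(&amp;store, 42, boost::ref(result)));

    t1.join();
    t2.join();
    std::cout &lt;&lt; result &lt;&lt; std::endl; // prints 42
}
</programlisting>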
</answer>
</qandaentry>
</qandaset>
</section>

View File

@@ -1,304 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<glossary id="thread.glossary" last-revision="$Date$">
<title>Glossary</title>
<para>Definitions are given in terms of the C++ Standard
&cite.ISO98;. References to the standard are in the form [1.2.3/4], which
represents the section number, with the paragraph number following the
"/".</para>
<para>Because the definitions are written in something akin to "standardese",
they can be difficult to understand. The intent isn't to confuse, but rather
to clarify the additional requirements &Boost.Thread; places on a C++
implementation as defined by the C++ Standard.</para>
<glossentry id="thread.glossary.thread">
<glossterm>Thread</glossterm>
<glossdef>
<para>Thread is short for "thread of execution". A thread of execution is
an execution environment [1.9/7] within the execution environment of a C++
program [1.9]. The main() function [3.6.1] of the program is the initial
function of the initial thread. A program in a multithreading environment
always has an initial thread even if the program explicitly creates no
additional threads.</para>
<para>Unless otherwise specified, each thread shares all aspects of its
execution environment with other threads in the program. Shared aspects of
the execution environment include, but are not limited to, the
following:</para>
<itemizedlist>
<listitem><para>Static storage duration (static, extern) objects
[3.7.1].</para></listitem>
<listitem><para>Dynamic storage duration (heap) objects [3.7.3]. Thus
each memory allocation will return a unique address, regardless of the
thread making the allocation request.</para></listitem>
<listitem><para>Automatic storage duration (stack) objects [3.7.2]
accessed via pointer or reference from another thread.</para></listitem>
<listitem><para>Resources provided by the operating system. For example,
files.</para></listitem>
<listitem><para>The program itself. In other words, each thread is
executing some function of the same program, not a totally different
program.</para></listitem>
</itemizedlist>
<para>Each thread has its own:</para>
<itemizedlist>
<listitem><para>Registers and current execution sequence (program
counter) [1.9/5].</para></listitem>
<listitem><para>Automatic storage duration (stack) objects
[3.7.2].</para></listitem>
</itemizedlist>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.thread-safe">
<glossterm>Thread-safe</glossterm>
<glossdef>
<para>A program is thread-safe if it has no <link
linkend="thread.glossary.race-condition">race conditions</link>, does
not <link linkend="thread.glossary.deadlock">deadlock</link>, and has
no <link linkend="thread.glossary.priority-failure">priority
failures</link>.</para>
<para>Note that thread-safety does not necessarily imply efficiency, and
that while some thread-safety violations can be detected statically at
compile time, many thread-safety errors can only be detected at
runtime.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.thread-state">
<glossterm>Thread State</glossterm>
<glossdef>
<para>During the lifetime of a thread, it shall be in one of the following
states:</para>
<table>
<title>Thread States</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>State</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>Ready</entry>
<entry>Ready to run, but waiting for a processor.</entry>
</row>
<row>
<entry>Running</entry>
<entry>Currently executing on a processor. Zero or more threads
may be running at any time, with a maximum equal to the number of
processors.</entry>
</row>
<row>
<entry>Blocked</entry>
<entry>Waiting for some resource other than a processor which is
not currently available, or for the completion of calls to library
functions [1.9/6]. The term "waiting" is synonymous with
"blocked"</entry>
</row>
<row>
<entry>Terminated</entry>
<entry>Finished execution but not yet detached or joined.</entry>
</row>
</tbody>
</tgroup>
</table>
<para>Thread state transitions shall occur only as specified:</para>
<table>
<title>Thread States Transitions</title>
<tgroup cols="3" align="left">
<thead>
<row>
<entry>From</entry>
<entry>To</entry>
<entry>Cause</entry>
</row>
</thead>
<tbody>
<row>
<entry>[none]</entry>
<entry>Ready</entry>
<entry><para>Thread is created by a call to a library function.
In the case of the initial thread, creation is implicit and
occurs during the startup of the main() function [3.6.1].</para></entry>
</row>
<row>
<entry>Ready</entry>
<entry>Running</entry>
<entry><para>Processor becomes available.</para></entry>
</row>
<row>
<entry>Running</entry>
<entry>Ready</entry>
<entry>Thread preempted.</entry>
</row>
<row>
<entry>Running</entry>
<entry>Blocked</entry>
<entry>Thread calls a library function which waits for a resource or
for the completion of I/O.</entry>
</row>
<row>
<entry>Running</entry>
<entry>Terminated</entry>
<entry>Thread returns from its initial function, calls a thread
termination library function, or is canceled by some other thread
calling a thread termination library function.</entry>
</row>
<row>
<entry>Blocked</entry>
<entry>Ready</entry>
<entry>The resource being waited for becomes available, or the
blocking library function completes.</entry>
</row>
<row>
<entry>Terminated</entry>
<entry>[none]</entry>
<entry>Thread is detached or joined by some other thread calling the
appropriate library function, or by program termination
[3.6.3].</entry>
</row>
</tbody>
</tgroup>
</table>
<para>[Note: if a suspend() function is added to the threading library,
additional transitions to the blocked state will have to be added to the
above table.]</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.race-condition">
<glossterm>Race Condition</glossterm>
<glossdef>
<para>A race condition occurs when multiple threads read from and write
to the same memory without proper synchronization, resulting in an incorrect
value being read or written. The result of a race condition may be a bit
pattern which isn't even a valid value for the data type. A race condition
results in undefined behavior [1.3.12].</para>
<para>Race conditions can be prevented by serializing memory access using
the tools provided by &Boost.Thread;.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.deadlock">
<glossterm>Deadlock</glossterm>
<glossdef>
<para>Deadlock is an execution state where for some set of threads, each
thread in the set is blocked waiting for some action by one of the other
threads in the set. Since each is waiting on the others, none will ever
become ready again.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.starvation">
<glossterm>Starvation</glossterm>
<glossdef>
<para>The condition in which a thread is not making sufficient progress in
its work during a given time interval.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.priority-failure">
<glossterm>Priority Failure</glossterm>
<glossdef>
<para>A priority failure (such as priority inversion or infinite overtaking)
occurs when threads are executed in such a sequence that required work is not
performed in time to be useful.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.undefined-behavior">
<glossterm>Undefined Behavior</glossterm>
<glossdef>
<para>The result of certain operations in &Boost.Thread; is undefined;
this means that those operations can invoke almost any behavior when
they are executed.</para>
<para>An operation whose behavior is undefined can work "correctly"
in some implementations (i.e., do what the programmer thought it
would do), while in other implementations it may exhibit almost
any "incorrect" behavior--such as returning an invalid value,
throwing an exception, generating an access violation, or terminating
the process.</para>
<para>Executing a statement whose behavior is undefined is a
programming error.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.memory-visibility">
<glossterm>Memory Visibility</glossterm>
<glossdef>
<para>An address [1.7] shall always point to the same memory byte,
regardless of the thread or processor dereferencing the address.</para>
<para>An object [1.8, 1.9] is accessible from multiple threads if it is of
static storage duration (static, extern) [3.7.1], or if a pointer or
reference to it is explicitly or implicitly dereferenced in multiple
threads.</para>
<para>For an object accessible from multiple threads, the value of the
object accessed from one thread may be indeterminate or different from the
value accessed from another thread, except under the conditions specified in
the following table. For the same row of the table, the value of an object
accessible at the indicated sequence point in thread A will be determinate
and the same if accessed at or after the indicated sequence point in thread
B, provided the object is not otherwise modified. In the table, the
"sequence point at a call" is the sequence point after the evaluation of all
function arguments [1.9/17], while the "sequence point after a call" is the
sequence point after the copying of the returned value... [1.9/17].</para>
<table>
<title>Memory Visibility</title>
<tgroup cols="2">
<thead>
<row>
<entry>Thread A</entry>
<entry>Thread B</entry>
</row>
</thead>
<tbody>
<row>
<entry>The sequence point at a call to a library thread-creation
function.</entry>
<entry>The first sequence point of the initial function in the new
thread created by the Thread A call.</entry>
</row>
<row>
<entry>The sequence point at a call to a library function which
locks a mutex, directly or by waiting for a condition
variable.</entry>
<entry>The sequence point after a call to a library function which
unlocks the same mutex.</entry>
</row>
<row>
<entry>The last sequence point before thread termination.</entry>
<entry>The sequence point after a call to a library function which
joins the terminated thread.</entry>
</row>
<row>
<entry>The sequence point at a call to a library function which
signals or broadcasts a condition variable.</entry>
<entry>The sequence point after the call to the library function
which was waiting on that same condition variable or signal.</entry>
</row>
</tbody>
</tgroup>
</table>
<para>The architecture of the execution environment and the observable
behavior of the abstract machine [1.9] shall be the same on all
processors.</para>
<para>The latitude granted by the C++ standard for an implementation to
alter the definition of observable behavior of the abstract machine to
include additional library I/O functions [1.9/6] is extended to include
threading library functions.</para>
<para>When an exception is thrown and there is no matching exception handler
in the same thread, behavior is undefined. The preferred behavior is the
same as when there is no matching exception handler in a program
[15.3/9]. That is, terminate() is called, and it is implementation-defined
whether or not the stack is unwound.</para>
</glossdef>
</glossentry>
<section>
<title>Acknowledgements</title>
<para>This document was originally written by Beman Dawes, and then much
improved by the incorporation of comments from William Kempf, who now
maintains the contents.</para>
<para>The visibility rules are based on &cite.Butenhof97;.</para>
</section>
</glossary>

View File

@@ -1,38 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.implementation_notes" last-revision="$Date$">
<title>Implementation Notes</title>
<section id="thread.implementation_notes.win32">
<title>Win32</title>
<para>
In the current Win32 implementation, creating a boost::thread object
during dll initialization will result in deadlock because the thread
class constructor causes the current thread to wait on the thread that
is being created until it signals that it has finished its initialization,
and, as stated in the
<ulink url="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/base/dllmain.asp">MSDN Library, "DllMain" article, "Remarks" section</ulink>,
"Because DLL notifications are serialized, entry-point functions should not
attempt to communicate with other threads or processes. Deadlocks may occur as a result."
(Also see <ulink url="http://www.microsoft.com/msj/archive/S220.aspx">"Under the Hood", January 1996</ulink>
for a more detailed discussion of this issue).
</para>
<para>
The following non-exhaustive list details some of the situations that
should be avoided until this issue can be addressed:
<itemizedlist>
<listitem>Creating a boost::thread object in DllMain() or in any function called by it.</listitem>
<listitem>Creating a boost::thread object in the constructor of a global static object or in any function called by one.</listitem>
<listitem>Creating a boost::thread object in MFC's CWinApp::InitInstance() function or in any function called by it.</listitem>
<listitem>Creating a boost::thread object in the function pointed to by MFC's _pRawDllMain function pointer or in any function called by it.</listitem>
</itemizedlist>
</para>
</section>
</section>

View File

@@ -1,309 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/mutex.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="mutex">
<purpose>
<para>The <classname>mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>try_mutex</classname> and <classname>timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics, see <classname>recursive_mutex</classname>,
<classname>recursive_try_mutex</classname>, and <classname>recursive_timed_mutex</classname>.
</para>
<para>The <classname>mutex</classname> class supplies the following typedef,
which <link linkend="thread.concepts.lock-models">models</link>
the specified locking strategy:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>mutex</classname> class uses an
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
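<para>As an illustrative sketch (not part of the reference itself), the
following function uses the scoped_lock typedef to protect a hypothetical
shared counter:</para>
<programlisting>
#include &lt;boost/thread/mutex.hpp&gt;

boost::mutex guard;      // protects shared_counter
int shared_counter = 0;

void increment()
{
    boost::mutex::scoped_lock lock(guard);   // mutex locked here
    ++shared_counter;
}                                            // unlocked when lock is destroyed
</programlisting>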
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="try_mutex">
<purpose>
<para>The <classname>try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>mutex</classname> and <classname>timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics, see <classname>recursive_mutex</classname>,
<classname>recursive_try_mutex</classname>, and <classname>recursive_timed_mutex</classname>.
</para>
<para>The <classname>try_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>try_mutex</classname> class uses an
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>try_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>try_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>try_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>try_mutex</classname> object.
</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="timed_mutex">
<purpose>
<para>The <classname>timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>mutex</classname> and <classname>try_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics, see <classname>recursive_mutex</classname>,
<classname>recursive_try_mutex</classname>, and <classname>recursive_timed_mutex</classname>.
</para>
<para>The <classname>timed_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>timed_mutex</classname> class uses an
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>timed_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>timed_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>timed_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>timed_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
</namespace>
</header>

887
doc/mutex_concepts.qbk Normal file
View File

@@ -0,0 +1,887 @@
[section:mutex_concepts Mutex Concepts]
A mutex object facilitates protection against data races and allows thread-safe synchronization of data between threads. A thread
obtains ownership of a mutex object by calling one of the lock functions and relinquishes ownership by calling the corresponding
unlock function. Mutexes may be either recursive or non-recursive, and may grant simultaneous ownership to one or many
threads. __boost_thread__ supplies recursive and non-recursive mutexes with exclusive ownership semantics, along with a shared
ownership (multiple-reader / single-writer) mutex.
__boost_thread__ supports four basic concepts for lockable objects: __lockable_concept_type__, __timed_lockable_concept_type__,
__shared_lockable_concept_type__ and __upgrade_lockable_concept_type__. Each mutex type implements one or more of these concepts, as
do the various lock types.
[section:lockable `Lockable` Concept]
The __lockable_concept__ models exclusive ownership. A type that implements the __lockable_concept__ shall provide the following
member functions:
* [lock_ref_link `void lock();`]
* [try_lock_ref_link `bool try_lock();`]
* [unlock_ref_link `void unlock();`]
Lock ownership acquired through a call to __lock_ref__ or __try_lock_ref__ must be released through a call to __unlock_ref__.
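The following sketch is purely illustrative and not part of the concept's requirements: it calls the three member functions directly on `boost::mutex`, which implements the concept, using a hypothetical shared total.

    #include <boost/thread/mutex.hpp>

    boost::mutex m;            // boost::mutex implements the Lockable concept
    int shared_total = 0;

    void add_to_total(int amount)
    {
        if(!m.try_lock())      // attempt a non-blocking acquire first
        {
            m.lock();          // otherwise block until ownership is obtained
        }
        shared_total += amount;
        m.unlock();            // ownership obtained via lock() or try_lock()
    }                          // must be released with unlock()

In practice the lock types described later in this document are usually preferable, since they guarantee the matching call to `unlock()` even if an exception is thrown.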
[section:lock `void lock()`]
[variablelist
[[Effects:] [The current thread blocks until ownership can be obtained for the current thread.]]
[[Postcondition:] [The current thread owns `*this`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:try_lock `bool try_lock()`]
[variablelist
[[Effects:] [Attempt to obtain ownership for the current thread without blocking.]]
[[Returns:] [`true` if ownership was obtained for the current thread, `false` otherwise.]]
[[Postcondition:] [If the call returns `true`, the current thread owns `*this`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:unlock `void unlock()`]
[variablelist
[[Precondition:] [The current thread owns `*this`.]]
[[Effects:] [Releases ownership by the current thread.]]
[[Postcondition:] [The current thread no longer owns `*this`.]]
[[Throws:] [Nothing]]
]
[endsect]
[endsect]
[section:timed_lockable `TimedLockable` Concept]
The __timed_lockable_concept__ refines the __lockable_concept__ to add support for
timeouts when trying to acquire the lock.
A type that implements the __timed_lockable_concept__ shall meet the requirements
of the __lockable_concept__. In addition, the following member functions must be
provided:
* [timed_lock_ref_link `bool timed_lock(boost::system_time const& abs_time);`]
* [timed_lock_duration_ref_link `template<typename DurationType> bool timed_lock(DurationType const& rel_time);`]
Lock ownership acquired through a call to __timed_lock_ref__ must be released through a call to __unlock_ref__.
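As a rough sketch (assuming `boost::timed_mutex` as the implementation, with a hypothetical shared value), a thread can give up rather than block indefinitely:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/thread_time.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>

    boost::timed_mutex m;      // boost::timed_mutex implements TimedLockable
    int shared_value = 0;

    bool try_update(int new_value)
    {
        boost::system_time const timeout =
            boost::get_system_time() + boost::posix_time::milliseconds(50);
        if(!m.timed_lock(timeout))       // give up after 50 milliseconds
            return false;
        shared_value = new_value;
        m.unlock();
        return true;
    }

The relative-time overload could be used instead, e.g. `m.timed_lock(boost::posix_time::milliseconds(50))`.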
[section:timed_lock `bool timed_lock(boost::system_time const& abs_time)`]
[variablelist
[[Effects:] [Attempt to obtain ownership for the current thread. Blocks until ownership can be obtained, or the specified time is
reached. If the specified time has already passed, behaves as __try_lock_ref__.]]
[[Returns:] [`true` if ownership was obtained for the current thread, `false` otherwise.]]
[[Postcondition:] [If the call returns `true`, the current thread owns `*this`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:timed_lock_duration `template<typename DurationType> bool
timed_lock(DurationType const& rel_time)`]
[variablelist
[[Effects:] [As-if [timed_lock_ref_link
`timed_lock(boost::get_system_time()+rel_time)`].]]
]
[endsect]
[endsect]
[section:shared_lockable `SharedLockable` Concept]
The __shared_lockable_concept__ is a refinement of the __timed_lockable_concept__ that
allows for ['shared ownership] as well as ['exclusive ownership]. This is the
standard multiple-reader / single-writer model: at most one thread can have
exclusive ownership, and if any thread does have exclusive ownership, no other threads
can have shared or exclusive ownership. Alternatively, many threads may have
shared ownership.
For a type to implement the __shared_lockable_concept__, as well as meeting the
requirements of the __timed_lockable_concept__, it must also provide the following
member functions:
* [lock_shared_ref_link `void lock_shared();`]
* [try_lock_shared_ref_link `bool try_lock_shared();`]
* [unlock_shared_ref_link `void unlock_shared();`]
* [timed_lock_shared_ref_link `bool timed_lock_shared(boost::system_time const& abs_time);`]
Lock ownership acquired through a call to __lock_shared_ref__, __try_lock_shared_ref__ or __timed_lock_shared_ref__ must be released
through a call to __unlock_shared_ref__.
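For illustration only, assuming `boost::shared_mutex` as the implementation and a hypothetical shared configuration value, readers take shared ownership while a writer takes exclusive ownership:

    #include <boost/thread/shared_mutex.hpp>

    boost::shared_mutex rw_mutex;   // boost::shared_mutex implements SharedLockable
    int shared_config = 0;

    int read_config()
    {
        rw_mutex.lock_shared();     // many threads may hold shared ownership
        int const value = shared_config;
        rw_mutex.unlock_shared();
        return value;
    }

    void write_config(int value)
    {
        rw_mutex.lock();            // exclusive ownership excludes all others
        shared_config = value;
        rw_mutex.unlock();
    }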
[section:lock_shared `void lock_shared()`]
[variablelist
[[Effects:] [The current thread blocks until shared ownership can be obtained for the current thread.]]
[[Postcondition:] [The current thread has shared ownership of `*this`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:try_lock_shared `bool try_lock_shared()`]
[variablelist
[[Effects:] [Attempt to obtain shared ownership for the current thread without blocking.]]
[[Returns:] [`true` if shared ownership was obtained for the current thread, `false` otherwise.]]
[[Postcondition:] [If the call returns `true`, the current thread has shared ownership of `*this`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:timed_lock_shared `bool timed_lock_shared(boost::system_time const& abs_time)`]
[variablelist
[[Effects:] [Attempt to obtain shared ownership for the current thread. Blocks until shared ownership can be obtained, or the
specified time is reached. If the specified time has already passed, behaves as __try_lock_shared_ref__.]]
[[Returns:] [`true` if shared ownership was acquired for the current thread, `false` otherwise.]]
[[Postcondition:] [If the call returns `true`, the current thread has shared
ownership of `*this`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:unlock_shared `void unlock_shared()`]
[variablelist
[[Precondition:] [The current thread has shared ownership of `*this`.]]
[[Effects:] [Releases shared ownership of `*this` by the current thread.]]
[[Postcondition:] [The current thread no longer has shared ownership of `*this`.]]
[[Throws:] [Nothing]]
]
[endsect]
[endsect]
[section:upgrade_lockable `UpgradeLockable` Concept]
The __upgrade_lockable_concept__ is a refinement of the __shared_lockable_concept__ that allows for ['upgradable ownership] as well
as ['shared ownership] and ['exclusive ownership]. This is an extension to the multiple-reader / single-writer model provided by the
__shared_lockable_concept__: a single thread may have ['upgradable ownership] at the same time as others have ['shared
ownership]. The thread with ['upgradable ownership] may at any time attempt to upgrade that ownership to ['exclusive ownership]. If
no other threads have shared ownership, the upgrade is completed immediately, and the thread now has ['exclusive ownership], which
must be relinquished by a call to __unlock_ref__, just as if it had been acquired by a call to __lock_ref__.
If a thread with ['upgradable ownership] tries to upgrade whilst other threads have ['shared ownership], the attempt will fail and
the thread will block until ['exclusive ownership] can be acquired.
Ownership can also be ['downgraded] as well as ['upgraded]: exclusive ownership of an implementation of the
__upgrade_lockable_concept__ can be downgraded to upgradable ownership or shared ownership, and upgradable ownership can be
downgraded to plain shared ownership.
For a type to implement the __upgrade_lockable_concept__, as well as meeting the
requirements of the __shared_lockable_concept__, it must also provide the following
member functions:
* [lock_upgrade_ref_link `void lock_upgrade();`]
* [unlock_upgrade_ref_link `void unlock_upgrade();`]
* [unlock_upgrade_and_lock_ref_link `void unlock_upgrade_and_lock();`]
* [unlock_and_lock_upgrade_ref_link `void unlock_and_lock_upgrade();`]
* [unlock_upgrade_and_lock_shared_ref_link `void unlock_upgrade_and_lock_shared();`]
Lock ownership acquired through a call to __lock_upgrade_ref__ must be released through a call to __unlock_upgrade_ref__. If the
ownership type is changed through a call to one of the `unlock_xxx_and_lock_yyy()` functions, ownership must be released through a
call to the unlock function corresponding to the new level of ownership.
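A minimal sketch, assuming `boost::shared_mutex` as the implementation and a hypothetical cached value: the thread first takes upgrade ownership, then upgrades to exclusive ownership only if it actually needs to write.

    #include <boost/thread/shared_mutex.hpp>

    boost::shared_mutex m;          // boost::shared_mutex implements UpgradeLockable
    int cached_value = 0;

    void refresh(bool stale, int new_value)
    {
        m.lock_upgrade();                // shared with readers, exclusive among upgraders
        if(stale)
        {
            m.unlock_upgrade_and_lock(); // atomically upgrade to exclusive ownership
            cached_value = new_value;
            m.unlock();                  // exclusive ownership is released with unlock()
        }
        else
        {
            m.unlock_upgrade();          // release upgrade ownership unchanged
        }
    }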
[section:lock_upgrade `void lock_upgrade()`]
[variablelist
[[Effects:] [The current thread blocks until upgrade ownership can be obtained for the current thread.]]
[[Postcondition:] [The current thread has upgrade ownership of `*this`.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:unlock_upgrade `void unlock_upgrade()`]
[variablelist
[[Precondition:] [The current thread has upgrade ownership of `*this`.]]
[[Effects:] [Releases upgrade ownership of `*this` by the current thread.]]
[[Postcondition:] [The current thread no longer has upgrade ownership of `*this`.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:unlock_upgrade_and_lock `void unlock_upgrade_and_lock()`]
[variablelist
[[Precondition:] [The current thread has upgrade ownership of `*this`.]]
[[Effects:] [Atomically releases upgrade ownership of `*this` by the current thread and acquires exclusive ownership of `*this`. If
any other threads have shared ownership, blocks until exclusive ownership can be acquired.]]
[[Postcondition:] [The current thread has exclusive ownership of `*this`.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:unlock_upgrade_and_lock_shared `void unlock_upgrade_and_lock_shared()`]
[variablelist
[[Precondition:] [The current thread has upgrade ownership of `*this`.]]
[[Effects:] [Atomically releases upgrade ownership of `*this` by the current thread and acquires shared ownership of `*this` without
blocking.]]
[[Postcondition:] [The current thread has shared ownership of `*this`.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:unlock_and_lock_upgrade `void unlock_and_lock_upgrade()`]
[variablelist
[[Precondition:] [The current thread has exclusive ownership of `*this`.]]
[[Effects:] [Atomically releases exclusive ownership of `*this` by the current thread and acquires upgrade ownership of `*this`
without blocking.]]
[[Postcondition:] [The current thread has upgrade ownership of `*this`.]]
[[Throws:] [Nothing]]
]
[endsect]
[endsect]
[endsect]
[section:locks Lock Types]
[section:lock_guard Class template `lock_guard`]
template<typename Lockable>
class lock_guard
{
public:
explicit lock_guard(Lockable& m_);
lock_guard(Lockable& m_,boost::adopt_lock_t);
~lock_guard();
};
__lock_guard__ is very simple: on construction it
acquires ownership of the implementation of the __lockable_concept__ supplied as
the constructor parameter. On destruction, the ownership is released. This
provides simple RAII-style locking of a __lockable_concept_type__ object, to facilitate exception-safe
locking and unlocking. In addition, the [link
thread.synchronization.locks.lock_guard.constructor_adopt `lock_guard(Lockable &
m,boost::adopt_lock_t)` constructor] allows the __lock_guard__ object to
take ownership of a lock already held by the current thread.
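A minimal usage sketch (the shared counter is hypothetical):

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::mutex m;
    int shared_counter = 0;

    void increment()
    {
        boost::lock_guard<boost::mutex> guard(m);  // m.lock() in the constructor
        ++shared_counter;
    }                                              // m.unlock() in the destructor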
[section:constructor `lock_guard(Lockable & m)`]
[variablelist
[[Effects:] [Stores a reference to `m`. Invokes [lock_ref_link `m.lock()`].]]
[[Throws:] [Any exception thrown by the call to [lock_ref_link `m.lock()`].]]
]
[endsect]
[section:constructor_adopt `lock_guard(Lockable & m,boost::adopt_lock_t)`]
[variablelist
[[Precondition:] [The current thread owns a lock on `m` equivalent to one
obtained by a call to [lock_ref_link `m.lock()`].]]
[[Effects:] [Stores a reference to `m`. Takes ownership of the lock state of
`m`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:destructor `~lock_guard()`]
[variablelist
[[Effects:] [Invokes [unlock_ref_link `m.unlock()`] on the __lockable_concept_type__
object passed to the constructor.]]
[[Throws:] [Nothing.]]
]
[endsect]
[endsect]
[section:unique_lock Class template `unique_lock`]
template<typename Lockable>
class unique_lock
{
public:
explicit unique_lock(Lockable& m_);
unique_lock(Lockable& m_,adopt_lock_t);
unique_lock(Lockable& m_,defer_lock_t);
unique_lock(Lockable& m_,try_to_lock_t);
unique_lock(Lockable& m_,system_time const& target_time);
~unique_lock();
unique_lock(detail::thread_move_t<unique_lock<Lockable> > other);
unique_lock(detail::thread_move_t<upgrade_lock<Lockable> > other);
operator detail::thread_move_t<unique_lock<Lockable> >();
detail::thread_move_t<unique_lock<Lockable> > move();
unique_lock& operator=(detail::thread_move_t<unique_lock<Lockable> > other);
unique_lock& operator=(detail::thread_move_t<upgrade_lock<Lockable> > other);
void swap(unique_lock& other);
void swap(detail::thread_move_t<unique_lock<Lockable> > other);
void lock();
bool try_lock();
template<typename TimeDuration>
bool timed_lock(TimeDuration const& relative_time);
bool timed_lock(::boost::system_time const& absolute_time);
void unlock();
bool owns_lock() const;
operator ``['unspecified-bool-type]``() const;
bool operator!() const;
Lockable* mutex() const;
Lockable* release();
};
__unique_lock__ is more complex than __lock_guard__: not only does it provide for RAII-style locking, it also allows for deferring
acquiring the lock until the __lock_ref__ member function is called explicitly, or trying to acquire the lock in a non-blocking
fashion, or with a timeout. Consequently, __unlock_ref__ is only called in the destructor if the lock object has locked the
__lockable_concept_type__ object, or otherwise adopted a lock on the __lockable_concept_type__ object.
Specializations of __unique_lock__ model the __timed_lockable_concept__ if the supplied __lockable_concept_type__ type itself models
__timed_lockable_concept__ (e.g. `boost::unique_lock<boost::timed_mutex>`), or the __lockable_concept__ otherwise
(e.g. `boost::unique_lock<boost::mutex>`).
An instance of __unique_lock__ is said to ['own] the lock state of a __lockable_concept_type__ `m` if __mutex_func_ref__ returns a
pointer to `m` and __owns_lock_ref__ returns `true`. If an object that ['owns] the lock state of a __lockable_concept_type__ object
is destroyed, then the destructor will invoke [unlock_ref_link `mutex()->unlock()`].
The member functions of __unique_lock__ are not thread-safe. In particular, __unique_lock__ is intended to model the ownership of a
__lockable_concept_type__ object by a particular thread, and the member functions that release ownership of the lock state
(including the destructor) must be called by the same thread that acquired ownership of the lock state.
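As an illustration of deferred locking, the following sketch (with purely illustrative names) constructs the lock with `boost::defer_lock` and only acquires ownership when it is actually needed:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::mutex data_mutex;   // protects some shared data

    void process()
    {
        boost::unique_lock<boost::mutex> lk(data_mutex,boost::defer_lock); // not locked yet
        // ... work that does not touch the shared data ...
        lk.lock();              // acquire exclusive ownership explicitly
        // ... access the shared data ...
    }                           // unlocked in the destructor, since lk owns the lock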
[section:constructor `unique_lock(Lockable & m)`]
[variablelist
[[Effects:] [Stores a reference to `m`. Invokes [lock_ref_link `m.lock()`].]]
[[Postcondition:] [__owns_lock_ref__ returns `true`. __mutex_func_ref__ returns `&m`.]]
[[Throws:] [Any exception thrown by the call to [lock_ref_link `m.lock()`].]]
]
[endsect]
[section:constructor_adopt `unique_lock(Lockable & m,boost::adopt_lock_t)`]
[variablelist
[[Precondition:] [The current thread owns an exclusive lock on `m`.]]
[[Effects:] [Stores a reference to `m`. Takes ownership of the lock state of `m`.]]
[[Postcondition:] [__owns_lock_ref__ returns `true`. __mutex_func_ref__ returns `&m`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:constructor_defer `unique_lock(Lockable & m,boost::defer_lock_t)`]
[variablelist
[[Effects:] [Stores a reference to `m`.]]
[[Postcondition:] [__owns_lock_ref__ returns `false`. __mutex_func_ref__ returns `&m`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:constructor_try `unique_lock(Lockable & m,boost::try_to_lock_t)`]
[variablelist
[[Effects:] [Stores a reference to `m`. Invokes [try_lock_ref_link
`m.try_lock()`], and takes ownership of the lock state if the call returns
`true`.]]
[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __try_lock_ref__
returned `true`, then __owns_lock_ref__ returns `true`, otherwise __owns_lock_ref__
returns `false`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:constructor_abs_time `unique_lock(Lockable & m,boost::system_time const& abs_time)`]
[variablelist
[[Effects:] [Stores a reference to `m`. Invokes [timed_lock_ref_link
`m.timed_lock(abs_time)`], and takes ownership of the lock state if the call
returns `true`.]]
[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __timed_lock_ref__
returned `true`, then __owns_lock_ref__ returns `true`, otherwise __owns_lock_ref__
returns `false`.]]
[[Throws:] [Any exceptions thrown by the call to [timed_lock_ref_link `m.timed_lock(abs_time)`].]]
]
[endsect]
[section:destructor `~unique_lock()`]
[variablelist
[[Effects:] [Invokes __mutex_func_ref__`->`[unlock_ref_link `unlock()`] if
__owns_lock_ref__ returns `true`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:owns_lock `bool owns_lock() const`]
[variablelist
[[Returns:] [`true` if `*this` owns the lock on the __lockable_concept_type__
object associated with `*this`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:mutex `Lockable* mutex() const`]
[variablelist
[[Returns:] [A pointer to the __lockable_concept_type__ object associated with
`*this`, or `NULL` if there is no such object.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:bool_conversion `operator unspecified-bool-type() const`]
[variablelist
[[Returns:] [If __owns_lock_ref__ would return `true`, a value that evaluates to
`true` in boolean contexts, otherwise a value that evaluates to `false` in
boolean contexts.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:operator_not `bool operator!() const`]
[variablelist
[[Returns:] [`!` __owns_lock_ref__.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:release `Lockable* release()`]
[variablelist
[[Effects:] [The association between `*this` and the __lockable_concept_type__ object is removed, without affecting the lock state
of the __lockable_concept_type__ object. If __owns_lock_ref__ would have returned `true`, it is the responsibility of the calling
code to ensure that the __lockable_concept_type__ is correctly unlocked.]]
[[Returns:] [A pointer to the __lockable_concept_type__ object associated with `*this` at the point of the call, or `NULL` if there
is no such object.]]
[[Throws:] [Nothing.]]
[[Postcondition:] [`*this` is no longer associated with any __lockable_concept_type__ object. __mutex_func_ref__ returns `NULL` and
__owns_lock_ref__ returns `false`.]]
]
[endsect]
[endsect]
[section:shared_lock Class template `shared_lock`]
template<typename Lockable>
class shared_lock
{
public:
explicit shared_lock(Lockable& m_);
shared_lock(Lockable& m_,adopt_lock_t);
shared_lock(Lockable& m_,defer_lock_t);
shared_lock(Lockable& m_,try_to_lock_t);
shared_lock(Lockable& m_,system_time const& target_time);
shared_lock(detail::thread_move_t<shared_lock<Lockable> > other);
shared_lock(detail::thread_move_t<unique_lock<Lockable> > other);
shared_lock(detail::thread_move_t<upgrade_lock<Lockable> > other);
~shared_lock();
operator detail::thread_move_t<shared_lock<Lockable> >();
detail::thread_move_t<shared_lock<Lockable> > move();
shared_lock& operator=(detail::thread_move_t<shared_lock<Lockable> > other);
shared_lock& operator=(detail::thread_move_t<unique_lock<Lockable> > other);
shared_lock& operator=(detail::thread_move_t<upgrade_lock<Lockable> > other);
void swap(shared_lock& other);
void lock();
bool try_lock();
bool timed_lock(boost::system_time const& target_time);
void unlock();
operator ``['unspecified-bool-type]``() const;
bool operator!() const;
bool owns_lock() const;
};
Like __unique_lock__, __shared_lock__ models the __lockable_concept__, but rather than acquiring unique ownership of the supplied
__lockable_concept_type__ object, locking an instance of __shared_lock__ acquires shared ownership.
Like __unique_lock__, not only does it provide for RAII-style locking, it also allows for deferring acquiring the lock until the
__lock_ref__ member function is called explicitly, or trying to acquire the lock in a non-blocking fashion, or with a
timeout. Consequently, __unlock_ref__ is only called in the destructor if the lock object has locked the __lockable_concept_type__
object, or otherwise adopted a lock on the __lockable_concept_type__ object.
An instance of __shared_lock__ is said to ['own] the lock state of a __lockable_concept_type__ `m` if __mutex_func_ref__ returns a
pointer to `m` and __owns_lock_ref__ returns `true`. If an object that ['owns] the lock state of a __lockable_concept_type__ object
is destroyed, then the destructor will invoke [unlock_shared_ref_link `mutex()->unlock_shared()`].
The member functions of __shared_lock__ are not thread-safe. In particular, __shared_lock__ is intended to model the shared
ownership of a __lockable_concept_type__ object by a particular thread, and the member functions that release ownership of the lock
state (including the destructor) must be called by the same thread that acquired ownership of the lock state.
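A typical reader/writer arrangement might look like the following sketch, where the `boost::shared_mutex` instance and the function names are illustrative:

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::shared_mutex rw_mutex;  // protects some shared state

    void reader()
    {
        boost::shared_lock<boost::shared_mutex> lk(rw_mutex); // shared ownership
        // ... read the shared state; other readers may run concurrently ...
    }

    void writer()
    {
        boost::unique_lock<boost::shared_mutex> lk(rw_mutex); // exclusive ownership
        // ... modify the shared state ...
    }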
[section:constructor `shared_lock(Lockable & m)`]
[variablelist
[[Effects:] [Stores a reference to `m`. Invokes [lock_shared_ref_link `m.lock_shared()`].]]
[[Postcondition:] [__owns_lock_shared_ref__ returns `true`. __mutex_func_ref__ returns `&m`.]]
[[Throws:] [Any exception thrown by the call to [lock_shared_ref_link `m.lock_shared()`].]]
]
[endsect]
[section:constructor_adopt `shared_lock(Lockable & m,boost::adopt_lock_t)`]
[variablelist
[[Precondition:] [The current thread owns an exclusive lock on `m`.]]
[[Effects:] [Stores a reference to `m`. Takes ownership of the lock state of `m`.]]
[[Postcondition:] [__owns_lock_shared_ref__ returns `true`. __mutex_func_ref__ returns `&m`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:constructor_defer `shared_lock(Lockable & m,boost::defer_lock_t)`]
[variablelist
[[Effects:] [Stores a reference to `m`.]]
[[Postcondition:] [__owns_lock_shared_ref__ returns `false`. __mutex_func_ref__ returns `&m`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:constructor_try `shared_lock(Lockable & m,boost::try_to_lock_t)`]
[variablelist
[[Effects:] [Stores a reference to `m`. Invokes [try_lock_shared_ref_link
`m.try_lock_shared()`], and takes ownership of the lock state if the call returns
`true`.]]
[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __try_lock_shared_ref__
returned `true`, then __owns_lock_shared_ref__ returns `true`, otherwise __owns_lock_shared_ref__
returns `false`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:constructor_abs_time `shared_lock(Lockable & m,boost::system_time const& abs_time)`]
[variablelist
[[Effects:] [Stores a reference to `m`. Invokes [timed_lock_shared_ref_link
`m.timed_lock(abs_time)`], and takes ownership of the lock state if the call
returns `true`.]]
[[Postcondition:] [__mutex_func_ref__ returns `&m`. If the call to __timed_lock_shared_ref__
returned `true`, then __owns_lock_shared_ref__ returns `true`, otherwise __owns_lock_shared_ref__
returns `false`.]]
[[Throws:] [Any exceptions thrown by the call to [timed_lock_shared_ref_link `m.timed_lock(abs_time)`].]]
]
[endsect]
[section:destructor `~shared_lock()`]
[variablelist
[[Effects:] [Invokes __mutex_func_ref__`->`[unlock_shared_ref_link `unlock_shared()`] if
__owns_lock_shared_ref__ returns `true`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:owns_lock `bool owns_lock() const`]
[variablelist
[[Returns:] [`true` if `*this` owns the lock on the __lockable_concept_type__
object associated with `*this`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:mutex `Lockable* mutex() const`]
[variablelist
[[Returns:] [A pointer to the __lockable_concept_type__ object associated with
`*this`, or `NULL` if there is no such object.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:bool_conversion `operator unspecified-bool-type() const`]
[variablelist
[[Returns:] [If __owns_lock_shared_ref__ would return `true`, a value that evaluates to
`true` in boolean contexts, otherwise a value that evaluates to `false` in
boolean contexts.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:operator_not `bool operator!() const`]
[variablelist
[[Returns:] [`!` __owns_lock_shared_ref__.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:release `Lockable* release()`]
[variablelist
[[Effects:] [The association between `*this` and the __lockable_concept_type__ object is removed, without affecting the lock state
of the __lockable_concept_type__ object. If __owns_lock_shared_ref__ would have returned `true`, it is the responsibility of the calling
code to ensure that the __lockable_concept_type__ is correctly unlocked.]]
[[Returns:] [A pointer to the __lockable_concept_type__ object associated with `*this` at the point of the call, or `NULL` if there
is no such object.]]
[[Throws:] [Nothing.]]
[[Postcondition:] [`*this` is no longer associated with any __lockable_concept_type__ object. __mutex_func_ref__ returns `NULL` and
__owns_lock_shared_ref__ returns `false`.]]
]
[endsect]
[endsect]
[section:upgrade_lock Class template `upgrade_lock`]
template<typename Lockable>
class upgrade_lock
{
public:
explicit upgrade_lock(Lockable& m_);
upgrade_lock(detail::thread_move_t<upgrade_lock<Lockable> > other);
upgrade_lock(detail::thread_move_t<unique_lock<Lockable> > other);
~upgrade_lock();
operator detail::thread_move_t<upgrade_lock<Lockable> >();
detail::thread_move_t<upgrade_lock<Lockable> > move();
upgrade_lock& operator=(detail::thread_move_t<upgrade_lock<Lockable> > other);
upgrade_lock& operator=(detail::thread_move_t<unique_lock<Lockable> > other);
void swap(upgrade_lock& other);
void lock();
void unlock();
operator ``['unspecified-bool-type]``() const;
bool operator!() const;
bool owns_lock() const;
};
Like __unique_lock__, __upgrade_lock__ models the __lockable_concept__, but rather than acquiring unique ownership of the supplied
__lockable_concept_type__ object, locking an instance of __upgrade_lock__ acquires upgrade ownership.
Like __unique_lock__, it provides RAII-style locking, and upgrade ownership may be released before destruction by calling the
__unlock_ref__ member function explicitly and reacquired by calling __lock_ref__. Consequently, __unlock_ref__ is only called in the
destructor if the lock object currently owns the lock on the __lockable_concept_type__ object.
An instance of __upgrade_lock__ is said to ['own] the lock state of a __lockable_concept_type__ `m` if __mutex_func_ref__ returns a
pointer to `m` and __owns_lock_ref__ returns `true`. If an object that ['owns] the lock state of a __lockable_concept_type__ object
is destroyed, then the destructor will invoke [unlock_upgrade_ref_link `mutex()->unlock_upgrade()`].
The member functions of __upgrade_lock__ are not thread-safe. In particular, __upgrade_lock__ is intended to model the upgrade
ownership of a __lockable_concept_type__ object by a particular thread, and the member functions that release ownership of the lock
state (including the destructor) must be called by the same thread that acquired ownership of the lock state.
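For illustration, the following sketch (with illustrative names) acquires upgrade ownership of a `boost::shared_mutex`, which may coexist with other threads' shared ownership but excludes other upgrade or exclusive owners:

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::shared_mutex rw_mutex;  // protects some shared state

    void inspect()
    {
        boost::upgrade_lock<boost::shared_mutex> lk(rw_mutex); // upgrade ownership
        // ... read the shared state; plain readers may still run concurrently ...
    }   // upgrade ownership released when lk is destroyed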
[endsect]
[section:upgrade_to_unique_lock Class template `upgrade_to_unique_lock`]
template <class Lockable>
class upgrade_to_unique_lock
{
public:
explicit upgrade_to_unique_lock(upgrade_lock<Lockable>& m_);
~upgrade_to_unique_lock();
upgrade_to_unique_lock(detail::thread_move_t<upgrade_to_unique_lock<Lockable> > other);
upgrade_to_unique_lock& operator=(detail::thread_move_t<upgrade_to_unique_lock<Lockable> > other);
void swap(upgrade_to_unique_lock& other);
operator ``['unspecified-bool-type]``() const;
bool operator!() const;
bool owns_lock() const;
};
__upgrade_to_unique_lock__ allows for a temporary upgrade of an __upgrade_lock__ to exclusive ownership. When constructed with a
reference to an instance of __upgrade_lock__, if that instance has upgrade ownership on some __lockable_concept_type__ object, that
ownership is upgraded to exclusive ownership. When the __upgrade_to_unique_lock__ instance is destroyed, the ownership of the
__lockable_concept_type__ is downgraded back to ['upgrade ownership].
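The following sketch (the names and the `needs_update()` check are illustrative) shows a read under upgrade ownership followed by a temporary upgrade to exclusive ownership:

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::shared_mutex rw_mutex;  // protects some shared state

    bool needs_update();           // illustrative check of the shared state
    void do_update();              // illustrative modification of the shared state

    void check_and_update()
    {
        boost::upgrade_lock<boost::shared_mutex> lk(rw_mutex);  // upgrade ownership
        if(needs_update())
        {
            boost::upgrade_to_unique_lock<boost::shared_mutex> exclusive(lk); // now exclusive
            do_update();
        } // exclusive ownership is downgraded back to upgrade ownership here
    }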
[endsect]
[endsect]

128
doc/mutexes.qbk Normal file
View File

@@ -0,0 +1,128 @@
[section:mutex_types Mutex Types]
[section:mutex Class `mutex`]
class mutex:
boost::noncopyable
{
public:
mutex();
~mutex();
void lock();
bool try_lock();
void unlock();
typedef unique_lock<mutex> scoped_lock;
typedef scoped_lock scoped_try_lock;
};
__mutex__ implements the __lockable_concept__ to provide an exclusive-ownership mutex. At most one thread can own the lock on a given
instance of __mutex__ at any time. Multiple concurrent calls to __lock_ref__, __try_lock_ref__ and __unlock_ref__ shall be permitted.
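For example, the `scoped_lock` typedef gives straightforward RAII-style use; the counter in this sketch is purely illustrative:

    #include <boost/thread/mutex.hpp>

    boost::mutex count_mutex;  // protects count
    int count=0;

    void increment()
    {
        boost::mutex::scoped_lock lk(count_mutex); // exclusive ownership for this scope
        ++count;
    }                                              // unlocked on destruction of lk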
[endsect]
[section:try_mutex Typedef `try_mutex`]
typedef mutex try_mutex;
__try_mutex__ is a `typedef` to __mutex__, provided for backwards compatibility with previous releases of Boost.
[endsect]
[section:timed_mutex Class `timed_mutex`]
class timed_mutex:
boost::noncopyable
{
public:
timed_mutex();
~timed_mutex();
void lock();
void unlock();
bool try_lock();
bool timed_lock(system_time const & abs_time);
template<typename TimeDuration>
bool timed_lock(TimeDuration const & relative_time);
typedef unique_lock<timed_mutex> scoped_timed_lock;
typedef scoped_timed_lock scoped_try_lock;
typedef scoped_timed_lock scoped_lock;
};
__timed_mutex__ implements the __timed_lockable_concept__ to provide an exclusive-ownership mutex. At most one thread can own the
lock on a given instance of __timed_mutex__ at any time. Multiple concurrent calls to __lock_ref__, __try_lock_ref__,
__timed_lock_ref__, __timed_lock_duration_ref__ and __unlock_ref__ shall be permitted.
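For example, the following sketch (the timeout value and names are illustrative) tries to acquire the lock with an absolute deadline and gives up if it cannot:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/thread_time.hpp>

    boost::timed_mutex data_mutex;  // protects some shared data

    bool try_update()
    {
        boost::system_time const deadline=
            boost::get_system_time()+boost::posix_time::milliseconds(100);
        if(!data_mutex.timed_lock(deadline))  // wait for exclusive ownership until the deadline
            return false;                     // could not acquire the lock in time
        // ... modify the shared data ...
        data_mutex.unlock();
        return true;
    }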
[endsect]
[section:recursive_mutex Class `recursive_mutex`]
class recursive_mutex:
boost::noncopyable
{
public:
recursive_mutex();
~recursive_mutex();
void lock();
bool try_lock();
void unlock();
typedef unique_lock<recursive_mutex> scoped_lock;
typedef scoped_lock scoped_try_lock;
};
__recursive_mutex__ implements the __lockable_concept__ to provide an exclusive-ownership recursive mutex. At most one thread can
own the lock on a given instance of __recursive_mutex__ at any time. Multiple concurrent calls to __lock_ref__, __try_lock_ref__ and
__unlock_ref__ shall be permitted. A thread that already has exclusive ownership of a given __recursive_mutex__ instance can call
__lock_ref__ or __try_lock_ref__ to acquire an additional level of ownership of the mutex. __unlock_ref__ must be called once for
each level of ownership acquired by a single thread before ownership can be acquired by another thread.
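For example, in the following sketch (with illustrative names) a thread that already owns the mutex can lock it again from a nested call; the mutex only becomes available to other threads once both levels of ownership have been released:

    #include <boost/thread/recursive_mutex.hpp>

    boost::recursive_mutex m;  // protects some shared data

    void helper()
    {
        boost::recursive_mutex::scoped_lock lk(m); // second level of ownership
        // ... access the shared data ...
    }

    void update()
    {
        boost::recursive_mutex::scoped_lock lk(m); // first level of ownership
        helper();                                  // re-locking from the same thread is OK
    }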
[endsect]
[section:recursive_try_mutex Typedef `recursive_try_mutex`]
typedef recursive_mutex recursive_try_mutex;
__recursive_try_mutex__ is a `typedef` to __recursive_mutex__, provided for backwards compatibility with previous releases of Boost.
[endsect]
[section:recursive_timed_mutex Class `recursive_timed_mutex`]
class recursive_timed_mutex:
boost::noncopyable
{
public:
recursive_timed_mutex();
~recursive_timed_mutex();
void lock();
bool try_lock();
void unlock();
bool timed_lock(system_time const & abs_time);
template<typename TimeDuration>
bool timed_lock(TimeDuration const & relative_time);
typedef unique_lock<recursive_timed_mutex> scoped_lock;
typedef scoped_lock scoped_try_lock;
typedef scoped_lock scoped_timed_lock;
};
__recursive_timed_mutex__ implements the __timed_lockable_concept__ to provide an exclusive-ownership recursive mutex. At most one
thread can own the lock on a given instance of __recursive_timed_mutex__ at any time. Multiple concurrent calls to __lock_ref__,
__try_lock_ref__, __timed_lock_ref__, __timed_lock_duration_ref__ and __unlock_ref__ shall be permitted. A thread that already has
exclusive ownership of a given __recursive_timed_mutex__ instance can call __lock_ref__, __timed_lock_ref__,
__timed_lock_duration_ref__ or __try_lock_ref__ to acquire an additional level of ownership of the mutex. __unlock_ref__ must be
called once for each level of ownership acquired by a single thread before ownership can be acquired by another thread.
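For example, the following sketch (the timeout and names are chosen purely for illustration) combines the relative-time `timed_lock` overload with recursive ownership:

    #include <boost/thread/recursive_mutex.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>

    boost::recursive_timed_mutex m;  // protects some shared data

    void helper()
    {
        boost::recursive_timed_mutex::scoped_lock lk(m); // additional level of ownership
        // ... access the shared data ...
    }

    bool try_update()
    {
        if(!m.timed_lock(boost::posix_time::milliseconds(50))) // give up after 50ms
            return false;
        helper();    // recursive locking from the owning thread
        m.unlock();  // release the outer level of ownership
        return true;
    }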
[endsect]
[include shared_mutex_ref.qbk]
[endsect]

View File

@@ -1,88 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/once.hpp"
last-revision="$Date$">
<macro name="BOOST_ONCE_INIT">
<purpose>The <functionname>call_once</functionname> function and
<code>once_flag</code> type (statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>) can be used to run a
routine exactly once. This can be used to initialize data in a
<link linkend="thread.glossary.thread-safe">thread-safe</link>
manner.</purpose>
<description>The implementation-defined macro
<macroname>BOOST_ONCE_INIT</macroname> is a constant value used to
initialize <code>once_flag</code> instances to indicate that the
logically associated routine has not been run yet. See
<functionname>call_once</functionname> for more details.</description>
</macro>
<namespace name="boost">
<typedef name="once_flag">
<purpose>The <functionname>call_once</functionname> function and
<code>once_flag</code> type (statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>) can be used to run a
routine exactly once. This can be used to initialize data in a
<link linkend="thread.glossary.thread-safe">thread-safe</link>
manner.</purpose>
<description>The implementation-defined type <code>once_flag</code>
is used as a flag to ensure a routine is called only once.
Instances of this type should be statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>. See
<functionname>call_once</functionname> for more details.
</description>
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<function name="call_once">
<purpose>The <functionname>call_once</functionname> function and
<code>once_flag</code> type (statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>) can be used to run a
routine exactly once. This can be used to initialize data in a
<link linkend="thread.glossary.thread-safe">thread-safe</link>
manner.</purpose>
<description>
<para>Example usage is as follows:</para>
<para>
<programlisting>//Example usage:
boost::once_flag once = BOOST_ONCE_INIT;
void init()
{
//...
}
void thread_proc()
{
boost::call_once(once, &amp;init);
}</programlisting>
</para></description>
<parameter name="flag">
<paramtype>once_flag&amp;</paramtype>
</parameter>
<parameter name="func">
<paramtype>Function func</paramtype>
</parameter>
<effects>As if (in an atomic fashion):
<code>if (flag == BOOST_ONCE_INIT) func();</code>. If <code>func()</code> throws an exception, it shall be as if this
thread never invoked <code>call_once</code></effects>
<postconditions><code>flag != BOOST_ONCE_INIT</code> unless <code>func()</code> throws an exception.
</postconditions>
</function>
</namespace>
</header>

45
doc/once.qbk Normal file
View File

@@ -0,0 +1,45 @@
[section:once One-time Initialization]
`boost::call_once` provides a mechanism for ensuring that an initialization routine is run exactly once without data races or deadlocks.
[section:once_flag Typedef `once_flag`]
typedef platform-specific-type once_flag;
#define BOOST_ONCE_INIT platform-specific-initializer
Objects of type `boost::once_flag` shall be initialized with `BOOST_ONCE_INIT`:
boost::once_flag f=BOOST_ONCE_INIT;
[endsect]
[section:call_once Non-member function `call_once`]
template<typename Callable>
void call_once(once_flag& flag,Callable func);
[variablelist
[[Requires:] [`Callable` is `CopyConstructible`. Copying `func` shall have no side effects, and the effect of calling the copy shall
be equivalent to calling the original. ]]
[[Effects:] [Calls to `call_once` on the same `once_flag` object are serialized. If there has been no prior effective `call_once` on
the same `once_flag` object, the argument `func` (or a copy thereof) is called as-if by invoking `func()`, and the invocation of
`call_once` is effective if and only if `func()` returns without exception. If an exception is thrown, the exception is
propagated to the caller. If there has been a prior effective `call_once` on the same `once_flag` object, the `call_once` returns
without invoking `func`. ]]
[[Synchronization:] [The completion of an effective `call_once` invocation on a `once_flag` object, synchronizes with
all subsequent `call_once` invocations on the same `once_flag` object. ]]
[[Throws:] [`thread_resource_error` when the effects cannot be achieved, or any exception propagated from `func`.]]
]
void call_once(void (*func)(),once_flag& flag);
This second overload is provided for backwards compatibility. The effects of `call_once(func,flag)` shall be the same as those of
`call_once(flag,func)`.
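For example, the following sketch (the `init` routine and data are illustrative) ensures that the initialization runs exactly once, no matter how many threads call `worker` concurrently:

    #include <boost/thread/once.hpp>

    boost::once_flag init_flag=BOOST_ONCE_INIT;

    void init()
    {
        // ... one-time initialization of shared data ...
    }

    void worker()
    {
        boost::call_once(init_flag,init); // init() is run exactly once across all threads
        // ... use the initialized data ...
    }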
[endsect]
[endsect]

15
doc/overview.qbk Normal file
View File

@@ -0,0 +1,15 @@
[section:overview Overview]
__boost_thread__ enables the use of multiple threads of execution with shared data in portable C++ code. It provides classes and
functions for managing the threads themselves, along with others for synchronizing data between the threads or providing separate
copies of data specific to individual threads.
The __boost_thread__ library was originally written and designed by William E. Kempf. This version is a major rewrite designed to
closely follow the proposals presented to the C++ Standards Committee, in particular
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2497.html N2497],
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2320.html N2320],
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2184.html N2184],
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2139.html N2139], and
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2094.html N2094].
[endsect]

View File

@@ -1,206 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.overview" last-revision="$Date$">
<title>Overview</title>
<section id="thread.introduction">
<title>Introduction</title>
<para>&Boost.Thread; allows C++ programs to execute as multiple,
asynchronous, independent threads-of-execution. Each thread has its own
machine state including program instruction counter and registers. Programs
which execute as multiple threads are called multithreaded programs to
distinguish them from traditional single-threaded programs. The <link
linkend="thread.glossary">glossary</link> gives a more complete description
of the multithreading execution environment.</para>
<para>Multithreading provides several advantages:
<itemizedlist>
<listitem>
<para>Programs which would otherwise block waiting for some external
event can continue to respond if the blocking operation is placed in a
separate thread. Multithreading is usually an absolute requirement for
these programs.</para>
</listitem>
<listitem>
<para>Well-designed multithreaded programs may execute faster than
single-threaded programs, particularly on multiprocessor hardware.
Note, however, that poorly-designed multithreaded programs are often
slower than single-threaded programs.</para>
</listitem>
<listitem>
<para>Some program designs may be easier to formulate using a
multithreaded approach. After all, the real world is
asynchronous!</para>
</listitem>
</itemizedlist></para>
</section>
<section>
<title>Dangers</title>
<section>
<title>General considerations</title>
<para>Beyond the errors which can occur in single-threaded programs,
multithreaded programs are subject to additional errors:
<itemizedlist>
<listitem>
<para><link linkend="thread.glossary.race-condition">Race
conditions</link></para>
</listitem>
<listitem>
<para><link linkend="thread.glossary.deadlock">Deadlock</link>
(sometimes called "deadly embrace")</para>
</listitem>
<listitem>
<para><link linkend="thread.glossary.priority-failure">Priority
failures</link> (priority inversion, infinite overtaking, starvation,
etc.)</para>
</listitem>
</itemizedlist></para>
<para>Every multithreaded program must be designed carefully to avoid these
errors. These aren't rare or exotic failures - they are virtually guaranteed
to occur unless multithreaded code is designed to avoid them. Priority
failures are somewhat less common, but are nonetheless serious.</para>
<para>The <link linkend="thread.design">&Boost.Thread; design</link>
attempts to minimize these errors, but they will still occur unless the
programmer proactively designs to avoid them.</para>
<note>Please also see <xref linkend="thread.implementation_notes"/>
for additional, implementation-specific considerations.</note>
</section>
<section>
<title>Testing and debugging considerations</title>
<para>Multithreaded programs are non-deterministic. In other words, the
same program with the same input data may follow different execution
paths each time it is invoked. That can make testing and debugging a
nightmare:
<itemizedlist>
<listitem>
<para>Failures are often not repeatable.</para>
</listitem>
<listitem>
<para>Probe effect causes debuggers to produce very different results
from non-debug uses.</para>
</listitem>
<listitem>
<para>Debuggers require special support to show thread state.</para>
</listitem>
<listitem>
<para>Tests on a single processor system may give no indication of
serious errors which would appear on multiprocessor systems, and vice
versa. Thus test cases should include a varying number of
processors.</para>
</listitem>
<listitem>
<para>For programs which create a varying number of threads according
to workload, tests which don't span the full range of possibilities
may miss serious errors.</para>
</listitem>
</itemizedlist></para>
</section>
<section>
<title>Getting a head start</title>
<para>Although it might appear that multithreaded programs are inherently
unreliable, many reliable multithreaded programs do exist. Multithreading
techniques are known which lead to reliable programs.</para>
<para>Design patterns for reliable multithreaded programs, including the
important <emphasis>monitor</emphasis> pattern, are presented in
<emphasis>Pattern-Oriented Software Architecture Volume 2 - Patterns for
Concurrent and Networked Objects</emphasis>
&cite.SchmidtStalRohnertBuschmann;. Many important multithreading programming
considerations (independent of threading library) are discussed in
<emphasis>Programming with POSIX Threads</emphasis> &cite.Butenhof97;.</para>
<para>Doing some reading before attempting multithreaded designs will
give you a head start toward reliable multithreaded programs.</para>
</section>
</section>
<section>
<title>C++ Standard Library usage in multithreaded programs</title>
<section>
<title>Runtime libraries</title>
<para>
<emphasis role="bold">Warning:</emphasis> Multithreaded programs such as
those using &Boost.Thread; must link to <link
linkend="thread.glossary.thread-safe">thread-safe</link> versions of
all runtime libraries used by the program, including the runtime library
for the C++ Standard Library. Failure to do so will cause <link
linkend="thread.glossary.race-condition">race conditions</link> to occur
when multiple threads simultaneously execute runtime library functions for
<code>new</code>, <code>delete</code>, or other language features which
imply shared state.</para>
</section>
<section>
<title>Potentially non-thread-safe functions</title>
<para>Certain C++ Standard Library functions inherited from C are
particular problems because they hold internal state between
calls:
<itemizedlist>
<listitem>
<para><code>rand</code></para>
</listitem>
<listitem>
<para><code>strtok</code></para>
</listitem>
<listitem>
<para><code>asctime</code></para>
</listitem>
<listitem>
<para><code>ctime</code></para>
</listitem>
<listitem>
<para><code>gmtime</code></para>
</listitem>
<listitem>
<para><code>localtime</code></para>
</listitem>
</itemizedlist></para>
<para>It is possible to write thread-safe implementations of these by
using thread specific storage (see
<classname>boost::thread_specific_ptr</classname>), and several C++
compiler vendors do just that. The technique is well-known and is explained
in &cite.Butenhof97;.</para>
<para>But at least one vendor (HP-UX) does not provide thread-safe
implementations of the above functions in their otherwise thread-safe
runtime library. Instead they provide replacement functions with
different names and arguments.</para>
<para><emphasis role="bold">Recommendation:</emphasis> For the most
portable, yet thread-safe code, use Boost replacements for the problem
functions. See the <libraryname>Boost Random Number Library</libraryname>
and <libraryname>Boost Tokenizer Library</libraryname>.</para>
</section>
</section>
<section>
<title>Common guarantees for all &Boost.Thread; components</title>
<section>
<title>Exceptions</title>
<para>&Boost.Thread; destructors never
throw exceptions. Unless otherwise specified, other
&Boost.Thread; functions that do not have
an exception-specification may throw implementation-defined
exceptions.</para>
<para>In particular, &Boost.Thread;
reports failure to allocate storage by throwing an exception of type
<code>std::bad_alloc</code> or a class derived from
<code>std::bad_alloc</code>, failure to obtain thread resources other than
memory by throwing an exception of type
<classname>boost::thread_resource_error</classname>, and certain lock
related failures by throwing an exception of type
<classname>boost::lock_error</classname>.</para>
<para><emphasis role="bold">Rationale:</emphasis> Follows the C++ Standard
Library practice of allowing all functions except destructors or other
specified functions to throw exceptions on errors.</para>
</section>
<section>
<title>NonCopyable requirement</title>
<para>&Boost.Thread; classes documented as
meeting the NonCopyable requirement disallow copy construction and copy
assignment. For the sake of exposition, the synopsis of such classes show
private derivation from <classname>boost::noncopyable</classname>. Users
should not depend on this derivation, however, as implementations are free
to meet the NonCopyable requirement in other ways.</para>
</section>
</section>
</section>

View File

@@ -1,438 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.rationale" last-revision="$Date$">
<title>Rationale</title>
<para>This page explains the rationale behind various design decisions in the
&Boost.Thread; library. Having the rationale documented here should explain
how we arrived at the current design as well as prevent future rehashing of
discussions and thought processes that have already occurred. It can also give
users a lot of insight into the design process required for this
library.</para>
<section id="thread.rationale.Boost.Thread">
<title>Rationale for the Creation of &Boost.Thread;</title>
<para>Processes often have a degree of "potential parallelism" and it can
often be more intuitive to design systems with this in mind. Further, these
parallel processes can result in more responsive programs. The benefits for
multithreaded programming are quite well known to most modern programmers,
yet the C++ language doesn't directly support this concept.</para>
<para>Many platforms support multithreaded programming despite the fact that
the language doesn't support it. They do this through external libraries,
which are, unfortunately, platform specific. POSIX has tried to address this
problem through the standardization of a "pthread" library. However, this is
a standard only on POSIX platforms, so its portability is limited.</para>
<para>Another problem with POSIX and other platform specific thread
libraries is that they are almost universally C based libraries. This leaves
several C++ specific issues unresolved, such as what happens when an
exception is thrown in a thread. Further, there are some C++ concepts, such
as destructors, that can make usage much easier than what's available in a C
library.</para>
<para>What's truly needed is C++ language support for threads. However, the
C++ standards committee needs existing practice or a good proposal as a
starting point for adding this to the standard.</para>
<para>The &Boost.Thread; library was developed to provide a C++ developer
with a portable interface for writing multithreaded programs on numerous
platforms. There's a hope that the library can be the basis for a more
detailed proposal for the C++ standards committee to consider for inclusion
in the next C++ standard.</para>
</section>
<section id="thread.rationale.primitives">
<title>Rationale for the Low Level Primitives Supported in &Boost.Thread;</title>
<para>The &Boost.Thread; library supplies a set of low level primitives for
writing multithreaded programs, such as mutexes and condition variables. In
fact, the first release of &Boost.Thread; supports only these low level
primitives. However, computer science research has shown that use of these
primitives is difficult since it's difficult to mathematically prove that a
usage pattern is correct, meaning it doesn't result in race conditions or
deadlocks. There are several algebras (such as CSP, CCS and Join calculus)
that have been developed to help write provably correct parallel
processes. In order to prove their correctness, these processes must be coded
using higher level abstractions. So why does &Boost.Thread; support the
lower level concepts?</para>
<para>The reason is simple: the higher level concepts need to be implemented
using at least some of the lower level concepts. So having portable lower
level concepts makes it easier to develop the higher level concepts and will
allow researchers to experiment with various techniques.</para>
<para>Beyond this theoretical application of higher level concepts, however,
the fact remains that many multithreaded programs are written using only the
lower level concepts, so they are useful in and of themselves, even if it's
hard to prove that their usage is correct. Since many users will be familiar
with these lower level concepts but unfamiliar with any of the higher
level concepts, supporting the lower level concepts provides
greater accessibility.</para>
</section>
<section id="thread.rationale.locks">
<title>Rationale for the Lock Design</title>
<para>Programmers who are used to multithreaded programming issues will
quickly note that the &Boost.Thread; design for mutex lock concepts is not
<link linkend="thread.glossary.thread-safe">thread-safe</link> (this is
clearly documented as well). At first this may seem like a serious design
flaw. Why have a multithreading primitive that's not thread-safe
itself?</para>
<para>A lock object is not a synchronization primitive. A lock object's sole
responsibility is to ensure that a mutex is both locked and unlocked in a
manner that won't result in the common error of locking a mutex and then
forgetting to unlock it. This means that instances of a lock object are only
going to be created, at least in theory, within block scope and won't be
shared between threads. Only the mutex objects will be created outside of
block scope and/or shared between threads. Though it's possible to create a
lock object outside of block scope and to share it between threads, to do so
would not be a typical usage (in fact, to do so would likely be an
error). Nor are there any cases when such usage would be required.</para>
<para>Lock objects must maintain some state information. In order to allow a
program to determine if a try_lock or timed_lock was successful the lock
object must retain state indicating the success or failure of the call made
in its constructor. If a lock object were to have such state and remain
thread-safe it would need to synchronize access to the state information
which would result in roughly doubling the time of most operations. Worse,
since checking the state can occur only by a call after construction, we'd
have a race condition if the lock object were shared between threads.</para>
<para>So, to avoid the overhead of synchronizing access to the state
information and to avoid the race condition, the &Boost.Thread; library
simply does nothing to make lock objects thread-safe. Instead, sharing a
lock object between threads results in undefined behavior. Since the only
proper usage of lock objects is within block scope this isn't a problem, and
so long as the lock object is properly used there's no danger of any
multithreading issues.</para>
</section>
<section id="thread.rationale.non-copyable">
<title>Rationale for NonCopyable Thread Type</title>
<para>Programmers who are used to C libraries for multithreaded programming
are likely to wonder why &Boost.Thread; uses a noncopyable design for
<classname>boost::thread</classname>. After all, the C thread types are
copyable, and you often have a need for copying them within user
code. However, careful comparison of C designs to C++ designs shows a flaw
in this logic.</para>
<para>All C types are copyable. It is, in fact, not possible to make a
noncopyable type in C. For this reason types that represent system resources
in C are often designed to behave very similarly to a pointer to dynamic
memory. There's an API for acquiring the resource and an API for releasing
the resource. For memory we have pointers as the type and alloc/free for
the acquisition and release APIs. For files we have FILE* as the type and
fopen/fclose for the acquisition and release APIs. You can freely copy
instances of the types but must manually manage the lifetime of the actual
resource through the acquisition and release APIs.</para>
<para>C++ designs recognize that the acquisition and release APIs are error
prone and try to eliminate possible errors by acquiring the resource in the
constructor and releasing it in the destructor. The best example of such a
design is the std::iostream set of classes which can represent the same
resource as the FILE* type in C. A file is opened in the std::fstream's
constructor and closed in its destructor. However, if an iostream were
copyable it could lead to a file being closed twice, an obvious error, so
the std::iostream types are noncopyable by design. This is the same design
used by boost::thread, which is a simple and easy to understand design
that's consistent with other C++ standard types.</para>
<para>During the design of boost::thread it was pointed out that it would be
possible to allow it to be a copyable type if some form of "reference
management" were used, such as ref-counting or ref-lists, and many argued
for a boost::thread_ref design instead. The reasoning was that copying
"thread" objects was a typical need in the C libraries, and so presumably
would be in the C++ libraries as well. It was also thought that
implementations could provide more efficient reference management than
wrappers (such as boost::shared_ptr) around a noncopyable thread
concept. Analysis of whether or not these arguments would hold true doesn't
appear to bear them out. To illustrate the analysis we'll first provide
pseudo-code illustrating the six typical usage patterns of a thread
object.</para>
<section id="thread.rationale.non-copyable.simple">
<title>1. Use case: Simple creation of a thread.</title>
<programlisting>
void foo()
{
create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.joined">
<title>2. Use case: Creation of a thread that's later joined.</title>
<programlisting>
void foo()
{
thread = create_thread(&amp;bar);
join(thread);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.loop">
<title>3. Use case: Simple creation of several threads in a loop.</title>
<programlisting>
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.loop-join">
<title>4. Use case: Creation of several threads in a loop which are later joined.</title>
<programlisting>
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i] = create_thread(&amp;bar);
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i].join();
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.pass">
<title>5. Use case: Creation of a thread whose ownership is passed to another object/method.</title>
<programlisting>
void foo()
{
thread = create_thread(&amp;bar);
manager.owns(thread);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.shared">
<title>6. Use case: Creation of a thread whose ownership is shared between multiple
objects.</title>
<programlisting>
void foo()
{
thread = create_thread(&amp;bar);
manager1.add(thread);
manager2.add(thread);
}
</programlisting>
</section>
<para>Of these usage patterns there's only one that requires reference
management (number 6). Hopefully it's fairly obvious that this usage pattern
simply won't occur as often as the other usage patterns. So there really
isn't a "typical need" for a thread concept, though there is some
need.</para>
<para>Since the need isn't typical we must use different criteria for
deciding on either a thread_ref or thread design. Possible criteria include
ease of use and performance. So let's analyze both of these
carefully.</para>
<para>With ease of use we can look at existing experience. The standard C++
objects that represent a system resource, such as std::iostream, are
noncopyable, so we know that C++ programmers must at least be experienced
with this design. Most C++ developers are also used to smart pointers such
as boost::shared_ptr, so we know they can at least adapt to a thread_ref
concept with little effort. So existing experience isn't going to lead us to
a choice.</para>
<para>The other thing we can look at is how difficult it is to use both
types for the six usage patterns above. If we find it overly difficult to
use a concept for any of the usage patterns there would be a good argument
for choosing the other design. So we'll code all six usage patterns using
both designs.</para>
<section id="thread.rationale_comparison.non-copyable.simple">
<title>1. Comparison: simple creation of a thread.</title>
<programlisting>
void foo()
{
thread thrd(&amp;bar);
}
void foo()
{
thread_ref thrd = create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.joined">
<title>2. Comparison: creation of a thread that's later joined.</title>
<programlisting>
void foo()
{
thread thrd(&amp;bar);
thrd.join();
}
void foo()
{
thread_ref thrd = create_thread(&amp;bar);
thrd-&gt;join();
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.loop">
<title>3. Comparison: simple creation of several threads in a loop.</title>
<programlisting>
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
thread thrd(&amp;bar);
}
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
thread_ref thrd = create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.loop-join">
<title>4. Comparison: creation of several threads in a loop which are later joined.</title>
<programlisting>
void foo()
{
std::auto_ptr&lt;thread&gt; threads[NUM_THREADS];
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i] = std::auto_ptr&lt;thread&gt;(new thread(&amp;bar));
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i]-&gt;join();
}
void foo()
{
thread_ref threads[NUM_THREADS];
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i] = create_thread(&amp;bar);
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i]-&gt;join();
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.pass">
<title>5. Comparison: creation of a thread whose ownership is passed to another object/method.</title>
<programlisting>
void foo()
{
thread* thrd = new thread(&amp;bar);
manager.owns(thrd);
}
void foo()
{
thread_ref thrd = create_thread(&amp;bar);
manager.owns(thrd);
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.shared">
<title>6. Comparison: creation of a thread whose ownership is shared
between multiple objects.</title>
<programlisting>
void foo()
{
boost::shared_ptr&lt;thread&gt; thrd(new thread(&amp;bar));
manager1.add(thrd);
manager2.add(thrd);
}
void foo()
{
thread_ref thrd = create_thread(&amp;bar);
manager1.add(thrd);
manager2.add(thrd);
}
</programlisting>
</section>
<para>This shows the usage patterns being nearly identical in complexity for
both designs. The only actual added complexity occurs because of the use of
operator new in
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>,
<link linkend="thread.rationale_comparison.non-copyable.pass">(5)</link>, and
<link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>;
and the use of std::auto_ptr and boost::shared_ptr in
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link> and
<link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>
respectively. However, that's not really
much added complexity, and C++ programmers are used to using these idioms
anyway. Some may dislike the presence of operator new in user code, but
this can be eliminated by proper design of higher level concepts, such as
the boost::thread_group class that simplifies example
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>
down to:</para>
<programlisting>
void foo()
{
thread_group threads;
for (int i=0; i&lt;NUM_THREADS; ++i)
threads.create_thread(&amp;bar);
threads.join_all();
}
</programlisting>
<para>So ease of use is really a wash and not much help in picking a
design.</para>
<para>So what about performance? Looking at the above code examples,
we can analyze the theoretical impact to performance that both designs
have. For <link linkend="thread.rationale_comparison.non-copyable.simple">(1)</link>
we can see that platforms that don't have a ref-counted native
thread type (POSIX, for instance) will be impacted by a thread_ref
design. Even if the native thread type is ref-counted there may be an impact
if more state information has to be maintained for concepts foreign to the
native API, such as clean up stacks for Win32 implementations.
For <link linkend="thread.rationale_comparison.non-copyable.joined">(2)</link>
and <link linkend="thread.rationale_comparison.non-copyable.loop">(3)</link>
the performance impact will be identical to
<link linkend="thread.rationale_comparison.non-copyable.simple">(1)</link>.
For <link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>
things get a little more interesting and we find that theoretically at least
the thread_ref may perform faster since the thread design requires dynamic
memory allocation/deallocation. However, in practice there may be dynamic
allocation for the thread_ref design as well, it will just be hidden from
the user. As long as the implementation has to do dynamic allocations the
thread_ref loses again because of the reference management. For
<link linkend="thread.rationale_comparison.non-copyable.pass">(5)</link> we see
the same impact as we do for
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>.
For <link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>
we still have a possible impact to
the thread design because of dynamic allocation but thread_ref no longer
suffers because of its reference management, and in fact, theoretically at
least, the thread_ref may do a better job of managing the references. All of
this indicates that thread wins for
<link linkend="thread.rationale_comparison.non-copyable.simple">(1)</link>,
<link linkend="thread.rationale_comparison.non-copyable.joined">(2)</link> and
<link linkend="thread.rationale_comparison.non-copyable.loop">(3)</link>; with
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>
and <link linkend="thread.rationale_comparison.non-copyable.pass">(5)</link> the
winner depending on the implementation and the platform but with the thread design
probably having a better chance; and with
<link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>
it will again depend on the
implementation and platform but this time we favor thread_ref
slightly. Given all of this it's a narrow margin, but the thread design
prevails.</para>
<para>Given this analysis, and the fact that noncopyable objects for system
resources are the normal designs that C++ programmers are used to dealing
with, the &Boost.Thread; library has gone with a noncopyable design.</para>
</section>
<section id="thread.rationale.events">
<title>Rationale for not providing <emphasis>Event Variables</emphasis></title>
<para><emphasis>Event variables</emphasis> are simply far too
error-prone. <classname>boost::condition</classname> variables are a much safer
alternative. [Note that Graphical User Interface <emphasis>events</emphasis> are
a different concept, and are not what is being discussed here.]</para>
<para>Event variables were one of the first synchronization primitives. They
are still used today, for example, in the native Windows multithreading
API. Yet both respected computer science researchers and experienced
multithreading practitioners believe event variables are so inherently
error-prone that they should never be used, and thus should not be part of a
multithreading library.</para>
<para>Per Brinch Hansen &cite.Hansen73; analyzed event variables in some
detail, pointing out [emphasis his] that "<emphasis>event operations force
the programmer to be aware of the relative speeds of the sending and
receiving processes</emphasis>". His summary:</para>
<blockquote>
<para>We must therefore conclude that event variables of the previous type
are impractical for system design. <emphasis>The effect of an interaction
between two processes must be independent of the speed at which it is
carried out.</emphasis></para>
</blockquote>
<para>Experienced programmers using the Windows platform today report that
event variables are a continuing source of errors, even after previous bad
experiences caused them to be very careful in their use of event
variables. Overt problems can be avoided, for example, by teaming the event
variable with a mutex, but that may just convert a <link
linkend="thread.glossary.race-condition">race condition</link> into another
problem, such as excessive resource use. One of the most distressing aspects
of the experience reports is the claim that many defects are latent. That
is, the programs appear to work correctly, but contain hidden timing
dependencies which will cause them to fail when environmental factors or
usage patterns change, altering relative thread timings.</para>
<para>The decision to exclude event variables from &Boost.Thread; has been
surprising to some Windows programmers. They have written programs which
work using event variables, and wonder what the problem is. It seems similar
to the "goto considered harmful" controversy of 30 years ago. It isn't that
events, like gotos, can't be made to work, but rather that virtually all
programs using alternatives will be easier to write, debug, read, maintain,
and will be less likely to contain latent defects.</para>
<para>[Rationale provided by Beman Dawes]</para>
</section>
</section>

View File

@@ -1,492 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/read_write_mutex.hpp"
last-revision="$Date$">
<namespace name="boost">
<namespace name="read_write_scheduling_policy">
<enum name="read_write_scheduling_policy">
<enumvalue name="writer_priority" />
<enumvalue name="reader_priority" />
<enumvalue name="alternating_many_reads" />
<enumvalue name="alternating_single_read" />
<purpose>
<para>Specifies the
<link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
to use when a set of threads try to obtain different types of
locks simultaneously.</para>
</purpose>
<description>
<para>The only clock type supported by &Boost.Thread; is
<code>TIME_UTC</code>. The epoch for <code>TIME_UTC</code>
is 1970-01-01 00:00:00.</para>
</description>
</enum>
</namespace>
<class name="read_write_mutex">
<purpose>
<para>The <classname>read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.ReadWriteMutex">ReadWriteMutex</link> concept.</para>
<note>Unfortunately, the current implementation of the Read/Write Mutex turned out
to have some serious problems, so it was decided not to put this implementation into
release-grade code. Discussions on the mailing list also led to the
conclusion that the current concepts need to be rethought; in particular,
the <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">
Inter-Class Scheduling Policies</link> are deemed unnecessary, as there
seems to be a common belief that a fair scheme suffices.
The following documentation has nevertheless been retained to give
readers of this document the opportunity to study the original design.
</note>
</purpose>
<description>
<para>The <classname>read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.ReadWriteMutex">ReadWriteMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>try_read_write_mutex</classname> and <classname>timed_read_write_mutex</classname>.</para>
<para>The <classname>read_write_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.read-write-lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedReadWriteLock">ScopedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>read_write_mutex</classname> class uses an
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>read_write_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.read-write-mutex-models">read/write mutex models</link>
in &Boost.Thread;, <classname>read_write_mutex</classname> has two types of
<link linkend="thread.concepts.read-write-scheduling-policies">scheduling policies</link>, an
<link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class sheduling policy</link>
between threads trying to obtain different types of locks and an
<link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class sheduling policy</link>
between threads trying to obtain the same type of lock.
The <classname>read_write_mutex</classname> class allows the
programmer to choose what
<link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class sheduling policy</link>
will be used; however, like all read/write mutex models,
<classname>read_write_mutex</classname> leaves the
<link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class sheduling policy</link> as
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>.
</para>
<note>Self-deadlock is virtually guaranteed if a thread tries to
lock the same <classname>read_write_mutex</classname> multiple times
unless all locks are read-locks (but see below)</note>
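<para>A brief usage sketch follows (illustrative only, since this implementation
was withheld from release; the <code>writer_priority</code> policy and the shared
data are arbitrary choices, and the scoped locks acquire their lock on
construction as described by the lock concepts above):</para>
<programlisting>
#include &lt;boost/thread/read_write_mutex.hpp&gt;

boost::read_write_mutex rw_mutex(boost::read_write_scheduling_policy::writer_priority);
int shared_value;

int reader()
{
    // many threads may hold a read lock concurrently
    boost::read_write_mutex::scoped_read_lock lock(rw_mutex);
    return shared_value;
}

void writer(int new_value)
{
    // a write lock is exclusive
    boost::read_write_mutex::scoped_write_lock lock(rw_mutex);
    shared_value=new_value;
}
</programlisting>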
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<parameter name="count">
<paramtype>boost::read_write_scheduling_policy</paramtype>
</parameter>
<effects>Constructs a <classname>read_write_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>read_write_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="try_read_write_mutex">
<purpose>
<para>The <classname>try_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryReadWriteMutex">TryReadWriteMutex</link> concept.</para>
<note>Unfortunately, the current implementation of the Read/Write Mutex turned out
to have some serious problems, so it was decided not to put this implementation into
release-grade code. Discussions on the mailing list also led to the
conclusion that the current concepts need to be rethought; in particular,
the <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">
Inter-Class Scheduling Policies</link> are deemed unnecessary, as there
seems to be a common belief that a fair scheme suffices.
The following documentation has nevertheless been retained to give
readers of this document the opportunity to study the original design.
</note>
</purpose>
<description>
<para>The <classname>try_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryReadWriteMutex">TryReadWriteMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>read_write_mutex</classname> and <classname>timed_read_write_mutex</classname>.</para>
<para>The <classname>try_read_write_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.read-write-lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedReadWriteLock">ScopedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_try_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryReadWriteLock">ScopedTryReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>try_read_write_mutex</classname> class uses an
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>try_read_write_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.read-write-mutex-models">read/write mutex models</link>
in &Boost.Thread;, <classname>try_read_write_mutex</classname> has two types of
<link linkend="thread.concepts.read-write-scheduling-policies">scheduling policies</link>, an
<link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class sheduling policy</link>
between threads trying to obtain different types of locks and an
<link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class sheduling policy</link>
between threads trying to obtain the same type of lock.
The <classname>try_read_write_mutex</classname> class allows the
programmer to choose what
<link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class sheduling policy</link>
will be used; however, like all read/write mutex models,
<classname>try_read_write_mutex</classname> leaves the
<link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class sheduling policy</link> as
<link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
</para>
<note>Self-deadlock is virtually guaranteed if a thread tries to
lock the same <classname>try_read_write_mutex</classname> multiple times
unless all locks are read-locks (but see below)</note>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<parameter name="count">
<paramtype>boost::read_write_scheduling_policy</paramtype>
</parameter>
<effects>Constructs a <classname>try_read_write_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>try_read_write_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="timed_read_write_mutex">
<purpose>
<para>The <classname>timed_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedReadWriteMutex">TimedReadWriteMutex</link> concept.</para>
<note>Unfortunately, the current implementation of the Read/Write Mutex turned out
to have some serious problems, so it was decided not to put this implementation into
release-grade code. Discussions on the mailing list also led to the
conclusion that the current concepts need to be rethought; in particular,
the <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">
Inter-Class Scheduling Policies</link> are deemed unnecessary, as there
seems to be a common belief that a fair scheme suffices.
The following documentation has nevertheless been retained to give
readers of this document the opportunity to study the original design.
</note>
</purpose>
<description>
<para>The <classname>timed_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedReadWriteMutex">TimedReadWriteMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>read_write_mutex</classname> and <classname>try_read_write_mutex</classname>.</para>
<para>The <classname>timed_read_write_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.read-write-lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedReadWriteLock">ScopedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_try_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryReadWriteLock">ScopedTryReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_timed_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedReadWriteLock">ScopedTimedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
<row>
<entry>scoped_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>timed_read_write_mutex</classname> class uses an
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>timed_read_write_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.read-write-mutex-models">read/write mutex models</link>
in &Boost.Thread;, <classname>timed_read_write_mutex</classname> has two types of
<link linkend="thread.concepts.read-write-scheduling-policies">scheduling policies</link>, an
<link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class sheduling policy</link>
between threads trying to obtain different types of locks and an
<link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class sheduling policy</link>
between threads trying to obtain the same type of lock.
The <classname>timed_read_write_mutex</classname> class allows the
programmer to choose what
<link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class sheduling policy</link>
will be used; however, like all read/write mutex models,
<classname>timed_read_write_mutex</classname> leaves the
<link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class sheduling policy</link> as
<link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
</para>
<note>Self-deadlock is virtually guaranteed if a thread tries to
lock the same <classname>timed_read_write_mutex</classname> multiple times
unless all locks are read-locks (but see below)</note>
</description>
<typedef name="scoped_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<parameter name="count">
<paramtype>boost::read_write_scheduling_policy</paramtype>
</parameter>
<effects>Constructs a <classname>timed_read_write_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>timed_read_write_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
</namespace>
</header>


@@ -1,306 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/recursive_mutex.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="recursive_mutex">
<purpose>
<para>The <classname>recursive_mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>recursive_mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>recursive_try_mutex</classname> and <classname>recursive_timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics, see <classname>mutex</classname>,
<classname>try_mutex</classname>, and <classname>timed_mutex</classname>.
</para>
<para>The <classname>recursive_mutex</classname> class supplies the following typedef,
which models the specified locking strategy:
<table>
<title>Supported Lock Types</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
</tbody>
</tgroup>
</table>
</para>
<para>The <classname>recursive_mutex</classname> class uses a
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking strategy, so attempts to recursively lock a
<classname>recursive_mutex</classname> object
succeed and an internal "lock count" is maintained.
Attempts to unlock a <classname>recursive_mutex</classname> object
by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>recursive_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
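<para>A brief sketch of recursive locking (a minimal example; the function names
are arbitrary):</para>
<programlisting>
#include &lt;boost/thread/recursive_mutex.hpp&gt;

boost::recursive_mutex guard;

void helper()
{
    // relocking a mutex already held by this thread simply
    // increments the internal lock count
    boost::recursive_mutex::scoped_lock lock(guard);
    // ...
}

void caller()
{
    boost::recursive_mutex::scoped_lock lock(guard); // first lock
    helper();                                        // recursive lock succeeds
}
</programlisting>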
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>recursive_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>recursive_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="recursive_try_mutex">
<purpose>
<para>The <classname>recursive_try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>recursive_try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>recursive_mutex</classname> and <classname>recursive_timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics, see <classname>mutex</classname>,
<classname>try_mutex</classname>, and <classname>timed_mutex</classname>.
</para>
<para>The <classname>recursive_try_mutex</classname> class supplies the following typedefs,
which model the specified locking strategies:
<table>
<title>Supported Lock Types</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
</tbody>
</tgroup>
</table>
</para>
<para>The <classname>recursive_try_mutex</classname> class uses a
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking strategy, so attempts to recursively lock a
<classname>recursive_try_mutex</classname> object
succeed and an internal "lock count" is maintained.
Attempts to unlock a <classname>recursive_try_mutex</classname> object
by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>recursive_try_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>recursive_try_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>recursive_try_mutex</classname> object.
</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="recursive_timed_mutex">
<purpose>
<para>The <classname>recursive_timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>recursive_timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>recursive_mutex</classname> and <classname>recursive_try_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics, see <classname>mutex</classname>,
<classname>try_mutex</classname>, and <classname>timed_mutex</classname>.
</para>
<para>The <classname>recursive_timed_mutex</classname> class supplies the following typedefs,
which model the specified locking strategies:
<table>
<title>Supported Lock Types</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
</tbody>
</tgroup>
</table>
</para>
<para>The <classname>recursive_timed_mutex</classname> class uses a
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking strategy, so attempts to recursively lock a
<classname>recursive_timed_mutex</classname> object
succeed and an internal "lock count" is maintained.
Attempts to unlock a <classname>recursive_timed_mutex</classname> object
by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>recursive_timed_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>recursive_timed_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>recursive_timed_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
</namespace>
</header>


@@ -1,30 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<library-reference id="thread.reference"
last-revision="$Date$"
xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="barrier-ref.xml"/>
<xi:include href="condition-ref.xml"/>
<xi:include href="exceptions-ref.xml"/>
<xi:include href="mutex-ref.xml"/>
<xi:include href="once-ref.xml"/>
<xi:include href="recursive_mutex-ref.xml"/>
<!--
The read_write_mutex is held back from release, since the
implementation suffers from a serious, yet unresolved bug.
The implementation is likely to appear in a reworked
form in the next release.
-->
<xi:include href="read_write_mutex-ref.xml"/>
<xi:include href="thread-ref.xml"/>
<xi:include href="tss-ref.xml"/>
<xi:include href="xtime-ref.xml"/>
</library-reference>


@@ -1,204 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<section id="thread.release_notes" last-revision="$Date$">
<title>Release Notes</title>
<section id="thread.release_notes.boost_1_34_0">
<title>Boost 1.34.0</title>
<section id="thread.release_notes.boost_1_34_0.change_log.maintainance">
<title>New team of maintainers</title>
<para>
Since the original author, William E. Kempf, is no longer available to
maintain the &Boost.Thread; library, a new team has been formed
in an attempt to continue the work on &Boost.Thread;.
Fortunately, William E. Kempf has given
<ulink url="http://lists.boost.org/Archives/boost/2006/09/110143.php">
permission</ulink>
to use his work under the Boost license.
</para>
<para>
The team currently consists of
<itemizedlist>
<listitem>
Anthony Williams, for the Win32 platform,
</listitem>
<listitem>
Roland Schwarz, for the Linux platform and various "housekeeping" tasks.
</listitem>
</itemizedlist>
Volunteers for other platforms are welcome!
</para>
<para>
As &Boost.Thread; was largely unmaintained over the last release, this release
concentrates on fixing the known bugs. Upcoming releases will bring new functionality.
</para>
</section>
<section id="thread.release_notes.boost_1_34_0.change_log.read_write_mutex">
<title>read_write_mutex still broken</title>
<para>
<note>
It has been decided not to release the Read/Write Mutex, since the current
implementation suffers from a serious bug. The documentation of the concepts
has been included, though, giving the interested reader an opportunity to study
the original concepts. Please refer to the following links if you are interested
in the problems that led to the decision to hold back this mutex type. The issue
was discovered before 1.33 was released, and the code was
omitted from that release. A reworked mutex is expected to appear in 1.35.
Also see:
<ulink url="http://lists.boost.org/Archives/boost/2005/08/92307.php">
read_write_mutex bug</ulink>
and
<ulink url="http://lists.boost.org/Archives/boost/2005/09/93180.php">
read_write_mutex fundamentally broken in 1.33</ulink>
</note>
</para>
</section>
</section>
<section id="thread.release_notes.boost_1_32_0">
<title>Boost 1.32.0</title>
<section id="thread.release_notes.boost_1_32_0.change_log.documentation">
<title>Documentation converted to BoostBook</title>
<para>The documentation was converted to BoostBook format,
and a number of errors and inconsistencies were
fixed in the process.
Since this was a fairly large task, there are likely to be
more errors and inconsistencies remaining. If you find any,
please report them!</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.static_link">
<title>Statically-linked build option added</title>
<para>The option to link &Boost.Thread; as a static
library has been added (with some limitations on Win32 platforms).
This feature was originally removed from an earlier version
of Boost because <classname>boost::thread_specific_ptr</classname>
required that &Boost.Thread; be dynamically linked in order
for its cleanup functionality to work on Win32 platforms.
Because this limitation never applied to non-Win32 platforms,
because significant progress has been made in removing
the limitation on Win32 platforms (many thanks to
Aaron LaFramboise and Roland Schwarz!), and because the lack
of static linking is one of the most common complaints of
&Boost.Thread; users, this decision was reversed.</para>
<para>On non-Win32 platforms:
To choose the dynamically linked version of &Boost.Thread;
using Boost's auto-linking feature, #define BOOST_THREAD_USE_DLL;
to choose the statically linked version,
#define BOOST_THREAD_USE_LIB.
If neither symbol is #defined, the default will be chosen.
Currently the default is the statically linked version.</para>
<para>On Win32 platforms using VC++:
Use the same #defines as for non-Win32 platforms
(BOOST_THREAD_USE_DLL and BOOST_THREAD_USE_LIB).
If neither is #defined, the default will be chosen.
Currently the default is the statically linked version
if the VC++ run-time library is set to
"Multi-threaded" or "Multi-threaded Debug", and
the dynamically linked version
if the VC++ run-time library is set to
"Multi-threaded DLL" or "Multi-threaded Debug DLL".</para>
<para>On Win32 platforms using compilers other than VC++:
Use the same #defines as for non-Win32 platforms
(BOOST_THREAD_USE_DLL and BOOST_THREAD_USE_LIB).
If neither is #defined, the default will be chosen.
Currently the default is the dynamically linked version
because it has not yet been possible to implement automatic
tss cleanup in the statically linked version for compilers
other than VC++, although it is hoped that this will be
possible in a future version of &Boost.Thread;.
Note for advanced users: &Boost.Thread; provides several "hook"
functions to allow users to experiment with the statically
linked version on Win32 with compilers other than VC++.
These functions are on_process_enter(), on_process_exit(),
on_thread_enter(), and on_thread_exit(), and are defined
in tls_hooks.cpp. See the comments in that file for more
information.</para>
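<para>For example, a translation unit that wants the statically linked
library could define the corresponding macro before including any
&Boost.Thread; header (a minimal sketch):</para>
<programlisting>
// request the statically linked Boost.Thread
#define BOOST_THREAD_USE_LIB
#include &lt;boost/thread/thread.hpp&gt;
</programlisting>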
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.barrier">
<title>Barrier functionality added</title>
<para>A new class, <classname>boost::barrier</classname>, was added.</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.read_write_mutex">
<title>Read/write mutex functionality added</title>
<para>New classes,
<classname>boost::read_write_mutex</classname>,
<classname>boost::try_read_write_mutex</classname>, and
<classname>boost::timed_read_write_mutex</classname>
were added.
<note>Since the read/write mutex and related classes are new,
both interface and implementation are liable to change
in future releases of &Boost.Thread;.
The lock concepts and lock promotion in particular are
still under discussion and very likely to change.</note>
</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.thread_specific_ptr">
<title>Thread-specific pointer functionality changed</title>
<para>The <classname>boost::thread_specific_ptr</classname>
constructor now takes an optional pointer to a cleanup function that
is called to release the thread-specific data that is being pointed
to by <classname>boost::thread_specific_ptr</classname> objects.</para>
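<para>For example (a minimal sketch; the <code>connection</code> type and the
cleanup function are hypothetical):</para>
<programlisting>
#include &lt;boost/thread/tss.hpp&gt;

struct connection { /* ... */ };

void cleanup_connection(connection* c)
{
    delete c;
}

// each thread gets its own connection, released by cleanup_connection
// when the thread exits
boost::thread_specific_ptr&lt;connection&gt; thread_connection(&amp;cleanup_connection);
</programlisting>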
<para>Fixed: the number of available thread-specific storage "slots"
is too small on some platforms.</para>
<para>Fixed: <functionname>thread_specific_ptr::reset()</functionname>
doesn't check error returned by <functionname>tss::set()</functionname>
(the <functionname>tss::set()</functionname> function now throws
if it fails instead of returning an error code).</para>
<para>Fixed: calling
<functionname>boost::thread_specific_ptr::reset()</functionname> or
<functionname>boost::thread_specific_ptr::release()</functionname>
causes double-delete: once when
<functionname>boost::thread_specific_ptr::reset()</functionname> or
<functionname>boost::thread_specific_ptr::release()</functionname>
is called and once when
<functionname>boost::thread_specific_ptr::~thread_specific_ptr()</functionname>
is called.</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.mutex">
<title>Mutex implementation changed for Win32</title>
<para>On Win32, <classname>boost::mutex</classname>,
<classname>boost::try_mutex</classname>, <classname>boost::recursive_mutex</classname>,
and <classname>boost::recursive_try_mutex</classname> now use a Win32 critical section
whenever possible; otherwise they use a Win32 mutex. As before,
<classname>boost::timed_mutex</classname> and
<classname>boost::recursive_timed_mutex</classname> use a Win32 mutex.</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.wince">
<title>Windows CE support improved</title>
<para>Minor changes were made to make Boost.Thread work on Windows CE.</para>
</section>
</section>
</section>

35
doc/shared_mutex_ref.qbk Normal file

@@ -0,0 +1,35 @@
[section:shared_mutex Class `shared_mutex`]
class shared_mutex
{
public:
shared_mutex();
~shared_mutex();
void lock_shared();
bool try_lock_shared();
bool timed_lock_shared(system_time const& timeout);
void unlock_shared();
void lock();
bool try_lock();
bool timed_lock(system_time const& timeout);
void unlock();
void lock_upgrade();
void unlock_upgrade();
void unlock_upgrade_and_lock();
void unlock_and_lock_upgrade();
void unlock_and_lock_shared();
void unlock_upgrade_and_lock_shared();
};
The class `boost::shared_mutex` provides an implementation of a multiple-reader / single-writer mutex. It implements the
__upgrade_lockable_concept__.
Multiple concurrent calls to __lock_ref__, __try_lock_ref__, __timed_lock_ref__, __lock_shared_ref__, __try_lock_shared_ref__ and
__timed_lock_shared_ref__ shall be permitted.
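For example, readers might acquire the mutex through a `boost::shared_lock` whilst a writer uses a `boost::unique_lock` (a minimal sketch; the shared data and function names are arbitrary):

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::shared_mutex rw_mutex;
int shared_value;

int reader()
{
    boost::shared_lock<boost::shared_mutex> lock(rw_mutex); // shared ownership
    return shared_value;
}

void writer(int new_value)
{
    boost::unique_lock<boost::shared_mutex> lock(rw_mutex); // exclusive ownership
    shared_value=new_value;
}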
[endsect]


@@ -1,270 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/thread.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="thread">
<purpose>
<para>The <classname>thread</classname> class represents threads of
execution, and provides the functionality to create and manage
threads within the &Boost.Thread; library. See
<xref linkend="thread.glossary"/> for a precise description of
<link linkend="thread.glossary.thread">thread of execution</link>,
and for definitions of threading-related terms and of thread states such as
<link linkend="thread.glossary.thread-state">blocked</link>.</para>
</purpose>
<description>
<para>A <link linkend="thread.glossary.thread">thread of execution</link>
has an initial function. For the program's initial thread, the initial
function is <code>main()</code>. For other threads, the initial
function is <code>operator()</code> of the function object passed to
the <classname>thread</classname> object's constructor.</para>
<para>A thread of execution is said to be &quot;finished&quot;
or to have &quot;finished execution&quot; when its initial function returns or
is terminated. This includes completion of all thread cleanup
handlers, and completion of the normal C++ function return behaviors,
such as destruction of automatic storage (stack) objects and releasing
any associated implementation resources.</para>
<para>A thread object has an associated state which is either
&quot;joinable&quot; or &quot;non-joinable&quot;.</para>
<para>Except as described below, the policy used by an implementation
of &Boost.Thread; to schedule transitions between thread states is
unspecified.</para>
<para><note>Just as the lifetime of a file may be different from the
lifetime of an <code>iostream</code> object which represents the file, the lifetime
of a thread of execution may be different from the
<classname>thread</classname> object which represents the thread of
execution. In particular, after a call to <code>join()</code>,
the thread of execution will no longer exist even though the
<classname>thread</classname> object continues to exist until the
end of its normal lifetime. The converse is also possible; if
a <classname>thread</classname> object is destroyed without
<code>join()</code> first having been called, the thread of execution
continues until its initial function completes.</note></para>
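<para>A short sketch of the join-versus-detach behaviour described above (the
worker function is an arbitrary placeholder):</para>
<programlisting>
#include &lt;boost/thread/thread.hpp&gt;

void background_work();

void wait_for_worker()
{
    boost::thread worker(&amp;background_work);
    worker.join();   // block until background_work returns
}

void fire_and_forget()
{
    boost::thread worker(&amp;background_work);
}   // worker is destroyed here; the thread of execution is detached
    // and continues until background_work returns
</programlisting>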
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<constructor>
<effects>Constructs a <classname>thread</classname> object
representing the current thread of execution.</effects>
<postconditions><code>*this</code> is non-joinable.</postconditions>
<notes><emphasis role="bold">Danger:</emphasis>
<code>*this</code> is valid only within the current thread.</notes>
</constructor>
<constructor specifiers="explicit">
<parameter name="threadfunc">
<paramtype>const boost::function0&lt;void&gt;&amp;</paramtype>
</parameter>
<effects>
Starts a new thread of execution and constructs a
<classname>thread</classname> object representing it.
Copies <code>threadfunc</code> (which in turn copies
the function object wrapped by <code>threadfunc</code>)
to an internal location which persists for the lifetime
of the new thread of execution. Calls <code>operator()</code>
on the copy of the <code>threadfunc</code> function object
in the new thread of execution.
</effects>
<postconditions><code>*this</code> is joinable.</postconditions>
<throws><code>boost::thread_resource_error</code> if a new thread
of execution cannot be started.</throws>
</constructor>
<destructor>
<effects>Destroys <code>*this</code>. The actual thread of
execution may continue to execute after the
<classname>thread</classname> object has been destroyed.
</effects>
<notes>If <code>*this</code> is joinable the actual thread
of execution becomes &quot;detached&quot;. Any resources used
by the thread will be reclaimed when the thread of execution
completes. To ensure such a thread of execution runs to completion
before the <classname>thread</classname> object is destroyed, call
<code>join()</code>.</notes>
</destructor>
<method-group name="comparison">
<method name="operator==" cv="const">
<type>bool</type>
<parameter name="rhs">
<type>const thread&amp;</type>
</parameter>
<requires>The thread is non-terminated or <code>*this</code>
is joinable.</requires>
<returns><code>true</code> if <code>*this</code> and
<code>rhs</code> represent the same thread of
execution.</returns>
</method>
<method name="operator!=" cv="const">
<type>bool</type>
<parameter name="rhs">
<type>const thread&amp;</type>
</parameter>
<requires>The thread is non-terminated or <code>*this</code>
is joinable.</requires>
<returns><code>!(*this==rhs)</code>.</returns>
</method>
</method-group>
<method-group name="modifier">
<method name="join">
<type>void</type>
<requires><code>*this</code> is joinable.</requires>
<effects>The current thread of execution blocks until the
initial function of the thread of execution represented by
<code>*this</code> finishes and all resources are
reclaimed.</effects>
<postcondition><code>*this</code> is non-joinable.</postcondition>
<notes>If <code>*this == thread()</code>, the result is
implementation-defined. If the implementation doesn't
detect this, the result will be
<link linkend="thread.glossary.deadlock">deadlock</link>.
</notes>
</method>
</method-group>
<method-group name="static">
<method name="sleep" specifiers="static">
<type>void</type>
<parameter name="xt">
<paramtype>const <classname>xtime</classname>&amp;</paramtype>
</parameter>
<effects>The current thread of execution blocks until
<code>xt</code> is reached.</effects>
</method>
<method name="yield" specifiers="static">
<type>void</type>
<effects>The current thread of execution is placed in the
<link linkend="thread.glossary.thread-state">ready</link>
state.</effects>
<notes>
<simpara>Allow the current thread to give up the rest of its
time slice (or other scheduling quota) to another thread.
Particularly useful in non-preemptive implementations.</simpara>
</notes>
</method>
</method-group>
</class>
<class name="thread_group">
<purpose>
The <classname>thread_group</classname> class provides a container
for easy grouping of threads to simplify several common thread
creation and management idioms.
</purpose>
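<description>
<para>A typical usage sketch (the worker function is an arbitrary placeholder):</para>
<programlisting>
#include &lt;boost/thread/thread.hpp&gt;

void worker();

void run_workers()
{
    boost::thread_group group;
    for(int i=0; i&lt;4; ++i)
    {
        group.create_thread(&amp;worker);  // created threads are owned by the group
    }
    group.join_all();                      // wait for all of them to finish
}
</programlisting>
</description>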
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<constructor>
<effects>Constructs an empty <classname>thread_group</classname>
container.</effects>
</constructor>
<destructor>
<effects>Destroys each contained thread object. Destroys <code>*this</code>.</effects>
<notes>Behavior is undefined if another thread references
<code>*this </code> during the execution of the destructor.
</notes>
</destructor>
<method-group name="modifier">
<method name="create_thread">
<type><classname>thread</classname>*</type>
<parameter name="threadfunc">
<paramtype>const boost::function0&lt;void&gt;&amp;</paramtype>
</parameter>
<effects>Creates a new <classname>thread</classname> object
that executes <code>threadfunc</code> and adds it to the
<code>thread_group</code> container object's list of managed
<classname>thread</classname> objects.</effects>
<returns>Pointer to the newly created
<classname>thread</classname> object.</returns>
</method>
<method name="add_thread">
<type>void</type>
<parameter name="thrd">
<paramtype><classname>thread</classname>*</paramtype>
</parameter>
<effects>Adds <code>thrd</code> to the
<classname>thread_group</classname> object's list of managed
<classname>thread</classname> objects. The <code>thrd</code>
object must have been allocated via <code>operator new</code> and will
be deleted when the group is destroyed.</effects>
</method>
<method name="remove_thread">
<type>void</type>
<parameter name="thrd">
<paramtype><classname>thread</classname>*</paramtype>
</parameter>
<effects>Removes <code>thrd</code> from <code>*this</code>'s
list of managed <classname>thread</classname> objects.</effects>
<throws><emphasis role="bold">???</emphasis> if
<code>thrd</code> is not in <code>*this</code>'s list
of managed <classname>thread</classname> objects.</throws>
</method>
<method name="join_all">
<type>void</type>
<effects>Calls <code>join()</code> on each of the managed
<classname>thread</classname> objects.</effects>
</method>
</method-group>
</class>
</namespace>
</header>

150
doc/thread.qbk Normal file

@@ -0,0 +1,150 @@
[article Thread
[quickbook 1.4]
[authors [Williams, Anthony]]
[copyright 2007-8 Anthony Williams]
[purpose C++ Library for launching threads and synchronizing data between them]
[category text]
[license
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
[@http://www.boost.org/LICENSE_1_0.txt])
]
]
[template lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.lockable [link_text]]]
[def __lockable_concept__ [lockable_concept_link `Lockable` concept]]
[def __lockable_concept_type__ [lockable_concept_link `Lockable`]]
[template timed_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable [link_text]]]
[def __timed_lockable_concept__ [timed_lockable_concept_link `TimedLockable` concept]]
[def __timed_lockable_concept_type__ [timed_lockable_concept_link `TimedLockable`]]
[template shared_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable [link_text]]]
[def __shared_lockable_concept__ [shared_lockable_concept_link `SharedLockable` concept]]
[def __shared_lockable_concept_type__ [shared_lockable_concept_link `SharedLockable`]]
[template upgrade_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable [link_text]]]
[def __upgrade_lockable_concept__ [upgrade_lockable_concept_link `UpgradeLockable` concept]]
[def __upgrade_lockable_concept_type__ [upgrade_lockable_concept_link `UpgradeLockable`]]
[template lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.lock [link_text]]]
[def __lock_ref__ [lock_ref_link `lock()`]]
[template unlock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.unlock [link_text]]]
[def __unlock_ref__ [unlock_ref_link `unlock()`]]
[template try_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.try_lock [link_text]]]
[def __try_lock_ref__ [try_lock_ref_link `try_lock()`]]
[template timed_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable.timed_lock [link_text]]]
[def __timed_lock_ref__ [timed_lock_ref_link `timed_lock()`]]
[template timed_lock_duration_ref_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable.timed_lock_duration [link_text]]]
[def __timed_lock_duration_ref__ [timed_lock_duration_ref_link `timed_lock()`]]
[template lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.lock_shared [link_text]]]
[def __lock_shared_ref__ [lock_shared_ref_link `lock_shared()`]]
[template unlock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.unlock_shared [link_text]]]
[def __unlock_shared_ref__ [unlock_shared_ref_link `unlock_shared()`]]
[template try_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.try_lock_shared [link_text]]]
[def __try_lock_shared_ref__ [try_lock_shared_ref_link `try_lock_shared()`]]
[template timed_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.timed_lock_shared [link_text]]]
[def __timed_lock_shared_ref__ [timed_lock_shared_ref_link `timed_lock_shared()`]]
[template timed_lock_shared_duration_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.timed_lock_shared_duration [link_text]]]
[def __timed_lock_shared_duration_ref__ [timed_lock_shared_duration_ref_link `timed_lock_shared()`]]
[template lock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.lock_upgrade [link_text]]]
[def __lock_upgrade_ref__ [lock_upgrade_ref_link `lock_upgrade()`]]
[template unlock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade [link_text]]]
[def __unlock_upgrade_ref__ [unlock_upgrade_ref_link `unlock_upgrade()`]]
[template unlock_upgrade_and_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade_and_lock [link_text]]]
[def __unlock_upgrade_and_lock_ref__ [unlock_upgrade_and_lock_ref_link `unlock_upgrade_and_lock()`]]
[template unlock_and_lock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_and_lock_upgrade [link_text]]]
[def __unlock_and_lock_upgrade_ref__ [unlock_and_lock_upgrade_ref_link `unlock_and_lock_upgrade()`]]
[template unlock_upgrade_and_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade_and_lock_shared [link_text]]]
[def __unlock_upgrade_and_lock_shared_ref__ [unlock_upgrade_and_lock_shared_ref_link `unlock_upgrade_and_lock_shared()`]]
[template owns_lock_ref_link[link_text] [link thread.synchronization.locks.unique_lock.owns_lock [link_text]]]
[def __owns_lock_ref__ [owns_lock_ref_link `owns_lock()`]]
[template owns_lock_shared_ref_link[link_text] [link thread.synchronization.locks.shared_lock.owns_lock [link_text]]]
[def __owns_lock_shared_ref__ [owns_lock_shared_ref_link `owns_lock()`]]
[template mutex_func_ref_link[link_text] [link thread.synchronization.locks.unique_lock.mutex [link_text]]]
[def __mutex_func_ref__ [mutex_func_ref_link `mutex()`]]
[def __boost_thread__ [*Boost.Thread]]
[def __not_a_thread__ ['Not-a-Thread]]
[def __interruption_points__ [link interruption_points ['interruption points]]]
[def __mutex__ [link thread.synchronization.mutex_types.mutex `boost::mutex`]]
[def __try_mutex__ [link thread.synchronization.mutex_types.try_mutex `boost::try_mutex`]]
[def __timed_mutex__ [link thread.synchronization.mutex_types.timed_mutex `boost::timed_mutex`]]
[def __recursive_mutex__ [link thread.synchronization.mutex_types.recursive_mutex `boost::recursive_mutex`]]
[def __recursive_try_mutex__ [link thread.synchronization.mutex_types.recursive_try_mutex `boost::recursive_try_mutex`]]
[def __recursive_timed_mutex__ [link thread.synchronization.mutex_types.recursive_timed_mutex `boost::recursive_timed_mutex`]]
[def __shared_mutex__ [link thread.synchronization.mutex_types.shared_mutex `boost::shared_mutex`]]
[def __lock_guard__ [link thread.synchronization.locks.lock_guard `boost::lock_guard`]]
[def __unique_lock__ [link thread.synchronization.locks.unique_lock `boost::unique_lock`]]
[def __shared_lock__ [link thread.synchronization.locks.shared_lock `boost::shared_lock`]]
[def __upgrade_lock__ [link thread.synchronization.locks.upgrade_lock `boost::upgrade_lock`]]
[def __upgrade_to_unique_lock__ [link thread.synchronization.locks.upgrade_to_unique_lock `boost::upgrade_to_unique_lock`]]
[def __thread__ [link thread.thread_management.thread `boost::thread`]]
[def __thread_id__ [link thread.thread_management.thread.id `boost::thread::id`]]
[template join_link[link_text] [link thread.thread_management.thread.join [link_text]]]
[def __join__ [join_link `join()`]]
[template timed_join_link[link_text] [link thread.thread_management.thread.timed_join [link_text]]]
[def __timed_join__ [timed_join_link `timed_join()`]]
[def __detach__ [link thread.thread_management.thread.detach `detach()`]]
[def __interrupt__ [link thread.thread_management.thread.interrupt `interrupt()`]]
[def __sleep__ [link thread.thread_management.this_thread.sleep `boost::this_thread::sleep()`]]
[def __interruption_enabled__ [link thread.thread_management.this_thread.interruption_enabled `boost::this_thread::interruption_enabled()`]]
[def __interruption_requested__ [link thread.thread_management.this_thread.interruption_requested `boost::this_thread::interruption_requested()`]]
[def __interruption_point__ [link thread.thread_management.this_thread.interruption_point `boost::this_thread::interruption_point()`]]
[def __disable_interruption__ [link thread.thread_management.this_thread.disable_interruption `boost::this_thread::disable_interruption`]]
[def __restore_interruption__ [link thread.thread_management.this_thread.restore_interruption `boost::this_thread::restore_interruption`]]
[def __thread_resource_error__ `boost::thread_resource_error`]
[def __thread_interrupted__ `boost::thread_interrupted`]
[def __barrier__ [link thread.synchronization.barriers.barrier `boost::barrier`]]
[template cond_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable.wait [link_text]]]
[def __cond_wait__ [cond_wait_link `wait()`]]
[template cond_timed_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable.timed_wait [link_text]]]
[def __cond_timed_wait__ [cond_timed_wait_link `timed_wait()`]]
[template cond_any_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable_any.wait [link_text]]]
[def __cond_any_wait__ [cond_any_wait_link `wait()`]]
[template cond_any_timed_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable_any.timed_wait [link_text]]]
[def __cond_any_timed_wait__ [cond_any_timed_wait_link `timed_wait()`]]
[def __blocked__ ['blocked]]
[include overview.qbk]
[include changes.qbk]
[include thread_ref.qbk]
[section:synchronization Synchronization]
[include mutex_concepts.qbk]
[include mutexes.qbk]
[include condition_variables.qbk]
[include once.qbk]
[include barrier.qbk]
[endsect]
[include tss.qbk]
[include acknowledgements.qbk]


@@ -1,48 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<library name="Thread" dirname="thread" id="thread"
last-revision="$Date$"
xmlns:xi="http://www.w3.org/2001/XInclude">
<libraryinfo>
<author>
<firstname>William</firstname>
<othername>E.</othername>
<surname>Kempf</surname>
</author>
<copyright>
<year>2001</year>
<year>2002</year>
<year>2003</year>
<holder>William E. Kempf</holder>
</copyright>
<legalnotice>
<para>Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)</para>
</legalnotice>
<librarypurpose>Portable C++ multi-threading</librarypurpose>
<librarycategory name="category:concurrent" />
<title>Boost.Thread</title>
</libraryinfo>
<title>Boost.Thread</title>
<xi:include href="overview.xml"/>
<xi:include href="design.xml"/>
<xi:include href="concepts.xml"/>
<xi:include href="rationale.xml"/>
<xi:include href="reference.xml"/>
<xi:include href="faq.xml"/>
<xi:include href="configuration.xml"/>
<xi:include href="build.xml"/>
<xi:include href="implementation_notes.xml"/>
<xi:include href="release_notes.xml"/>
<xi:include href="glossary.xml"/>
<xi:include href="acknowledgements.xml"/>
<xi:include href="bibliography.xml"/>
</library>

933
doc/thread_ref.qbk Normal file
View File

@@ -0,0 +1,933 @@
[section:thread_management Thread Management]
[heading Synopsis]
The __thread__ class is responsible for launching and managing threads. Each __thread__ object represents a single thread of execution,
or __not_a_thread__, and at most one __thread__ object represents a given thread of execution: objects of type __thread__ are not
copyable.
Objects of type __thread__ are movable, however, so they can be stored in move-aware containers, and returned from functions. This
allows the details of thread creation to be wrapped in a function.
boost::thread make_thread();
void f()
{
boost::thread some_thread=make_thread();
some_thread.join();
}
[heading Launching threads]
A new thread is launched by passing an object of a callable type that can be invoked with no parameters to the constructor. The
object is then copied into internal storage, and invoked on the newly-created thread of execution. If the object must not (or
cannot) be copied, then `boost::ref` can be used to pass in a reference to the function object. In this case, the user of
__boost_thread__ must ensure that the referred-to object outlives the newly-created thread of execution.
struct callable
{
void operator()();
};
boost::thread copies_are_safe()
{
callable x;
return boost::thread(x);
} // x is destroyed, but the newly-created thread has a copy, so this is OK
boost::thread oops()
{
callable x;
return boost::thread(boost::ref(x));
} // x is destroyed, but the newly-created thread still has a reference
// this leads to undefined behaviour
If you wish to construct an instance of __thread__ with a function or callable object that requires arguments to be supplied,
this can be done using `boost::bind`:
void find_the_question(int the_answer);
boost::thread deep_thought_2(boost::bind(find_the_question,42));
[heading Joining and detaching]
When the __thread__ object that represents a thread of execution is destroyed, the thread becomes ['detached]. Once a thread is
detached, it will continue executing until the invocation of the function or callable object supplied on construction has completed,
or the program is terminated. A thread can also be detached by explicitly invoking the __detach__ member function on the __thread__
object. In this case, the __thread__ object ceases to represent the now-detached thread, and instead represents __not_a_thread__.
In order to wait for a thread of execution to finish, the __join__ or __timed_join__ member functions of the __thread__ object must be
used. __join__ will block the calling thread until the thread represented by the __thread__ object has completed. If the thread of
execution represented by the __thread__ object has already completed, or the __thread__ object represents __not_a_thread__, then __join__
returns immediately. __timed_join__ is similar, except that a call to __timed_join__ will also return if the thread being waited for
does not complete before the specified time has elapsed.
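For example, the following sketch (the `do_background_work` function and the two-second timeout are purely illustrative, and the usual Boost.Thread and Boost.Date_Time headers are assumed) waits for a background thread, falling back to detaching it if it does not finish in time:

    void do_background_work();    // hypothetical task

    void wait_with_timeout()
    {
        boost::thread worker(do_background_work);
        if(!worker.timed_join(boost::posix_time::seconds(2)))
        {
            // still running after two seconds: stop waiting and let it
            // continue detached
            worker.detach();
        }
    }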
[heading Interruption]
A running thread can be ['interrupted] by invoking the __interrupt__ member function of the corresponding __thread__ object. When the
interrupted thread next executes one of the specified __interruption_points__ (or if it is currently __blocked__ whilst executing one)
with interruption enabled, then a __thread_interrupted__ exception will be thrown in the interrupted thread. If not caught,
this will cause the execution of the interrupted thread to terminate. As with any other exception, the stack will be unwound, and
destructors for objects of automatic storage duration will be executed.
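For example, a thread __blocked__ in __sleep__ can be woken early by interrupting it and catching the resulting exception (a minimal sketch; the function names are illustrative):

    void wait_quietly()
    {
        try
        {
            boost::this_thread::sleep(boost::posix_time::minutes(10));
        }
        catch(boost::thread_interrupted&)
        {
            // interrupted whilst sleeping: exit early
        }
    }

    void launch_and_cancel()
    {
        boost::thread t(wait_quietly);
        t.interrupt();    // request interruption of the sleeping thread
        t.join();         // the interrupted thread finishes promptly
    }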
If a thread wishes to avoid being interrupted, it can create an instance of __disable_interruption__. Objects of this class disable
interruption for the thread that created them on construction, and restore the interruption state to whatever it was before on
destruction:
void f()
{
// interruption enabled here
{
boost::this_thread::disable_interruption di;
// interruption disabled
{
boost::this_thread::disable_interruption di2;
// interruption still disabled
} // di2 destroyed, interruption state restored
// interruption still disabled
} // di destroyed, interruption state restored
// interruption now enabled
}
The effects of an instance of __disable_interruption__ can be temporarily reversed by constructing an instance of
__restore_interruption__, passing in the __disable_interruption__ object in question. This will
restore the interruption state to what it was when the __disable_interruption__ object was constructed, and then
disable interruption again when the __restore_interruption__ object is destroyed.
void g()
{
// interruption enabled here
{
boost::this_thread::disable_interruption di;
// interruption disabled
{
boost::this_thread::restore_interruption ri(di);
// interruption now enabled
} // ri destroyed, interruption disabled again
} // di destroyed, interruption state restored
// interruption now enabled
}
At any point, the interruption state for the current thread can be queried by calling __interruption_enabled__.
[#interruption_points]
[heading Predefined Interruption Points]
The following functions are ['interruption points], which will throw __thread_interrupted__ if interruption is enabled for the
current thread, and interruption is requested for the current thread:
* [join_link `boost::thread::join()`]
* [timed_join_link `boost::thread::timed_join()`]
* [cond_wait_link `boost::condition_variable::wait()`]
* [cond_timed_wait_link `boost::condition_variable::timed_wait()`]
* [cond_any_wait_link `boost::condition_variable_any::wait()`]
* [cond_any_timed_wait_link `boost::condition_variable_any::timed_wait()`]
* [link thread.thread_management.thread.sleep `boost::thread::sleep()`]
* __sleep__
* __interruption_point__
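A thread that performs purely CPU-bound work never calls any of the functions above, and so would not notice an interruption request; such a thread can poll for interruption explicitly, as in the following sketch (the `do_next_item` function is hypothetical):

    void do_next_item();    // hypothetical unit of work

    void process_items()
    {
        for(;;)
        {
            boost::this_thread::interruption_point();    // throws boost::thread_interrupted
                                                         // if interruption has been requested
            do_next_item();
        }
    }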
[heading Thread IDs]
Objects of class __thread_id__ can be used to identify threads. Each running thread of execution has a unique ID obtainable
from the corresponding __thread__ by calling the `get_id()` member function, or by calling `boost::this_thread::get_id()` from
within the thread. Objects of class __thread_id__ can be copied, and used as keys in associative containers: the full range of
comparison operators is provided. Thread IDs can also be written to an output stream using the stream insertion operator, though the
output format is unspecified.
Each instance of __thread_id__ either refers to some thread, or __not_a_thread__. Instances that refer to __not_a_thread__
compare equal to each other, but not equal to any instances that refer to an actual thread of execution. The comparison operators on
__thread_id__ yield a total order over all __thread_id__ values.
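For example, a function can use the comparison operators to check whether it is being called on the thread represented by a given __thread__ object (a minimal sketch):

    bool called_on(boost::thread& t)
    {
        return t.get_id()==boost::this_thread::get_id();
    }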
[section:thread Class `thread`]
class thread
{
public:
thread();
~thread();
template <class F>
explicit thread(F f);
template <class F>
thread(detail::thread_move_t<F> f);
// move support
thread(detail::thread_move_t<thread> x);
thread& operator=(detail::thread_move_t<thread> x);
operator detail::thread_move_t<thread>();
detail::thread_move_t<thread> move();
void swap(thread& x);
class id;
id get_id() const;
bool joinable() const;
void join();
bool timed_join(const system_time& wait_until);
template<typename TimeDuration>
bool timed_join(TimeDuration const& rel_time);
void detach();
static unsigned hardware_concurrency();
typedef platform-specific-type native_handle_type;
native_handle_type native_handle();
void interrupt();
bool interruption_requested() const;
// backwards compatibility
bool operator==(const thread& other) const;
bool operator!=(const thread& other) const;
static void yield();
static void sleep(const system_time& xt);
};
[section:default_constructor Default Constructor]
thread();
[variablelist
[[Effects:] [Constructs a __thread__ instance that refers to __not_a_thread__.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:callable_constructor Thread Constructor]
template<typename Callable>
thread(Callable func);
[variablelist
[[Preconditions:] [`Callable` must be copyable.]]
[[Effects:] [`func` is copied into storage managed internally by the thread library, and that copy is invoked on a newly-created thread of execution.]]
[[Postconditions:] [`*this` refers to the newly created thread of execution.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:destructor Thread Destructor]
~thread();
[variablelist
[[Effects:] [If `*this` has an associated thread of execution, calls __detach__. Destroys `*this`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:joinable Member function `joinable()`]
bool joinable() const;
[variablelist
[[Returns:] [`true` if `*this` refers to a thread of execution, `false` otherwise.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:join Member function `join()`]
void join();
[variablelist
[[Preconditions:] [`this->get_id()!=boost::this_thread::get_id()`]]
[[Effects:] [If `*this` refers to a thread of execution, waits for that thread of execution to complete.]]
[[Postconditions:] [If `*this` refers to a thread of execution on entry, that thread of execution has completed. `*this` no longer refers to any thread of execution.]]
[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
[[Notes:] [`join()` is one of the predefined __interruption_points__.]]
]
[endsect]
[section:timed_join Member function `timed_join()`]
bool timed_join(const system_time& wait_until);
template<typename TimeDuration>
bool timed_join(TimeDuration const& rel_time);
[variablelist
[[Preconditions:] [`this->get_id()!=boost::this_thread::get_id()`]]
[[Effects:] [If `*this` refers to a thread of execution, waits for that thread of execution to complete, for the time `wait_until` to be
reached, or for the specified duration `rel_time` to elapse. If `*this` doesn't refer to a thread of execution, returns immediately.]]
[[Returns:] [`true` if `*this` refers to a thread of execution on entry, and that thread of execution has completed before the call
times out, `false` otherwise.]]
[[Postconditions:] [If `*this` refers to a thread of execution on entry, and `timed_join` returns `true`, that thread of execution
has completed, and `*this` no longer refers to any thread of execution. If this call to `timed_join` returns `false`, `*this` is
unchanged.]]
[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
[[Notes:] [`timed_join()` is one of the predefined __interruption_points__.]]
]
[endsect]
[section:detach Member function `detach()`]
void detach();
[variablelist
[[Effects:] [If `*this` refers to a thread of execution, that thread of execution becomes detached, and no longer has an associated __thread__ object.]]
[[Postconditions:] [`*this` no longer refers to any thread of execution.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:get_id Member function `get_id()`]
thread::id get_id() const;
[variablelist
[[Returns:] [If `*this` refers to a thread of execution, an instance of __thread_id__ that represents that thread. Otherwise returns
a default-constructed __thread_id__.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:interrupt Member function `interrupt()`]
void interrupt();
[variablelist
[[Effects:] [If `*this` refers to a thread of execution, requests that the thread be interrupted the next time it enters one of
the predefined __interruption_points__ with interruption enabled, or if it is currently __blocked__ in a call to one of the
predefined __interruption_points__ with interruption enabled.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:hardware_concurrency Static member function `hardware_concurrency()`]
unsigned hardware_concurrency();
[variablelist
[[Returns:] [The number of hardware threads available on the current system (e.g. number of CPUs or cores or hyperthreading units),
or 0 if this information is not available.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:equals `operator==`]
bool operator==(const thread& other) const;
[variablelist
[[Returns:] [`get_id()==other.get_id()`]]
]
[endsect]
[section:not_equals `operator!=`]
bool operator!=(const thread& other) const;
[variablelist
[[Returns:] [`get_id()!=other.get_id()`]]
]
[endsect]
[section:sleep Static member function `sleep()`]
void sleep(system_time const& abs_time);
[variablelist
[[Effects:] [Suspends the current thread until the specified time has been reached.]]
[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
[[Notes:] [`sleep()` is one of the predefined __interruption_points__.]]
]
[endsect]
[section:yield Static member function `yield()`]
void yield();
[variablelist
[[Effects:] [See [link thread.thread_management.this_thread.yield `boost::this_thread::yield()`].]]
]
[endsect]
[section:id Class `boost::thread::id`]
class thread::id
{
public:
id();
bool operator==(const id& y) const;
bool operator!=(const id& y) const;
bool operator<(const id& y) const;
bool operator>(const id& y) const;
bool operator<=(const id& y) const;
bool operator>=(const id& y) const;
template<class charT, class traits>
friend std::basic_ostream<charT, traits>&
operator<<(std::basic_ostream<charT, traits>& os, const id& x);
};
[section:constructor Default constructor]
id();
[variablelist
[[Effects:] [Constructs a __thread_id__ instance that represents __not_a_thread__.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:is_equal `operator==`]
bool operator==(const id& y) const;
[variablelist
[[Returns:] [`true` if `*this` and `y` both represent the same thread of execution, or both represent __not_a_thread__, `false`
otherwise.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:not_equal `operator!=`]
bool operator!=(const id& y) const;
[variablelist
[[Returns:] [`true` if `*this` and `y` represent different threads of execution, or if one represents a thread of execution and
the other represents __not_a_thread__, `false` otherwise.]]
[[Throws:] [Nothing]]
]
[endsect]
[section:less_than `operator<`]
bool operator<(const id& y) const;
[variablelist
[[Returns:] [`true` if `*this!=y` is `true` and the implementation-defined total order of __thread_id__ values places `*this` before
`y`, `false` otherwise.]]
[[Throws:] [Nothing]]
[[Note:] [A __thread_id__ instance representing __not_a_thread__ will always compare less than an instance representing a thread of
execution.]]
]
[endsect]
[section:greater_than `operator>`]
bool operator>(const id& y) const;
[variablelist
[[Returns:] [`y<*this`]]
[[Throws:] [Nothing]]
]
[endsect]
[section:less_than_or_equal `operator<=`]
bool operator<=(const id& y) const;
[variablelist
[[Returns:] [`!(y<*this)`]]
[[Throws:] [Nothing]]
]
[endsect]
[section:greater_than_or_equal `operator>=`]
bool operator>=(const id& y) const;
[variablelist
[[Returns:] [`!(*this<y)`]]
[[Throws:] [Nothing]]
]
[endsect]
[section:stream_out Friend `operator<<`]
template<class charT, class traits>
friend std::basic_ostream<charT, traits>&
operator<<(std::basic_ostream<charT, traits>& os, const id& x);
[variablelist
[[Effects:] [Writes a representation of the __thread_id__ instance `x` to the stream `os`, such that the representation of two
instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `a!=b`.]]
[[Returns:] [`os`]]
]
[endsect]
[endsect]
[endsect]
[section:this_thread Namespace `this_thread`]
[section:get_id Non-member function `get_id()`]
namespace this_thread
{
thread::id get_id();
}
[variablelist
[[Returns:] [An instance of __thread_id__ that represents the currently executing thread.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
]
[endsect]
[section:interruption_point Non-member function `interruption_point()`]
namespace this_thread
{
void interruption_point();
}
[variablelist
[[Effects:] [Checks whether the current thread has been interrupted.]]
[[Throws:] [__thread_interrupted__ if __interruption_enabled__ and __interruption_requested__ both return `true`.]]
]
[endsect]
[section:interruption_requested Non-member function `interruption_requested()`]
namespace this_thread
{
bool interruption_requested();
}
[variablelist
[[Returns:] [`true` if interruption has been requested for the current thread, `false` otherwise.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:interruption_enabled Non-member function `interruption_enabled()`]
namespace this_thread
{
bool interruption_enabled();
}
[variablelist
[[Returns:] [`true` if interruption has been enabled for the current thread, `false` otherwise.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:sleep Non-member function `sleep()`]
namespace this_thread
{
template<typename TimeDuration>
void sleep(TimeDuration const& rel_time);
}
[variablelist
[[Effects:] [Suspends the current thread until the specified duration has elapsed.]]
[[Throws:] [__thread_interrupted__ if the current thread of execution is interrupted.]]
[[Notes:] [`sleep()` is one of the predefined __interruption_points__.]]
]
[endsect]
[section:yield Non-member function `yield()`]
namespace this_thread
{
void yield();
}
[variablelist
[[Effects:] [Gives up the remainder of the current thread's time slice, to allow other threads to run.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:disable_interruption Class `disable_interruption`]
namespace this_thread
{
class disable_interruption
{
public:
disable_interruption();
~disable_interruption();
};
}
`boost::this_thread::disable_interruption` disables interruption for the current thread on construction, and restores the prior
interruption state on destruction. Instances of `disable_interruption` cannot be copied or moved.
[section:constructor Constructor]
disable_interruption();
[variablelist
[[Effects:] [Stores the current state of __interruption_enabled__ and disables interruption for the current thread.]]
[[Postconditions:] [__interruption_enabled__ returns `false` for the current thread.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:destructor Destructor]
~disable_interruption();
[variablelist
[[Preconditions:] [Must be called from the same thread from which `*this` was constructed.]]
[[Effects:] [Restores the current state of __interruption_enabled__ for the current thread to that prior to the construction of `*this`.]]
[[Postconditions:] [__interruption_enabled__ for the current thread returns the value stored in the constructor of `*this`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[endsect]
[section:restore_interruption Class `restore_interruption`]
namespace this_thread
{
class restore_interruption
{
public:
explicit restore_interruption(disable_interruption& disabler);
~restore_interruption();
};
}
On construction of an instance of `boost::this_thread::restore_interruption`, the interruption state for the current thread is
restored to the interruption state stored by the constructor of the supplied instance of __disable_interruption__. When the instance
is destroyed, interruption is again disabled. Instances of `restore_interruption` cannot be copied or moved.
[section:constructor Constructor]
explicit restore_interruption(disable_interruption& disabler);
[variablelist
[[Preconditions:] [Must be called from the same thread from which `disabler` was constructed.]]
[[Effects:] [Restores the current state of __interruption_enabled__ for the current thread to that prior to the construction of `disabler`.]]
[[Postconditions:] [__interruption_enabled__ for the current thread returns the value stored in the constructor of `disabler`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:destructor Destructor]
~restore_interruption();
[variablelist
[[Preconditions:] [Must be called from the same thread from which `*this` was constructed.]]
[[Effects:] [Disables interruption for the current thread.]]
[[Postconditions:] [__interruption_enabled__ for the current thread returns `false`.]]
[[Throws:] [Nothing.]]
]
[endsect]
[endsect]
[section:atthreadexit Non-member function template `at_thread_exit()`]
template<typename Callable>
void at_thread_exit(Callable func);
[variablelist
[[Effects:] [A copy of `func` is taken and stored in thread-specific storage. This copy is invoked when the current thread exits.]]
[[Postconditions:] [A copy of `func` has been saved for invocation on thread exit.]]
[[Throws:] [`std::bad_alloc` if memory cannot be allocated for the copy of the function, __thread_resource_error__ if any other
error occurs within the thread library. Any exception thrown whilst copying `func` into internal storage.]]
]
[endsect]
[endsect]
[section:threadgroup Class `thread_group`]
class thread_group:
private noncopyable
{
public:
thread_group();
~thread_group();
thread* create_thread(const function0<void>& threadfunc);
void add_thread(thread* thrd);
void remove_thread(thread* thrd);
void join_all();
void interrupt_all();
int size() const;
};
`thread_group` provides a collection of threads that are related in some fashion. New threads can be added to the group with the
`add_thread` and `create_thread` member functions. `thread_group` is not copyable or movable.
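A minimal sketch of typical usage, assuming a hypothetical `worker` function suitable for running on its own thread:

    void worker();    // hypothetical thread function

    void run_workers()
    {
        boost::thread_group group;
        for(int i=0;i<4;++i)
        {
            group.create_thread(worker);    // new threads are owned by the group
        }
        group.join_all();                   // wait for all of them to finish
    }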
[section:constructor Constructor]
thread_group();
[variablelist
[[Effects:] [Create a new thread group with no threads.]]
]
[endsect]
[section:destructor Destructor]
~thread_group();
[variablelist
[[Effects:] [Destroy `*this` and `delete` all __thread__ objects in the group.]]
]
[endsect]
[section:create_thread Member function `create_thread()`]
thread* create_thread(const function0<void>& threadfunc);
[variablelist
[[Effects:] [Create a new __thread__ object as-if by `new thread(threadfunc)` and add it to the group.]]
[[Postcondition:] [`this->size()` is increased by one, and the new thread is running.]]
[[Returns:] [A pointer to the new __thread__ object.]]
]
[endsect]
[section:add_thread Member function `add_thread()`]
void add_thread(thread* thrd);
[variablelist
[[Precondition:] [The expression `delete thrd` is well-formed and will not result in undefined behaviour.]]
[[Effects:] [Take ownership of the __thread__ object pointed to by `thrd` and add it to the group.]]
[[Postcondition:] [`this->size()` is increased by one.]]
]
[endsect]
[section:remove_thread Member function `remove_thread()`]
void remove_thread(thread* thrd);
[variablelist
[[Effects:] [If `thrd` is a member of the group, remove it without calling `delete`.]]
[[Postcondition:] [If `thrd` was a member of the group, `this->size()` is decreased by one.]]
]
[endsect]
[section:join_all Member function `join_all()`]
void join_all();
[variablelist
[[Effects:] [Call `join()` on each __thread__ object in the group.]]
[[Postcondition:] [Every thread in the group has terminated.]]
[[Note:] [Since __join__ is one of the predefined __interruption_points__, `join_all()` is also an interruption point.]]
]
[endsect]
[section:interrupt_all Member function `interrupt_all()`]
void interrupt_all();
[variablelist
[[Effects:] [Call `interrupt()` on each __thread__ object in the group.]]
]
[endsect]
[section:size Member function `size()`]
int size() const;
[variablelist
[[Returns:] [The number of threads in the group.]]
[[Throws:] [Nothing.]]
]
[endsect]
[endsect]
[endsect]

View File

@@ -1,206 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/tss.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="thread_specific_ptr">
<purpose>
The <classname>thread_specific_ptr</classname> class defines
an interface for using thread specific storage.
</purpose>
<description>
<para>Thread specific storage is data associated with
individual threads and is often used to make operations
that rely on global data
<link linkend="thread.glossary.thread-safe">thread-safe</link>.
</para>
<para>Template <classname>thread_specific_ptr</classname>
stores a pointer to an object obtained on a thread-by-thread
basis and calls a specified cleanup handler on the contained
pointer when the thread terminates. The cleanup handlers are
called in the reverse order of construction of the
<classname>thread_specific_ptr</classname>s, and for the
initial thread are called by the destructor, providing the
same ordering guarantees as for normal declarations. Each
thread initially stores the null pointer in each
<classname>thread_specific_ptr</classname> instance.</para>
<para>The template <classname>thread_specific_ptr</classname>
is useful in the following cases:
<itemizedlist>
<listitem>An interface was originally written assuming
a single thread of control and it is being ported to a
multithreaded environment.</listitem>
<listitem>Each thread of control invokes sequences of
methods that share data that are physically unique
for each thread, but must be logically accessed
through a globally visible access point instead of
being explicitly passed.</listitem>
</itemizedlist>
</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<constructor>
<requires>The expression <code>delete get()</code> is well
formed.</requires>
<effects>A thread-specific data key is allocated and visible to
all threads in the process. Upon creation, the value
<code>NULL</code> will be associated with the new key in all
active threads. A cleanup method is registered with the key
that will call <code>delete</code> on the value associated
with the key for a thread when it exits. When a thread exits,
if a key has a registered cleanup method and the thread has a
non-<code>NULL</code> value associated with that key, the value
of the key is set to <code>NULL</code> and then the cleanup
method is called with the previously associated value as its
sole argument. The order in which registered cleanup methods
are called when a thread exits is undefined. If after all the
cleanup methods have been called for all non-<code>NULL</code>
values, there are still some non-<code>NULL</code> values
with associated cleanup handlers the result is undefined
behavior.</effects>
<throws><classname>boost::thread_resource_error</classname> if
the necessary resources can not be obtained.</throws>
<notes>There may be an implementation specific limit to the
number of thread specific storage objects that can be created,
and this limit may be small.</notes>
<rationale>The most common need for cleanup will be to call
<code>delete</code> on the associated value. If other forms
of cleanup are required the overloaded constructor should be
called instead.</rationale>
</constructor>
<constructor>
<parameter name="cleanup">
<paramtype>void (*cleanup)(void*)</paramtype>
</parameter>
<effects>A thread-specific data key is allocated and visible to
all threads in the process. Upon creation, the value
<code>NULL</code> will be associated with the new key in all
active threads. The <code>cleanup</code> method is registered
with the key and will be called for a thread with the value
associated with the key for that thread when it exits. When a
thread exits, if a key has a registered cleanup method and the
thread has a non-<code>NULL</code> value associated with that
key, the value of the key is set to <code>NULL</code> and then
the cleanup method is called with the previously associated
value as its sole argument. The order in which registered
cleanup methods are called when a thread exits is undefined.
If after all the cleanup methods have been called for all
non-<code>NULL</code> values, there are still some
non-<code>NULL</code> values with associated cleanup handlers
the result is undefined behavior.</effects>
<throws><classname>boost::thread_resource_error</classname> if
the necessary resources can not be obtained.</throws>
<notes>There may be an implementation specific limit to the
number of thread specific storage objects that can be created,
and this limit may be small.</notes>
<rationale>There is the occasional need to register
specialized cleanup methods, or to register no cleanup method
at all (done by passing <code>NULL</code> to this constructor.
</rationale>
</constructor>
<destructor>
<effects>Deletes the thread-specific data key allocated by the
constructor. The thread-specific data values associated with
the key need not be <code>NULL</code>. It is the responsibility
of the application to perform any cleanup actions for data
associated with the key.</effects>
<notes>Does not destroy any data that may be stored in any
thread's thread specific storage. For this reason you should
not destroy a <classname>thread_specific_ptr</classname> object
until you are certain there are no threads running that have
made use of its thread specific storage.</notes>
<rationale>Associated data is not cleaned up because registered
cleanup methods need to be run in the thread that allocated the
associated data to be guaranteed to work correctly. There's no
safe way to inject the call into another thread's execution
path, making it impossible to call the cleanup methods safely.
</rationale>
</destructor>
<method-group name="modifier functions">
<method name="release">
<type>T*</type>
<postconditions><code>*this</code> holds the null pointer
for the current thread.</postconditions>
<returns><code>this-&gt;get()</code> prior to the call.</returns>
<rationale>This method provides a mechanism for the user to
relinquish control of the data associated with the
thread-specific key.</rationale>
</method>
<method name="reset">
<type>void</type>
<parameter name="p">
<paramtype>T*</paramtype>
<default>0</default>
</parameter>
<effects>If <code>this-&gt;get() != p &amp;&amp;
this-&gt;get() != NULL</code> then call the
associated cleanup function.</effects>
<postconditions><code>*this</code> holds the pointer
<code>p</code> for the current thread.</postconditions>
</method>
</method-group>
<method-group name="observer functions">
<method name="get" cv="const">
<type>T*</type>
<returns>The object stored in thread specific storage for
the current thread for <code>*this</code>.</returns>
<notes>Each thread initially returns 0.</notes>
</method>
<method name="operator-&gt;" cv="const">
<type>T*</type>
<returns><code>this-&gt;get()</code>.</returns>
</method>
<method name="operator*()" cv="const">
<type>T&amp;</type>
<requires><code>this-&gt;get() != 0</code></requires>
<returns><code>this-&gt;get()</code>.</returns>
</method>
</method-group>
</class>
</namespace>
</header>

175
doc/tss.qbk Normal file
View File

@@ -0,0 +1,175 @@
[section Thread Local Storage]
[heading Synopsis]
Thread local storage allows multi-threaded applications to have a separate instance of a given data item for each thread. Where a
single-threaded application would use static or global data, this could lead to contention, deadlock or data corruption in a
multi-threaded application. One example is the C `errno` variable, used for storing the error code related to functions from the
Standard C library. It is common practice (and required by POSIX) for compilers that support multi-threaded applications to provide
a separate instance of `errno` for each thread, in order to avoid different threads competing to read or update the value.
Though compilers often provide this facility in the form of extensions to the declaration syntax (such as `__declspec(thread)` or
`__thread` annotations on `static` or namespace-scope variable declarations), such support is non-portable, and is often limited in
some way, such as only supporting POD types.
[heading Portable thread-local storage with `boost::thread_specific_ptr`]
`boost::thread_specific_ptr` provides a portable mechanism for thread-local storage that works on all compilers supported by
__boost_thread__. Each instance of `boost::thread_specific_ptr` represents a pointer to an object (such as `errno`) where each
thread must have a distinct value. The value for the current thread can be obtained using the `get()` member function, or by using
the `*` and `->` pointer dereference operators. Initially the pointer has a value of `NULL` in each thread, but the value for the
current thread can be set using the `reset()` member function.
If the value of the pointer for the current thread is changed using `reset()`, then the previous value is destroyed by calling the
cleanup routine. Alternatively, the stored value can be reset to `NULL` and the prior value returned by calling the `release()`
member function, allowing the application to take back responsibility for destroying the object.
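For example, the following sketch keeps a separate call counter for each thread (the counter itself is purely illustrative):

    boost::thread_specific_ptr<int> call_count;

    void record_call()
    {
        if(call_count.get()==0)
        {
            call_count.reset(new int(0));    // first call on this thread
        }
        ++(*call_count);                     // modifies this thread's value only
    }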
[heading Cleanup at thread exit]
When a thread exits, the objects associated with each `boost::thread_specific_ptr` instance are destroyed. By default, the object
pointed to by a pointer `p` is destroyed by invoking `delete p`, but this can be overridden for a specific instance of
`boost::thread_specific_ptr` by providing a cleanup routine to the constructor. In this case, the object is destroyed by invoking
`func(p)` where `func` is the cleanup routine supplied to the constructor. The cleanup functions are called in an unspecified
order. If a cleanup routine sets the value associated with an instance of `boost::thread_specific_ptr` that has already been
cleaned up, that value is added to the cleanup list. Cleanup finishes when there are no outstanding instances of
`boost::thread_specific_ptr` with values.
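As a sketch of supplying a custom cleanup routine (the `connection` type and the `open_connection`/`close_connection` functions are hypothetical), a handle obtained from a C-style API can be released with its matching function rather than `delete`:

    struct connection { /* hypothetical C-style handle */ };
    connection* open_connection();           // hypothetical: acquires a handle
    void close_connection(connection* c);    // hypothetical: releases a handle

    boost::thread_specific_ptr<connection> current_connection(close_connection);

    connection* get_connection()
    {
        if(current_connection.get()==0)
        {
            current_connection.reset(open_connection());
        }
        return current_connection.get();     // close_connection is called at thread exit
    }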
[section:thread_specific_ptr Class `thread_specific_ptr`]
template <typename T>
class thread_specific_ptr
{
public:
thread_specific_ptr();
explicit thread_specific_ptr(void (*cleanup_function)(T*));
~thread_specific_ptr();
T* get() const;
T* operator->() const;
T& operator*() const;
T* release();
void reset(T* new_value=0);
};
[section:default_constructor `thread_specific_ptr();`]
[variablelist
[[Requires:] [`delete this->get()` is well-formed.]]
[[Effects:] [Construct a `thread_specific_ptr` object for storing a pointer to an object of type `T` specific to each thread. The
default `delete`-based cleanup function will be used to destroy any thread-local objects when `reset()` is called, or the thread
exits.]]
[[Throws:] [`boost::thread_resource_error` if an error occurs.]]
]
[endsect]
[section:constructor_with_custom_cleanup `explicit thread_specific_ptr(void (*cleanup_function)(T*));`]
[variablelist
[[Requires:] [`cleanup_function(this->get())` does not throw any exceptions.]]
[[Effects:] [Construct a `thread_specific_ptr` object for storing a pointer to an object of type `T` specific to each thread. The
supplied `cleanup_function` will be used to destroy any thread-local objects when `reset()` is called, or the thread exits.]]
[[Throws:] [`boost::thread_resource_error` if an error occurs.]]
]
[endsect]
[section:destructor `~thread_specific_ptr();`]
[variablelist
[[Effects:] [Calls `this->reset()` to clean up the associated value for the current thread, and destroys `*this`.]]
[[Throws:] [Nothing.]]
]
[note Care needs to be taken to ensure that any threads still running after an instance of `boost::thread_specific_ptr` has been
destroyed do not call any member functions on that instance.]
[endsect]
[section:get `T* get() const;`]
[variablelist
[[Returns:] [The pointer associated with the current thread.]]
[[Throws:] [Nothing.]]
]
[note The initial value associated with an instance of `boost::thread_specific_ptr` is `NULL` for each thread.]
[endsect]
[section:operator_arrow `T* operator->() const;`]
[variablelist
[[Returns:] [`this->get()`]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:operator_star `T& operator*() const;`]
[variablelist
[[Requires:] [`this->get()` is not `NULL`.]]
[[Returns:] [`*(this->get())`]]
[[Throws:] [Nothing.]]
]
[endsect]
[section:reset `void reset(T* new_value=0);`]
[variablelist
[[Effects:] [If `this->get()!=new_value` and `this->get()` is non-`NULL`, invoke `delete this->get()` or
`cleanup_function(this->get())` as appropriate. Store `new_value` as the pointer associated with the current thread.]]
[[Postcondition:] [`this->get()==new_value`]]
[[Throws:] [`boost::thread_resource_error` if an error occurs.]]
]
[endsect]
[section:release `T* release();`]
[variablelist
[[Effects:] [Return `this->get()` and store `NULL` as the pointer associated with the current thread without invoking the cleanup
function.]]
[[Postcondition:] [`this->get()==0`]]
[[Throws:] [Nothing.]]
]
[endsect]
[endsect]
[endsect]

View File

@@ -1,82 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
-->
<header name="boost/thread/xtime.hpp"
last-revision="$Date$">
<namespace name="boost">
<enum name="xtime_clock_types">
<enumvalue name="TIME_UTC" />
<purpose>
<para>Specifies the clock type to use when creating
an object of type <classname>xtime</classname>.</para>
</purpose>
<description>
<para>The only clock type supported by &Boost.Thread; is
<code>TIME_UTC</code>. The epoch for <code>TIME_UTC</code>
is 1970-01-01 00:00:00.</para>
</description>
</enum>
<struct name="xtime">
<purpose>
<simpara>An object of type <classname>xtime</classname>
defines a time that is used to perform high-resolution time operations.
This is a temporary solution that will be replaced by a more robust time
library once available in Boost.</simpara>
</purpose>
<description>
<simpara>The <classname>xtime</classname> type is used to represent a point on
some time scale or a duration in time. This type may be proposed for the C standard by
Markus Kuhn. &Boost.Thread; provides only a very minimal implementation of this
proposal; it is expected that a full implementation (or some other time
library) will be provided in Boost as a separate library, at which time &Boost.Thread;
will deprecate its own implementation.</simpara>
<simpara><emphasis role="bold">Note</emphasis> that the resolution is
implementation specific. For many implementations the best resolution
of time is far more than one nanosecond, and even when the resolution
is reasonably good, the latency of a call to <code>xtime_get()</code>
may be significant. For maximum portability, avoid durations of less than
one second.</simpara>
</description>
<free-function-group name="creation">
<function name="xtime_get">
<type>int</type>
<parameter name="xtp">
<paramtype><classname>xtime</classname>*</paramtype>
</parameter>
<parameter name="clock_type">
<paramtype>int</paramtype>
</parameter>
<postconditions>
<simpara><code>xtp</code> represents the current point in
time as a duration since the epoch specified by
<code>clock_type</code>.</simpara>
</postconditions>
<returns>
<simpara><code>clock_type</code> if successful, otherwise 0.</simpara>
</returns>
</function>
</free-function-group>
<data-member name="sec">
<type><emphasis>platform-specific-type</emphasis></type>
</data-member>
</struct>
</namespace>
</header>