mirror of https://github.com/boostorg/thread.git synced 2026-02-05 22:22:11 +00:00

Compare commits


1 commit

Author: Beman Dawes
SHA1: c484c376d3
Message: Branch at revision 46530 [SVN r46531]
Date: 2008-06-19 18:57:10 +00:00
113 changed files with 684 additions and 14486 deletions


@@ -1,2 +0,0 @@
bin*
*.pdb


@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:acknowledgements Acknowledgments]
The original implementation of __boost_thread__ was written by William Kempf, with contributions from numerous others. This new


@@ -1,73 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.acknowledgements"
last-revision="$Date$">
<title>Acknowledgements</title>
<para>William E. Kempf was the architect, designer, and implementor of
&Boost.Thread;.</para>
<para>Mac OS Carbon implementation written by Mac Murrett.</para>
<para>Dave Moore provided initial submissions and further comments on the
<code>barrier</code>, <code>thread_pool</code>, <code>read_write_mutex</code>,
<code>read_write_try_mutex</code> and <code>read_write_timed_mutex</code>
classes.</para>
<para>Important contributions were also made by Jeremy Siek (lots of input
on the design and on the implementation), Alexander Terekhov (lots of input
on the Win32 implementation, especially in regards to boost::condition, as
well as a lot of explanation of POSIX behavior), Greg Colvin (lots of input
on the design), Paul Mclachlan, Thomas Matelich and Iain Hanson (for help
in trying to get the build to work on other platforms), and Kevin S. Van
Horn (for several updates/corrections to the documentation).</para>
<para>Mike Glassford finished changes to &Boost.Thread; that were begun
by William Kempf and moved them into the main CVS branch.
He also addressed a number of issues that were brought up on the Boost
developer's mailing list and provided some additions and changes to the
read_write_mutex and related classes.</para>
<para>The documentation was written by William E. Kempf. Beman Dawes
provided additional documentation material and editing.
Mike Glassford finished William Kempf's conversion of the documentation to
BoostBook format and added a number of new sections.</para>
<para>Discussions on the boost.org mailing list were essential in the
development of &Boost.Thread;
. As of August 1, 2001, participants included Alan Griffiths, Albrecht
Fritzsche, Aleksey Gurtovoy, Alexander Terekhov, Andrew Green, Andy Sawyer,
Asger Alstrup Nielsen, Beman Dawes, Bill Klein, Bill Rutiser, Bill Wade,
Branko &#268;ibej, Brent Verner, Craig Henderson, Csaba Szepesvari,
Dale Peakall, Damian Dixon, Dan Nuffer, Darryl Green, Daryle Walker, David
Abrahams, David Allan Finch, Dejan Jelovic, Dietmar Kuehl, Douglas Gregor,
Duncan Harris, Ed Brey, Eric Swanson, Eugene Karpachov, Fabrice Truillot,
Frank Gerlach, Gary Powell, Gernot Neppert, Geurt Vos, Ghazi Ramadan, Greg
Colvin, Gregory Seidman, HYS, Iain Hanson, Ian Bruntlett, J Panzer, Jeff
Garland, Jeff Paquette, Jens Maurer, Jeremy Siek, Jesse Jones, Joe Gottman,
John (EBo) David, John Bandela, John Maddock, John Max Skaller, John
Panzer, Jon Jagger, Karl Nelson, Kevlin Henney, KG Chandrasekhar, Levente
Farkas, Lie-Quan Lee, Lois Goldthwaite, Luis Pedro Coelho, Marc Girod, Mark
A. Borgerding, Mark Rodgers, Marshall Clow, Matthew Austern, Matthew Hurd,
Michael D. Crawford, Michael H. Cox, Mike Haller, Miki Jovanovic, Nathan
Myers, Paul Moore, Pavel Cisler, Peter Dimov, Petr Kocmid, Philip Nash,
Rainer Deyke, Reid Sweatman, Ross Smith, Scott McCaskill, Shalom Reich,
Steve Cleary, Steven Kirk, Thomas Holenstein, Thomas Matelich, Trevor
Perrin, Valentin Bonnard, Vesa Karvonen, Wayne Miller, and William
Kempf.</para>
<para>
As of February 2006, Anthony Williams and Roland Schwarz took over maintenance
and further development of the library after it had been in an orphaned state
for a rather long period of time.
</para>
<para>Apologies to anyone inadvertently missed.</para>
</section>


@@ -1,82 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/barrier.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="barrier">
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<purpose>
<para>An object of class <classname>barrier</classname> is a synchronization
primitive used to cause a set of threads to wait until they each perform a
certain function or each reach a particular point in their execution.</para>
</purpose>
<description>
<para>When a barrier is created, it is initialized with a thread count N.
The first N-1 calls to <code>wait()</code> will all cause their threads to be blocked.
The Nth call to <code>wait()</code> will allow all of the waiting threads, including
the Nth thread, to be placed in a ready state. The Nth call will also "reset"
the barrier such that, if an additional N+1th call is made to <code>wait()</code>,
it will be as though this were the first call to <code>wait()</code>; in other
words, the N+1th to 2N-1th calls to <code>wait()</code> will cause their
threads to be blocked, and the 2Nth call to <code>wait()</code> will allow all of
the waiting threads, including the 2Nth thread, to be placed in a ready state
and reset the barrier. This functionality allows the same set of N threads to re-use
a barrier object to synchronize their execution at multiple points during their
execution.</para>
<para>See <xref linkend="thread.glossary"/> for definitions of thread
states <link linkend="thread.glossary.thread-state">blocked</link>
and <link linkend="thread.glossary.thread-state">ready</link>.
Note that "waiting" is a synonym for blocked.</para>
</description>
<constructor>
<parameter name="count">
<paramtype>size_t</paramtype>
</parameter>
<effects><simpara>Constructs a <classname>barrier</classname> object that
will cause <code>count</code> threads to block on a call to <code>wait()</code>.
</simpara></effects>
</constructor>
<destructor>
<effects><simpara>Destroys <code>*this</code>. If threads are still executing
their <code>wait()</code> operations, the behavior for these threads is undefined.
</simpara></effects>
</destructor>
<method-group name="waiting">
<method name="wait">
<type>bool</type>
<effects><simpara>Wait until N threads call <code>wait()</code>, where
N equals the <code>count</code> provided to the constructor for the
barrier object.</simpara>
<simpara><emphasis role="bold">Note</emphasis> that if the barrier is
destroyed before <code>wait()</code> can return, the behavior is
undefined.</simpara></effects>
<returns>Exactly one of the N threads will receive a return value
of <code>true</code>, the others will receive a value of <code>false</code>.
Precisely which thread receives the return value of <code>true</code> will
be implementation-defined. Applications can use this value to designate one
thread as a leader that will take a certain action, and the other threads
emerging from the barrier can wait for that action to take place.</returns>
</method>
</method-group>
</class>
</namespace>
</header>
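
To illustrate the wait()/reset behaviour described above, here is a minimal sketch using the boost::barrier interface documented in this file. The thread count, the worker function, and the use of the argument-forwarding thread constructor (available from Boost 1.36) are illustrative assumptions, not part of the reference text.

#include <boost/thread/thread.hpp>
#include <boost/thread/barrier.hpp>
#include <iostream>

boost::barrier rendezvous(3);          // N == 3: the first two wait() calls block

void worker(int id)
{
    // ... per-thread setup work would go here ...
    bool leader = rendezvous.wait();   // the 3rd arrival releases everyone
    if (leader)                        // exactly one thread gets 'true'
        std::cout << "thread " << id << " is the leader\n";
}

int main()
{
    boost::thread a(worker, 1), b(worker, 2), c(worker, 3);
    a.join(); b.join(); c.join();
}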


@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:barriers Barriers]
A barrier is a simple concept. Also known as a ['rendezvous], it is a synchronization point between multiple threads. The barrier is


@@ -1,234 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<bibliography id="thread.bibliography"
last-revision="$Date$">
<title>Bibliography</title>
<biblioentry id="thread.bib.AndrewsSchneider83">
<abbrev id="thread.bib.AndrewsSchneider83.abbrev">AndrewsSchneider83</abbrev>
<biblioset relation="journal">
<title>ACM Computing Surveys</title>
<volumenum>Vol. 15</volumenum>
<issuenum>No. 1</issuenum>
<date>March, 1983</date>
</biblioset>
<biblioset relation="article">
<authorgroup>
<author>
<firstname>Gregory</firstname>
<othername>R.</othername>
<surname>Andrews</surname>
</author>
<author>
<firstname>Fred</firstname>
<othername>B.</othername>
<surname>Schneider</surname>
</author>
</authorgroup>
<title>
<ulink
url="http://www.acm.org/pubs/citations/journals/surveys/1983-15-1/p3-andrews/"
>Concepts and Notations for Concurrent Programming</ulink>
</title>
</biblioset>
<para>Good general background reading. Includes descriptions of Path
Expressions, Message Passing, and Remote Procedure Call in addition to the
basics.</para>
</biblioentry>
<biblioentry id="thread.bib.Boost">
<abbrev id="thread.bib.Boost.abbrev">Boost</abbrev>
<bibliomisc>The <emphasis>Boost</emphasis> world wide web site.
<ulink url="http://www.boost.org">http://www.boost.org</ulink></bibliomisc>
<para>&Boost.Thread; is one of many Boost libraries. The Boost web
site includes a great deal of documentation and general information which
applies to all Boost libraries. Current copies of the libraries including
documentation and test programs may be downloaded from the web
site.</para>
</biblioentry>
<biblioentry id="thread.bib.Hansen73">
<abbrev id="thread.bib.Hansen73.abbrev">Hansen73</abbrev>
<biblioset relation="journal">
<title>ACM Computing Surveys</title>
<volumenum>Vol. 5</volumenum>
<issuenum>No. 4</issuenum>
<date>December, 1973</date>
</biblioset>
<biblioset relation="article">
<author>
<firstname>Per Brinch</firstname>
<surname>Hansen</surname>
</author>
<title>
<ulink
url="http://www.acm.org/pubs/articles/journals/surveys/1973-5-4/p223-hansen/"
>Concurrent Programming Concepts</ulink>
</title>
</biblioset>
<para>"This paper describes the evolution of language features for
multiprogramming from event queues and semaphores to critical regions and
monitors." Includes analysis of why events are considered error-prone. Also
noteworthy because of an introductory quotation from Christopher Alexander;
Brinch Hansen was years ahead of others in recognizing pattern concepts
applied to software, too.</para>
</biblioentry>
<biblioentry id="thread.bib.Butenhof97">
<abbrev id="thread.bib.Butenhof97.abbrev">Butenhof97</abbrev>
<title>
<ulink url="http://cseng.aw.com/book/0,3828,0201633922,00.html"
>Programming with POSIX Threads </ulink>
</title>
<author>
<firstname>David</firstname>
<othername>R.</othername>
<surname>Butenhof</surname>
</author>
<publisher>Addison-Wesley</publisher>
<copyright><year>1997</year></copyright>
<isbn>ISBN: 0-201-63392-2</isbn>
<para>This is a very readable explanation of threads and how to use
them. Many of the insights given apply to all multithreaded programming, not
just POSIX Threads.</para>
</biblioentry>
<biblioentry id="thread.bib.Hoare74">
<abbrev id="thread.bib.Hoare74.abbrev">Hoare74</abbrev>
<biblioset relation="journal">
<title>Communications of the ACM</title>
<volumenum>Vol. 17</volumenum>
<issuenum>No. 10</issuenum>
<date>October, 1974</date>
</biblioset>
<biblioset relation="article">
<title>
<ulink url="http://www.acm.org/classics/feb96/"
>Monitors: An Operating System Structuring Concept</ulink>
</title>
<author>
<firstname>C.A.R.</firstname>
<surname>Hoare</surname>
</author>
<pagenums>549-557</pagenums>
</biblioset>
<para>Hoare and Brinch Hansen's work on Monitors is the basis for reliable
multithreading patterns. This is one of the most often referenced papers in
all of computer science, and with good reason.</para>
</biblioentry>
<biblioentry id="thread.bib.ISO98">
<abbrev id="thread.bib.ISO98.abbrev">ISO98</abbrev>
<title>
<ulink url="http://www.ansi.org">Programming Language C++</ulink>
</title>
<orgname>ISO/IEC</orgname>
<releaseinfo>14882:1998(E)</releaseinfo>
<para>This is the official C++ Standards document. Available from the ANSI
(American National Standards Institute) Electronic Standards Store.</para>
</biblioentry>
<biblioentry id="thread.bib.McDowellHelmbold89">
<abbrev id="thread.bib.McDowellHelmbold89.abbrev">McDowellHelmbold89</abbrev>
<biblioset relation="journal">
<title>ACM Computing Surveys</title>
<volumenum>Vol. 21</volumenum>
<issuenum>No. 4</issuenum>
<date>December, 1989</date>
</biblioset>
<biblioset>
<author>
<firstname>Charles</firstname>
<othername>E.</othername>
<surname>McDowell</surname>
</author>
<author>
<firstname>David</firstname>
<othername>P.</othername>
<surname>Helmbold</surname>
</author>
<title>
<ulink
url="http://www.acm.org/pubs/citations/journals/surveys/1989-21-4/p593-mcdowell/"
>Debugging Concurrent Programs</ulink>
</title>
</biblioset>
<para>Identifies many of the unique failure modes and debugging difficulties
associated with concurrent programs.</para>
</biblioentry>
<biblioentry id="thread.bib.SchmidtPyarali">
<abbrev id="thread.bib.SchmidtPyarali.abbrev">SchmidtPyarali</abbrev>
<title>
<ulink url="http://www.cs.wustl.edu/~schmidt/win32-cv-1.html"
>Strategies for Implementing POSIX Condition Variables on Win32</ulink>
</title>
<authorgroup>
<author>
<firstname>Douglas</firstname>
<othername>C.</othername>
<surname>Schmidt</surname>
</author>
<author>
<firstname>Irfan</firstname>
<surname>Pyarali</surname>
</author>
</authorgroup>
<orgname>Department of Computer Science, Washington University, St. Louis,
Missouri</orgname>
<para>Rationale for understanding &Boost.Thread; condition
variables. Note that Alexander Terekhov found some bugs in the
implementation given in this article, so pthreads-win32 and &Boost.Thread;
are even more complicated yet.</para>
</biblioentry>
<biblioentry id="thread.bib.SchmidtStalRohnertBuschmann">
<abbrev
id="thread.bib.SchmidtStalRohnertBuschmann.abbrev">SchmidtStalRohnertBuschmann</abbrev>
<title>
<ulink
url="http://www.wiley.com/Corporate/Website/Objects/Products/0,9049,104671,00.html"
>Pattern-Oriented Architecture Volume 2</ulink>
</title>
<subtitle>Patterns for Concurrent and Networked Objects</subtitle>
<titleabbrev>POSA2</titleabbrev>
<authorgroup>
<author>
<firstname>Douglas</firstname>
<othername>C.</othername>
<surname>Schmidt</surname>
</author>
<author>
<firstname>Michael</firstname>
<surname>Stal</surname>
</author>
<author>
<firstname>Hans</firstname>
<surname>Rohnert</surname>
</author>
<author>
<firstname>Frank</firstname>
<surname>Buschmann</surname>
</author>
</authorgroup>
<publisher>Wiley</publisher>
<copyright><year>2000</year></copyright>
<para>This is a very good explanation of how to apply several patterns
useful for concurrent programming. Among the patterns documented is the
Monitor Pattern mentioned frequently in the &Boost.Thread;
documentation.</para>
</biblioentry>
<biblioentry id="thread.bib.Stroustrup">
<abbrev id="thread.bib.Stroustrup.abbrev">Stroustrup</abbrev>
<title>
<ulink url="http://cseng.aw.com/book/0,3828,0201700735,00.html"
>The C++ Programming Language</ulink>
</title>
<edition>Special Edition</edition>
<publisher>Addison-Wesley</publisher>
<copyright><year>2000</year></copyright>
<isbn>ISBN: 0-201-70073-5</isbn>
<para>The first book a C++ programmer should own. Note that the 3rd edition
(and subsequent editions like the Special Edition) has been rewritten to
cover the ISO standard language and library.</para>
</biblioentry>
</bibliography>


@@ -1,137 +0,0 @@
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Copyright (c) 2007 Roland Schwarz
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.build" last-revision="$Date$">
<title>Build</title>
<para>
How you build the &Boost.Thread; libraries, and how you build your own applications
that use those libraries, are some of the most frequently asked questions. Build
processes are difficult to deal with in a portable manner. That's one reason
why &Boost.Thread; makes use of &Boost.Build;.
In general you should refer to the documentation for &Boost.Build;.
This document will only supply you with some simple usage examples for how to
use <emphasis>bjam</emphasis> to build and test &Boost.Thread;. In addition, this document
will try to explain the build requirements so that users may create their own
build processes (for instance, create an IDE specific project), both for building
and testing &Boost.Thread;, as well as for building their own projects using
&Boost.Thread;.
</para>
<section id="thread.build.building">
<title>Building the &Boost.Thread; Libraries</title>
<para>
How you build the &Boost.Thread; library depends on how you intend to use it. You have several options:
<itemizedlist>
<listitem>
Use as a <link linkend="thread.build.precompiled">precompiled</link> library, possibly
with auto-linking, or from within an IDE.
</listitem>
<listitem>
Use from a <link linkend="thread.build.bjam">&Boost.Build;</link> project.
</listitem>
<listitem>
Use in <link linkend="thread.build.source">source</link> form.
</listitem>
</itemizedlist>
</para>
<section id="thread.build.precompiled">
<title>Precompiled</title>
<para>
Using the &Boost.Thread; library in precompiled form is the way to go if you want to
install the library to a standard place, from where your linker can resolve references
against the binaries. You will also want this option if compile time is a concern. Multiple
variants are available, for different toolsets and build variants (debug/release).
The library files are named <emphasis>{lead}boost_thread{build-specific-tags}.{extension}</emphasis>,
where the build-specific tags indicate the toolset used to build the library, whether it is
a debug or release build, which version of &Boost; was used, and so on; the lead and extension
are the appropriate prefix and file extension for a dynamic link library or static library on the platform
for which &Boost.Thread; is being built.
For instance, a debug build of the dynamic library built for Win32 with VC++ 7.1 using Boost 1.34 would
be named <emphasis>boost_thread-vc71-mt-gd-1_34.dll</emphasis>.
More information on this should be available from the &Boost.Build; documentation.
</para>
<para>
Building should be possible with the default configuration. If you run into problems,
it may be necessary to adjust your local &Boost.Build; settings. Typically you will
need to make your user-config.jam file reflect your environment, i.e. the toolsets you use. Please
refer to the &Boost.Build; documentation to learn how to do this.
</para>
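
As a hedged illustration of the kind of user-config.jam entry meant here, the sketch below declares two toolsets; the toolset names and version numbers are placeholders for whatever is installed on your machine, not a recommendation.

# user-config.jam -- declare the toolsets available in your environment
using msvc : 8.0 ;   # a specific Visual C++ version
using gcc ;          # the default gcc found on the PATH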
<para>
To create the libraries you need to open a command shell and change to the
<emphasis>boost_root</emphasis> directory. From there you give the command
<programlisting>bjam --toolset=<emphasis>mytoolset</emphasis> stage --with-thread</programlisting>
Replace <emphasis>mytoolset</emphasis> with the name of your toolset, e.g. msvc-7.1.
This will compile and put the libraries into the <emphasis>stage</emphasis> directory which is just below the
<emphasis>boost_root</emphasis> directory. &Boost.Build; by default will generate static and
dynamic variants for debug and release.
</para>
<note>
If you invoke the above command without the --with-thread switch, &Boost.Build; will build the
whole Boost distribution, including &Boost.Thread;.
</note>
<para>
The next step is to copy your libraries to a place where your linker is able to pick them up.
It is also quite possible to leave them in the stage directory and instruct your IDE to take them
from there.
</para>
<para>
In your IDE you then need to add the <emphasis>boost_root</emphasis> directory to the paths where the compiler
expects to find files to be included. For toolsets that support <emphasis>auto-linking</emphasis>
it is not necessary to explicitly specify the name of the library to link against; it is sufficient
to specify the path of the stage directory. Typically this is true on Windows. For gcc you need
to specify the exact library name (including all the tags). Please don't forget that threading
support must be turned on in order to use the library. You should now be able to build your
project from the IDE.
</para>
</section>
<section id="thread.build.bjam">
<title>&Boost.Build; Project</title>
<para>
If you have decided to use &Boost.Build; as a build environment for your application, you simply
need to add a single line to your <emphasis>Jamroot</emphasis> file:
<programlisting>use-project /boost : {path-to-boost-root} ;</programlisting>
where <emphasis>{path-to-boost-root}</emphasis> needs to be replaced with the location of
your copy of the boost tree.
When you later declare a target that needs to link against &Boost.Thread;, you simply
reference it, for example:
<programlisting>exe myapp : {myappsources} /boost//thread ;</programlisting>
and you are done.
</para>
</section>
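
Putting the two lines above together, a complete (if minimal) Jamroot might look like the following sketch; the boost path, target name, and source file are placeholders.

# Jamroot -- minimal project linking against Boost.Thread
use-project /boost : path/to/boost-root ;

exe myapp
    : main.cpp            # your sources
      /boost//thread      # link against Boost.Thread
    ;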
<section id="thread.build.source">
<title>Source Form</title>
<para>
Of course it is also possible to use the &Boost.Thread; library in source form.
First you need to specify the <emphasis>boost_root</emphasis> directory as
a path where your compiler expects to find files to include. It is not easy
to isolate the &Boost.Thread; include files from the rest of the boost
library though. You would also need to isolate every include file that the thread
library depends on. Next you need to copy the files from
<emphasis>boost_root</emphasis>/libs/thread/src to your project and instruct your
build system to compile them together with your project. Please look into the
<emphasis>Jamfile</emphasis> in <emphasis>boost_root</emphasis>/libs/thread/build
to find out which compiler options and defines you will need to get a clean compile.
Using the library in this way is the least recommended option and should only be
considered if avoiding a dependency on &Boost.Build; is a requirement. Even then
it might be a better option to use the library in its precompiled form.
Precompiled downloads are available from the Boost Consulting web site, or as
part of most Linux distributions.
</para>
</section>
</section>
<section id="thread.build.testing">
<title>Testing the &Boost.Thread; Libraries</title>
<para>
To test the &Boost.Thread; libraries using &Boost.Build;, simply change to the
directory <emphasis>boost_root</emphasis>/libs/thread/test and execute the command:
<programlisting>bjam --toolset=<emphasis>mytoolset</emphasis> test</programlisting>
</para>
</section>
</section>


@@ -1,27 +1,4 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:changes Changes since boost 1.35]
The 1.36.0 release of Boost includes a few new features in the thread library:
* New generic __lock_multiple_ref__ and __try_lock_multiple_ref__ functions for locking multiple mutexes at once.
* Rvalue reference support for move semantics where the compiler supports it.
* A few bugs fixed and missing functions added (including the serious win32 condition variable bug).
* `scoped_try_lock` types are now backwards-compatible with Boost 1.34.0 and previous releases.
* Support for passing function arguments to the thread function by supplying additional arguments to the __thread__ constructor.
* Backwards-compatibility overloads added for `timed_lock` and `timed_wait` functions to allow use of `xtime` for timeouts.
[heading Changes since boost 1.34]
[section:changes Changes since boost 1.34]
Almost every line of code in __boost_thread__ has been changed since the 1.34 release of boost. However, most of the interface
changes have been extensions, so the new code is largely backwards-compatible with the old code. The new features and breaking
@@ -77,7 +54,4 @@ been moved to __thread_id__.
* __mutex__ is now never recursive. For Boost releases prior to 1.35 __mutex__ was recursive on Windows and not on POSIX platforms.
* When using a __recursive_mutex__ with a call to [cond_any_wait_link `boost::condition_variable_any::wait()`], the mutex is only
unlocked one level, and not completely. This prior behaviour was not guaranteed and did not feature in the tests.
[endsect]
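
A hedged sketch of two of the 1.36 features listed above, the generic multi-mutex lock function and argument passing to the thread constructor; it assumes the generic function is spelled boost::lock and lives in <boost/thread/locks.hpp> together with boost::adopt_lock, as in later Boost releases, and the data and values are invented for illustration.

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex m1, m2;

void update_both(int value)
{
    boost::lock(m1, m2);   // acquires both mutexes without risking deadlock
    boost::lock_guard<boost::mutex> g1(m1, boost::adopt_lock);
    boost::lock_guard<boost::mutex> g2(m2, boost::adopt_lock);
    // ... modify the data protected by m1 and m2 using 'value' ...
}

int main()
{
    // Arguments after the callable are forwarded to it in the new thread.
    boost::thread t(update_both, 42);
    t.join();
}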

File diff suppressed because it is too large.


@@ -1,196 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/condition.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="condition">
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<purpose>
<para>An object of class <classname>condition</classname> is a
synchronization primitive used to cause a thread to wait until a
particular shared-data condition (or time) is met.</para>
</purpose>
<description>
<para>A <classname>condition</classname> object is always used in
conjunction with a <link linkend="thread.concepts.mutexes">mutex</link>
object (an object whose type is a model of a <link
linkend="thread.concepts.Mutex">Mutex</link> or one of its
refinements). The mutex object must be locked prior to waiting on the
condition, which is verified by passing a lock object (an object whose
type is a model of <link linkend="thread.concepts.Lock">Lock</link> or
one of its refinements) to the <classname>condition</classname> object's
wait functions. Upon blocking on the <classname>condition</classname>
object, the thread unlocks the mutex object. When the thread returns
from a call to one of the <classname>condition</classname> object's wait
functions the mutex object is again locked. The tricky unlock/lock
sequence is performed automatically by the
<classname>condition</classname> object's wait functions.</para>
<para>The <classname>condition</classname> type is often used to
implement the Monitor Object and other important patterns (see
&cite.SchmidtStalRohnertBuschmann; and &cite.Hoare74;). Monitors are one
of the most important patterns for creating reliable multithreaded
programs.</para>
<para>See <xref linkend="thread.glossary"/> for definitions of <link
linkend="thread.glossary.thread-state">thread states</link>
blocked and ready. Note that "waiting" is a synonym for blocked.</para>
</description>
<constructor>
<effects><simpara>Constructs a <classname>condition</classname>
object.</simpara></effects>
</constructor>
<destructor>
<effects><simpara>Destroys <code>*this</code>.</simpara></effects>
</destructor>
<method-group name="notification">
<method name="notify_one">
<type>void</type>
<effects><simpara>If there is a thread waiting on <code>*this</code>,
change that thread's state to ready. Otherwise there is no
effect.</simpara></effects>
<notes><simpara>If more than one thread is waiting on <code>*this</code>,
it is unspecified which is made ready. After returning to a ready
state the notified thread must still acquire the mutex again (which
occurs within the call to one of the <classname>condition</classname>
object's wait functions.)</simpara></notes>
</method>
<method name="notify_all">
<type>void</type>
<effects><simpara>Change the state of all threads waiting on
<code>*this</code> to ready. If there are no waiting threads,
<code>notify_all()</code> has no effect.</simpara></effects>
</method>
</method-group>
<method-group name="waiting">
<method name="wait">
<template>
<template-type-parameter name="ScopedLock"/>
</template>
<type>void</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements.</simpara></requires>
<effects><simpara>Releases the lock on the <link
linkend="thread.concepts.mutexes">mutex object</link>
associated with <code>lock</code>, blocks the current thread of execution
until readied by a call to <code>this->notify_one()</code>
or<code> this->notify_all()</code>, and then reacquires the
lock.</simpara></effects>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
<method name="wait">
<template>
<template-type-parameter name="ScopedLock"/>
<template-type-parameter name="Pred"/>
</template>
<type>void</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<parameter name="pred">
<paramtype>Pred</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements and the return from <code>pred()</code> is
convertible to <code>bool</code>.</simpara></requires>
<effects><simpara>As if: <code>while (!pred())
wait(lock)</code></simpara></effects>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
<method name="timed_wait">
<template>
<template-type-parameter name="ScopedLock"/>
</template>
<type>bool</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<parameter name="xt">
<paramtype>const <classname>boost::xtime</classname>&amp;</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements.</simpara></requires>
<effects><simpara>Releases the lock on the <link
linkend="thread.concepts.mutexes">mutex object</link>
associated with <code>lock</code>, blocks the current thread of execution
until readied by a call to <code>this->notify_one()</code>
or<code> this->notify_all()</code>, or until time <code>xt</code>
is reached, and then reacquires the lock.</simpara></effects>
<returns><simpara><code>false</code> if time <code>xt</code> is reached,
otherwise <code>true</code>.</simpara></returns>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
<method name="timed_wait">
<template>
<template-type-parameter name="ScopedLock"/>
<template-type-parameter name="Pred"/>
</template>
<type>bool</type>
<parameter name="lock">
<paramtype>ScopedLock&amp;</paramtype>
</parameter>
<parameter name="xt">
<paramtype>const <classname>boost::xtime</classname>&amp;</paramtype>
</parameter>
<parameter name="pred">
<paramtype>Pred</paramtype>
</parameter>
<requires><simpara><code>ScopedLock</code> meets the <link
linkend="thread.concepts.ScopedLock">ScopedLock</link>
requirements and the return from <code>pred()</code> is
convertible to <code>bool</code>.</simpara></requires>
<effects><simpara>As if: <code>while (!pred()) { if (!timed_wait(lock,
xt)) return false; } return true;</code></simpara></effects>
<returns><simpara><code>false</code> if <code>xt</code> is reached,
otherwise <code>true</code>.</simpara></returns>
<throws><simpara><classname>lock_error</classname> if
<code>!lock.locked()</code></simpara></throws>
</method>
</method-group>
</class>
</namespace>
</header>
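
The wait/notify protocol described above is easiest to see in a small producer/consumer sketch. The queue, function names, and data are illustrative; the scoped_lock and condition types are the ones documented in this header.

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <queue>

boost::mutex guard;
boost::condition data_ready;
std::queue<int> items;               // the shared data the condition is about

void produce(int value)
{
    boost::mutex::scoped_lock lock(guard);
    items.push(value);
    data_ready.notify_one();         // ready exactly one waiting thread
}

int consume()
{
    boost::mutex::scoped_lock lock(guard);
    while (items.empty())            // equivalent to data_ready.wait(lock, pred)
        data_ready.wait(lock);       // unlocks 'guard' while blocked, relocks on return
    int value = items.front();
    items.pop();
    return value;
}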


@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:condvar_ref Condition Variables]
[heading Synopsis]
@@ -74,8 +67,6 @@ optimizations in some cases, based on the knowledge of the mutex type;
[section:condition_variable Class `condition_variable`]
#include <boost/thread/condition_variable.hpp>
namespace boost
{
class condition_variable
@@ -84,9 +75,6 @@ optimizations in some cases, based on the knowledge of the mutex type;
condition_variable();
~condition_variable();
void notify_one();
void notify_all();
void wait(boost::unique_lock<boost::mutex>& lock);
template<typename predicate_type>
@@ -296,8 +284,6 @@ return true;
[section:condition_variable_any Class `condition_variable_any`]
#include <boost/thread/condition_variable.hpp>
namespace boost
{
class condition_variable_any
@@ -306,9 +292,6 @@ return true;
condition_variable_any();
~condition_variable_any();
void notify_one();
void notify_all();
template<typename lock_type>
void wait(lock_type& lock);
@@ -502,8 +485,6 @@ return true;
[section:condition Typedef `condition`]
#include <boost/thread/condition.hpp>
typedef condition_variable_any condition;
The typedef `condition` is provided for backwards compatibility with previous boost releases.


@@ -1,96 +0,0 @@
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.configuration" last-revision="$Date$">
<title>Configuration</title>
<para>&Boost.Thread; uses several configuration macros in &lt;boost/config.hpp&gt;,
as well as configuration macros meant to be supplied by the application. These
macros are documented here.
</para>
<section id="thread.configuration.public">
<title>Library Defined Public Macros</title>
<para>
These macros are defined by &Boost.Thread; but are expected to be used
by application code.
</para>
<informaltable>
<tgroup cols="2">
<thead>
<row>
<entry>Macro</entry>
<entry>Meaning</entry>
</row>
</thead>
<tbody>
<row>
<entry>BOOST_HAS_THREADS</entry>
<entry>
Indicates that threading support is available. This means both that there
is a platform specific implementation for &Boost.Thread; and that
threading support has been enabled in a platform specific manner. For instance,
on the Win32 platform there&#39;s an implementation for &Boost.Thread;
but unless the program is compiled against one of the multithreading runtimes
(often determined by the compiler predefining the macro _MT) the BOOST_HAS_THREADS
macro remains undefined.
</entry>
</row>
</tbody>
</tgroup>
</informaltable>
</section>
<section id="thread.configuration.implementation">
<title>Library Defined Implementation Macros</title>
<para>
These macros are defined by &Boost.Thread; and are implementation details
of interest only to implementors.
</para>
<informaltable>
<tgroup cols="2">
<thead>
<row>
<entry>Macro</entry>
<entry>Meaning</entry>
</row>
</thead>
<tbody>
<row>
<entry>BOOST_HAS_WINTHREADS</entry>
<entry>
Indicates that the platform has the Microsoft Win32 threading libraries,
and that they should be used to implement &Boost.Thread;.
</entry>
</row>
<row>
<entry>BOOST_HAS_PTHREADS</entry>
<entry>
Indicates that the platform has the POSIX pthreads libraries, and that
they should be used to implement &Boost.Thread;.
</entry>
</row>
<row>
<entry>BOOST_HAS_FTIME</entry>
<entry>
Indicates that the implementation should use GetSystemTimeAsFileTime()
and the FILETIME type to calculate the current time. This is an implementation
detail used by boost::detail::getcurtime().
</entry>
</row>
<row>
<entry>BOOST_HAS_GETTIMEOFDAY</entry>
<entry>
Indicates that the implementation should use gettimeofday() to calculate
the current time. This is an implementation detail used by boost::detail::getcurtime().
</entry>
</row>
</tbody>
</tgroup>
</informaltable>
</section>
</section>
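
A small sketch of how application code might consult BOOST_HAS_THREADS, the public macro from the first table above; the function and its locking policy are invented for illustration.

#include <boost/config.hpp>
#ifdef BOOST_HAS_THREADS
#  include <boost/thread/mutex.hpp>
#endif

void initialise_shared_state()
{
#ifdef BOOST_HAS_THREADS
    // Threading support is available and enabled: serialise initialisation.
    static boost::mutex init_guard;
    boost::mutex::scoped_lock lock(init_guard);
#endif
    // ... initialisation that must not race when threads exist ...
}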


@@ -1,159 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.design" last-revision="$Date$">
<title>Design</title>
<para>With client/server and three-tier architectures becoming commonplace
in today's world, it's becoming increasingly important for programs to be
able to handle parallel processing. Modern operating systems usually
provide some support for this through native thread APIs. Unfortunately,
writing portable code that makes use of parallel processing in C++ is made
very difficult by the lack of a standard interface for these native APIs.
Further, these APIs are almost universally C APIs and fail to take
advantage of C++'s strengths, or to address concepts unique to C++, such as
exceptions.</para>
<para>The &Boost.Thread; library is an attempt to define a portable interface
for writing parallel processes in C++.</para>
<section id="thread.design.goals">
<title>Goals</title>
<para>The &Boost.Thread; library has several goals that should help to set
it apart from other solutions. These goals are listed in order of precedence
with full descriptions below.
<variablelist>
<varlistentry>
<term>Portability</term>
<listitem>
<para>&Boost.Thread; was designed to be highly portable. The goal is
for the interface to be easily implemented on any platform that
supports threads, and possibly even on platforms without native thread
support.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Safety</term>
<listitem>
<para>&Boost.Thread; was designed to be as safe as possible. Writing
<link linkend="thread.glossary.thread-safe">thread-safe</link>
code is very difficult and successful libraries must strive to
insulate the programmer from dangerous constructs as much as
possible. This is accomplished in several ways:
<itemizedlist>
<listitem>
<para>C++ language features are used to make correct usage easy
(if possible) and error-prone usage impossible or at least more
difficult. For example, see the <link
linkend="thread.concepts.Mutex">Mutex</link> and <link
linkend="thread.concepts.Lock">Lock</link> designs, and note
how they interact.</para>
</listitem>
<listitem>
<para>Certain traditional concurrent programming features are
considered so error-prone that they are not provided at all. For
example, see <xref linkend="thread.rationale.events" />.</para>
</listitem>
<listitem>
<para>Dangerous features, or features which may be misused, are
identified as such in the documentation to make users aware of
potential pitfalls.</para>
</listitem>
</itemizedlist></para>
</listitem>
</varlistentry>
<varlistentry>
<term>Flexibility</term>
<listitem>
<para>&Boost.Thread; was designed to be flexible. This goal is often
at odds with <emphasis>safety</emphasis>. When functionality might be
compromised by the desire to keep the interface safe, &Boost.Thread;
has been designed to provide the functionality, but to discourage its
use in general code. In other words, the interfaces have been
designed such that it's usually obvious when something is unsafe, and
the documentation is written to explain why.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Efficiency</term>
<listitem>
<para>&Boost.Thread; was designed to be as efficient as
possible. When building a library on top of another library there is
always a danger that the result will be so much slower than the
"native" API that programmers are inclined to ignore the higher level
API. &Boost.Thread; was designed to minimize the chances of this
occurring. The interfaces have been crafted to allow an implementation
the greatest chance of being as efficient as possible. This goal is
often at odds with the goal for <emphasis>safety</emphasis>. Every
effort was made to ensure efficient implementations, but when in
conflict <emphasis>safety</emphasis> has always taken
precedence.</para>
</listitem>
</varlistentry>
</variablelist></para>
</section>
<section>
<title>Iterative Phases</title>
<para>Another goal of &Boost.Thread; was to take a dynamic, iterative
approach in its development. The computing industry is still exploring the
concepts of parallel programming. Most thread libraries supply only simple
primitive concepts for thread synchronization. These concepts are very
simple, but it is very difficult to use them safely or to provide formal
proofs for constructs built on top of them. There has been a lot of research
into other concepts, such as in "Communicating Sequential Processes."
&Boost.Thread; was designed in iterative steps, with each step providing
the building blocks necessary for the next step and giving the researcher
the tools necessary to explore new concepts in a portable manner.</para>
<para>Given the goal of following a dynamic, iterative approach
&Boost.Thread; shall go through several growth cycles. Each phase in its
development shall be roughly documented here.</para>
</section>
<section>
<title>Phase 1, Synchronization Primitives</title>
<para>Boost is all about providing high quality libraries with
implementations for many platforms. Unfortunately, there's a big problem
faced by developers wishing to supply such high quality libraries, namely
thread-safety. The C++ standard doesn't address threads at all, but real
world programs often make use of native threading support. A portable
library that doesn't address the issue of thread-safety is therefore not
much help to a programmer who wants to use the library in his multithreaded
application. So there's a very great need for portable primitives that will
allow the library developer to create <link
linkend="thread.glossary.thread-safe">thread-safe</link>
implementations. This need far outweighs the need for portable methods to
create and manage threads.</para>
<para>Because of this need, the first phase of &Boost.Thread; focuses
solely on providing portable primitive concepts for thread
synchronization. Types provided in this phase include the
<classname>boost::mutex</classname>,
<classname>boost::try_mutex</classname>,
<classname>boost::timed_mutex</classname>,
<classname>boost::recursive_mutex</classname>,
<classname>boost::recursive_try_mutex</classname>,
<classname>boost::recursive_timed_mutex</classname>, and
<classname>boost::lock_error</classname>. These are considered the "core"
synchronization primitives, though there are others that will be added in
later phases.</para>
</section>
<section id="thread.design.phase2">
<title>Phase 2, Thread Management and Thread Specific Storage</title>
<para>This phase addresses the creation and management of threads and
provides a mechanism for thread specific storage (data associated with a
thread instance). Thread management is a tricky issue in C++, so this
phase addresses only the basic needs of a multithreaded program. Later
phases are likely to add additional functionality in this area. This
phase of &Boost.Thread; adds the <classname>boost::thread</classname> and
<classname>boost::thread_specific_ptr</classname> types. With these
additions the &Boost.Thread; library can be considered minimal but
complete.</para>
</section>
<section>
<title>The Next Phase</title>
<para>The next phase will address more advanced synchronization concepts,
such as read/write mutexes and barriers.</para>
</section>
</section>


@@ -1,31 +0,0 @@
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<!ENTITY Boost "<emphasis role='bold'>Boost</emphasis>">
<!ENTITY Boost.Thread "<emphasis role='bold'>Boost.Thread</emphasis>">
<!ENTITY Boost.Build "<emphasis role='bold'>Boost.Build</emphasis>">
<!ENTITY cite.AndrewsSchneider83 "<citation><xref
linkend='thread.bib.AndrewsSchneider83'
endterm='thread.bib.AndrewsSchneider83.abbrev'/></citation>">
<!ENTITY cite.Boost "<citation><xref linkend='thread.bib.Boost'
endterm='thread.bib.Boost.abbrev'/></citation>">
<!ENTITY cite.Hansen73 "<citation><xref linkend='thread.bib.Hansen73'
endterm='thread.bib.Hansen73.abbrev'/></citation>">
<!ENTITY cite.Butenhof97 "<citation><xref linkend='thread.bib.Butenhof97'
endterm='thread.bib.Butenhof97.abbrev'/></citation>">
<!ENTITY cite.Hoare74 "<citation><xref linkend='thread.bib.Hoare74'
endterm='thread.bib.Hoare74.abbrev'/></citation>">
<!ENTITY cite.ISO98 "<citation><xref linkend='thread.bib.ISO98'
endterm='thread.bib.ISO98.abbrev'/></citation>">
<!ENTITY cite.McDowellHelmbold89 "<citation><xref
linkend='thread.bib.McDowellHelmbold89'
endterm='thread.bib.McDowellHelmbold89.abbrev'/></citation>">
<!ENTITY cite.SchmidtPyarali "<citation><xref
linkend='thread.bib.SchmidtPyarali'
endterm='thread.bib.SchmidtPyarali.abbrev'/></citation>">
<!ENTITY cite.SchmidtStalRohnertBuschmann "<citation><xref
linkend='thread.bib.SchmidtStalRohnertBuschmann'
endterm='thread.bib.SchmidtStalRohnertBuschmann.abbrev'/></citation>">
<!ENTITY cite.Stroustrup "<citation><xref linkend='thread.bib.Stroustrup'
endterm='thread.bib.Stroustrup.abbrev'/></citation>">


@@ -1,62 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/exceptions.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="lock_error">
<purpose>
<simpara>The lock_error class defines an exception type thrown
to indicate a locking related error has been detected.</simpara>
</purpose>
<description>
<simpara>Examples of errors indicated by a lock_error exception
include a lock operation which can be determined to result in a
deadlock, or unlock operations attempted by a thread that does
not own the lock.</simpara>
</description>
<inherit access="public">
<type><classname>std::logic_error</classname></type>
</inherit>
<constructor>
<effects><simpara>Constructs a <code>lock_error</code> object.
</simpara></effects>
</constructor>
</class>
<class name="thread_resource_error">
<purpose>
<simpara>The <classname>thread_resource_error</classname> class
defines an exception type that is thrown by constructors in the
&Boost.Thread; library when thread-related resources can not be
acquired.</simpara>
</purpose>
<description>
<simpara><classname>thread_resource_error</classname> is used
only when thread-related resources cannot be acquired; memory
allocation failures are indicated by
<classname>std::bad_alloc</classname>.</simpara>
</description>
<inherit access="public">
<type><classname>std::runtime_error</classname></type>
</inherit>
<constructor>
<effects><simpara>Constructs a <code>thread_resource_error</code>
object.</simpara></effects>
</constructor>
</class>
</namespace>
</header>
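
As a hedged usage sketch, code that starts threads can catch thread_resource_error to report the failure described above; the task function and the message are illustrative.

#include <boost/thread/thread.hpp>
#include <iostream>

void task() { /* ... work ... */ }

int main()
{
    try
    {
        boost::thread worker(task);
        worker.join();
    }
    catch (boost::thread_resource_error const& e)
    {
        // Thrown when the underlying thread resources could not be acquired.
        std::cerr << "could not start thread: " << e.what() << '\n';
        return 1;
    }
    return 0;
}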


@@ -1,235 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.faq" last-revision="$Date$">
<title>Frequently Asked Questions</title>
<qandaset>
<qandaentry>
<question>
<para>Are lock objects <link
linkend="thread.glossary.thread-safe">thread safe</link>?</para>
</question>
<answer>
<para><emphasis role="bold">No!</emphasis> Lock objects are not meant to
be shared between threads. They are meant to be short-lived objects
created on automatic storage within a code block. Any other usage is
likely to lead to errors and won't be of real benefit anyway.
Share <link linkend="thread.concepts.mutexes">Mutexes</link>, not
Locks. For more information see the <link
linkend="thread.rationale.locks">rationale</link> behind the
design for lock objects.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why was &Boost.Thread; modeled after (specific library
name)?</para>
</question>
<answer>
<para>It wasn't. &Boost.Thread; was designed from scratch. Extensive
design discussions involved numerous people representing a wide range of
experience across many platforms. To ensure portability, the initial
implementations were done in parallel using POSIX Threads and the Win32
threading API. But the &Boost.Thread; design is very much in the spirit
of C++, and thus doesn't model such C based APIs.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why wasn't &Boost.Thread; modeled after (specific library
name)?</para>
</question>
<answer>
<para>Existing C++ libraries either seemed dangerous (often failing to
take advantage of prior art to reduce errors) or had excessive
dependencies on library components unrelated to threading. Existing C
libraries couldn't meet our C++ requirements, and were also missing
certain features. For instance, the WIN32 thread API lacks condition
variables, even though these are critical for the important Monitor
pattern &cite.SchmidtStalRohnertBuschmann;.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why do <link linkend="thread.concepts.mutexes">Mutexes</link>
have noncopyable semantics?</para>
</question>
<answer>
<para>To ensure that <link
linkend="thread.glossary.deadlock">deadlocks</link> don't occur. The
only logical form of copy would be to use some sort of shallow copy
semantics in which multiple mutex objects could refer to the same mutex
state. This means that if ObjA has a mutex object as part of its state
and ObjB is copy constructed from it, then when ObjB::foo() locks the
mutex it has effectively locked ObjA as well. This behavior can result
in deadlock. Other copy semantics result in similar problems (if you
think you can prove this to be wrong then supply us with an alternative
and we'll reconsider).</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>How can you prevent <link
linkend="thread.glossary.deadlock">deadlock</link> from occurring when
a thread must lock multiple mutexes?</para>
</question>
<answer>
<para>Always lock them in the same order. One easy way of doing this is
to use each mutex's address to determine the order in which they are
locked. A future &Boost.Thread; concept may wrap this pattern up in a
reusable class.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Don't noncopyable <link
linkend="thread.concepts.mutexes">Mutex</link> semantics mean that a
class with a mutex member will be noncopyable as well?</para>
</question>
<answer>
<para>No, but what it does mean is that the compiler can't generate a
copy constructor and assignment operator, so they will have to be coded
explicitly. This is a <emphasis role="bold">good thing</emphasis>,
however, since the compiler generated operations would not be <link
linkend="thread.glossary.thread-safe">thread-safe</link>. The following
is a simple example of a class with copyable semantics and internal
synchronization through a mutex member.</para>
<programlisting>
class counter
{
public:
// Doesn't need synchronization since there can be no references to *this
// until after it's constructed!
explicit counter(int initial_value)
: m_value(initial_value)
{
}
// We only need to synchronize other for the same reason we don't have to
// synchronize on construction!
counter(const counter&amp; other)
{
boost::mutex::scoped_lock scoped_lock(other.m_mutex);
m_value = other.m_value;
}
// For assignment we need to synchronize both objects!
const counter&amp; operator=(const counter&amp; other)
{
if (this == &amp;other)
return *this;
boost::mutex::scoped_lock lock1(&amp;m_mutex &lt; &amp;other.m_mutex ? m_mutex : other.m_mutex);
boost::mutex::scoped_lock lock2(&amp;m_mutex &gt; &amp;other.m_mutex ? m_mutex : other.m_mutex);
m_value = other.m_value;
return *this;
}
int value() const
{
boost::mutex::scoped_lock scoped_lock(m_mutex);
return m_value;
}
int increment()
{
boost::mutex::scoped_lock scoped_lock(m_mutex);
return ++m_value;
}
private:
mutable boost::mutex m_mutex;
int m_value;
};
</programlisting>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>How can you lock a <link
linkend="thread.concepts.mutexes">Mutex</link> member in a const member
function, in order to implement the Monitor Pattern?</para>
</question>
<answer>
<para>The Monitor Pattern &cite.SchmidtStalRohnertBuschmann; mutex
should simply be declared as mutable. See the example code above. The
internal state of mutex types could have been made mutable, with all
lock calls made via const functions, but this does a poor job of
documenting the actual semantics (and in fact would be incorrect since
the logical state of a locked mutex clearly differs from the logical
state of an unlocked mutex). Declaring a mutex member as mutable clearly
documents the intended semantics.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why supply <classname>boost::condition</classname> variables rather than
event variables?</para>
</question>
<answer>
<para>Condition variables result in user code much less prone to <link
linkend="thread.glossary.race-condition">race conditions</link> than
event variables. See <xref linkend="thread.rationale.events" />
for analysis. Also see &cite.Hoare74; and &cite.SchmidtStalRohnertBuschmann;.
</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why isn't thread cancellation or termination provided?</para>
</question>
<answer>
<para>There's a valid need for thread termination, so at some point
&Boost.Thread; probably will include it, but only after we can find a
truly safe (and portable) mechanism for this concept.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Is it safe for threads to share automatic storage duration (stack)
objects via pointers or references?</para>
</question>
<answer>
<para>Only if you can guarantee that the lifetime of the stack object
will not end while other threads might still access the object. Thus the
safest practice is to avoid sharing stack objects, particularly in
designs where threads are created and destroyed dynamically. Restrict
sharing of stack objects to simple designs with very clear and
unchanging function and thread lifetimes. (Suggested by Darryl
Green).</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why has class semaphore disappeared?</para>
</question>
<answer>
<para>Semaphore was removed as too error prone. The same effect can be
achieved with greater safety by the combination of a mutex and a
condition variable.</para>
</answer>
</qandaentry>
<qandaentry>
<question>
<para>Why doesn't the thread's ctor take at least a void* to pass any
information along with the function? All other threading libs support
that and it makes Boost.Threads inferior.</para>
</question>
<answer>
<para>There is no need, because Boost.Threads are superior! The first
thing is that its constructor doesn't take a function but a functor. That
means that you can pass an object with an overloaded operator() and
include additional data as members in that object. Beware though that
this object is copied; use boost::ref to prevent that. Secondly, even
a boost::function&lt;void (void)&gt; can carry parameters; you only have to
use boost::bind() to create it from any function and bind its
parameters.</para>
<para>That is also why Boost.Threads are superior, because they
don't require you to pass a type-unsafe void pointer. Rather, you can
use the flexible Boost.Functions to create a thread entry out of
anything that can be called.</para>
</answer>
</qandaentry>
</qandaset>
</section>

View File

@@ -1,304 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<glossary id="thread.glossary" last-revision="$Date$">
<title>Glossary</title>
<para>Definitions are given in terms of the C++ Standard
&cite.ISO98;. References to the standard are in the form [1.2.3/4], which
represents the section number, with the paragraph number following the
"/".</para>
<para>Because the definitions are written in something akin to "standardese",
they can be difficult to understand. The intent isn't to confuse, but rather
to clarify the additional requirements &Boost.Thread; places on a C++
implementation as defined by the C++ Standard.</para>
<glossentry id="thread.glossary.thread">
<glossterm>Thread</glossterm>
<glossdef>
<para>Thread is short for "thread of execution". A thread of execution is
an execution environment [1.9/7] within the execution environment of a C++
program [1.9]. The main() function [3.6.1] of the program is the initial
function of the initial thread. A program in a multithreading environment
always has an initial thread even if the program explicitly creates no
additional threads.</para>
<para>Unless otherwise specified, each thread shares all aspects of its
execution environment with other threads in the program. Shared aspects of
the execution environment include, but are not limited to, the
following:</para>
<itemizedlist>
<listitem><para>Static storage duration (static, extern) objects
[3.7.1].</para></listitem>
<listitem><para>Dynamic storage duration (heap) objects [3.7.3]. Thus
each memory allocation will return a unique address, regardless of the
thread making the allocation request.</para></listitem>
<listitem><para>Automatic storage duration (stack) objects [3.7.2]
accessed via pointer or reference from another thread.</para></listitem>
<listitem><para>Resources provided by the operating system. For example,
files.</para></listitem>
<listitem><para>The program itself. In other words, each thread is
executing some function of the same program, not a totally different
program.</para></listitem>
</itemizedlist>
<para>Each thread has its own:</para>
<itemizedlist>
<listitem><para>Registers and current execution sequence (program
counter) [1.9/5].</para></listitem>
<listitem><para>Automatic storage duration (stack) objects
[3.7.2].</para></listitem>
</itemizedlist>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.thread-safe">
<glossterm>Thread-safe</glossterm>
<glossdef>
<para>A program is thread-safe if it has no <link
linkend="thread.glossary.race-condition">race conditions</link>, does
not <link linkend="thread.glossary.deadlock">deadlock</link>, and has
no <link linkend="thread.glossary.priority-failure">priority
failures</link>.</para>
<para>Note that thread-safety does not necessarily imply efficiency, and
that while some thread-safety violations can be determined statically at
compile time, many thread-safety errors can only be detected at
runtime.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.thread-state">
<glossterm>Thread State</glossterm>
<glossdef>
<para>During the lifetime of a thread, it shall be in one of the following
states:</para>
<table>
<title>Thread States</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>State</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>Ready</entry>
<entry>Ready to run, but waiting for a processor.</entry>
</row>
<row>
<entry>Running</entry>
<entry>Currently executing on a processor. Zero or more threads
may be running at any time, with a maximum equal to the number of
processors.</entry>
</row>
<row>
<entry>Blocked</entry>
<entry>Waiting for some resource other than a processor which is
not currently available, or for the completion of calls to library
functions [1.9/6]. The term "waiting" is synonymous with
"blocked".</entry>
</row>
<row>
<entry>Terminated</entry>
<entry>Finished execution but not yet detached or joined.</entry>
</row>
</tbody>
</tgroup>
</table>
<para>Thread state transitions shall occur only as specified:</para>
<table>
<title>Thread States Transitions</title>
<tgroup cols="3" align="left">
<thead>
<row>
<entry>From</entry>
<entry>To</entry>
<entry>Cause</entry>
</row>
</thead>
<tbody>
<row>
<entry>[none]</entry>
<entry>Ready</entry>
<entry><para>Thread is created by a call to a library function.
In the case of the initial thread, creation is implicit and
occurs during the startup of the main() function [3.6.1].</para></entry>
</row>
<row>
<entry>Ready</entry>
<entry>Running</entry>
<entry><para>Processor becomes available.</para></entry>
</row>
<row>
<entry>Running</entry>
<entry>Ready</entry>
<entry>Thread preempted.</entry>
</row>
<row>
<entry>Running</entry>
<entry>Blocked</entry>
<entry>Thread calls a library function which waits for a resource or
for the completion of I/O.</entry>
</row>
<row>
<entry>Running</entry>
<entry>Terminated</entry>
<entry>Thread returns from its initial function, calls a thread
termination library function, or is canceled by some other thread
calling a thread termination library function.</entry>
</row>
<row>
<entry>Blocked</entry>
<entry>Ready</entry>
<entry>The resource being waited for becomes available, or the
blocking library function completes.</entry>
</row>
<row>
<entry>Terminated</entry>
<entry>[none]</entry>
<entry>Thread is detached or joined by some other thread calling the
appropriate library function, or by program termination
[3.6.3].</entry>
</row>
</tbody>
</tgroup>
</table>
<para>[Note: if a suspend() function is added to the threading library,
additional transitions to the blocked state will have to be added to the
above table.]</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.race-condition">
<glossterm>Race Condition</glossterm>
<glossdef>
<para>A race condition occurs when multiple threads read from and write
to the same memory without proper synchronization, resulting in an incorrect
value being read or written. The result of a race condition may be a bit
pattern which isn't even a valid value for the data type. A race condition
results in undefined behavior [1.3.12].</para>
<para>Race conditions can be prevented by serializing memory access using
the tools provided by &Boost.Thread;.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.deadlock">
<glossterm>Deadlock</glossterm>
<glossdef>
<para>Deadlock is an execution state where for some set of threads, each
thread in the set is blocked waiting for some action by one of the other
threads in the set. Since each is waiting on the others, none will ever
become ready again.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.starvation">
<glossterm>Starvation</glossterm>
<glossdef>
<para>The condition in which a thread is not making sufficient progress in
its work during a given time interval.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.priority-failure">
<glossterm>Priority Failure</glossterm>
<glossdef>
<para>A priority failure (such as priority inversion or infinite overtaking)
occurs when threads are executed in such a sequence that required work is not
performed in time to be useful.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.undefined-behavior">
<glossterm>Undefined Behavior</glossterm>
<glossdef>
<para>The result of certain operations in &Boost.Thread; is undefined;
this means that those operations can invoke almost any behavior when
they are executed.</para>
<para>An operation whose behavior is undefined can work "correctly"
in some implementations (i.e., do what the programmer thought it
would do), while in other implementations it may exhibit almost
any "incorrect" behavior--such as returning an invalid value,
throwing an exception, generating an access violation, or terminating
the process.</para>
<para>Executing a statement whose behavior is undefined is a
programming error.</para>
</glossdef>
</glossentry>
<glossentry id="thread.glossary.memory-visibility">
<glossterm>Memory Visibility</glossterm>
<glossdef>
<para>An address [1.7] shall always point to the same memory byte,
regardless of the thread or processor dereferencing the address.</para>
<para>An object [1.8, 1.9] is accessible from multiple threads if it is of
static storage duration (static, extern) [3.7.1], or if a pointer or
reference to it is explicitly or implicitly dereferenced in multiple
threads.</para>
<para>For an object accessible from multiple threads, the value of the
object accessed from one thread may be indeterminate or different from the
value accessed from another thread, except under the conditions specified in
the following table. For the same row of the table, the value of an object
accessible at the indicated sequence point in thread A will be determinate
and the same if accessed at or after the indicated sequence point in thread
B, provided the object is not otherwise modified. In the table, the
"sequence point at a call" is the sequence point after the evaluation of all
function arguments [1.9/17], while the "sequence point after a call" is the
sequence point after the copying of the returned value... [1.9/17].</para>
<table>
<title>Memory Visibility</title>
<tgroup cols="2">
<thead>
<row>
<entry>Thread A</entry>
<entry>Thread B</entry>
</row>
</thead>
<tbody>
<row>
<entry>The sequence point at a call to a library thread-creation
function.</entry>
<entry>The first sequence point of the initial function in the new
thread created by the Thread A call.</entry>
</row>
<row>
<entry>The sequence point at a call to a library function which
locks a mutex, directly or by waiting for a condition
variable.</entry>
<entry>The sequence point after a call to a library function which
unlocks the same mutex.</entry>
</row>
<row>
<entry>The last sequence point before thread termination.</entry>
<entry>The sequence point after a call to a library function which
joins the terminated thread.</entry>
</row>
<row>
<entry>The sequence point at a call to a library function which
signals or broadcasts a condition variable.</entry>
<entry>The sequence point after the call to the library function
which was waiting on that same condition variable or signal.</entry>
</row>
</tbody>
</tgroup>
</table>
<para>The architecture of the execution environment and the observable
behavior of the abstract machine [1.9] shall be the same on all
processors.</para>
<para>The latitude granted by the C++ standard for an implementation to
alter the definition of observable behavior of the abstract machine to
include additional library I/O functions [1.9/6] is extended to include
threading library functions.</para>
<para>When an exception is thrown and there is no matching exception handler
in the same thread, behavior is undefined. The preferred behavior is the
same as when there is no matching exception handler in a program
[15.3/9]. That is, terminate() is called, and it is implementation-defined
whether or not the stack is unwound.</para>
</glossdef>
</glossentry>
<section>
<title>Acknowledgements</title>
<para>This document was originally written by Beman Dawes, and then much
improved by the incorporation of comments from William Kempf, who now
maintains the contents.</para>
<para>The visibility rules are based on &cite.Butenhof97;.</para>
</section>
</glossary>

View File

@@ -1,38 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.implementation_notes" last-revision="$Date$">
<title>Implementation Notes</title>
<section id="thread.implementation_notes.win32">
<title>Win32</title>
<para>
In the current Win32 implementation, creating a boost::thread object
during DLL initialization will result in deadlock, because the thread
class constructor causes the current thread to wait on the thread that
is being created until it signals that it has finished its initialization,
and, as stated in the
<ulink url="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/base/dllmain.asp">MSDN Library, "DllMain" article, "Remarks" section</ulink>,
"Because DLL notifications are serialized, entry-point functions should not
attempt to communicate with other threads or processes. Deadlocks may occur as a result."
(Also see <ulink url="http://www.microsoft.com/msj/archive/S220.aspx">"Under the Hood", January 1996</ulink>
for a more detailed discussion of this issue).
</para>
<para>
The following non-exhaustive list details some of the situations that
should be avoided until this issue can be addressed:
<itemizedlist>
<listitem>Creating a boost::thread object in DllMain() or in any function called by it.</listitem>
<listitem>Creating a boost::thread object in the constructor of a global static object or in any function called by one.</listitem>
<listitem>Creating a boost::thread object in MFC's CWinApp::InitInstance() function or in any function called by it.</listitem>
<listitem>Creating a boost::thread object in the function pointed to by MFC's _pRawDllMain function pointer or in any function called by it.</listitem>
</itemizedlist>
</para>
</section>
</section>

View File

@@ -1,309 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/mutex.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="mutex">
<purpose>
<para>The <classname>mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>try_mutex</classname> and <classname>timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics, see <classname>recursive_mutex</classname>,
<classname>recursive_try_mutex</classname>, and <classname>recursive_timed_mutex</classname>.
</para>
<para>The <classname>mutex</classname> class supplies the following typedef,
which <link linkend="thread.concepts.lock-models">models</link>
the specified locking strategy:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>mutex</classname> class uses an
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="try_mutex">
<purpose>
<para>The <classname>try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>mutex</classname> and <classname>timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics, see <classname>recursive_mutex</classname>,
<classname>recursive_try_mutex</classname>, and <classname>recursive_timed_mutex</classname>.
</para>
<para>The <classname>try_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>try_mutex</classname> class uses an
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>try_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>try_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>try_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>try_mutex</classname> object.
</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="timed_mutex">
<purpose>
<para>The <classname>timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>mutex</classname> and <classname>try_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics, see <classname>recursive_mutex</classname>,
<classname>recursive_try_mutex</classname>, and <classname>recursive_timed_mutex</classname>.
</para>
<para>The <classname>timed_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>timed_mutex</classname> class uses an
<link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>timed_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>timed_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>timed_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>timed_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
</namespace>
</header>

View File

@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:mutex_concepts Mutex Concepts]
A mutex object facilitates protection against data races and allows thread-safe synchronization of data between threads. A thread
@@ -312,8 +305,6 @@ without blocking.]]
[section:lock_guard Class template `lock_guard`]
#include <boost/thread/locks.hpp>
template<typename Lockable>
class lock_guard
{
@@ -378,8 +369,6 @@ object passed to the constructor.]]
[section:unique_lock Class template `unique_lock`]
#include <boost/thread/locks.hpp>
template<typename Lockable>
class unique_lock
{
@@ -621,8 +610,6 @@ __owns_lock_ref__ returns `false`.]]
[section:shared_lock Class template `shared_lock`]
#include <boost/thread/locks.hpp>
template<typename Lockable>
class shared_lock
{
@@ -856,8 +843,6 @@ __owns_lock_shared_ref__ returns `false`.]]
[section:upgrade_lock Class template `upgrade_lock`]
#include <boost/thread/locks.hpp>
template<typename Lockable>
class upgrade_lock
{
@@ -905,8 +890,6 @@ state (including the destructor) must be called by the same thread that acquired
[section:upgrade_to_unique_lock Class template `upgrade_to_unique_lock`]
#include <boost/thread/locks.hpp>
template <class Lockable>
class upgrade_to_unique_lock
{
@@ -931,189 +914,4 @@ __lockable_concept_type__ is downgraded back to ['upgrade ownership].
[endsect]
[section:scoped_try_lock Mutex-specific class `scoped_try_lock`]
class MutexType::scoped_try_lock
{
private:
MutexType::scoped_try_lock(MutexType::scoped_try_lock<MutexType>& other);
MutexType::scoped_try_lock& operator=(MutexType::scoped_try_lock<MutexType>& other);
public:
MutexType::scoped_try_lock();
explicit MutexType::scoped_try_lock(MutexType& m);
MutexType::scoped_try_lock(MutexType& m_,adopt_lock_t);
MutexType::scoped_try_lock(MutexType& m_,defer_lock_t);
MutexType::scoped_try_lock(MutexType& m_,try_to_lock_t);
MutexType::scoped_try_lock(MutexType::scoped_try_lock<MutexType>&& other);
MutexType::scoped_try_lock& operator=(MutexType::scoped_try_lock<MutexType>&& other);
void swap(MutexType::scoped_try_lock&& other);
void lock();
bool try_lock();
void unlock();
bool owns_lock() const;
MutexType* mutex() const;
MutexType* release();
bool operator!() const;
typedef ``['unspecified-bool-type]`` bool_type;
operator bool_type() const;
};
The member typedef `scoped_try_lock` is provided for each distinct
`MutexType` as a typedef to a class with the preceding definition. The
semantics of each constructor and member function are identical to
those of [unique_lock_link `boost::unique_lock<MutexType>`] for the same `MutexType`, except
that the constructor that takes a single reference to a mutex will
call [try_lock_ref_link `m.try_lock()`] rather than `m.lock()`.
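For illustration only (a sketch; the mutex `m` and function `attempt` are hypothetical), the try-lock behaviour of the single-argument constructor can be checked through the lock's boolean conversion:

    #include <boost/thread/mutex.hpp>

    boost::mutex m;

    void attempt()
    {
        boost::mutex::scoped_try_lock lock(m); // calls m.try_lock() rather than m.lock()
        if(lock)
        {
            // the mutex was acquired without blocking
        }
        // any lock held is released when 'lock' goes out of scope
    }
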
[endsect]
[endsect]
[section:lock_functions Lock functions]
[section:lock_multiple Non-member function `lock(Lockable1,Lockable2,...)`]
template<typename Lockable1,typename Lockable2>
void lock(Lockable1& l1,Lockable2& l2);
template<typename Lockable1,typename Lockable2,typename Lockable3>
void lock(Lockable1& l1,Lockable2& l2,Lockable3& l3);
template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4>
void lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4);
template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4,typename Lockable5>
void lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4,Lockable5& l5);
[variablelist
[[Effects:] [Locks the __lockable_concept_type__ objects supplied as
arguments in an unspecified and indeterminate order in a way that
avoids deadlock. It is safe to call this function concurrently from
multiple threads with the same mutexes (or other lockable objects) in
different orders without risk of deadlock. If any of the __lock_ref__
or __try_lock_ref__ operations on the supplied
__lockable_concept_type__ objects throws an exception any locks
acquired by the function will be released before the function exits.]]
[[Throws:] [Any exceptions thrown by calling __lock_ref__ or
__try_lock_ref__ on the supplied __lockable_concept_type__ objects.]]
[[Postcondition:] [All the supplied __lockable_concept_type__ objects
are locked by the calling thread.]]
]
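As an illustrative sketch (the mutexes `m1` and `m2` and the function `update_both` are hypothetical), two mutexes can be acquired together without risk of deadlock by passing them to `boost::lock` and then adopting the resulting locks:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::mutex m1;
    boost::mutex m2;

    void update_both()
    {
        boost::lock(m1,m2); // locks both mutexes without risking deadlock
        boost::lock_guard<boost::mutex> lk1(m1,boost::adopt_lock);
        boost::lock_guard<boost::mutex> lk2(m2,boost::adopt_lock);
        // ... access the data protected by both mutexes ...
    }   // both locks released here
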
[endsect]
[section:lock_range Non-member function `lock(begin,end)`]
template<typename ForwardIterator>
void lock(ForwardIterator begin,ForwardIterator end);
[variablelist
[[Preconditions:] [The `value_type` of `ForwardIterator` must implement the __lockable_concept__]]
[[Effects:] [Locks all the __lockable_concept_type__ objects in the
supplied range in an unspecified and indeterminate order in a way that
avoids deadlock. It is safe to call this function concurrently from
multiple threads with the same mutexes (or other lockable objects) in
different orders without risk of deadlock. If any of the __lock_ref__
or __try_lock_ref__ operations on the __lockable_concept_type__
objects in the supplied range throws an exception any locks acquired
by the function will be released before the function exits.]]
[[Throws:] [Any exceptions thrown by calling __lock_ref__ or
__try_lock_ref__ on the supplied __lockable_concept_type__ objects.]]
[[Postcondition:] [All the __lockable_concept_type__ objects in the
supplied range are locked by the calling thread.]]
]
[endsect]
[section:try_lock_multiple Non-member function `try_lock(Lockable1,Lockable2,...)`]
template<typename Lockable1,typename Lockable2>
int try_lock(Lockable1& l1,Lockable2& l2);
template<typename Lockable1,typename Lockable2,typename Lockable3>
int try_lock(Lockable1& l1,Lockable2& l2,Lockable3& l3);
template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4>
int try_lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4);
template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4,typename Lockable5>
int try_lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4,Lockable5& l5);
[variablelist
[[Effects:] [Calls __try_lock_ref__ on each of the
__lockable_concept_type__ objects supplied as arguments. If any of the
calls to __try_lock_ref__ returns `false` then all locks acquired are
released and the zero-based index of the failed lock is returned.
If any of the __try_lock_ref__ operations on the supplied
__lockable_concept_type__ objects throws an exception any locks
acquired by the function will be released before the function exits.]]
[[Returns:] [`-1` if all the supplied __lockable_concept_type__ objects
are now locked by the calling thread, the zero-based index of the
object which could not be locked otherwise.]]
[[Throws:] [Any exceptions thrown by calling __try_lock_ref__ on the
supplied __lockable_concept_type__ objects.]]
[[Postcondition:] [If the function returns `-1`, all the supplied
__lockable_concept_type__ objects are locked by the calling
thread. Otherwise any locks acquired by this function will have been
released.]]
]
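For example (a sketch only; the names are hypothetical), the zero-based index returned on failure can be used to report which mutex was busy:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::mutex m1;
    boost::mutex m2;

    bool try_update()
    {
        int const failed=boost::try_lock(m1,m2);
        if(failed==-1)
        {
            boost::lock_guard<boost::mutex> lk1(m1,boost::adopt_lock);
            boost::lock_guard<boost::mutex> lk2(m2,boost::adopt_lock);
            // ... both locks are held here ...
            return true;
        }
        // the mutex with zero-based index 'failed' could not be locked;
        // no locks are held at this point
        return false;
    }
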
[endsect]
[section:try_lock_range Non-member function `try_lock(begin,end)`]
template<typename ForwardIterator>
ForwardIterator try_lock(ForwardIterator begin,ForwardIterator end);
[variablelist
[[Preconditions:] [The `value_type` of `ForwardIterator` must implement the __lockable_concept__]]
[[Effects:] [Calls __try_lock_ref__ on each of the
__lockable_concept_type__ objects in the supplied range. If any of the
calls to __try_lock_ref__ returns `false` then all locks acquired are
released and an iterator referencing the failed lock is returned.
If any of the __try_lock_ref__ operations on the supplied
__lockable_concept_type__ objects throws an exception any locks
acquired by the function will be released before the function exits.]]
[[Returns:] [`end` if all the supplied __lockable_concept_type__
objects are now locked by the calling thread, an iterator referencing
the object which could not be locked otherwise.]]
[[Throws:] [Any exceptions thrown by calling __try_lock_ref__ on the
supplied __lockable_concept_type__ objects.]]
[[Postcondition:] [If the function returns `end` then all the
__lockable_concept_type__ objects in the supplied range are locked by
the calling thread, otherwise all locks acquired by the function have
been released.]]
]
[endsect]
[endsect]

View File

@@ -1,16 +1,7 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:mutex_types Mutex Types]
[section:mutex Class `mutex`]
#include <boost/thread/mutex.hpp>
class mutex:
boost::noncopyable
{
@@ -26,7 +17,7 @@
native_handle_type native_handle();
typedef unique_lock<mutex> scoped_lock;
typedef unspecified-type scoped_try_lock;
typedef scoped_lock scoped_try_lock;
};
__mutex__ implements the __lockable_concept__ to provide an exclusive-ownership mutex. At most one thread can own the lock on a given
@@ -53,8 +44,6 @@ implementation. If no such instance exists, `native_handle()` and `native_handle
[section:try_mutex Typedef `try_mutex`]
#include <boost/thread/mutex.hpp>
typedef mutex try_mutex;
__try_mutex__ is a `typedef` to __mutex__, provided for backwards compatibility with previous releases of boost.
@@ -63,8 +52,6 @@ __try_mutex__ is a `typedef` to __mutex__, provided for backwards compatibility
[section:timed_mutex Class `timed_mutex`]
#include <boost/thread/mutex.hpp>
class timed_mutex:
boost::noncopyable
{
@@ -84,7 +71,7 @@ __try_mutex__ is a `typedef` to __mutex__, provided for backwards compatibility
native_handle_type native_handle();
typedef unique_lock<timed_mutex> scoped_timed_lock;
typedef unspecified-type scoped_try_lock;
typedef scoped_timed_lock scoped_try_lock;
typedef scoped_timed_lock scoped_lock;
};
@@ -112,8 +99,6 @@ implementation. If no such instance exists, `native_handle()` and `native_handle
[section:recursive_mutex Class `recursive_mutex`]
#include <boost/thread/recursive_mutex.hpp>
class recursive_mutex:
boost::noncopyable
{
@@ -129,7 +114,7 @@ implementation. If no such instance exists, `native_handle()` and `native_handle
native_handle_type native_handle();
typedef unique_lock<recursive_mutex> scoped_lock;
typedef unspecified-type scoped_try_lock;
typedef scoped_lock scoped_try_lock;
};
__recursive_mutex__ implements the __lockable_concept__ to provide an exclusive-ownership recursive mutex. At most one thread can
@@ -158,8 +143,6 @@ implementation. If no such instance exists, `native_handle()` and `native_handle
[section:recursive_try_mutex Typedef `recursive_try_mutex`]
#include <boost/thread/recursive_mutex.hpp>
typedef recursive_mutex recursive_try_mutex;
__recursive_try_mutex__ is a `typedef` to __recursive_mutex__, provided for backwards compatibility with previous releases of boost.
@@ -168,8 +151,6 @@ __recursive_try_mutex__ is a `typedef` to __recursive_mutex__, provided for back
[section:recursive_timed_mutex Class `recursive_timed_mutex`]
#include <boost/thread/recursive_mutex.hpp>
class recursive_timed_mutex:
boost::noncopyable
{
@@ -190,7 +171,7 @@ __recursive_try_mutex__ is a `typedef` to __recursive_mutex__, provided for back
native_handle_type native_handle();
typedef unique_lock<recursive_timed_mutex> scoped_lock;
typedef unspecified-type scoped_try_lock;
typedef scoped_lock scoped_try_lock;
typedef scoped_lock scoped_timed_lock;
};

View File

@@ -1,88 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/once.hpp"
last-revision="$Date$">
<macro name="BOOST_ONCE_INIT">
<purpose>The <functionname>call_once</functionname> function and
<code>once_flag</code> type (statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>) can be used to run a
routine exactly once. This can be used to initialize data in a
<link linkend="thread.glossary.thread-safe">thread-safe</link>
manner.</purpose>
<description>The implementation-defined macro
<macroname>BOOST_ONCE_INIT</macroname> is a constant value used to
initialize <code>once_flag</code> instances to indicate that the
logically associated routine has not been run yet. See
<functionname>call_once</functionname> for more details.</description>
</macro>
<namespace name="boost">
<typedef name="once_flag">
<purpose>The <functionname>call_once</functionname> function and
<code>once_flag</code> type (statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>) can be used to run a
routine exactly once. This can be used to initialize data in a
<link linkend="thread.glossary.thread-safe">thread-safe</link>
manner.</purpose>
<description>The implementation-defined type <code>once_flag</code>
is used as a flag to ensure a routine is called only once.
Instances of this type should be statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>. See
<functionname>call_once</functionname> for more details.
</description>
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<function name="call_once">
<purpose>The <functionname>call_once</functionname> function and
<code>once_flag</code> type (statically initialized to
<macroname>BOOST_ONCE_INIT</macroname>) can be used to run a
routine exactly once. This can be used to initialize data in a
<link linkend="thread.glossary.thread-safe">thread-safe</link>
manner.</purpose>
<description>
<para>Example usage is as follows:</para>
<para>
<programlisting>//Example usage:
boost::once_flag once = BOOST_ONCE_INIT;
void init()
{
//...
}
void thread_proc()
{
boost::call_once(once, &amp;init);
}</programlisting>
</para></description>
<parameter name="flag">
<paramtype>once_flag&amp;</paramtype>
</parameter>
<parameter name="func">
<paramtype>Function func</paramtype>
</parameter>
<effects>As if (in an atomic fashion):
<code>if (flag == BOOST_ONCE_INIT) func();</code>. If <code>func()</code> throws an exception, it shall be as if this
thread never invoked <code>call_once</code></effects>
<postconditions><code>flag != BOOST_ONCE_INIT</code> unless <code>func()</code> throws an exception.
</postconditions>
</function>
</namespace>
</header>

View File

@@ -1,18 +1,9 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:once One-time Initialization]
`boost::call_once` provides a mechanism for ensuring that an initialization routine is run exactly once without data races or deadlocks.
[section:once_flag Typedef `once_flag`]
#include <boost/thread/once.hpp>
typedef platform-specific-type once_flag;
#define BOOST_ONCE_INIT platform-specific-initializer
@@ -24,8 +15,6 @@ Objects of type `boost::once_flag` shall be initialized with `BOOST_ONCE_INIT`:
[section:call_once Non-member function `call_once`]
#include <boost/thread/once.hpp>
template<typename Callable>
void call_once(once_flag& flag,Callable func);
@@ -35,8 +24,8 @@ Objects of type `boost::once_flag` shall be initialized with `BOOST_ONCE_INIT`:
be equivalent to calling the original. ]]
[[Effects:] [Calls to `call_once` on the same `once_flag` object are serialized. If there has been no prior effective `call_once` on
the same `once_flag` object, the argument `func` (or a copy thereof) is called as-if by invoking `func()`, and the invocation of
`call_once` is effective if and only if `func()` returns without exception. If an exception is thrown, the exception is
the same `once_flag` object, the argument `func` (or a copy thereof) is called as-if by invoking `func(args)`, and the invocation of
`call_once` is effective if and only if `func(args)` returns without exception. If an exception is thrown, the exception is
propagated to the caller. If there has been a prior effective `call_once` on the same `once_flag` object, the `call_once` returns
without invoking `func`. ]]

View File

@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:overview Overview]
__boost_thread__ enables the use of multiple threads of execution with shared data in portable C++ code. It provides classes and
@@ -19,12 +12,4 @@ closely follow the proposals presented to the C++ Standards Committee, in partic
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2139.html N2139], and
[@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2094.html N2094]
In order to use the classes and functions described here, you can
either include the specific headers specified by the descriptions of
each class or function, or include the master thread library header:
#include <boost/thread.hpp>
which includes all the other headers in turn.
[endsect]

View File

@@ -1,206 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.overview" last-revision="$Date$">
<title>Overview</title>
<section id="thread.introduction">
<title>Introduction</title>
<para>&Boost.Thread; allows C++ programs to execute as multiple,
asynchronous, independent threads-of-execution. Each thread has its own
machine state including program instruction counter and registers. Programs
which execute as multiple threads are called multithreaded programs to
distinguish them from traditional single-threaded programs. The <link
linkend="thread.glossary">glossary</link> gives a more complete description
of the multithreading execution environment.</para>
<para>Multithreading provides several advantages:
<itemizedlist>
<listitem>
<para>Programs which would otherwise block waiting for some external
event can continue to respond if the blocking operation is placed in a
separate thread. Multithreading is usually an absolute requirement for
these programs.</para>
</listitem>
<listitem>
<para>Well-designed multithreaded programs may execute faster than
single-threaded programs, particularly on multiprocessor hardware.
Note, however, that poorly-designed multithreaded programs are often
slower than single-threaded programs.</para>
</listitem>
<listitem>
<para>Some program designs may be easier to formulate using a
multithreaded approach. After all, the real world is
asynchronous!</para>
</listitem>
</itemizedlist></para>
</section>
<section>
<title>Dangers</title>
<section>
<title>General considerations</title>
<para>Beyond the errors which can occur in single-threaded programs,
multithreaded programs are subject to additional errors:
<itemizedlist>
<listitem>
<para><link linkend="thread.glossary.race-condition">Race
conditions</link></para>
</listitem>
<listitem>
<para><link linkend="thread.glossary.deadlock">Deadlock</link>
(sometimes called "deadly embrace")</para>
</listitem>
<listitem>
<para><link linkend="thread.glossary.priority-failure">Priority
failures</link> (priority inversion, infinite overtaking, starvation,
etc.)</para>
</listitem>
</itemizedlist></para>
<para>Every multithreaded program must be designed carefully to avoid these
errors. These aren't rare or exotic failures - they are virtually guaranteed
to occur unless multithreaded code is designed to avoid them. Priority
failures are somewhat less common, but are nonetheless serious.</para>
<para>The <link linkend="thread.design">&Boost.Thread; design</link>
attempts to minimize these errors, but they will still occur unless the
programmer proactively designs to avoid them.</para>
<note>Please also see <xref linkend="thread.implementation_notes"/>
for additional, implementation-specific considerations.</note>
</section>
<section>
<title>Testing and debugging considerations</title>
<para>Multithreaded programs are non-deterministic. In other words, the
same program with the same input data may follow different execution
paths each time it is invoked. That can make testing and debugging a
nightmare:
<itemizedlist>
<listitem>
<para>Failures are often not repeatable.</para>
</listitem>
<listitem>
<para>The probe effect causes debuggers to produce very different results
from non-debug runs.</para>
</listitem>
<listitem>
<para>Debuggers require special support to show thread state.</para>
</listitem>
<listitem>
<para>Tests on a single-processor system may give no indication of
serious errors which would appear on multiprocessor systems, and vice
versa. Thus test cases should cover a varying number of
processors.</para>
</listitem>
<listitem>
<para>For programs which create a varying number of threads according
to workload, tests which don't span the full range of possibilities
may miss serious errors.</para>
</listitem>
</itemizedlist></para>
</section>
<section>
<title>Getting a head start</title>
<para>Although it might appear that multithreaded programs are inherently
unreliable, many reliable multithreaded programs do exist. Multithreading
techniques are known which lead to reliable programs.</para>
<para>Design patterns for reliable multithreaded programs, including the
important <emphasis>monitor</emphasis> pattern, are presented in
<emphasis>Pattern-Oriented Software Architecture Volume 2 - Patterns for
Concurrent and Networked Objects</emphasis>
&cite.SchmidtStalRohnertBuschmann;. Many important multithreading programming
considerations (independent of threading library) are discussed in
<emphasis>Programming with POSIX Threads</emphasis> &cite.Butenhof97;.</para>
<para>Doing some reading before attempting multithreaded designs will
give you a head start toward reliable multithreaded programs.</para>
</section>
</section>
<section>
<title>C++ Standard Library usage in multithreaded programs</title>
<section>
<title>Runtime libraries</title>
<para>
<emphasis role="bold">Warning:</emphasis> Multithreaded programs such as
those using &Boost.Thread; must link to <link
linkend="thread.glossary.thread-safe">thread-safe</link> versions of
all runtime libraries used by the program, including the runtime library
for the C++ Standard Library. Failure to do so will cause <link
linkend="thread.glossary.race-condition">race conditions</link> to occur
when multiple threads simultaneously execute runtime library functions for
<code>new</code>, <code>delete</code>, or other language features which
imply shared state.</para>
</section>
<section>
<title>Potentially non-thread-safe functions</title>
<para>Certain C++ Standard Library functions inherited from C are
particularly problematic because they hold internal state between
calls:
<itemizedlist>
<listitem>
<para><code>rand</code></para>
</listitem>
<listitem>
<para><code>strtok</code></para>
</listitem>
<listitem>
<para><code>asctime</code></para>
</listitem>
<listitem>
<para><code>ctime</code></para>
</listitem>
<listitem>
<para><code>gmtime</code></para>
</listitem>
<listitem>
<para><code>localtime</code></para>
</listitem>
</itemizedlist></para>
<para>It is possible to write thread-safe implementations of these by
using thread specific storage (see
<classname>boost::thread_specific_ptr</classname>), and several C++
compiler vendors do just that. The technique is well-known and is explained
in &cite.Butenhof97;.</para>
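<para>As a sketch of the technique (the function <code>counter</code> and the
variable <code>last_value</code> are purely illustrative), per-thread state can
be kept in a <classname>boost::thread_specific_ptr</classname> so that
concurrent callers never share the internal buffer:</para>
<programlisting>#include &lt;boost/thread/tss.hpp&gt;

boost::thread_specific_ptr&lt;int&gt; last_value;

int counter()
{
    if (last_value.get() == 0)       // first call in this thread
        last_value.reset(new int(0));
    return ++(*last_value);          // the state is private to the calling thread
}</programlisting>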
<para>But at least one vendor (HP-UX) does not provide thread-safe
implementations of the above functions in their otherwise thread-safe
runtime library. Instead they provide replacement functions with
different names and arguments.</para>
<para><emphasis role="bold">Recommendation:</emphasis> For the most
portable, yet thread-safe code, use Boost replacements for the problem
functions. See the <libraryname>Boost Random Number Library</libraryname>
and <libraryname>Boost Tokenizer Library</libraryname>.</para>
</section>
</section>
<section>
<title>Common guarantees for all &Boost.Thread; components</title>
<section>
<title>Exceptions</title>
<para>&Boost.Thread; destructors never
throw exceptions. Unless otherwise specified, other
&Boost.Thread; functions that do not have
an exception-specification may throw implementation-defined
exceptions.</para>
<para>In particular, &Boost.Thread;
reports failure to allocate storage by throwing an exception of type
<code>std::bad_alloc</code> or a class derived from
<code>std::bad_alloc</code>, failure to obtain thread resources other than
memory by throwing an exception of type
<classname>boost::thread_resource_error</classname>, and certain lock
related failures by throwing an exception of type
<classname>boost::lock_error</classname>.</para>
<para><emphasis role="bold">Rationale:</emphasis> Follows the C++ Standard
Library practice of allowing all functions except destructors or other
specified functions to throw exceptions on errors.</para>
</section>
<section>
<title>NonCopyable requirement</title>
<para>&Boost.Thread; classes documented as
meeting the NonCopyable requirement disallow copy construction and copy
assignment. For the sake of exposition, the synopsis of such classes show
private derivation from <classname>boost::noncopyable</classname>. Users
should not depend on this derivation, however, as implementations are free
to meet the NonCopyable requirement in other ways.</para>
</section>
</section>
</section>

View File

@@ -1,438 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.rationale" last-revision="$Date$">
<title>Rationale</title>
<para>This page explains the rationale behind various design decisions in the
&Boost.Thread; library. Having the rationale documented here should explain
how we arrived at the current design as well as prevent future rehashing of
discussions and thought processes that have already occurred. It can also give
users a lot of insight into the design process required for this
library.</para>
<section id="thread.rationale.Boost.Thread">
<title>Rationale for the Creation of &Boost.Thread;</title>
<para>Processes often have a degree of "potential parallelism", and it can
be more intuitive to design systems with this in mind. Further, these
parallel processes can result in more responsive programs. The benefits of
multithreaded programming are well known to most modern programmers,
yet the C++ language doesn't directly support this concept.</para>
<para>Many platforms support multithreaded programming despite the fact that
the language doesn't support it. They do this through external libraries,
which are, unfortunately, platform specific. POSIX has tried to address this
problem through the standardization of a "pthread" library. However, this is
a standard only on POSIX platforms, so its portability is limited.</para>
<para>Another problem with POSIX and other platform specific thread
libraries is that they are almost universally C based libraries. This leaves
several C++ specific issues unresolved, such as what happens when an
exception is thrown in a thread. Further, there are some C++ concepts, such
as destructors, that can make usage much easier than what's available in a C
library.</para>
<para>What's truly needed is C++ language support for threads. However, the
C++ standards committee needs existing practice or a good proposal as a
starting point for adding this to the standard.</para>
<para>The &Boost.Thread; library was developed to provide a C++ developer
with a portable interface for writing multithreaded programs on numerous
platforms. There's a hope that the library can be the basis for a more
detailed proposal for the C++ standards committee to consider for inclusion
in the next C++ standard.</para>
</section>
<section id="thread.rationale.primitives">
<title>Rationale for the Low Level Primitives Supported in &Boost.Thread;</title>
<para>The &Boost.Thread; library supplies a set of low level primitives for
writing multithreaded programs, such as mutexes and condition variables. In
fact, the first release of &Boost.Thread; supports only these low level
primitives. However, computer science research has shown that these
primitives are difficult to use correctly, because it is hard to prove
mathematically that a given usage pattern is free of race conditions and
deadlocks. Several algebras (such as CSP, CCS and the Join calculus)
have been developed to help write provably correct parallel
processes. To prove such correctness, however, the processes must be coded
using higher level abstractions. So why does &Boost.Thread; support the
<para>The reason is simple: the higher level concepts need to be implemented
using at least some of the lower level concepts. So having portable lower
level concepts makes it easier to develop the higher level concepts and will
allow researchers to experiment with various techniques.</para>
<para>Beyond this theoretical application of higher level concepts, however,
the fact remains that many multithreaded programs are written using only the
lower level concepts, so they are useful in and of themselves, even if it's
hard to prove that their usage is correct. Since many users will be familiar
with these lower level concepts but unfamiliar with any of the higher
level concepts, supporting the lower level concepts provides
greater accessibility.</para>
</section>
<section id="thread.rationale.locks">
<title>Rationale for the Lock Design</title>
<para>Programmers who are used to multithreaded programming issues will
quickly note that the &Boost.Thread; design for mutex lock concepts is not
<link linkend="thread.glossary.thread-safe">thread-safe</link> (this is
clearly documented as well). At first this may seem like a serious design
flaw. Why have a multithreading primitive that's not thread-safe
itself?</para>
<para>A lock object is not a synchronization primitive. A lock object's sole
responsibility is to ensure that a mutex is both locked and unlocked in a
manner that won't result in the common error of locking a mutex and then
forgetting to unlock it. This means that instances of a lock object are only
going to be created, at least in theory, within block scope and won't be
shared between threads. Only the mutex objects will be created outside of
block scope and/or shared between threads. Though it's possible to create a
lock object outside of block scope and to share it between threads, to do so
would not be a typical usage (in fact, to do so would likely be an
error). Nor are there any cases when such usage would be required.</para>
<para>Lock objects must maintain some state information. In order to allow a
program to determine if a try_lock or timed_lock was successful the lock
object must retain state indicating the success or failure of the call made
in its constructor. If a lock object were to have such state and remain
thread-safe it would need to synchronize access to the state information
which would result in roughly doubling the time of most operations. Worse,
since checking the state can occur only by a call after construction, we'd
have a race condition if the lock object were shared between threads.</para>
<para>So, to avoid the overhead of synchronizing access to the state
information and to avoid the race condition, the &Boost.Thread; library
simply does nothing to make lock objects thread-safe. Instead, sharing a
lock object between threads results in undefined behavior. Since the only
proper usage of lock objects is within block scope this isn't a problem, and
so long as the lock object is properly used there's no danger of any
multithreading issues.</para>
</section>
<section id="thread.rationale.non-copyable">
<title>Rationale for NonCopyable Thread Type</title>
<para>Programmers who are used to C libraries for multithreaded programming
are likely to wonder why &Boost.Thread; uses a noncopyable design for
<classname>boost::thread</classname>. After all, the C thread types are
copyable, and you often have a need for copying them within user
code. However, careful comparison of C designs to C++ designs shows a flaw
in this logic.</para>
<para>All C types are copyable. It is, in fact, not possible to make a
noncopyable type in C. For this reason types that represent system resources
in C are often designed to behave very similarly to a pointer to dynamic
memory. There's an API for acquiring the resource and an API for releasing
the resource. For memory we have pointers as the type and alloc/free for
the acquisition and release APIs. For files we have FILE* as the type and
fopen/fclose for the acquisition and release APIs. You can freely copy
instances of the types but must manually manage the lifetime of the actual
resource through the acquisition and release APIs.</para>
<para>C++ designs recognize that the acquisition and release APIs are error
prone and try to eliminate possible errors by acquiring the resource in the
constructor and releasing it in the destructor. The best example of such a
design is the std::iostream set of classes which can represent the same
resource as the FILE* type in C. A file is opened in the std::fstream's
constructor and closed in its destructor. However, if an iostream were
copyable it could lead to a file being closed twice, an obvious error, so
the std::iostream types are noncopyable by design. boost::thread uses the same
design: it is simple, easy to understand, and consistent with other standard
C++ types.</para>
<para>During the design of boost::thread it was pointed out that it would be
possible to allow it to be a copyable type if some form of "reference
management" were used, such as ref-counting or ref-lists, and many argued
for a boost::thread_ref design instead. The reasoning was that copying
"thread" objects was a typical need in the C libraries, and so presumably
would be in the C++ libraries as well. It was also thought that
implementations could provide more efficient reference management than
wrappers (such as boost::shared_ptr) around a noncopyable thread
concept. Analysis of these arguments, however, does not appear to bear them
out. To illustrate the analysis we'll first provide
pseudo-code illustrating the six typical usage patterns of a thread
object.</para>
<section id="thread.rationale.non-copyable.simple">
<title>1. Use case: Simple creation of a thread.</title>
<programlisting>
void foo()
{
create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.joined">
<title>2. Use case: Creation of a thread that's later joined.</title>
<programlisting>
void foo()
{
thread = create_thread(&amp;bar);
join(thread);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.loop">
<title>3. Use case: Simple creation of several threads in a loop.</title>
<programlisting>
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.loop-join">
<title>4. Use case: Creation of several threads in a loop which are later joined.</title>
<programlisting>
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i] = create_thread(&amp;bar);
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i].join();
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.pass">
<title>5. Use case: Creation of a thread whose ownership is passed to another object/method.</title>
<programlisting>
void foo()
{
thread = create_thread(&amp;bar);
manager.owns(thread);
}
</programlisting>
</section>
<section id="thread.rationale.non-copyable.shared">
<title>6. Use case: Creation of a thread whose ownership is shared between multiple
objects.</title>
<programlisting>
void foo()
{
thread = create_thread(&amp;bar);
manager1.add(thread);
manager2.add(thread);
}
</programlisting>
</section>
<para>Of these usage patterns there's only one that requires reference
management (number 6). Hopefully it's fairly obvious that this usage pattern
simply won't occur as often as the other usage patterns. So there really
isn't a "typical need" for reference management, though there is some
need.</para>
<para>Since the need isn't typical we must use different criteria for
deciding on either a thread_ref or thread design. Possible criteria include
ease of use and performance. So let's analyze both of these
carefully.</para>
<para>With ease of use we can look at existing experience. The standard C++
objects that represent a system resource, such as std::iostream, are
noncopyable, so we know that C++ programmers must at least be experienced
with this design. Most C++ developers are also used to smart pointers such
as boost::shared_ptr, so we know they can at least adapt to a thread_ref
concept with little effort. So existing experience isn't going to lead us to
a choice.</para>
<para>The other thing we can look at is how difficult it is to use both
types for the six usage patterns above. If we find it overly difficult to
use a concept for any of the usage patterns there would be a good argument
for choosing the other design. So we'll code all six usage patterns using
both designs.</para>
<section id="thread.rationale_comparison.non-copyable.simple">
<title>1. Comparison: simple creation of a thread.</title>
<programlisting>
void foo()
{
thread thrd(&amp;bar);
}
void foo()
{
thread_ref thrd = create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.joined">
<title>2. Comparison: creation of a thread that's later joined.</title>
<programlisting>
void foo()
{
thread thrd(&amp;bar);
thrd.join();
}
void foo()
{
    thread_ref thrd = create_thread(&amp;bar);
    thrd-&gt;join();
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.loop">
<title>3. Comparison: simple creation of several threads in a loop.</title>
<programlisting>
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
thread thrd(&amp;bar);
}
void foo()
{
for (int i=0; i&lt;NUM_THREADS; ++i)
thread_ref thrd = create_thread(&amp;bar);
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.loop-join">
<title>4. Comparison: creation of several threads in a loop which are later joined.</title>
<programlisting>
void foo()
{
std::auto_ptr&lt;thread&gt; threads[NUM_THREADS];
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i] = std::auto_ptr&lt;thread&gt;(new thread(&amp;bar));
    for (int i=0; i&lt;NUM_THREADS; ++i)
        threads[i]-&gt;join();
}
void foo()
{
thread_ref threads[NUM_THREADS];
for (int i=0; i&lt;NUM_THREADS; ++i)
threads[i] = create_thread(&amp;bar);
    for (int i=0; i&lt;NUM_THREADS; ++i)
        threads[i]-&gt;join();
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.pass">
<title>5. Comparison: creation of a thread whose ownership is passed to another object/method.</title>
<programlisting>
void foo()
{
    thread* thrd = new thread(&amp;bar);
    manager.owns(thrd);
}
void foo()
{
thread_ref thrd = create_thread(&amp;bar);
manager.owns(thrd);
}
</programlisting>
</section>
<section id="thread.rationale_comparison.non-copyable.shared">
<title>6. Comparison: creation of a thread whose ownership is shared
between multiple objects.</title>
<programlisting>
void foo()
{
boost::shared_ptr&lt;thread&gt; thrd(new thread(&amp;bar));
manager1.add(thrd);
manager2.add(thrd);
}
void foo()
{
thread_ref thrd = create_thread(&amp;bar);
manager1.add(thrd);
manager2.add(thrd);
}
</programlisting>
</section>
<para>This shows the usage patterns being nearly identical in complexity for
both designs. The only actual added complexity occurs because of the use of
operator new in
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>,
<link linkend="thread.rationale_comparison.non-copyable.pass">(5)</link>, and
<link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>;
and the use of std::auto_ptr and boost::shared_ptr in
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link> and
<link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>
respectively. However, that's not really
much added complexity, and C++ programmers are used to using these idioms
anyway. Some may dislike the presence of operator new in user code, but
this can be eliminated by proper design of higher level concepts, such as
the boost::thread_group class that simplifies example
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>
down to:</para>
<programlisting>
void foo()
{
thread_group threads;
for (int i=0; i&lt;NUM_THREADS; ++i)
threads.create_thread(&amp;bar);
threads.join_all();
}
</programlisting>
<para>So ease of use is really a wash and not much help in picking a
design.</para>
<para>So what about performance? Looking at the above code examples,
we can analyze the theoretical impact on performance of both designs.
For <link linkend="thread.rationale_comparison.non-copyable.simple">(1)</link>
we can see that platforms that don't have a ref-counted native
thread type (POSIX, for instance) will be impacted by a thread_ref
design. Even if the native thread type is ref-counted there may be an impact
if more state information has to be maintained for concepts foreign to the
native API, such as clean up stacks for Win32 implementations.
For <link linkend="thread.rationale_comparison.non-copyable.joined">(2)</link>
and <link linkend="thread.rationale_comparison.non-copyable.loop">(3)</link>
the performance impact will be identical to
<link linkend="thread.rationale_comparison.non-copyable.simple">(1)</link>.
For <link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>
things get a little more interesting and we find that theoretically at least
the thread_ref may perform faster since the thread design requires dynamic
memory allocation/deallocation. However, in practice there may be dynamic
allocation for the thread_ref design as well; it will just be hidden from
the user. As long as the implementation has to do dynamic allocations the
thread_ref loses again because of the reference management. For
<link linkend="thread.rationale_comparison.non-copyable.pass">(5)</link> we see
the same impact as we do for
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>.
For <link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>
we still have a possible impact to
the thread design because of dynamic allocation but thread_ref no longer
suffers because of its reference management, and in fact, theoretically at
least, the thread_ref may do a better job of managing the references. All of
this indicates that thread wins for
<link linkend="thread.rationale_comparison.non-copyable.simple">(1)</link>,
<link linkend="thread.rationale_comparison.non-copyable.joined">(2)</link> and
<link linkend="thread.rationale_comparison.non-copyable.loop">(3)</link>; with
<link linkend="thread.rationale_comparison.non-copyable.loop-join">(4)</link>
and <link linkend="thread.rationale_comparison.non-copyable.pass">(5)</link> the
winner depending on the implementation and the platform but with the thread design
probably having a better chance; and with
<link linkend="thread.rationale_comparison.non-copyable.shared">(6)</link>
it will again depend on the
implementation and platform but this time we favor thread_ref
slightly. Given all of this it's a narrow margin, but the thread design
prevails.</para>
<para>Given this analysis, and the fact that noncopyable objects for system
resources are the normal designs that C++ programmers are used to dealing
with, the &Boost.Thread; library has gone with a noncopyable design.</para>
</section>
<section id="thread.rationale.events">
<title>Rationale for not providing <emphasis>Event Variables</emphasis></title>
<para><emphasis>Event variables</emphasis> are simply far too
error-prone. <classname>boost::condition</classname> variables are a much safer
alternative. [Note that Graphical User Interface <emphasis>events</emphasis> are
a different concept, and are not what is being discussed here.]</para>
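<para>For illustration, here is a minimal sketch of the wait/notify pattern
that replaces an event variable, using <classname>boost::condition</classname>
together with a mutex and an explicit predicate (the names used are
hypothetical):</para>
<programlisting>
#include &lt;boost/thread/mutex.hpp&gt;
#include &lt;boost/thread/condition.hpp&gt;

boost::mutex guard;
boost::condition data_ready_cond;
bool data_ready = false;   // the explicit predicate removes the timing dependency

void consumer()
{
    boost::mutex::scoped_lock lock(guard);
    while (!data_ready)               // re-checked on every wakeup
        data_ready_cond.wait(lock);
    // ... use the data ...
}

void producer()
{
    {
        boost::mutex::scoped_lock lock(guard);
        data_ready = true;
    }
    data_ready_cond.notify_one();
}
</programlisting>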
<para>Event variables were one of the first synchronization primitives. They
are still used today, for example, in the native Windows multithreading
API. Yet both respected computer science researchers and experienced
multithreading practitioners believe event variables are so inherently
error-prone that they should never be used, and thus should not be part of a
multithreading library.</para>
<para>Per Brinch Hansen &cite.Hansen73; analyzed event variables in some
detail, pointing out [emphasis his] that "<emphasis>event operations force
the programmer to be aware of the relative speeds of the sending and
receiving processes</emphasis>". His summary:</para>
<blockquote>
<para>We must therefore conclude that event variables of the previous type
are impractical for system design. <emphasis>The effect of an interaction
between two processes must be independent of the speed at which it is
carried out.</emphasis></para>
</blockquote>
<para>Experienced programmers using the Windows platform today report that
event variables are a continuing source of errors, even after previous bad
experiences caused them to be very careful in their use of event
variables. Overt problems can be avoided, for example, by teaming the event
variable with a mutex, but that may just convert a <link
linkend="thread.glossary.race-condition">race condition</link> into another
problem, such as excessive resource use. One of the most distressing aspects
of the experience reports is the claim that many defects are latent. That
is, the programs appear to work correctly, but contain hidden timing
dependencies which will cause them to fail when environmental factors or
usage patterns change, altering relative thread timings.</para>
<para>The decision to exclude event variables from &Boost.Thread; has been
surprising to some Windows programmers. They have written programs which
work using event variables, and wonder what the problem is. It seems similar
to the "goto considered harmful" controversy of 30 years ago. It isn't that
events, like gotos, can't be made to work, but rather that virtually all
programs using alternatives will be easier to write, debug, read, maintain,
and will be less likely to contain latent defects.</para>
<para>[Rationale provided by Beman Dawes]</para>
</section>
</section>


@@ -1,492 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/read_write_mutex.hpp"
last-revision="$Date$">
<namespace name="boost">
<namespace name="read_write_scheduling_policy">
<enum name="read_write_scheduling_policy">
<enumvalue name="writer_priority" />
<enumvalue name="reader_priority" />
<enumvalue name="alternating_many_reads" />
<enumvalue name="alternating_single_read" />
<purpose>
<para>Specifies the
              <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
to use when a set of threads try to obtain different types of
locks simultaneously.</para>
</purpose>
            <description>
              <para>The enumeration values name the available
              <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policies</link>:
              <code>writer_priority</code> and <code>reader_priority</code>
              give precedence to pending write or read lock requests
              respectively, while <code>alternating_many_reads</code> and
              <code>alternating_single_read</code> alternate between read and
              write phases, admitting either all pending readers or a single
              reader per read phase.</para>
            </description>
</enum>
</namespace>
<class name="read_write_mutex">
<purpose>
<para>The <classname>read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.ReadWriteMutex">ReadWriteMutex</link> concept.</para>
        <note>Unfortunately it turned out that the current implementation of the
        Read/Write Mutex has some serious problems, so it was decided not to put
        this implementation into release-grade code. Discussions on the mailing
        list also led to the conclusion that the current concepts need to be
        rethought. In particular, the <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">
        Inter-Class Scheduling Policies</link> are deemed unnecessary; there
        seems to be a common belief that a fair scheme suffices.
        The following documentation has been retained, however, to give
        readers of this document the opportunity to study the original design.
        </note>
</purpose>
<description>
<para>The <classname>read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.ReadWriteMutex">ReadWriteMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>try_read_write_mutex</classname> and <classname>timed_read_write_mutex</classname>.</para>
<para>The <classname>read_write_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.read-write-lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedReadWriteLock">ScopedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>read_write_mutex</classname> class uses an
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>read_write_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.read-write-mutex-models">read/write mutex models</link>
in &Boost.Thread;, <classname>read_write_mutex</classname> has two types of
<link linkend="thread.concepts.read-write-scheduling-policies">scheduling policies</link>, an
        <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
        between threads trying to obtain different types of locks and an
        <link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class scheduling policy</link>
between threads trying to obtain the same type of lock.
The <classname>read_write_mutex</classname> class allows the
programmer to choose what
        <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
        will be used; however, like all read/write mutex models,
        <classname>read_write_mutex</classname> leaves the
        <link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class scheduling policy</link> as
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>.
</para>
<note>Self-deadlock is virtually guaranteed if a thread tries to
lock the same <classname>read_write_mutex</classname> multiple times
        unless all locks are read-locks (but see below).</note>
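        <para>The following sketch shows how the scoped lock typedefs are
        intended to be used. Since this mutex type has been withheld from
        release, treat it as illustrative only (the functions shown are
        hypothetical):</para>
<programlisting>
#include &lt;boost/thread/read_write_mutex.hpp&gt;

boost::read_write_mutex table_mutex(
    boost::read_write_scheduling_policy::writer_priority);

void lookup()
{
    boost::read_write_mutex::scoped_read_lock lock(table_mutex);
    // several threads may hold read locks at the same time
}

void modify()
{
    boost::read_write_mutex::scoped_write_lock lock(table_mutex);
    // a write lock excludes both readers and other writers
}
</programlisting>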
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
        <parameter name="sp">
          <paramtype>boost::read_write_scheduling_policy</paramtype>
        </parameter>
        <effects>Constructs a <classname>read_write_mutex</classname> object
        that uses the inter-class scheduling policy <code>sp</code>.
        </effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>read_write_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="try_read_write_mutex">
<purpose>
<para>The <classname>try_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryReadWriteMutex">TryReadWriteMutex</link> concept.</para>
        <note>Unfortunately it turned out that the current implementation of the
        Read/Write Mutex has some serious problems, so it was decided not to put
        this implementation into release-grade code. Discussions on the mailing
        list also led to the conclusion that the current concepts need to be
        rethought. In particular, the <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">
        Inter-Class Scheduling Policies</link> are deemed unnecessary; there
        seems to be a common belief that a fair scheme suffices.
        The following documentation has been retained, however, to give
        readers of this document the opportunity to study the original design.
        </note>
</purpose>
<description>
<para>The <classname>try_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryReadWriteMutex">TryReadWriteMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>read_write_mutex</classname> and <classname>timed_read_write_mutex</classname>.</para>
<para>The <classname>try_read_write_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.read-write-lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedReadWriteLock">ScopedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_try_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryReadWriteLock">ScopedTryReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>try_read_write_mutex</classname> class uses an
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>try_read_write_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.read-write-mutex-models">read/write mutex models</link>
in &Boost.Thread;, <classname>try_read_write_mutex</classname> has two types of
<link linkend="thread.concepts.read-write-scheduling-policies">scheduling policies</link>, an
        <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
        between threads trying to obtain different types of locks and an
        <link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class scheduling policy</link>
between threads trying to obtain the same type of lock.
The <classname>try_read_write_mutex</classname> class allows the
programmer to choose what
        <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
        will be used; however, like all read/write mutex models,
        <classname>try_read_write_mutex</classname> leaves the
        <link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class scheduling policy</link> as
<link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
</para>
<note>Self-deadlock is virtually guaranteed if a thread tries to
lock the same <classname>try_read_write_mutex</classname> multiple times
        unless all locks are read-locks (but see below).</note>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
        <parameter name="sp">
          <paramtype>boost::read_write_scheduling_policy</paramtype>
        </parameter>
        <effects>Constructs a <classname>try_read_write_mutex</classname> object
        that uses the inter-class scheduling policy <code>sp</code>.
        </effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>try_read_write_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="timed_read_write_mutex">
<purpose>
<para>The <classname>timed_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedReadWriteMutex">TimedReadWriteMutex</link> concept.</para>
        <note>Unfortunately it turned out that the current implementation of the
        Read/Write Mutex has some serious problems, so it was decided not to put
        this implementation into release-grade code. Discussions on the mailing
        list also led to the conclusion that the current concepts need to be
        rethought. In particular, the <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">
        Inter-Class Scheduling Policies</link> are deemed unnecessary; there
        seems to be a common belief that a fair scheme suffices.
        The following documentation has been retained, however, to give
        readers of this document the opportunity to study the original design.
        </note>
</purpose>
<description>
<para>The <classname>timed_read_write_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedReadWriteMutex">TimedReadWriteMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>read_write_mutex</classname> and <classname>try_read_write_mutex</classname>.</para>
<para>The <classname>timed_read_write_mutex</classname> class supplies the following typedefs,
which <link linkend="thread.concepts.read-write-lock-models">model</link>
the specified locking strategies:
<informaltable>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedReadWriteLock">ScopedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_try_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryReadWriteLock">ScopedTryReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_timed_read_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedReadWriteLock">ScopedTimedReadWriteLock</link></entry>
</row>
<row>
<entry>scoped_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_read_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
<row>
<entry>scoped_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_write_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
</tbody>
</tgroup>
</informaltable>
</para>
<para>The <classname>timed_read_write_mutex</classname> class uses an
<link linkend="thread.concepts.read-write-locking-strategies.unspecified">Unspecified</link>
locking strategy, so attempts to recursively lock a <classname>timed_read_write_mutex</classname>
object or attempts to unlock one by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.
This strategy allows implementations to be as efficient as possible
on any given platform. It is, however, recommended that
implementations include debugging support to detect misuse when
<code>NDEBUG</code> is not defined.</para>
<para>Like all
<link linkend="thread.concepts.read-write-mutex-models">read/write mutex models</link>
in &Boost.Thread;, <classname>timed_read_write_mutex</classname> has two types of
<link linkend="thread.concepts.read-write-scheduling-policies">scheduling policies</link>, an
        <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
        between threads trying to obtain different types of locks and an
        <link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class scheduling policy</link>
between threads trying to obtain the same type of lock.
The <classname>timed_read_write_mutex</classname> class allows the
programmer to choose what
        <link linkend="thread.concepts.read-write-scheduling-policies.inter-class">inter-class scheduling policy</link>
        will be used; however, like all read/write mutex models,
        <classname>timed_read_write_mutex</classname> leaves the
        <link linkend="thread.concepts.read-write-scheduling-policies.intra-class">intra-class scheduling policy</link> as
<link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
</para>
<note>Self-deadlock is virtually guaranteed if a thread tries to
lock the same <classname>timed_read_write_mutex</classname> multiple times
        unless all locks are read-locks (but see below).</note>
</description>
<typedef name="scoped_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_read_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_read_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_write_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
        <parameter name="sp">
          <paramtype>boost::read_write_scheduling_policy</paramtype>
        </parameter>
        <effects>Constructs a <classname>timed_read_write_mutex</classname> object
        that uses the inter-class scheduling policy <code>sp</code>.
        </effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>timed_read_write_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
</namespace>
</header>


@@ -1,306 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/recursive_mutex.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="recursive_mutex">
<purpose>
<para>The <classname>recursive_mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>recursive_mutex</classname> class is a model of the
<link linkend="thread.concepts.Mutex">Mutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>recursive_try_mutex</classname> and <classname>recursive_timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics, see <classname>mutex</classname>,
<classname>try_mutex</classname>, and <classname>timed_mutex</classname>.
</para>
<para>The <classname>recursive_mutex</classname> class supplies the following typedef,
which models the specified locking strategy:
<table>
<title>Supported Lock Types</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
</tbody>
</tgroup>
</table>
</para>
<para>The <classname>recursive_mutex</classname> class uses a
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking strategy, so attempts to recursively lock a
<classname>recursive_mutex</classname> object
succeed and an internal "lock count" is maintained.
Attempts to unlock a <classname>recursive_mutex</classname> object
by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>recursive_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
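        <para>For illustration, a minimal sketch of recursive locking (the
        functions shown are hypothetical):</para>
<programlisting>
#include &lt;boost/thread/recursive_mutex.hpp&gt;

boost::recursive_mutex cache_mutex;

void refresh()
{
    // When called from update() the lock count becomes 2; the mutex is
    // released only when the count returns to 0.
    boost::recursive_mutex::scoped_lock lock(cache_mutex);
    // ... rebuild the cached data ...
}

void update()
{
    boost::recursive_mutex::scoped_lock lock(cache_mutex);
    refresh();   // locking the same mutex again from the same thread succeeds
}
</programlisting>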
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>recursive_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>recursive_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="recursive_try_mutex">
<purpose>
<para>The <classname>recursive_try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>recursive_try_mutex</classname> class is a model of the
<link linkend="thread.concepts.TryMutex">TryMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>recursive_mutex</classname> and <classname>recursive_timed_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics, see <classname>mutex</classname>,
<classname>try_mutex</classname>, and <classname>timed_mutex</classname>.
</para>
<para>The <classname>recursive_try_mutex</classname> class supplies the following typedefs,
which model the specified locking strategies:
<table>
<title>Supported Lock Types</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
</tbody>
</tgroup>
</table>
</para>
<para>The <classname>recursive_try_mutex</classname> class uses a
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking strategy, so attempts to recursively lock a
<classname>recursive_try_mutex</classname> object
succeed and an internal "lock count" is maintained.
        Attempts to unlock a <classname>recursive_try_mutex</classname> object
by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>recursive_try_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>recursive_try_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>recursive_try_mutex</classname> object.
</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
<class name="recursive_timed_mutex">
<purpose>
<para>The <classname>recursive_timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.</para>
</purpose>
<description>
<para>The <classname>recursive_timed_mutex</classname> class is a model of the
<link linkend="thread.concepts.TimedMutex">TimedMutex</link> concept.
It should be used to synchronize access to shared resources using
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking mechanics.</para>
<para>For classes that model related mutex concepts, see
<classname>recursive_mutex</classname> and <classname>recursive_try_mutex</classname>.</para>
<para>For <link linkend="thread.concepts.unspecified-locking-strategy">Unspecified</link>
locking mechanics, see <classname>mutex</classname>,
<classname>try_mutex</classname>, and <classname>timed_mutex</classname>.
</para>
<para>The <classname>recursive_timed_mutex</classname> class supplies the following typedefs,
which model the specified locking strategies:
<table>
<title>Supported Lock Types</title>
<tgroup cols="2" align="left">
<thead>
<row>
<entry>Lock Name</entry>
<entry>Lock Concept</entry>
</row>
</thead>
<tbody>
<row>
<entry>scoped_lock</entry>
<entry><link linkend="thread.concepts.ScopedLock">ScopedLock</link></entry>
</row>
<row>
<entry>scoped_try_lock</entry>
<entry><link linkend="thread.concepts.ScopedTryLock">ScopedTryLock</link></entry>
</row>
<row>
<entry>scoped_timed_lock</entry>
<entry><link linkend="thread.concepts.ScopedTimedLock">ScopedTimedLock</link></entry>
</row>
</tbody>
</tgroup>
</table>
</para>
<para>The <classname>recursive_timed_mutex</classname> class uses a
<link linkend="thread.concepts.recursive-locking-strategy">Recursive</link>
locking strategy, so attempts to recursively lock a
<classname>recursive_timed_mutex</classname> object
succeed and an internal "lock count" is maintained.
        Attempts to unlock a <classname>recursive_timed_mutex</classname> object
by threads that don't own a lock on it result in
<emphasis role="bold">undefined behavior</emphasis>.</para>
<para>Like all
<link linkend="thread.concepts.mutex-models">mutex models</link>
in &Boost.Thread;, <classname>recursive_timed_mutex</classname> leaves the
<link linkend="thread.concepts.sheduling-policies">scheduling policy</link>
as <link linkend="thread.concepts.unspecified-scheduling-policy">Unspecified</link>.
Programmers should make no assumptions about the order in which
waiting threads acquire a lock.</para>
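        <para>For illustration, a sketch of a timed lock attempt, assuming (per
        the ScopedTimedLock concept) that the two-argument
        <code>scoped_timed_lock</code> constructor attempts the lock until the
        given <classname>xtime</classname> and that success can be queried with
        <code>locked()</code>:</para>
<programlisting>
#include &lt;boost/thread/recursive_mutex.hpp&gt;
#include &lt;boost/thread/xtime.hpp&gt;

boost::recursive_timed_mutex m;

void try_for_one_second()
{
    boost::xtime timeout;
    boost::xtime_get(&amp;timeout, boost::TIME_UTC);
    timeout.sec += 1;

    boost::recursive_timed_mutex::scoped_timed_lock lock(m, timeout);
    if (lock.locked())
    {
        // the lock was acquired within one second
    }
}
</programlisting>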
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<typedef name="scoped_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_try_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<typedef name="scoped_timed_lock">
<type><emphasis>implementation-defined</emphasis></type>
</typedef>
<constructor>
<effects>Constructs a <classname>recursive_timed_mutex</classname> object.
</effects>
<postconditions><code>*this</code> is in an unlocked state.
</postconditions>
</constructor>
<destructor>
<effects>Destroys a <classname>recursive_timed_mutex</classname> object.</effects>
<requires><code>*this</code> is in an unlocked state.</requires>
<notes><emphasis role="bold">Danger:</emphasis> Destruction of a
locked mutex is a serious programming error resulting in undefined
behavior such as a program crash.</notes>
</destructor>
</class>
</namespace>
</header>


@@ -1,30 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<library-reference id="thread.reference"
last-revision="$Date$"
xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="barrier-ref.xml"/>
<xi:include href="condition-ref.xml"/>
<xi:include href="exceptions-ref.xml"/>
<xi:include href="mutex-ref.xml"/>
<xi:include href="once-ref.xml"/>
<xi:include href="recursive_mutex-ref.xml"/>
<!--
The read_write_mutex is held back from release, since the
implementation suffers from a serious, yet unresolved bug.
The implementation is likely to appear in a reworked
form in the next release.
-->
<xi:include href="read_write_mutex-ref.xml"/>
<xi:include href="thread-ref.xml"/>
<xi:include href="tss-ref.xml"/>
<xi:include href="xtime-ref.xml"/>
</library-reference>


@@ -1,204 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<section id="thread.release_notes" last-revision="$Date$">
<title>Release Notes</title>
<section id="thread.release_notes.boost_1_34_0">
<title>Boost 1.34.0</title>
<section id="thread.release_notes.boost_1_34_0.change_log.maintainance">
<title>New team of maintainers</title>
<para>
        Since the original author, William E. Kempf, is no longer available to
        maintain the &Boost.Thread; library, a new team has been formed
        to continue the work on &Boost.Thread;.
        Fortunately, William E. Kempf has given
        <ulink url="http://lists.boost.org/Archives/boost/2006/09/110143.php">
        permission</ulink>
        to use his work under the Boost license.
</para>
<para>
The team currently consists of
<itemizedlist>
<listitem>
Anthony Williams, for the Win32 platform,
</listitem>
<listitem>
        Roland Schwarz, for the Linux platform, and various "housekeeping" tasks.
</listitem>
</itemizedlist>
Volunteers for other platforms are welcome!
</para>
<para>
        As &Boost.Thread; was left largely unmaintained during the previous
        release cycle, this release concentrates on fixing the known bugs.
        Upcoming releases will bring new features.
</para>
</section>
<section id="thread.release_notes.boost_1_34_0.change_log.read_write_mutex">
<title>read_write_mutex still broken</title>
<para>
<note>
It has been decided not to release the Read/Write Mutex, since the current
implementation suffers from a serious bug. The documentation of the concepts
      has been included, though, giving the interested reader an opportunity to study the
      original concepts. Please refer to the following links if you are interested
      in which problems led to the decision to hold back this mutex type. The issue
      was discovered before 1.33 was released, and the code was omitted from that
      release. A reworked mutex is expected to appear in 1.35.
Also see:
<ulink url="http://lists.boost.org/Archives/boost/2005/08/92307.php">
read_write_mutex bug</ulink>
and
<ulink url="http://lists.boost.org/Archives/boost/2005/09/93180.php">
read_write_mutex fundamentally broken in 1.33</ulink>
</note>
</para>
</section>
</section>
<section id="thread.release_notes.boost_1_32_0">
<title>Boost 1.32.0</title>
<section id="thread.release_notes.boost_1_32_0.change_log.documentation">
<title>Documentation converted to BoostBook</title>
<para>The documentation was converted to BoostBook format,
and a number of errors and inconsistencies were
fixed in the process.
Since this was a fairly large task, there are likely to be
more errors and inconsistencies remaining. If you find any,
please report them!</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.static_link">
<title>Statically-link build option added</title>
<para>The option to link &Boost.Thread; as a static
library has been added (with some limitations on Win32 platforms).
This feature was originally removed from an earlier version
of Boost because <classname>boost::thread_specific_ptr</classname>
required that &Boost.Thread; be dynamically linked in order
for its cleanup functionality to work on Win32 platforms.
Because this limitation never applied to non-Win32 platforms,
because significant progress has been made in removing
the limitation on Win32 platforms (many thanks to
      Aaron LaFramboise and Roland Schwarz!), and because the lack
of static linking is one of the most common complaints of
&Boost.Thread; users, this decision was reversed.</para>
<para>On non-Win32 platforms:
To choose the dynamically linked version of &Boost.Thread;
using Boost's auto-linking feature, #define BOOST_THREAD_USE_DLL;
to choose the statically linked version,
#define BOOST_THREAD_USE_LIB.
      If neither symbol is #defined, the default will be chosen.
Currently the default is the statically linked version.</para>
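      <para>For example, to request the statically linked library, define the
      macro before including any &Boost.Thread; header (typically as a
      project-wide compiler definition):</para>
<programlisting>
// Select the statically linked Boost.Thread for auto-linking purposes.
#define BOOST_THREAD_USE_LIB
#include &lt;boost/thread/thread.hpp&gt;
</programlisting>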
<para>On Win32 platforms using VC++:
Use the same #defines as for non-Win32 platforms
(BOOST_THREAD_USE_DLL and BOOST_THREAD_USE_LIB).
If neither is #defined, the default will be chosen.
Currently the default is the statically linked version
if the VC++ run-time library is set to
"Multi-threaded" or "Multi-threaded Debug", and
the dynamically linked version
if the VC++ run-time library is set to
"Multi-threaded DLL" or "Multi-threaded Debug DLL".</para>
<para>On Win32 platforms using compilers other than VC++:
Use the same #defines as for non-Win32 platforms
(BOOST_THREAD_USE_DLL and BOOST_THREAD_USE_LIB).
If neither is #defined, the default will be chosen.
Currently the default is the dynamically linked version
because it has not yet been possible to implement automatic
tss cleanup in the statically linked version for compilers
other than VC++, although it is hoped that this will be
possible in a future version of &Boost.Thread;.
Note for advanced users: &Boost.Thread; provides several "hook"
functions to allow users to experiment with the statically
linked version on Win32 with compilers other than VC++.
These functions are on_process_enter(), on_process_exit(),
on_thread_enter(), and on_thread_exit(), and are defined
in tls_hooks.cpp. See the comments in that file for more
information.</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.barrier">
<title>Barrier functionality added</title>
<para>A new class, <classname>boost::barrier</classname>, was added.</para>
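      <para>For illustration, a minimal sketch of its use (the function shown
      is hypothetical):</para>
<programlisting>
#include &lt;boost/thread/barrier.hpp&gt;

boost::barrier rendezvous(2);   // two threads take part in the rendezvous

void worker()
{
    // ... phase one of the work ...
    rendezvous.wait();          // blocks until both threads have called wait()
    // ... phase two, which may rely on phase one being complete in both threads ...
}
</programlisting>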
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.read_write_mutex">
<title>Read/write mutex functionality added</title>
<para>New classes,
<classname>boost::read_write_mutex</classname>,
<classname>boost::try_read_write_mutex</classname>, and
<classname>boost::timed_read_write_mutex</classname>
were added.
<note>Since the read/write mutex and related classes are new,
both interface and implementation are liable to change
in future releases of &Boost.Thread;.
The lock concepts and lock promotion in particular are
still under discussion and very likely to change.</note>
</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.thread_specific_ptr">
<title>Thread-specific pointer functionality changed</title>
<para>The <classname>boost::thread_specific_ptr</classname>
constructor now takes an optional pointer to a cleanup function that
is called to release the thread-specific data that is being pointed
to by <classname>boost::thread_specific_ptr</classname> objects.</para>
<para>Fixed: the number of available thread-specific storage "slots"
is too small on some platforms.</para>
<para>Fixed: <functionname>thread_specific_ptr::reset()</functionname>
doesn't check error returned by <functionname>tss::set()</functionname>
(the <functionname>tss::set()</functionname> function now throws
if it fails instead of returning an error code).</para>
<para>Fixed: calling
<functionname>boost::thread_specific_ptr::reset()</functionname> or
<functionname>boost::thread_specific_ptr::release()</functionname>
causes double-delete: once when
<functionname>boost::thread_specific_ptr::reset()</functionname> or
<functionname>boost::thread_specific_ptr::release()</functionname>
is called and once when
<functionname>boost::thread_specific_ptr::~thread_specific_ptr()</functionname>
is called.</para>
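      <para>For illustration, a minimal sketch of the new constructor (the
      names used are hypothetical):</para>
<programlisting>
#include &lt;boost/thread/tss.hpp&gt;

void destroy_counter(int* p)
{
    delete p;                       // called for each thread's instance on thread exit
}

boost::thread_specific_ptr&lt;int&gt; counter(&amp;destroy_counter);

void count_event()
{
    if (counter.get() == 0)
        counter.reset(new int(0));  // each thread gets its own counter
    ++*counter;
}
</programlisting>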
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.mutex">
<title>Mutex implementation changed for Win32</title>
<para>On Win32, <classname>boost::mutex</classname>,
<classname>boost::try_mutex</classname>, <classname>boost::recursive_mutex</classname>,
and <classname>boost::recursive_try_mutex</classname> now use a Win32 critical section
whenever possible; otherwise they use a Win32 mutex. As before,
<classname>boost::timed_mutex</classname> and
<classname>boost::recursive_timed_mutex</classname> use a Win32 mutex.</para>
</section>
<section id="thread.release_notes.boost_1_32_0.change_log.wince">
<title>Windows CE support improved</title>
<para>Minor changes were made to make Boost.Thread work on Windows CE.</para>
</section>
</section>
</section>


@@ -1,14 +1,5 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:shared_mutex Class `shared_mutex`]
#include <boost/thread/shared_mutex.hpp>
class shared_mutex
{
public:


@@ -1,270 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/thread.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="thread">
<purpose>
<para>The <classname>thread</classname> class represents threads of
execution, and provides the functionality to create and manage
threads within the &Boost.Thread; library. See
<xref linkend="thread.glossary"/> for a precise description of
<link linkend="thread.glossary.thread">thread of execution</link>,
and for definitions of threading-related terms and of thread states such as
<link linkend="thread.glossary.thread-state">blocked</link>.</para>
</purpose>
<description>
<para>A <link linkend="thread.glossary.thread">thread of execution</link>
has an initial function. For the program's initial thread, the initial
function is <code>main()</code>. For other threads, the initial
function is <code>operator()</code> of the function object passed to
the <classname>thread</classname> object's constructor.</para>
<para>A thread of execution is said to be &quot;finished&quot;
or to have &quot;finished execution&quot; when its initial function returns or
is terminated. This includes completion of all thread cleanup
handlers, and completion of the normal C++ function return behaviors,
such as destruction of automatic storage (stack) objects and releasing
any associated implementation resources.</para>
<para>A thread object has an associated state which is either
&quot;joinable&quot; or &quot;non-joinable&quot;.</para>
<para>Except as described below, the policy used by an implementation
of &Boost.Thread; to schedule transitions between thread states is
unspecified.</para>
<para><note>Just as the lifetime of a file may be different from the
lifetime of an <code>iostream</code> object which represents the file, the lifetime
of a thread of execution may be different from the
<classname>thread</classname> object which represents the thread of
execution. In particular, after a call to <code>join()</code>,
the thread of execution will no longer exist even though the
<classname>thread</classname> object continues to exist until the
end of its normal lifetime. The converse is also possible; if
a <classname>thread</classname> object is destroyed without
<code>join()</code> first having been called, the thread of execution
continues until its initial function completes.</note></para>
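      <para>For illustration, a minimal sketch of creating and joining a thread
      (the function shown is hypothetical):</para>
<programlisting>
#include &lt;boost/thread/thread.hpp&gt;

void work()
{
    // ... the new thread's initial function ...
}

int main()
{
    boost::thread thrd(&amp;work);  // start a new thread of execution; thrd is joinable
    thrd.join();                // wait for work() to finish; thrd becomes non-joinable
    return 0;
}
</programlisting>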
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<constructor>
<effects>Constructs a <classname>thread</classname> object
representing the current thread of execution.</effects>
<postconditions><code>*this</code> is non-joinable.</postconditions>
<notes><emphasis role="bold">Danger:</emphasis>
<code>*this</code> is valid only within the current thread.</notes>
</constructor>
<constructor specifiers="explicit">
<parameter name="threadfunc">
<paramtype>const boost::function0&lt;void&gt;&amp;</paramtype>
</parameter>
<effects>
Starts a new thread of execution and constructs a
<classname>thread</classname> object representing it.
Copies <code>threadfunc</code> (which in turn copies
the function object wrapped by <code>threadfunc</code>)
to an internal location which persists for the lifetime
of the new thread of execution. Calls <code>operator()</code>
on the copy of the <code>threadfunc</code> function object
in the new thread of execution.
</effects>
<postconditions><code>*this</code> is joinable.</postconditions>
<throws><code>boost::thread_resource_error</code> if a new thread
of execution cannot be started.</throws>
</constructor>
<destructor>
<effects>Destroys <code>*this</code>. The actual thread of
execution may continue to execute after the
<classname>thread</classname> object has been destroyed.
</effects>
<notes>If <code>*this</code> is joinable the actual thread
of execution becomes &quot;detached&quot;. Any resources used
by the thread will be reclaimed when the thread of execution
completes. To ensure such a thread of execution runs to completion
before the <classname>thread</classname> object is destroyed, call
<code>join()</code>.</notes>
</destructor>
<method-group name="comparison">
<method name="operator==" cv="const">
<type>bool</type>
<parameter name="rhs">
<type>const thread&amp;</type>
</parameter>
<requires>The thread is non-terminated or <code>*this</code>
is joinable.</requires>
<returns><code>true</code> if <code>*this</code> and
<code>rhs</code> represent the same thread of
execution.</returns>
</method>
<method name="operator!=" cv="const">
<type>bool</type>
<parameter name="rhs">
<type>const thread&amp;</type>
</parameter>
<requires>The thread is non-terminated or <code>*this</code>
is joinable.</requires>
<returns><code>!(*this==rhs)</code>.</returns>
</method>
</method-group>
<method-group name="modifier">
<method name="join">
<type>void</type>
<requires><code>*this</code> is joinable.</requires>
<effects>The current thread of execution blocks until the
initial function of the thread of execution represented by
<code>*this</code> finishes and all resources are
reclaimed.</effects>
<postcondition><code>*this</code> is non-joinable.</postcondition>
<notes>If <code>*this == thread()</code> the result is
implementation-defined. If the implementation doesn't
detect this the result will be
<link linkend="thread.glossary.deadlock">deadlock</link>.
</notes>
</method>
</method-group>
<method-group name="static">
<method name="sleep" specifiers="static">
<type>void</type>
<parameter name="xt">
<paramtype>const <classname>xtime</classname>&amp;</paramtype>
</parameter>
<effects>The current thread of execution blocks until
<code>xt</code> is reached.</effects>
</method>
<method name="yield" specifiers="static">
<type>void</type>
<effects>The current thread of execution is placed in the
<link linkend="thread.glossary.thread-state">ready</link>
state.</effects>
<notes>
<simpara>Allow the current thread to give up the rest of its
time slice (or other scheduling quota) to another thread.
Particularly useful in non-preemptive implementations.</simpara>
</notes>
</method>
</method-group>
</class>
<class name="thread_group">
<purpose>
The <classname>thread_group</classname> class provides a container
for easy grouping of threads to simplify several common thread
creation and management idioms.
</purpose>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<constructor>
<effects>Constructs an empty <classname>thread_group</classname>
container.</effects>
</constructor>
<destructor>
<effects>Destroys each contained thread object. Destroys <code>*this</code>.</effects>
<notes>Behavior is undefined if another thread references
<code>*this </code> during the execution of the destructor.
</notes>
</destructor>
<method-group name="modifier">
<method name="create_thread">
<type><classname>thread</classname>*</type>
<parameter name="threadfunc">
<paramtype>const boost::function0&lt;void&gt;&amp;</paramtype>
</parameter>
<effects>Creates a new <classname>thread</classname> object
that executes <code>threadfunc</code> and adds it to the
<code>thread_group</code> container object's list of managed
<classname>thread</classname> objects.</effects>
<returns>Pointer to the newly created
<classname>thread</classname> object.</returns>
</method>
<method name="add_thread">
<type>void</type>
<parameter name="thrd">
<paramtype><classname>thread</classname>*</paramtype>
</parameter>
<effects>Adds <code>thrd</code> to the
<classname>thread_group</classname> object's list of managed
<classname>thread</classname> objects. The <code>thrd</code>
object must have been allocated via <code>operator new</code> and will
be deleted when the group is destroyed.</effects>
</method>
<method name="remove_thread">
<type>void</type>
<parameter name="thrd">
<paramtype><classname>thread</classname>*</paramtype>
</parameter>
<effects>Removes <code>thrd</code> from <code>*this</code>'s
list of managed <classname>thread</classname> objects.</effects>
<throws><emphasis role="bold">???</emphasis> if
<code>thrd</code> is not in <code>*this</code>'s list
of managed <classname>thread</classname> objects.</throws>
</method>
<method name="join_all">
<type>void</type>
<effects>Calls <code>join()</code> on each of the managed
<classname>thread</classname> objects.</effects>
</method>
</method-group>
</class>
</namespace>
</header>
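A minimal usage sketch of the interfaces documented above, assuming only the Boost.Thread headers of this era; the function names are illustrative:

    #include <boost/thread/thread.hpp>
    #include <iostream>

    void hello()                          // illustrative initial function
    {
        std::cout << "hello from a worker thread" << std::endl;
    }

    int main()
    {
        boost::thread worker(hello);      // start a new thread of execution
        worker.join();                    // block until hello() has finished

        boost::thread_group group;
        for(int i = 0; i < 3; ++i)
            group.create_thread(hello);   // threads are owned by the group
        group.join_all();                 // join every managed thread
        return 0;
    }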

View File

@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[article Thread
[quickbook 1.4]
[authors [Williams, Anthony]]
@@ -38,12 +31,6 @@
[template lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.lock [link_text]]]
[def __lock_ref__ [lock_ref_link `lock()`]]
[template lock_multiple_ref_link[link_text] [link thread.synchronization.lock_functions.lock_multiple [link_text]]]
[def __lock_multiple_ref__ [lock_multiple_ref_link `lock()`]]
[template try_lock_multiple_ref_link[link_text] [link thread.synchronization.lock_functions.try_lock_multiple [link_text]]]
[def __try_lock_multiple_ref__ [try_lock_multiple_ref_link `try_lock()`]]
[template unlock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.unlock [link_text]]]
[def __unlock_ref__ [unlock_ref_link `unlock()`]]
@@ -107,10 +94,8 @@
[def __recursive_timed_mutex__ [link thread.synchronization.mutex_types.recursive_timed_mutex `boost::recursive_timed_mutex`]]
[def __shared_mutex__ [link thread.synchronization.mutex_types.shared_mutex `boost::shared_mutex`]]
[template unique_lock_link[link_text] [link thread.synchronization.locks.unique_lock [link_text]]]
[def __lock_guard__ [link thread.synchronization.locks.lock_guard `boost::lock_guard`]]
[def __unique_lock__ [unique_lock_link `boost::unique_lock`]]
[def __unique_lock__ [link thread.synchronization.locks.unique_lock `boost::unique_lock`]]
[def __shared_lock__ [link thread.synchronization.locks.shared_lock `boost::shared_lock`]]
[def __upgrade_lock__ [link thread.synchronization.locks.upgrade_lock `boost::upgrade_lock`]]
[def __upgrade_to_unique_lock__ [link thread.synchronization.locks.upgrade_to_unique_lock `boost::upgrade_to_unique_lock`]]

View File

@@ -1,48 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<library name="Thread" dirname="thread" id="thread"
last-revision="$Date$"
xmlns:xi="http://www.w3.org/2001/XInclude">
<libraryinfo>
<author>
<firstname>William</firstname>
<othername>E.</othername>
<surname>Kempf</surname>
</author>
<copyright>
<year>2001</year>
<year>2002</year>
<year>2003</year>
<holder>William E. Kempf</holder>
</copyright>
<legalnotice>
<para>Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)</para>
</legalnotice>
<librarypurpose>Portable C++ multi-threading</librarypurpose>
<librarycategory name="category:concurrent" />
<title>Boost.Thread</title>
</libraryinfo>
<title>Boost.Thread</title>
<xi:include href="overview.xml"/>
<xi:include href="design.xml"/>
<xi:include href="concepts.xml"/>
<xi:include href="rationale.xml"/>
<xi:include href="reference.xml"/>
<xi:include href="faq.xml"/>
<xi:include href="configuration.xml"/>
<xi:include href="build.xml"/>
<xi:include href="implementation_notes.xml"/>
<xi:include href="release_notes.xml"/>
<xi:include href="glossary.xml"/>
<xi:include href="acknowledgements.xml"/>
<xi:include href="bibliography.xml"/>
</library>

View File

@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:thread_management Thread Management]
[heading Synopsis]
@@ -165,8 +158,6 @@ __thread_id__ yield a total order for every non-equal thread ID.
[section:thread Class `thread`]
#include <boost/thread/thread.hpp>
class thread
{
public:
@@ -176,9 +167,6 @@ __thread_id__ yield a total order for every non-equal thread ID.
template <class F>
explicit thread(F f);
template <class F,class A1,class A2,...>
thread(F f,A1 a1,A2 a2,...);
template <class F>
thread(detail::thread_move_t<F> f);
@@ -255,30 +243,6 @@ not of type __thread_interrupted__, then `std::terminate()` will be called.]]
[endsect]
[section:multiple_argument_constructor Thread Constructor with arguments]
template <class F,class A1,class A2,...>
thread(F f,A1 a1,A2 a2,...);
[variablelist
[[Preconditions:] [`F` and each `A`n must be copyable or movable.]]
[[Effects:] [As if [link
thread.thread_management.thread.callable_constructor
`thread(boost::bind(f,a1,a2,...))`]. Consequently, `f` and each `a`n
are copied into internal storage for access by the new thread.]]
[[Postconditions:] [`*this` refers to the newly created thread of execution.]]
[[Throws:] [__thread_resource_error__ if an error occurs.]]
[[Note:] [Currently up to nine additional arguments `a1` to `a9` can be specified in addition to the function `f`.]]
]
[endsect]
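For illustration, a sketch of the equivalence described above; `process` and its arguments are hypothetical:

    #include <boost/thread/thread.hpp>
    #include <boost/bind.hpp>
    #include <string>

    void process(std::string const& name,int count)     // hypothetical worker function
    {
        // ... use name and count ...
    }

    void spawn()
    {
        boost::thread a(process,"log",3);                // arguments copied for the new thread
        boost::thread b(boost::bind(process,"log",3));   // the equivalent bind form
        a.join();
        b.join();
    }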
[section:destructor Thread Destructor]
~thread();
@@ -509,8 +473,6 @@ value as `this->get_id()` prior to the call.]]
[section:non_member_swap Non-member function `swap()`]
#include <boost/thread/thread.hpp>
void swap(thread& lhs,thread& rhs);
[variablelist
@@ -524,8 +486,6 @@ value as `this->get_id()` prior to the call.]]
[section:id Class `boost::thread::id`]
#include <boost/thread/thread.hpp>
class thread::id
{
public:
@@ -674,8 +634,6 @@ instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `
[section:get_id Non-member function `get_id()`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
thread::id get_id();
@@ -693,8 +651,6 @@ instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `
[section:interruption_point Non-member function `interruption_point()`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
void interruption_point();
@@ -712,8 +668,6 @@ instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `
[section:interruption_requested Non-member function `interruption_requested()`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
bool interruption_requested();
@@ -731,8 +685,6 @@ instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `
[section:interruption_enabled Non-member function `interruption_enabled()`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
bool interruption_enabled();
@@ -750,8 +702,6 @@ instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `
[section:sleep Non-member function `sleep()`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
template<typename TimeDuration>
@@ -772,8 +722,6 @@ instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `
[section:yield Non-member function `yield()`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
void yield();
@@ -791,8 +739,6 @@ instances of __thread_id__ `a` and `b` is the same if `a==b`, and different if `
[section:disable_interruption Class `disable_interruption`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
class disable_interruption
@@ -844,8 +790,6 @@ interruption state on destruction. Instances of `disable_interruption` cannot be
[section:restore_interruption Class `restore_interruption`]
#include <boost/thread/thread.hpp>
namespace this_thread
{
class restore_interruption
@@ -900,16 +844,12 @@ is destroyed, interruption is again disabled. Instances of `restore_interruption
[section:atthreadexit Non-member function template `at_thread_exit()`]
#include <boost/thread/thread.hpp>
template<typename Callable>
void at_thread_exit(Callable func);
[variablelist
[[Effects:] [A copy of `func` is placed in
thread-specific storage. This copy is invoked when the current thread
exits (even if the thread has been interrupted).]]
[[Effects:] [A copy of `func` is taken and stored in thread-specific storage. This copy is invoked when the current thread exits.]]
[[Postconditions:] [A copy of `func` has been saved for invocation on thread exit.]]
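A short sketch of registering an exit handler from inside the thread whose exit should trigger it; the handler name is illustrative, and the `boost::this_thread` qualification follows the spelling used in later Boost releases:

    #include <boost/thread/thread.hpp>
    #include <iostream>

    void report_exit()                                    // illustrative cleanup handler
    {
        std::cout << "worker thread finished" << std::endl;
    }

    void worker()
    {
        boost::this_thread::at_thread_exit(report_exit);  // invoked when this thread exits
        // ... thread work ...
    }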
@@ -924,8 +864,6 @@ error occurs within the thread library. Any exception thrown whilst copying `fun
[section:threadgroup Class `thread_group`]
#include <boost/thread/thread.hpp>
class thread_group:
private noncopyable
{
@@ -933,8 +871,7 @@ error occurs within the thread library. Any exception thrown whilst copying `fun
thread_group();
~thread_group();
template<typename F>
thread* create_thread(F threadfunc);
thread* create_thread(const function0<void>& threadfunc);
void add_thread(thread* thrd);
void remove_thread(thread* thrd);
void join_all();
@@ -971,8 +908,7 @@ error occurs within the thread library. Any exception thrown whilst copying `fun
[section:create_thread Member function `create_thread()`]
template<typename F>
thread* create_thread(F threadfunc);
thread* create_thread(const function0<void>& threadfunc);
[variablelist

View File

@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section:time Date and Time Requirements]
As of Boost 1.35.0, the __boost_thread__ library uses the [link date_time Boost.Date_Time] library for all operations that require a
@@ -47,8 +40,6 @@ date_time.posix_time.time_duration Boost.Date_Time Time Duration requirements] c
[section:system_time Typedef `system_time`]
#include <boost/thread/thread_time.hpp>
typedef boost::posix_time::ptime system_time;
See the documentation for [link date_time.posix_time.ptime_class `boost::posix_time::ptime`] in the Boost.Date_Time library.
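A brief sketch of building an absolute timeout from `system_time`, assuming the Boost.Date_Time duration types pulled in by this header:

    #include <boost/thread/thread.hpp>
    #include <boost/thread/thread_time.hpp>

    void pause_for_one_second()
    {
        boost::system_time const deadline =
            boost::get_system_time() + boost::posix_time::seconds(1);
        boost::this_thread::sleep(deadline);   // sleep until the absolute time is reached
    }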
@@ -57,8 +48,6 @@ See the documentation for [link date_time.posix_time.ptime_class `boost::posix_t
[section:get_system_time Non-member function `get_system_time()`]
#include <boost/thread/thread_time.hpp>
system_time get_system_time();
[variablelist

View File

@@ -1,206 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/tss.hpp"
last-revision="$Date$">
<namespace name="boost">
<class name="thread_specific_ptr">
<purpose>
The <classname>thread_specific_ptr</classname> class defines
an interface for using thread specific storage.
</purpose>
<description>
<para>Thread specific storage is data associated with
individual threads and is often used to make operations
that rely on global data
<link linkend="thread.glossary.thread-safe">thread-safe</link>.
</para>
<para>Template <classname>thread_specific_ptr</classname>
stores a pointer to an object obtained on a thread-by-thread
basis and calls a specified cleanup handler on the contained
pointer when the thread terminates. The cleanup handlers are
called in the reverse order of construction of the
<classname>thread_specific_ptr</classname>s, and for the
initial thread are called by the destructor, providing the
same ordering guarantees as for normal declarations. Each
thread initially stores the null pointer in each
<classname>thread_specific_ptr</classname> instance.</para>
<para>The template <classname>thread_specific_ptr</classname>
is useful in the following cases:
<itemizedlist>
<listitem>An interface was originally written assuming
a single thread of control and it is being ported to a
multithreaded environment.</listitem>
<listitem>Each thread of control invokes sequences of
methods that share data that are physically unique
for each thread, but must be logically accessed
through a globally visible access point instead of
being explicitly passed.</listitem>
</itemizedlist>
</para>
</description>
<inherit access="private">
<type><classname>boost::noncopyable</classname></type>
<purpose>Exposition only</purpose>
</inherit>
<constructor>
<requires>The expression <code>delete get()</code> is well
formed.</requires>
<effects>A thread-specific data key is allocated and visible to
all threads in the process. Upon creation, the value
<code>NULL</code> will be associated with the new key in all
active threads. A cleanup method is registered with the key
that will call <code>delete</code> on the value associated
with the key for a thread when it exits. When a thread exits,
if a key has a registered cleanup method and the thread has a
non-<code>NULL</code> value associated with that key, the value
of the key is set to <code>NULL</code> and then the cleanup
method is called with the previously associated value as its
sole argument. The order in which registered cleanup methods
are called when a thread exits is undefined. If after all the
cleanup methods have been called for all non-<code>NULL</code>
values, there are still some non-<code>NULL</code> values
with associated cleanup handlers, the result is undefined
behavior.</effects>
<throws><classname>boost::thread_resource_error</classname> if
the necessary resources can not be obtained.</throws>
<notes>There may be an implementation specific limit to the
number of thread specific storage objects that can be created,
and this limit may be small.</notes>
<rationale>The most common need for cleanup will be to call
<code>delete</code> on the associated value. If other forms
of cleanup are required the overloaded constructor should be
called instead.</rationale>
</constructor>
<constructor>
<parameter name="cleanup">
<paramtype>void (*cleanup)(void*)</paramtype>
</parameter>
<effects>A thread-specific data key is allocated and visible to
all threads in the process. Upon creation, the value
<code>NULL</code> will be associated with the new key in all
active threads. The <code>cleanup</code> method is registered
with the key and will be called for a thread with the value
associated with the key for that thread when it exits. When a
thread exits, if a key has a registered cleanup method and the
thread has a non-<code>NULL</code> value associated with that
key, the value of the key is set to <code>NULL</code> and then
the cleanup method is called with the previously associated
value as its sole argument. The order in which registered
cleanup methods are called when a thread exits is undefined.
If after all the cleanup methods have been called for all
non-<code>NULL</code> values, there are still some
non-<code>NULL</code> values with associated cleanup handlers,
the result is undefined behavior.</effects>
<throws><classname>boost::thread_resource_error</classname> if
the necessary resources can not be obtained.</throws>
<notes>There may be an implementation specific limit to the
number of thread specific storage objects that can be created,
and this limit may be small.</notes>
<rationale>There is the occasional need to register
specialized cleanup methods, or to register no cleanup method
at all (done by passing <code>NULL</code> to this constructor).
</rationale>
</constructor>
<destructor>
<effects>Deletes the thread-specific data key allocated by the
constructor. The thread-specific data values associated with
the key need not be <code>NULL</code>. It is the responsibility
of the application to perform any cleanup actions for data
associated with the key.</effects>
<notes>Does not destroy any data that may be stored in any
thread's thread specific storage. For this reason you should
not destroy a <classname>thread_specific_ptr</classname> object
until you are certain there are no threads running that have
made use of its thread specific storage.</notes>
<rationale>Associated data is not cleaned up because registered
cleanup methods need to be run in the thread that allocated the
associated data to be guaranteed to work correctly. There's no
safe way to inject the call into another thread's execution
path, making it impossible to call the cleanup methods safely.
</rationale>
</destructor>
<method-group name="modifier functions">
<method name="release">
<type>T*</type>
<postconditions><code>*this</code> holds the null pointer
for the current thread.</postconditions>
<returns><code>this-&gt;get()</code> prior to the call.</returns>
<rationale>This method provides a mechanism for the user to
relinquish control of the data associated with the
thread-specific key.</rationale>
</method>
<method name="reset">
<type>void</type>
<parameter name="p">
<paramtype>T*</paramtype>
<default>0</default>
</parameter>
<effects>If <code>this-&gt;get() != p &amp;&amp;
this-&gt;get() != NULL</code> then call the
associated cleanup function.</effects>
<postconditions><code>*this</code> holds the pointer
<code>p</code> for the current thread.</postconditions>
</method>
</method-group>
<method-group name="observer functions">
<method name="get" cv="const">
<type>T*</type>
<returns>The object stored in thread specific storage for
the current thread for <code>*this</code>.</returns>
<notes>Each thread initially returns 0.</notes>
</method>
<method name="operator-&gt;" cv="const">
<type>T*</type>
<returns><code>this-&gt;get()</code>.</returns>
</method>
<method name="operator*()" cv="const">
<type>T&amp;</type>
<requires><code>this-&gt;get() != 0</code></requires>
<returns><code>this-&gt;get()</code>.</returns>
</method>
</method-group>
</class>
</namespace>
</header>
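A minimal sketch of per-thread data with `thread_specific_ptr` as described above; the counter is illustrative:

    #include <boost/thread/tss.hpp>

    boost::thread_specific_ptr<int> request_count;    // one int per thread, deleted on thread exit

    void on_request()
    {
        if(request_count.get() == 0)
            request_count.reset(new int(0));           // first use in the calling thread
        ++*request_count;                              // only the calling thread sees this value
    }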

View File

@@ -1,10 +1,3 @@
[/
(C) Copyright 2007-8 Anthony Williams.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
]
[section Thread Local Storage]
[heading Synopsis]
@@ -44,8 +37,6 @@ cleaned up, that value is added to the cleanup list. Cleanup finishes when there
[section:thread_specific_ptr Class `thread_specific_ptr`]
#include <boost/thread/tss.hpp>
template <typename T>
class thread_specific_ptr
{

View File

@@ -1,82 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN"
"http://www.boost.org/tools/boostbook/dtd/boostbook.dtd" [
<!ENTITY % thread.entities SYSTEM "entities.xml">
%thread.entities;
]>
<!-- Copyright (c) 2002-2003 William E. Kempf, Michael Glassford
Subject to the Boost Software License, Version 1.0.
(See accompanying file LICENSE-1.0 or http://www.boost.org/LICENSE-1.0)
-->
<header name="boost/thread/xtime.hpp"
last-revision="$Date$">
<namespace name="boost">
<enum name="xtime_clock_types">
<enumvalue name="TIME_UTC" />
<purpose>
<para>Specifies the clock type to use when creating
an object of type <classname>xtime</classname>.</para>
</purpose>
<description>
<para>The only clock type supported by &Boost.Thread; is
<code>TIME_UTC</code>. The epoch for <code>TIME_UTC</code>
is 1970-01-01 00:00:00.</para>
</description>
</enum>
<struct name="xtime">
<purpose>
<simpara>An object of type <classname>xtime</classname>
defines a time that is used to perform high-resolution time operations.
This is a temporary solution that will be replaced by a more robust time
library once available in Boost.</simpara>
</purpose>
<description>
<simpara>The <classname>xtime</classname> type is used to represent a point on
some time scale or a duration in time. This type may be proposed for the C standard by
Markus Kuhn. &Boost.Thread; provides only a very minimal implementation of this
proposal; it is expected that a full implementation (or some other time
library) will be provided in Boost as a separate library, at which time &Boost.Thread;
will deprecate its own implementation.</simpara>
<simpara><emphasis role="bold">Note</emphasis> that the resolution is
implementation specific. For many implementations the best resolution
of time is far more than one nanosecond, and even when the resolution
is reasonably good, the latency of a call to <code>xtime_get()</code>
may be significant. For maximum portability, avoid durations of less than
one second.</simpara>
</description>
<free-function-group name="creation">
<function name="xtime_get">
<type>int</type>
<parameter name="xtp">
<paramtype><classname>xtime</classname>*</paramtype>
</parameter>
<parameter name="clock_type">
<paramtype>int</paramtype>
</parameter>
<postconditions>
<simpara><code>xtp</code> represents the current point in
time as a duration since the epoch specified by
<code>clock_type</code>.</simpara>
</postconditions>
<returns>
<simpara><code>clock_type</code> if successful, otherwise 0.</simpara>
</returns>
</function>
</free-function-group>
<data-member name="sec">
<type><emphasis>platform-specific-type</emphasis></type>
</data-member>
</struct>
</namespace>
</header>
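For illustration, the typical way an `xtime` value was produced and consumed with this interface (a sketch; note that later Boost releases renamed the `TIME_UTC` constant to `TIME_UTC_`):

    #include <boost/thread/xtime.hpp>
    #include <boost/thread/thread.hpp>

    void sleep_one_second()
    {
        boost::xtime xt;
        boost::xtime_get(&xt, boost::TIME_UTC);   // current time since the 1970-01-01 epoch
        xt.sec += 1;                              // one second from now
        boost::thread::sleep(xt);                 // block until that point in time
    }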

View File

@@ -1,2 +0,0 @@
bin
*.pdb

View File

@@ -1,209 +0,0 @@
// Copyright (C) 2001-2003
// William E. Kempf
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_XLOCK_WEK070601_HPP
#define BOOST_XLOCK_WEK070601_HPP
#include <boost/thread/detail/config.hpp>
#include <boost/utility.hpp>
#include <boost/thread/exceptions.hpp>
namespace boost {
class condition;
struct xtime;
namespace detail { namespace thread {
template <typename Mutex>
class lock_ops : private noncopyable
{
private:
lock_ops() { }
public:
typedef typename Mutex::cv_state lock_state;
static void lock(Mutex& m)
{
m.do_lock();
}
static bool trylock(Mutex& m)
{
return m.do_trylock();
}
static bool timedlock(Mutex& m, const xtime& xt)
{
return m.do_timedlock(xt);
}
static void unlock(Mutex& m)
{
m.do_unlock();
}
static void lock(Mutex& m, lock_state& state)
{
m.do_lock(state);
}
static void unlock(Mutex& m, lock_state& state)
{
m.do_unlock(state);
}
};
template <typename Mutex>
class scoped_lock : private noncopyable
{
public:
typedef Mutex mutex_type;
explicit scoped_lock(Mutex& mx, bool initially_locked=true)
: m_mutex(mx), m_locked(false)
{
if (initially_locked) lock();
}
~scoped_lock()
{
if (m_locked) unlock();
}
void lock()
{
if (m_locked) throw lock_error();
lock_ops<Mutex>::lock(m_mutex);
m_locked = true;
}
void unlock()
{
if (!m_locked) throw lock_error();
lock_ops<Mutex>::unlock(m_mutex);
m_locked = false;
}
bool locked() const { return m_locked; }
operator const void*() const { return m_locked ? this : 0; }
private:
friend class boost::condition;
Mutex& m_mutex;
bool m_locked;
};
template <typename TryMutex>
class scoped_try_lock : private noncopyable
{
public:
typedef TryMutex mutex_type;
explicit scoped_try_lock(TryMutex& mx)
: m_mutex(mx), m_locked(false)
{
try_lock();
}
scoped_try_lock(TryMutex& mx, bool initially_locked)
: m_mutex(mx), m_locked(false)
{
if (initially_locked) lock();
}
~scoped_try_lock()
{
if (m_locked) unlock();
}
void lock()
{
if (m_locked) throw lock_error();
lock_ops<TryMutex>::lock(m_mutex);
m_locked = true;
}
bool try_lock()
{
if (m_locked) throw lock_error();
return (m_locked = lock_ops<TryMutex>::trylock(m_mutex));
}
void unlock()
{
if (!m_locked) throw lock_error();
lock_ops<TryMutex>::unlock(m_mutex);
m_locked = false;
}
bool locked() const { return m_locked; }
operator const void*() const { return m_locked ? this : 0; }
private:
friend class boost::condition;
TryMutex& m_mutex;
bool m_locked;
};
template <typename TimedMutex>
class scoped_timed_lock : private noncopyable
{
public:
typedef TimedMutex mutex_type;
scoped_timed_lock(TimedMutex& mx, const xtime& xt)
: m_mutex(mx), m_locked(false)
{
timed_lock(xt);
}
scoped_timed_lock(TimedMutex& mx, bool initially_locked)
: m_mutex(mx), m_locked(false)
{
if (initially_locked) lock();
}
~scoped_timed_lock()
{
if (m_locked) unlock();
}
void lock()
{
if (m_locked) throw lock_error();
lock_ops<TimedMutex>::lock(m_mutex);
m_locked = true;
}
bool try_lock()
{
if (m_locked) throw lock_error();
return (m_locked = lock_ops<TimedMutex>::trylock(m_mutex));
}
bool timed_lock(const xtime& xt)
{
if (m_locked) throw lock_error();
return (m_locked = lock_ops<TimedMutex>::timedlock(m_mutex, xt));
}
void unlock()
{
if (!m_locked) throw lock_error();
lock_ops<TimedMutex>::unlock(m_mutex);
m_locked = false;
}
bool locked() const { return m_locked; }
operator const void*() const { return m_locked ? this : 0; }
private:
friend class boost::condition;
TimedMutex& m_mutex;
bool m_locked;
};
} // namespace thread
} // namespace detail
} // namespace boost
#endif // BOOST_XLOCK_WEK070601_HPP
// Change Log:
// 8 Feb 01 WEKEMPF Initial version.
// 22 May 01 WEKEMPF Modified to use xtime for time outs.
// 30 Jul 01 WEKEMPF Moved lock types into boost::detail::thread. Renamed
// some types. Added locked() methods.
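In practice these lock templates were not used directly; they were reached through member typedefs such as `boost::mutex::scoped_lock`. A sketch of that idiom (the shared counter is illustrative):

    #include <boost/thread/mutex.hpp>

    boost::mutex counter_mutex;
    int shared_counter = 0;

    void increment()
    {
        boost::mutex::scoped_lock lock(counter_mutex);   // locks in the constructor
        ++shared_counter;
    }                                                    // unlocks in the destructor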

View File

@@ -6,10 +6,8 @@
#ifndef BOOST_THREAD_MOVE_HPP
#define BOOST_THREAD_MOVE_HPP
#ifndef BOOST_NO_SFINAE
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_convertible.hpp>
#endif
#include <boost/config/abi_prefix.hpp>
@@ -39,13 +37,11 @@ namespace boost
};
}
#ifndef BOOST_NO_SFINAE
template<typename T>
typename enable_if<boost::is_convertible<T&,detail::thread_move_t<T> >, T >::type move(T& t)
typename enable_if<boost::is_convertible<T&,detail::thread_move_t<T> >, detail::thread_move_t<T> >::type move(T& t)
{
return T(detail::thread_move_t<T>(t));
return t;
}
#endif
template<typename T>
detail::thread_move_t<T> move(detail::thread_move_t<T> t)

View File

@@ -42,9 +42,9 @@
#elif defined(__QNXNTO__)
# define BOOST_THREAD_QNXNTO
#elif defined(unix) || defined(__unix) || defined(_XOPEN_SOURCE) || defined(_POSIX_SOURCE)
# if defined(BOOST_HAS_PTHREADS) && !defined(BOOST_THREAD_POSIX)
# define BOOST_THREAD_POSIX
# endif
# if defined(BOOST_HAS_PTHREADS) && !defined(BOOST_THREAD_POSIX)
# define BOOST_THREAD_POSIX
# endif
#endif
// For every supported platform add a new entry into the dispatch table below.

File diff suppressed because it is too large

View File

@@ -172,14 +172,6 @@ namespace boost
return static_cast<thread&&>(*this);
}
#else
#ifdef BOOST_NO_SFINAE
template <class F>
explicit thread(F f):
thread_info(make_thread_info(f))
{
start_thread();
}
#else
template <class F>
explicit thread(F f,typename disable_if<boost::is_convertible<F&,detail::thread_move_t<F> >, dummy* >::type=0):
@@ -187,10 +179,9 @@ namespace boost
{
start_thread();
}
#endif
template <class F>
explicit thread(detail::thread_move_t<F> f):
thread(detail::thread_move_t<F> f):
thread_info(make_thread_info(f))
{
start_thread();
@@ -339,9 +330,9 @@ namespace boost
return t;
}
#else
inline thread move(detail::thread_move_t<thread> t)
inline detail::thread_move_t<thread> move(detail::thread_move_t<thread> t)
{
return thread(t);
return t;
}
#endif

View File

@@ -10,21 +10,12 @@
#include <algorithm>
#include <iterator>
#include <boost/thread/thread_time.hpp>
#include <boost/detail/workaround.hpp>
#include <boost/utility/enable_if.hpp>
#include <boost/config/abi_prefix.hpp>
namespace boost
{
struct xtime;
#if defined(BOOST_NO_SFINAE) || \
BOOST_WORKAROUND(__IBMCPP__, BOOST_TESTED_AT(600)) || \
BOOST_WORKAROUND(__SUNPRO_CC, BOOST_TESTED_AT(0x590))
#define BOOST_THREAD_NO_AUTO_DETECT_MUTEX_TYPES
#endif
#ifndef BOOST_THREAD_NO_AUTO_DETECT_MUTEX_TYPES
namespace detail
{
template<typename T>
@@ -86,13 +77,7 @@ namespace boost
detail::has_member_try_lock<T>::value);
};
#else
template<typename T>
struct is_mutex_type
{
BOOST_STATIC_CONSTANT(bool, value = false);
};
#endif
struct defer_lock_t
{};
@@ -111,74 +96,6 @@ namespace boost
template<typename Mutex>
class upgrade_lock;
template<typename Mutex>
class unique_lock;
namespace detail
{
template<typename Mutex>
class try_lock_wrapper;
}
#ifdef BOOST_THREAD_NO_AUTO_DETECT_MUTEX_TYPES
template<typename T>
struct is_mutex_type<unique_lock<T> >
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
template<typename T>
struct is_mutex_type<shared_lock<T> >
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
template<typename T>
struct is_mutex_type<upgrade_lock<T> >
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
template<typename T>
struct is_mutex_type<detail::try_lock_wrapper<T> >
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
class mutex;
class timed_mutex;
class recursive_mutex;
class recursive_timed_mutex;
class shared_mutex;
template<>
struct is_mutex_type<mutex>
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
template<>
struct is_mutex_type<timed_mutex>
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
template<>
struct is_mutex_type<recursive_mutex>
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
template<>
struct is_mutex_type<recursive_timed_mutex>
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
template<>
struct is_mutex_type<shared_mutex>
{
BOOST_STATIC_CONSTANT(bool, value = true);
};
#endif
template<typename Mutex>
class lock_guard
{
@@ -209,10 +126,8 @@ namespace boost
private:
Mutex* m;
bool is_locked;
unique_lock(unique_lock&);
explicit unique_lock(upgrade_lock<Mutex>&);
explicit unique_lock(unique_lock&);
unique_lock& operator=(unique_lock&);
unique_lock& operator=(upgrade_lock<Mutex>& other);
public:
unique_lock():
m(0),is_locked(false)
@@ -234,51 +149,11 @@ namespace boost
{
try_lock();
}
template<typename TimeDuration>
unique_lock(Mutex& m_,TimeDuration const& target_time):
m(&m_),is_locked(false)
{
timed_lock(target_time);
}
unique_lock(Mutex& m_,system_time const& target_time):
m(&m_),is_locked(false)
{
timed_lock(target_time);
}
#ifdef BOOST_HAS_RVALUE_REFS
unique_lock(unique_lock&& other):
m(other.m),is_locked(other.is_locked)
{
other.is_locked=false;
other.m=0;
}
explicit unique_lock(upgrade_lock<Mutex>&& other);
unique_lock<Mutex>&& move()
{
return static_cast<unique_lock<Mutex>&&>(*this);
}
unique_lock& operator=(unique_lock<Mutex>&& other)
{
unique_lock temp(other);
swap(temp);
return *this;
}
unique_lock& operator=(upgrade_lock<Mutex>&& other)
{
unique_lock temp(other);
swap(temp);
return *this;
}
void swap(unique_lock&& other)
{
std::swap(m,other.m);
std::swap(is_locked,other.is_locked);
}
#else
unique_lock(detail::thread_move_t<unique_lock<Mutex> > other):
m(other->m),is_locked(other->is_locked)
{
@@ -310,6 +185,7 @@ namespace boost
swap(temp);
return *this;
}
void swap(unique_lock& other)
{
std::swap(m,other.m);
@@ -320,7 +196,6 @@ namespace boost
std::swap(m,other->m);
std::swap(is_locked,other->is_locked);
}
#endif
~unique_lock()
{
@@ -359,11 +234,6 @@ namespace boost
is_locked=m->timed_lock(absolute_time);
return is_locked;
}
bool timed_lock(::boost::xtime const& absolute_time)
{
is_locked=m->timed_lock(absolute_time);
return is_locked;
}
void unlock()
{
if(!owns_lock())
@@ -405,27 +275,11 @@ namespace boost
friend class upgrade_lock<Mutex>;
};
#ifdef BOOST_HAS_RVALUE_REFS
template<typename Mutex>
void swap(unique_lock<Mutex>&& lhs,unique_lock<Mutex>&& rhs)
{
lhs.swap(rhs);
}
#else
template<typename Mutex>
void swap(unique_lock<Mutex>& lhs,unique_lock<Mutex>& rhs)
{
lhs.swap(rhs);
}
#endif
#ifdef BOOST_HAS_RVALUE_REFS
template<typename Mutex>
inline unique_lock<Mutex>&& move(unique_lock<Mutex>&& ul)
{
return ul;
}
#endif
template<typename Mutex>
class shared_lock
@@ -524,29 +378,11 @@ namespace boost
return *this;
}
#ifdef BOOST_HAS_RVALUE_REFS
void swap(shared_lock&& other)
{
std::swap(m,other.m);
std::swap(is_locked,other.is_locked);
}
#else
void swap(shared_lock& other)
{
std::swap(m,other.m);
std::swap(is_locked,other.is_locked);
}
void swap(boost::detail::thread_move_t<shared_lock<Mutex> > other)
{
std::swap(m,other->m);
std::swap(is_locked,other->is_locked);
}
#endif
Mutex* mutex() const
{
return m;
}
~shared_lock()
{
@@ -582,16 +418,6 @@ namespace boost
is_locked=m->timed_lock_shared(target_time);
return is_locked;
}
template<typename Duration>
bool timed_lock(Duration const& target_time)
{
if(owns_lock())
{
throw boost::lock_error();
}
is_locked=m->timed_lock_shared(target_time);
return is_locked;
}
void unlock()
{
if(!owns_lock())
@@ -602,7 +428,7 @@ namespace boost
is_locked=false;
}
typedef void (shared_lock<Mutex>::*bool_type)();
typedef void (shared_lock::*bool_type)();
operator bool_type() const
{
return is_locked?&shared_lock::lock:0;
@@ -618,20 +444,6 @@ namespace boost
};
#ifdef BOOST_HAS_RVALUE_REFS
template<typename Mutex>
void swap(shared_lock<Mutex>&& lhs,shared_lock<Mutex>&& rhs)
{
lhs.swap(rhs);
}
#else
template<typename Mutex>
void swap(shared_lock<Mutex>& lhs,shared_lock<Mutex>& rhs)
{
lhs.swap(rhs);
}
#endif
template<typename Mutex>
class upgrade_lock
{
@@ -764,18 +576,6 @@ namespace boost
};
#ifdef BOOST_HAS_RVALUE_REFS
template<typename Mutex>
unique_lock<Mutex>::unique_lock(upgrade_lock<Mutex>&& other):
m(other.m),is_locked(other.is_locked)
{
other.is_locked=false;
if(is_locked)
{
m.unlock_upgrade_and_lock();
}
}
#else
template<typename Mutex>
unique_lock<Mutex>::unique_lock(detail::thread_move_t<upgrade_lock<Mutex> > other):
m(other->m),is_locked(other->is_locked)
@@ -786,7 +586,7 @@ namespace boost
m->unlock_upgrade_and_lock();
}
}
#endif
template <class Mutex>
class upgrade_to_unique_lock
{
@@ -885,12 +685,6 @@ namespace boost
return *this;
}
#ifdef BOOST_HAS_RVALUE_REFS
void swap(try_lock_wrapper&& other)
{
base::swap(other);
}
#else
void swap(try_lock_wrapper& other)
{
base::swap(other);
@@ -899,7 +693,6 @@ namespace boost
{
base::swap(*other);
}
#endif
void lock()
{
@@ -933,23 +726,15 @@ namespace boost
typedef typename base::bool_type bool_type;
operator bool_type() const
{
return base::operator bool_type();
return static_cast<base const&>(*this);
}
};
#ifdef BOOST_HAS_RVALUE_REFS
template<typename Mutex>
void swap(try_lock_wrapper<Mutex>&& lhs,try_lock_wrapper<Mutex>&& rhs)
{
lhs.swap(rhs);
}
#else
template<typename Mutex>
void swap(try_lock_wrapper<Mutex>& lhs,try_lock_wrapper<Mutex>& rhs)
{
lhs.swap(rhs);
}
#endif
template<typename MutexType1,typename MutexType2>
unsigned try_lock_internal(MutexType1& m1,MutexType2& m2)
@@ -1074,63 +859,28 @@ namespace boost
}
}
namespace detail
template<typename MutexType1,typename MutexType2>
typename enable_if<is_mutex_type<MutexType1>, void>::type lock(MutexType1& m1,MutexType2& m2)
{
template<bool x>
struct is_mutex_type_wrapper
{};
template<typename MutexType1,typename MutexType2>
void lock_impl(MutexType1& m1,MutexType2& m2,is_mutex_type_wrapper<true>)
unsigned const lock_count=2;
unsigned lock_first=0;
while(true)
{
unsigned const lock_count=2;
unsigned lock_first=0;
while(true)
switch(lock_first)
{
switch(lock_first)
{
case 0:
lock_first=detail::lock_helper(m1,m2);
if(!lock_first)
return;
break;
case 1:
lock_first=detail::lock_helper(m2,m1);
if(!lock_first)
return;
lock_first=(lock_first+1)%lock_count;
break;
}
case 0:
lock_first=detail::lock_helper(m1,m2);
if(!lock_first)
return;
break;
case 1:
lock_first=detail::lock_helper(m2,m1);
if(!lock_first)
return;
lock_first=(lock_first+1)%lock_count;
break;
}
}
template<typename Iterator>
void lock_impl(Iterator begin,Iterator end,is_mutex_type_wrapper<false>);
}
template<typename MutexType1,typename MutexType2>
void lock(MutexType1& m1,MutexType2& m2)
{
detail::lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
}
template<typename MutexType1,typename MutexType2>
void lock(const MutexType1& m1,MutexType2& m2)
{
detail::lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
}
template<typename MutexType1,typename MutexType2>
void lock(MutexType1& m1,const MutexType2& m2)
{
detail::lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
}
template<typename MutexType1,typename MutexType2>
void lock(const MutexType1& m1,const MutexType2& m2)
{
detail::lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
}
template<typename MutexType1,typename MutexType2,typename MutexType3>
@@ -1245,52 +995,10 @@ namespace boost
}
}
namespace detail
{
template<typename Mutex,bool x=is_mutex_type<Mutex>::value>
struct try_lock_impl_return
{
typedef int type;
};
template<typename Iterator>
struct try_lock_impl_return<Iterator,false>
{
typedef Iterator type;
};
template<typename MutexType1,typename MutexType2>
int try_lock_impl(MutexType1& m1,MutexType2& m2,is_mutex_type_wrapper<true>)
{
return ((int)detail::try_lock_internal(m1,m2))-1;
}
template<typename Iterator>
Iterator try_lock_impl(Iterator begin,Iterator end,is_mutex_type_wrapper<false>);
}
template<typename MutexType1,typename MutexType2>
typename detail::try_lock_impl_return<MutexType1>::type try_lock(MutexType1& m1,MutexType2& m2)
typename enable_if<is_mutex_type<MutexType1>, int>::type try_lock(MutexType1& m1,MutexType2& m2)
{
return detail::try_lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
}
template<typename MutexType1,typename MutexType2>
typename detail::try_lock_impl_return<MutexType1>::type try_lock(const MutexType1& m1,MutexType2& m2)
{
return detail::try_lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
}
template<typename MutexType1,typename MutexType2>
typename detail::try_lock_impl_return<MutexType1>::type try_lock(MutexType1& m1,const MutexType2& m2)
{
return detail::try_lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
}
template<typename MutexType1,typename MutexType2>
typename detail::try_lock_impl_return<MutexType1>::type try_lock(const MutexType1& m1,const MutexType2& m2)
{
return detail::try_lock_impl(m1,m2,detail::is_mutex_type_wrapper<is_mutex_type<MutexType1>::value>());
return ((int)detail::try_lock_internal(m1,m2))-1;
}
template<typename MutexType1,typename MutexType2,typename MutexType3>
@@ -1312,6 +1020,9 @@ namespace boost
}
template<typename Iterator>
typename disable_if<is_mutex_type<Iterator>, void>::type lock(Iterator begin,Iterator end);
namespace detail
{
template<typename Iterator>
@@ -1339,59 +1050,70 @@ namespace boost
}
}
};
template<typename Iterator>
Iterator try_lock_impl(Iterator begin,Iterator end,is_mutex_type_wrapper<false>)
{
if(begin==end)
{
return end;
}
typedef typename std::iterator_traits<Iterator>::value_type lock_type;
unique_lock<lock_type> guard(*begin,try_to_lock);
if(!guard.owns_lock())
{
return begin;
}
Iterator const failed=try_lock(++begin,end);
if(failed==end)
{
guard.release();
}
return failed;
}
}
namespace detail
template<typename Iterator>
typename disable_if<is_mutex_type<Iterator>, Iterator>::type try_lock(Iterator begin,Iterator end)
{
template<typename Iterator>
void lock_impl(Iterator begin,Iterator end,is_mutex_type_wrapper<false>)
if(begin==end)
{
typedef typename std::iterator_traits<Iterator>::value_type lock_type;
return end;
}
typedef typename std::iterator_traits<Iterator>::value_type lock_type;
unique_lock<lock_type> guard(*begin,try_to_lock);
if(begin==end)
{
return;
}
bool start_with_begin=true;
Iterator second=begin;
++second;
Iterator next=second;
if(!guard.owns_lock())
{
return begin;
}
Iterator const failed=try_lock(++begin,end);
if(failed==end)
{
guard.release();
}
for(;;)
return failed;
}
template<typename Iterator>
typename disable_if<is_mutex_type<Iterator>, void>::type lock(Iterator begin,Iterator end)
{
typedef typename std::iterator_traits<Iterator>::value_type lock_type;
if(begin==end)
{
return;
}
bool start_with_begin=true;
Iterator second=begin;
++second;
Iterator next=second;
for(;;)
{
unique_lock<lock_type> begin_lock(*begin,defer_lock);
if(start_with_begin)
{
unique_lock<lock_type> begin_lock(*begin,defer_lock);
if(start_with_begin)
begin_lock.lock();
Iterator const failed_lock=try_lock(next,end);
if(failed_lock==end)
{
begin_lock.lock();
Iterator const failed_lock=try_lock(next,end);
if(failed_lock==end)
begin_lock.release();
return;
}
start_with_begin=false;
next=failed_lock;
}
else
{
detail::range_lock_guard<Iterator> guard(next,end);
if(begin_lock.try_lock())
{
Iterator const failed_lock=try_lock(second,next);
if(failed_lock==next)
{
begin_lock.release();
guard.release();
return;
}
start_with_begin=false;
@@ -1399,28 +1121,11 @@ namespace boost
}
else
{
detail::range_lock_guard<Iterator> guard(next,end);
if(begin_lock.try_lock())
{
Iterator const failed_lock=try_lock(second,next);
if(failed_lock==next)
{
begin_lock.release();
guard.release();
return;
}
start_with_begin=false;
next=failed_lock;
}
else
{
start_with_begin=true;
next=second;
}
start_with_begin=true;
next=second;
}
}
}
}
}
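The generic `lock()` shown in this header acquires several mutexes without deadlock regardless of argument order; a usage sketch, assuming two `boost::mutex` objects and the adopt semantics of `lock_guard`:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::mutex m1;
    boost::mutex m2;

    void update_both()
    {
        boost::lock(m1, m2);                                        // deadlock-free acquisition
        boost::lock_guard<boost::mutex> g1(m1, boost::adopt_lock);  // adopt the locks already held
        boost::lock_guard<boost::mutex> g2(m2, boost::adopt_lock);
        // ... modify state protected by both mutexes ...
    }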

View File

@@ -121,17 +121,6 @@ namespace boost
}
return true;
}
template<typename lock_type>
bool timed_wait(lock_type& m,xtime const& wait_until)
{
return timed_wait(m,system_time(wait_until));
}
template<typename lock_type,typename duration_type>
bool timed_wait(lock_type& m,duration_type const& wait_duration)
{
return timed_wait(m,get_system_time()+wait_duration);
}
template<typename lock_type,typename predicate_type>
bool timed_wait(lock_type& m,boost::system_time const& wait_until,predicate_type pred)

View File

@@ -47,16 +47,6 @@ namespace boost
}
bool timed_wait(unique_lock<mutex>& m,boost::system_time const& wait_until);
bool timed_wait(unique_lock<mutex>& m,xtime const& wait_until)
{
return timed_wait(m,system_time(wait_until));
}
template<typename duration_type>
bool timed_wait(unique_lock<mutex>& m,duration_type const& wait_duration)
{
return timed_wait(m,get_system_time()+wait_duration);
}
template<typename predicate_type>
bool timed_wait(unique_lock<mutex>& m,boost::system_time const& wait_until,predicate_type pred)
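For context, the remaining absolute-time overload is typically used as below; the flag, mutex and one-second timeout are illustrative:

    #include <boost/thread/condition_variable.hpp>
    #include <boost/thread/mutex.hpp>
    #include <boost/thread/locks.hpp>
    #include <boost/thread/thread_time.hpp>

    boost::mutex flag_mutex;
    boost::condition_variable flag_cond;
    bool flag_set = false;

    bool wait_for_flag_for_one_second()
    {
        boost::system_time const deadline =
            boost::get_system_time() + boost::posix_time::seconds(1);
        boost::unique_lock<boost::mutex> lock(flag_mutex);
        while(!flag_set)
        {
            if(!flag_cond.timed_wait(lock, deadline))   // returns false on timeout
                return false;
        }
        return true;
    }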

View File

@@ -10,7 +10,6 @@
#include <boost/thread/exceptions.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/thread_time.hpp>
#include <boost/thread/xtime.hpp>
#include <boost/assert.hpp>
#include <errno.h>
#include "timespec.hpp"
@@ -114,10 +113,6 @@ namespace boost
{
return timed_lock(get_system_time()+relative_time);
}
bool timed_lock(boost::xtime const & absolute_time)
{
return timed_lock(system_time(absolute_time));
}
#ifdef BOOST_PTHREAD_HAS_TIMEDLOCK
void lock()

View File

@@ -177,7 +177,7 @@ namespace boost
{
struct timespec const timeout=detail::get_timespec(abs_time);
int const res=pthread_mutex_timedlock(&m,&timeout);
BOOST_ASSERT(!res || res==ETIMEDOUT);
BOOST_ASSERT(!res || res==EBUSY);
return !res;
}

View File

@@ -57,18 +57,18 @@ namespace boost
void lock_shared()
{
boost::this_thread::disable_interruption do_not_disturb;
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
while(state.exclusive || state.exclusive_waiting_blocked)
{
shared_cond.wait(lk);
shared_cond.wait(lock);
}
++state.shared_count;
}
bool try_lock_shared()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
if(state.exclusive || state.exclusive_waiting_blocked)
{
@@ -84,11 +84,11 @@ namespace boost
bool timed_lock_shared(system_time const& timeout)
{
boost::this_thread::disable_interruption do_not_disturb;
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
while(state.exclusive || state.exclusive_waiting_blocked)
{
if(!shared_cond.timed_wait(lk,timeout))
if(!shared_cond.timed_wait(lock,timeout))
{
return false;
}
@@ -105,7 +105,7 @@ namespace boost
void unlock_shared()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
bool const last_reader=!--state.shared_count;
if(last_reader)
@@ -127,12 +127,12 @@ namespace boost
void lock()
{
boost::this_thread::disable_interruption do_not_disturb;
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
while(state.shared_count || state.exclusive)
{
state.exclusive_waiting_blocked=true;
exclusive_cond.wait(lk);
exclusive_cond.wait(lock);
}
state.exclusive=true;
}
@@ -140,12 +140,12 @@ namespace boost
bool timed_lock(system_time const& timeout)
{
boost::this_thread::disable_interruption do_not_disturb;
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
while(state.shared_count || state.exclusive)
{
state.exclusive_waiting_blocked=true;
if(!exclusive_cond.timed_wait(lk,timeout))
if(!exclusive_cond.timed_wait(lock,timeout))
{
if(state.shared_count || state.exclusive)
{
@@ -168,7 +168,7 @@ namespace boost
bool try_lock()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
if(state.shared_count || state.exclusive)
{
@@ -184,7 +184,7 @@ namespace boost
void unlock()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
state.exclusive=false;
state.exclusive_waiting_blocked=false;
release_waiters();
@@ -193,10 +193,10 @@ namespace boost
void lock_upgrade()
{
boost::this_thread::disable_interruption do_not_disturb;
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
while(state.exclusive || state.exclusive_waiting_blocked || state.upgrade)
{
shared_cond.wait(lk);
shared_cond.wait(lock);
}
++state.shared_count;
state.upgrade=true;
@@ -205,10 +205,10 @@ namespace boost
bool timed_lock_upgrade(system_time const& timeout)
{
boost::this_thread::disable_interruption do_not_disturb;
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
while(state.exclusive || state.exclusive_waiting_blocked || state.upgrade)
{
if(!shared_cond.timed_wait(lk,timeout))
if(!shared_cond.timed_wait(lock,timeout))
{
if(state.exclusive || state.exclusive_waiting_blocked || state.upgrade)
{
@@ -230,7 +230,7 @@ namespace boost
bool try_lock_upgrade()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
if(state.exclusive || state.exclusive_waiting_blocked || state.upgrade)
{
return false;
@@ -245,7 +245,7 @@ namespace boost
void unlock_upgrade()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
state.upgrade=false;
bool const last_reader=!--state.shared_count;
@@ -259,11 +259,11 @@ namespace boost
void unlock_upgrade_and_lock()
{
boost::this_thread::disable_interruption do_not_disturb;
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
--state.shared_count;
while(state.shared_count)
{
upgrade_cond.wait(lk);
upgrade_cond.wait(lock);
}
state.upgrade=false;
state.exclusive=true;
@@ -271,7 +271,7 @@ namespace boost
void unlock_and_lock_upgrade()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
state.exclusive=false;
state.upgrade=true;
++state.shared_count;
@@ -281,7 +281,7 @@ namespace boost
void unlock_and_lock_shared()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
state.exclusive=false;
++state.shared_count;
state.exclusive_waiting_blocked=false;
@@ -290,7 +290,7 @@ namespace boost
void unlock_upgrade_and_lock_shared()
{
boost::mutex::scoped_lock lk(state_change);
boost::mutex::scoped_lock lock(state_change);
state.upgrade=false;
state.exclusive_waiting_blocked=false;
release_waiters();

View File

@@ -1,111 +1,110 @@
#ifndef BOOST_THREAD_TSS_HPP
#define BOOST_THREAD_TSS_HPP
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
// (C) Copyright 2007-8 Anthony Williams
#include <boost/thread/detail/config.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread/detail/thread_heap_alloc.hpp>
#include <boost/config/abi_prefix.hpp>
namespace boost
{
namespace detail
{
struct tss_cleanup_function
{
virtual ~tss_cleanup_function()
{}
virtual void operator()(void* data)=0;
};
BOOST_THREAD_DECL void set_tss_data(void const* key,boost::shared_ptr<tss_cleanup_function> func,void* tss_data,bool cleanup_existing);
BOOST_THREAD_DECL void* get_tss_data(void const* key);
}
template <typename T>
class thread_specific_ptr
{
private:
thread_specific_ptr(thread_specific_ptr&);
thread_specific_ptr& operator=(thread_specific_ptr&);
struct delete_data:
detail::tss_cleanup_function
{
void operator()(void* data)
{
delete static_cast<T*>(data);
}
};
struct run_custom_cleanup_function:
detail::tss_cleanup_function
{
void (*cleanup_function)(T*);
explicit run_custom_cleanup_function(void (*cleanup_function_)(T*)):
cleanup_function(cleanup_function_)
{}
void operator()(void* data)
{
cleanup_function(static_cast<T*>(data));
}
};
boost::shared_ptr<detail::tss_cleanup_function> cleanup;
public:
thread_specific_ptr():
cleanup(detail::heap_new<delete_data>(),detail::do_heap_delete<delete_data>())
{}
explicit thread_specific_ptr(void (*func_)(T*))
{
if(func_)
{
cleanup.reset(detail::heap_new<run_custom_cleanup_function>(func_),detail::do_heap_delete<run_custom_cleanup_function>());
}
}
~thread_specific_ptr()
{
reset();
}
T* get() const
{
return static_cast<T*>(detail::get_tss_data(this));
}
T* operator->() const
{
return get();
}
T& operator*() const
{
return *get();
}
T* release()
{
T* const temp=get();
detail::set_tss_data(this,boost::shared_ptr<detail::tss_cleanup_function>(),0,false);
return temp;
}
void reset(T* new_value=0)
{
T* const current_value=get();
if(current_value!=new_value)
{
detail::set_tss_data(this,cleanup,new_value,true);
}
}
};
}
#include <boost/config/abi_suffix.hpp>
#endif

View File

@@ -64,6 +64,11 @@ namespace boost
return timed_lock(get_system_time()+timeout);
}
long get_active_count()
{
return mutex.get_active_count();
}
void unlock()
{
if(!--recursion_count)
@@ -73,6 +78,11 @@ namespace boost
}
}
bool locked()
{
return mutex.locked();
}
private:
bool try_recursive_lock(long current_thread_id)
{

View File

@@ -13,7 +13,6 @@
#include "thread_primitives.hpp"
#include "interlocked_read.hpp"
#include <boost/thread/thread_time.hpp>
#include <boost/thread/xtime.hpp>
#include <boost/detail/interlocked.hpp>
#include <boost/config/abi_prefix.hpp>
@@ -118,9 +117,9 @@ namespace boost
return timed_lock(get_system_time()+timeout);
}
bool timed_lock(boost::xtime const& timeout)
long get_active_count()
{
return timed_lock(system_time(timeout));
return ::boost::detail::interlocked_read_acquire(&active_count);
}
void unlock()
@@ -136,6 +135,11 @@ namespace boost
}
}
bool locked()
{
return get_active_count()>=lock_flag_value;
}
private:
void* get_event()
{

View File

@@ -20,7 +20,7 @@ namespace boost
}
class mutex:
boost::noncopyable,
boost::noncopyable,
public ::boost::detail::underlying_mutex
{
public:

View File

@@ -91,7 +91,7 @@ namespace boost
bool try_lock_shared()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
if(!new_state.exclusive && !new_state.exclusive_waiting_blocked)
@@ -106,6 +106,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
return !(old_state.exclusive| old_state.exclusive_waiting_blocked);
}
@@ -122,10 +130,17 @@ namespace boost
bool timed_lock_shared(boost::system_time const& wait_until)
{
for(;;)
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true)
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
if(new_state.exclusive || new_state.exclusive_waiting_blocked)
@@ -144,6 +159,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
if(!(old_state.exclusive| old_state.exclusive_waiting_blocked))
{
@@ -153,7 +176,7 @@ namespace boost
unsigned long const res=detail::win32::WaitForSingleObject(unlock_sem,::boost::detail::get_milliseconds_until(wait_until));
if(res==detail::win32::timeout)
{
for(;;)
do
{
state_data new_state=old_state;
if(new_state.exclusive || new_state.exclusive_waiting_blocked)
@@ -175,6 +198,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
if(!(old_state.exclusive| old_state.exclusive_waiting_blocked))
{
@@ -190,7 +221,7 @@ namespace boost
void unlock_shared()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
bool const last_reader=!--new_state.shared_count;
@@ -231,6 +262,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
}
void lock()
@@ -244,39 +283,20 @@ namespace boost
return timed_lock(get_system_time()+relative_time);
}
bool try_lock()
{
state_data old_state=state;
for(;;)
{
state_data new_state=old_state;
if(new_state.shared_count || new_state.exclusive)
{
return false;
}
else
{
new_state.exclusive=true;
}
state_data const current_state=interlocked_compare_exchange(&state,new_state,old_state);
if(current_state==old_state)
{
break;
}
old_state=current_state;
}
return true;
}
bool timed_lock(boost::system_time const& wait_until)
{
for(;;)
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true)
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
if(new_state.shared_count || new_state.exclusive)
@@ -296,6 +316,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
if(!old_state.shared_count && !old_state.exclusive)
{
@@ -304,7 +332,7 @@ namespace boost
unsigned long const wait_res=detail::win32::WaitForMultipleObjects(2,semaphores,true,::boost::detail::get_milliseconds_until(wait_until));
if(wait_res==detail::win32::timeout)
{
for(;;)
do
{
state_data new_state=old_state;
if(new_state.shared_count || new_state.exclusive)
@@ -329,6 +357,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
if(!old_state.shared_count && !old_state.exclusive)
{
return true;
@@ -342,7 +378,7 @@ namespace boost
void unlock()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
new_state.exclusive=false;
@@ -360,15 +396,30 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
release_waiters(old_state);
}
void lock_upgrade()
{
for(;;)
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true)
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
if(new_state.exclusive || new_state.exclusive_waiting_blocked || new_state.upgrade)
@@ -388,6 +439,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
if(!(old_state.exclusive|| old_state.exclusive_waiting_blocked|| old_state.upgrade))
{
@@ -398,36 +457,10 @@ namespace boost
}
}
bool try_lock_upgrade()
{
state_data old_state=state;
for(;;)
{
state_data new_state=old_state;
if(new_state.exclusive || new_state.exclusive_waiting_blocked || new_state.upgrade)
{
return false;
}
else
{
++new_state.shared_count;
new_state.upgrade=true;
}
state_data const current_state=interlocked_compare_exchange(&state,new_state,old_state);
if(current_state==old_state)
{
break;
}
old_state=current_state;
}
return true;
}
void unlock_upgrade()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
new_state.upgrade=false;
@@ -454,12 +487,20 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
}
void unlock_upgrade_and_lock()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
bool const last_reader=!--new_state.shared_count;
@@ -481,12 +522,20 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
}
void unlock_and_lock_upgrade()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
new_state.exclusive=false;
@@ -506,13 +555,21 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
release_waiters(old_state);
}
void unlock_and_lock_shared()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
new_state.exclusive=false;
@@ -531,13 +588,21 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
release_waiters(old_state);
}
void unlock_upgrade_and_lock_shared()
{
state_data old_state=state;
for(;;)
do
{
state_data new_state=old_state;
new_state.upgrade=false;
@@ -555,6 +620,14 @@ namespace boost
}
old_state=current_state;
}
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:4127)
#endif
while(true);
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
release_waiters(old_state);
}
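All of the hunks above make the same mechanical change: each compare-exchange retry loop is rewritten from for(;;) into do { ... } while(true), with the constant condition wrapped in pragmas so MSVC does not emit warning C4127. A reduced sketch of that retry pattern, operating on a plain long rather than the state_data bitfield and assuming the BOOST_INTERLOCKED_COMPARE_EXCHANGE macro from boost/detail/interlocked.hpp:

#include <boost/detail/interlocked.hpp>

// Set a flag bit in a shared word, retrying until our compare-exchange wins.
inline void set_bit(long* state, long bit_mask)
{
    long old_state = *state;
    do
    {
        long const new_state = old_state | bit_mask;       // value we want to install
        long const current =
            BOOST_INTERLOCKED_COMPARE_EXCHANGE(state, new_state, old_state);
        if (current == old_state)
            break;                                          // our update went in
        old_state = current;                                // another thread changed the word: retry
    }
    while (true);
}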

View File

@@ -64,7 +64,7 @@ namespace boost
# ifdef UNDER_CE
# ifndef WINAPI
# ifndef _WIN32_WCE_EMULATION
# define WINAPI __cdecl // Note this doesn't match the desktop definition
# define WINAPI __cdecl // Note this doesn't match the desktop definition
# else
# define WINAPI __stdcall
# endif
@@ -282,6 +282,16 @@ namespace boost
}
#if defined(BOOST_MSVC) && (_MSC_VER>=1400) && !defined(UNDER_CE)
#if _MSC_VER==1400
extern "C" unsigned char _interlockedbittestandset(long *a,long b);
extern "C" unsigned char _interlockedbittestandreset(long *a,long b);
#else
extern "C" unsigned char _interlockedbittestandset(volatile long *a,long b);
extern "C" unsigned char _interlockedbittestandreset(volatile long *a,long b);
#endif
#pragma intrinsic(_interlockedbittestandset)
#pragma intrinsic(_interlockedbittestandreset)
namespace boost
{
@@ -289,17 +299,6 @@ namespace boost
{
namespace win32
{
#if _MSC_VER==1400
extern "C" unsigned char _interlockedbittestandset(long *a,long b);
extern "C" unsigned char _interlockedbittestandreset(long *a,long b);
#else
extern "C" unsigned char _interlockedbittestandset(volatile long *a,long b);
extern "C" unsigned char _interlockedbittestandreset(volatile long *a,long b);
#endif
#pragma intrinsic(_interlockedbittestandset)
#pragma intrinsic(_interlockedbittestandreset)
inline bool interlocked_bit_test_and_set(long* x,long bit)
{
return _interlockedbittestandset(x,bit)!=0;

View File

@@ -1,42 +0,0 @@
// Copyright (C) 2002-2003
// David Moore, William E. Kempf
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#include <boost/thread/barrier.hpp>
#include <string> // see http://article.gmane.org/gmane.comp.lib.boost.devel/106981
namespace boost {
barrier::barrier(unsigned int count)
: m_threshold(count), m_count(count), m_generation(0)
{
if (count == 0)
throw std::invalid_argument("count cannot be zero.");
}
barrier::~barrier()
{
}
bool barrier::wait()
{
boost::mutex::scoped_lock lock(m_mutex);
unsigned int gen = m_generation;
if (--m_count == 0)
{
m_generation++;
m_count = m_threshold;
m_cond.notify_all();
return true;
}
while (gen == m_generation)
m_cond.wait(lock);
return false;
}
} // namespace boost
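For reference, a minimal usage sketch of the barrier removed above; the thread bodies are illustrative:

#include <boost/thread/barrier.hpp>
#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
#include <iostream>

boost::barrier rendezvous(3);          // all three workers must arrive before any proceeds

void worker(int id)
{
    // ... per-thread setup work ...
    if (rendezvous.wait())             // exactly one waiter per cycle gets 'true'
        std::cout << "worker " << id << " completed the cycle\n";
}

int main()
{
    boost::thread a(boost::bind(&worker, 1));
    boost::thread b(boost::bind(&worker, 2));
    boost::thread c(boost::bind(&worker, 3));
    a.join(); b.join(); c.join();
}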

View File

@@ -1,705 +0,0 @@
// Copyright (C) 2001-2003
// William E. Kempf
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#include <boost/thread/condition.hpp>
#include <boost/thread/xtime.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/exceptions.hpp>
#include <boost/limits.hpp>
#include <cassert>
#include "timeconv.inl"
#if defined(BOOST_HAS_WINTHREADS)
# ifndef NOMINMAX
# define NOMINMAX
# endif
# include <windows.h>
#elif defined(BOOST_HAS_PTHREADS)
# include <errno.h>
#elif defined(BOOST_HAS_MPTASKS)
# include <MacErrors.h>
# include "mac/init.hpp"
# include "mac/safe.hpp"
#endif
// The following include can be removed after the bug on QNX
// has been tracked down. I need this only for debugging
//#if !defined(NDEBUG) && defined(BOOST_HAS_PTHREADS)
#include <iostream>
//#endif
namespace boost {
namespace detail {
#if defined(BOOST_HAS_WINTHREADS)
condition_impl::condition_impl()
: m_gone(0), m_blocked(0), m_waiting(0)
{
m_gate = reinterpret_cast<void*>(CreateSemaphore(0, 1, 1, 0));
m_queue = reinterpret_cast<void*>(
CreateSemaphore(0, 0, (std::numeric_limits<long>::max)(), 0));
m_mutex = reinterpret_cast<void*>(CreateMutex(0, 0, 0));
if (!m_gate || !m_queue || !m_mutex)
{
int res = 0;
if (m_gate)
{
res = CloseHandle(reinterpret_cast<HANDLE>(m_gate));
assert(res);
}
if (m_queue)
{
res = CloseHandle(reinterpret_cast<HANDLE>(m_queue));
assert(res);
}
if (m_mutex)
{
res = CloseHandle(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
}
throw thread_resource_error();
}
}
condition_impl::~condition_impl()
{
int res = 0;
res = CloseHandle(reinterpret_cast<HANDLE>(m_gate));
assert(res);
res = CloseHandle(reinterpret_cast<HANDLE>(m_queue));
assert(res);
res = CloseHandle(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
}
void condition_impl::notify_one()
{
unsigned signals = 0;
int res = 0;
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_mutex), INFINITE);
assert(res == WAIT_OBJECT_0);
if (m_waiting != 0) // the m_gate is already closed
{
if (m_blocked == 0)
{
res = ReleaseMutex(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
return;
}
++m_waiting;
--m_blocked;
signals = 1;
}
else
{
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_gate), INFINITE);
assert(res == WAIT_OBJECT_0);
if (m_blocked > m_gone)
{
if (m_gone != 0)
{
m_blocked -= m_gone;
m_gone = 0;
}
signals = m_waiting = 1;
--m_blocked;
}
else
{
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1, 0);
assert(res);
}
}
res = ReleaseMutex(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
if (signals)
{
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_queue), signals, 0);
assert(res);
}
}
void condition_impl::notify_all()
{
unsigned signals = 0;
int res = 0;
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_mutex), INFINITE);
assert(res == WAIT_OBJECT_0);
if (m_waiting != 0) // the m_gate is already closed
{
if (m_blocked == 0)
{
res = ReleaseMutex(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
return;
}
m_waiting += (signals = m_blocked);
m_blocked = 0;
}
else
{
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_gate), INFINITE);
assert(res == WAIT_OBJECT_0);
if (m_blocked > m_gone)
{
if (m_gone != 0)
{
m_blocked -= m_gone;
m_gone = 0;
}
signals = m_waiting = m_blocked;
m_blocked = 0;
}
else
{
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1, 0);
assert(res);
}
}
res = ReleaseMutex(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
if (signals)
{
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_queue), signals, 0);
assert(res);
}
}
void condition_impl::enter_wait()
{
int res = 0;
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_gate), INFINITE);
assert(res == WAIT_OBJECT_0);
++m_blocked;
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1, 0);
assert(res);
}
void condition_impl::do_wait()
{
int res = 0;
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_queue), INFINITE);
assert(res == WAIT_OBJECT_0);
unsigned was_waiting=0;
unsigned was_gone=0;
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_mutex), INFINITE);
assert(res == WAIT_OBJECT_0);
was_waiting = m_waiting;
was_gone = m_gone;
if (was_waiting != 0)
{
if (--m_waiting == 0)
{
if (m_blocked != 0)
{
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1,
0); // open m_gate
assert(res);
was_waiting = 0;
}
else if (m_gone != 0)
m_gone = 0;
}
}
else if (++m_gone == ((std::numeric_limits<unsigned>::max)() / 2))
{
// timeout occurred, normalize the m_gone count
// this may occur if many calls to wait with a timeout are made and
// no call to notify_* is made
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_gate), INFINITE);
assert(res == WAIT_OBJECT_0);
m_blocked -= m_gone;
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1, 0);
assert(res);
m_gone = 0;
}
res = ReleaseMutex(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
if (was_waiting == 1)
{
for (/**/ ; was_gone; --was_gone)
{
// better now than spurious later
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_queue),
INFINITE);
assert(res == WAIT_OBJECT_0);
}
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1, 0);
assert(res);
}
}
bool condition_impl::do_timed_wait(const xtime& xt)
{
bool ret = false;
unsigned int res = 0;
for (;;)
{
int milliseconds;
to_duration(xt, milliseconds);
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_queue),
milliseconds);
assert(res != WAIT_FAILED && res != WAIT_ABANDONED);
ret = (res == WAIT_OBJECT_0);
if (res == WAIT_TIMEOUT)
{
xtime cur;
xtime_get(&cur, TIME_UTC);
if (xtime_cmp(xt, cur) > 0)
continue;
}
break;
}
unsigned was_waiting=0;
unsigned was_gone=0;
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_mutex), INFINITE);
assert(res == WAIT_OBJECT_0);
was_waiting = m_waiting;
was_gone = m_gone;
if (was_waiting != 0)
{
if (!ret) // timeout
{
if (m_blocked != 0)
--m_blocked;
else
++m_gone; // count spurious wakeups
}
if (--m_waiting == 0)
{
if (m_blocked != 0)
{
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1,
0); // open m_gate
assert(res);
was_waiting = 0;
}
else if (m_gone != 0)
m_gone = 0;
}
}
else if (++m_gone == ((std::numeric_limits<unsigned>::max)() / 2))
{
// timeout occurred, normalize the m_gone count
// this may occur if many calls to wait with a timeout are made and
// no call to notify_* is made
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_gate), INFINITE);
assert(res == WAIT_OBJECT_0);
m_blocked -= m_gone;
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1, 0);
assert(res);
m_gone = 0;
}
res = ReleaseMutex(reinterpret_cast<HANDLE>(m_mutex));
assert(res);
if (was_waiting == 1)
{
for (/**/ ; was_gone; --was_gone)
{
// better now than spurious later
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_queue),
INFINITE);
assert(res == WAIT_OBJECT_0);
}
res = ReleaseSemaphore(reinterpret_cast<HANDLE>(m_gate), 1, 0);
assert(res);
}
return ret;
}
#elif defined(BOOST_HAS_PTHREADS)
condition_impl::condition_impl()
{
int res = 0;
res = pthread_cond_init(&m_condition, 0);
if (res != 0)
throw thread_resource_error();
res = pthread_mutex_init(&m_mutex, 0);
if (res != 0)
throw thread_resource_error();
}
condition_impl::~condition_impl()
{
int res = 0;
res = pthread_cond_destroy(&m_condition);
assert(res == 0);
res = pthread_mutex_destroy(&m_mutex);
assert(res == 0);
}
void condition_impl::notify_one()
{
int res = 0;
res = pthread_mutex_lock(&m_mutex);
assert(res == 0);
res = pthread_cond_signal(&m_condition);
assert(res == 0);
res = pthread_mutex_unlock(&m_mutex);
assert(res == 0);
}
void condition_impl::notify_all()
{
int res = 0;
res = pthread_mutex_lock(&m_mutex);
assert(res == 0);
res = pthread_cond_broadcast(&m_condition);
assert(res == 0);
res = pthread_mutex_unlock(&m_mutex);
assert(res == 0);
}
void condition_impl::do_wait(pthread_mutex_t* pmutex)
{
int res = 0;
res = pthread_cond_wait(&m_condition, pmutex);
assert(res == 0);
}
bool condition_impl::do_timed_wait(const xtime& xt, pthread_mutex_t* pmutex)
{
timespec ts;
to_timespec(xt, ts);
int res = 0;
res = pthread_cond_timedwait(&m_condition, pmutex, &ts);
// Test code for QNX debugging, to get information during regressions
#ifndef NDEBUG
if (res == EINVAL) {
boost::xtime now;
boost::xtime_get(&now, boost::TIME_UTC);
std::cerr << "now: " << now.sec << " " << now.nsec << std::endl;
std::cerr << "time: " << time(0) << std::endl;
std::cerr << "xtime: " << xt.sec << " " << xt.nsec << std::endl;
std::cerr << "ts: " << ts.tv_sec << " " << ts.tv_nsec << std::endl;
std::cerr << "pmutex: " << pmutex << std::endl;
std::cerr << "condition: " << &m_condition << std::endl;
assert(res != EINVAL);
}
#endif
assert(res == 0 || res == ETIMEDOUT);
return res != ETIMEDOUT;
}
#elif defined(BOOST_HAS_MPTASKS)
using threads::mac::detail::safe_enter_critical_region;
using threads::mac::detail::safe_wait_on_semaphore;
condition_impl::condition_impl()
: m_gone(0), m_blocked(0), m_waiting(0)
{
threads::mac::detail::thread_init();
OSStatus lStatus = noErr;
lStatus = MPCreateSemaphore(1, 1, &m_gate);
if(lStatus == noErr)
lStatus = MPCreateSemaphore(ULONG_MAX, 0, &m_queue);
if(lStatus != noErr || !m_gate || !m_queue)
{
if (m_gate)
{
lStatus = MPDeleteSemaphore(m_gate);
assert(lStatus == noErr);
}
if (m_queue)
{
lStatus = MPDeleteSemaphore(m_queue);
assert(lStatus == noErr);
}
throw thread_resource_error();
}
}
condition_impl::~condition_impl()
{
OSStatus lStatus = noErr;
lStatus = MPDeleteSemaphore(m_gate);
assert(lStatus == noErr);
lStatus = MPDeleteSemaphore(m_queue);
assert(lStatus == noErr);
}
void condition_impl::notify_one()
{
unsigned signals = 0;
OSStatus lStatus = noErr;
lStatus = safe_enter_critical_region(m_mutex, kDurationForever,
m_mutex_mutex);
assert(lStatus == noErr);
if (m_waiting != 0) // the m_gate is already closed
{
if (m_blocked == 0)
{
lStatus = MPExitCriticalRegion(m_mutex);
assert(lStatus == noErr);
return;
}
++m_waiting;
--m_blocked;
}
else
{
lStatus = safe_wait_on_semaphore(m_gate, kDurationForever);
assert(lStatus == noErr);
if (m_blocked > m_gone)
{
if (m_gone != 0)
{
m_blocked -= m_gone;
m_gone = 0;
}
signals = m_waiting = 1;
--m_blocked;
}
else
{
lStatus = MPSignalSemaphore(m_gate);
assert(lStatus == noErr);
}
lStatus = MPExitCriticalRegion(m_mutex);
assert(lStatus == noErr);
while (signals)
{
lStatus = MPSignalSemaphore(m_queue);
assert(lStatus == noErr);
--signals;
}
}
}
void condition_impl::notify_all()
{
unsigned signals = 0;
OSStatus lStatus = noErr;
lStatus = safe_enter_critical_region(m_mutex, kDurationForever,
m_mutex_mutex);
assert(lStatus == noErr);
if (m_waiting != 0) // the m_gate is already closed
{
if (m_blocked == 0)
{
lStatus = MPExitCriticalRegion(m_mutex);
assert(lStatus == noErr);
return;
}
m_waiting += (signals = m_blocked);
m_blocked = 0;
}
else
{
lStatus = safe_wait_on_semaphore(m_gate, kDurationForever);
assert(lStatus == noErr);
if (m_blocked > m_gone)
{
if (m_gone != 0)
{
m_blocked -= m_gone;
m_gone = 0;
}
signals = m_waiting = m_blocked;
m_blocked = 0;
}
else
{
lStatus = MPSignalSemaphore(m_gate);
assert(lStatus == noErr);
}
lStatus = MPExitCriticalRegion(m_mutex);
assert(lStatus == noErr);
while (signals)
{
lStatus = MPSignalSemaphore(m_queue);
assert(lStatus == noErr);
--signals;
}
}
}
void condition_impl::enter_wait()
{
OSStatus lStatus = noErr;
lStatus = safe_wait_on_semaphore(m_gate, kDurationForever);
assert(lStatus == noErr);
++m_blocked;
lStatus = MPSignalSemaphore(m_gate);
assert(lStatus == noErr);
}
void condition_impl::do_wait()
{
OSStatus lStatus = noErr;
lStatus = safe_wait_on_semaphore(m_queue, kDurationForever);
assert(lStatus == noErr);
unsigned was_waiting=0;
unsigned was_gone=0;
lStatus = safe_enter_critical_region(m_mutex, kDurationForever,
m_mutex_mutex);
assert(lStatus == noErr);
was_waiting = m_waiting;
was_gone = m_gone;
if (was_waiting != 0)
{
if (--m_waiting == 0)
{
if (m_blocked != 0)
{
lStatus = MPSignalSemaphore(m_gate); // open m_gate
assert(lStatus == noErr);
was_waiting = 0;
}
else if (m_gone != 0)
m_gone = 0;
}
}
else if (++m_gone == ((std::numeric_limits<unsigned>::max)() / 2))
{
// timeout occurred, normalize the m_gone count
// this may occur if many calls to wait with a timeout are made and
// no call to notify_* is made
lStatus = safe_wait_on_semaphore(m_gate, kDurationForever);
assert(lStatus == noErr);
m_blocked -= m_gone;
lStatus = MPSignalSemaphore(m_gate);
assert(lStatus == noErr);
m_gone = 0;
}
lStatus = MPExitCriticalRegion(m_mutex);
assert(lStatus == noErr);
if (was_waiting == 1)
{
for (/**/ ; was_gone; --was_gone)
{
// better now than spurious later
lStatus = safe_wait_on_semaphore(m_queue, kDurationForever);
assert(lStatus == noErr);
}
lStatus = MPSignalSemaphore(m_gate);
assert(lStatus == noErr);
}
}
bool condition_impl::do_timed_wait(const xtime& xt)
{
int milliseconds;
to_duration(xt, milliseconds);
OSStatus lStatus = noErr;
lStatus = safe_wait_on_semaphore(m_queue, milliseconds);
assert(lStatus == noErr || lStatus == kMPTimeoutErr);
bool ret = (lStatus == noErr);
unsigned was_waiting=0;
unsigned was_gone=0;
lStatus = safe_enter_critical_region(m_mutex, kDurationForever,
m_mutex_mutex);
assert(lStatus == noErr);
was_waiting = m_waiting;
was_gone = m_gone;
if (was_waiting != 0)
{
if (!ret) // timeout
{
if (m_blocked != 0)
--m_blocked;
else
++m_gone; // count spurious wakeups
}
if (--m_waiting == 0)
{
if (m_blocked != 0)
{
lStatus = MPSignalSemaphore(m_gate); // open m_gate
assert(lStatus == noErr);
was_waiting = 0;
}
else if (m_gone != 0)
m_gone = 0;
}
}
else if (++m_gone == ((std::numeric_limits<unsigned>::max)() / 2))
{
// timeout occurred, normalize the m_gone count
// this may occur if many calls to wait with a timeout are made and
// no call to notify_* is made
lStatus = safe_wait_on_semaphore(m_gate, kDurationForever);
assert(lStatus == noErr);
m_blocked -= m_gone;
lStatus = MPSignalSemaphore(m_gate);
assert(lStatus == noErr);
m_gone = 0;
}
lStatus = MPExitCriticalRegion(m_mutex);
assert(lStatus == noErr);
if (was_waiting == 1)
{
for (/**/ ; was_gone; --was_gone)
{
// better now than spurious later
lStatus = safe_wait_on_semaphore(m_queue, kDurationForever);
assert(lStatus == noErr);
}
lStatus = MPSignalSemaphore(m_gate);
assert(lStatus == noErr);
}
return ret;
}
#endif
} // namespace detail
} // namespace boost
// Change Log:
// 8 Feb 01 WEKEMPF Initial version.
// 22 May 01 WEKEMPF Modified to use xtime for time outs.
// 3 Jan 03 WEKEMPF Modified for DLL implementation.
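All three back-ends above (Win32 semaphores, pthreads, MP tasks) deliberately tolerate and count spurious wakeups, so client code is expected to re-test its predicate in a loop. A minimal sketch of that contract, assuming the boost::condition / boost::mutex interface this file implements (the flag and the two functions are illustrative):

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

boost::mutex guard;
boost::condition data_ready;
bool ready = false;

void consumer()
{
    boost::mutex::scoped_lock lock(guard);
    while (!ready)               // re-test: wait() may wake spuriously
        data_ready.wait(lock);   // atomically unlocks, blocks, relocks on return
    // ... consume the data ...
}

void producer()
{
    boost::mutex::scoped_lock lock(guard);
    ready = true;
    data_ready.notify_one();
}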

View File

@@ -1,124 +0,0 @@
// Copyright (C) 2001-2003
// William E. Kempf
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#include <boost/thread/exceptions.hpp>
#include <cstring>
#include <string>
namespace boost {
thread_exception::thread_exception()
: m_sys_err(0)
{
}
thread_exception::thread_exception(int sys_err_code)
: m_sys_err(sys_err_code)
{
}
thread_exception::~thread_exception() throw()
{
}
int thread_exception::native_error() const
{
return m_sys_err;
}
lock_error::lock_error()
{
}
lock_error::lock_error(int sys_err_code)
: thread_exception(sys_err_code)
{
}
lock_error::~lock_error() throw()
{
}
const char* lock_error::what() const throw()
{
return "boost::lock_error";
}
thread_resource_error::thread_resource_error()
{
}
thread_resource_error::thread_resource_error(int sys_err_code)
: thread_exception(sys_err_code)
{
}
thread_resource_error::~thread_resource_error() throw()
{
}
const char* thread_resource_error::what() const throw()
{
return "boost::thread_resource_error";
}
unsupported_thread_option::unsupported_thread_option()
{
}
unsupported_thread_option::unsupported_thread_option(int sys_err_code)
: thread_exception(sys_err_code)
{
}
unsupported_thread_option::~unsupported_thread_option() throw()
{
}
const char* unsupported_thread_option::what() const throw()
{
return "boost::unsupported_thread_option";
}
invalid_thread_argument::invalid_thread_argument()
{
}
invalid_thread_argument::invalid_thread_argument(int sys_err_code)
: thread_exception(sys_err_code)
{
}
invalid_thread_argument::~invalid_thread_argument() throw()
{
}
const char* invalid_thread_argument::what() const throw()
{
return "boost::invalid_thread_argument";
}
thread_permission_error::thread_permission_error()
{
}
thread_permission_error::thread_permission_error(int sys_err_code)
: thread_exception(sys_err_code)
{
}
thread_permission_error::~thread_permission_error() throw()
{
}
const char* thread_permission_error::what() const throw()
{
return "boost::thread_permission_error";
}
} // namespace boost

View File

@@ -1,8 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#define TARGET_CARBON 1

View File

@@ -1,66 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "delivery_man.hpp"
#include "os.hpp"
#include "execution_context.hpp"
namespace boost {
namespace threads {
namespace mac {
namespace detail {
delivery_man::delivery_man():
m_pPackage(NULL),
m_pSemaphore(kInvalidID),
m_bPackageWaiting(false)
{
assert(at_st());
OSStatus lStatus = MPCreateSemaphore(1UL, 0UL, &m_pSemaphore);
// TODO - throw on error here
assert(lStatus == noErr);
}
delivery_man::~delivery_man()
{
assert(m_bPackageWaiting == false);
OSStatus lStatus = MPDeleteSemaphore(m_pSemaphore);
assert(lStatus == noErr);
}
void delivery_man::accept_deliveries()
{
if(m_bPackageWaiting)
{
assert(m_pPackage != NULL);
m_pPackage->accept();
m_pPackage = NULL;
m_bPackageWaiting = false;
// signal to the thread making the call that we're done
OSStatus lStatus = MPSignalSemaphore(m_pSemaphore);
assert(lStatus == noErr);
}
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,84 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_DELIVERY_MAN_MJM012402_HPP
#define BOOST_DELIVERY_MAN_MJM012402_HPP
#include <boost/function.hpp>
#include <boost/utility.hpp>
#include <boost/thread/mutex.hpp>
#include "package.hpp"
#include <Multiprocessing.h>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// class delivery_man is intended to move boost::function objects from MP tasks to
// other execution contexts (such as deferred task time or system task time).
class delivery_man: private noncopyable
{
public:
delivery_man();
~delivery_man();
public:
template<class R>
R deliver(function<R> &rFunctor);
void accept_deliveries();
private:
base_package *m_pPackage;
mutex m_oMutex;
MPSemaphoreID m_pSemaphore;
bool m_bPackageWaiting;
};
template<class R>
R delivery_man::deliver(function<R> &rFunctor)
{
assert(at_mp());
// lock our mutex
mutex::scoped_lock oLock(m_oMutex);
// create a package and save it
package<R> oPackage(rFunctor);
m_pPackage = &oPackage;
m_bPackageWaiting = true;
// wait on the semaphore
OSStatus lStatus = MPWaitOnSemaphore(m_pSemaphore, kDurationForever);
assert(lStatus == noErr);
return(oPackage.return_value());
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_DELIVERY_MAN_MJM012402_HPP

View File

@@ -1,93 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "dt_scheduler.hpp"
#include "ot_context.hpp"
#include <boost/thread/detail/singleton.hpp>
#include <OpenTransportProtocol.h>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
const OTTimeout k_ulTimerTaskDelay = 1UL;
dt_scheduler::dt_scheduler():
m_bReschedule(false),
m_uppTask(NULL),
m_lTask(0UL)
{
using ::boost::detail::thread::singleton;
ot_context &rContext(singleton<ot_context>::instance());
m_uppTask = NewOTProcessUPP(task_entry);
m_lTask = OTCreateTimerTaskInContext(m_uppTask, this, rContext.get_context());
}
dt_scheduler::~dt_scheduler()
{
OTDestroyTimerTask(m_lTask);
m_lTask = 0UL;
DisposeOTProcessUPP(m_uppTask);
m_uppTask = NULL;
}
void dt_scheduler::start_polling()
{
m_bReschedule = true;
schedule_task();
}
void dt_scheduler::stop_polling()
{
m_bReschedule = false;
}
void dt_scheduler::schedule_task()
{
if(m_bReschedule)
{
OTScheduleTimerTask(m_lTask, k_ulTimerTaskDelay);
}
}
/*static*/ pascal void dt_scheduler::task_entry(void *pRefCon)
{
dt_scheduler *pThis = reinterpret_cast<dt_scheduler *>(pRefCon);
assert(pThis != NULL);
pThis->task();
}
void dt_scheduler::task()
{
periodic_function();
schedule_task();
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,63 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_DT_SCHEDULER_MJM012402_HPP
#define BOOST_DT_SCHEDULER_MJM012402_HPP
#include "periodical.hpp"
#include <OpenTransport.h>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// class dt_scheduler calls its pure-virtual periodic_function method periodically at
// deferred task time. This is generally 1kHz under Mac OS 9.
class dt_scheduler
{
public:
dt_scheduler();
virtual ~dt_scheduler();
protected:
void start_polling();
void stop_polling();
private:
virtual void periodic_function() = 0;
private:
void schedule_task();
static pascal void task_entry(void *pRefCon);
void task();
private:
bool m_bReschedule;
OTProcessUPP m_uppTask;
long m_lTask;
};
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_DT_SCHEDULER_MJM012402_HPP

View File

@@ -1,60 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include <Debugging.h>
#include <Multiprocessing.h>
#include "execution_context.hpp"
#include "init.hpp"
namespace boost {
namespace threads {
namespace mac {
execution_context_t execution_context()
{
// make sure that MP services are available the first time through
static bool bIgnored = detail::thread_init();
// first check if we're an MP task
if(MPTaskIsPreemptive(kInvalidID))
{
return(k_eExecutionContextMPTask);
}
#if TARGET_CARBON
// Carbon has TaskLevel
UInt32 ulLevel = TaskLevel();
if(ulLevel == 0UL)
{
return(k_eExecutionContextSystemTask);
}
if(ulLevel & kInDeferredTaskMask)
{
return(k_eExecutionContextDeferredTask);
}
return(k_eExecutionContextOther);
#else
// this can be implemented using TaskLevel if you don't mind linking against
// DebugLib (and therefore breaking Mac OS 8.6 support), or CurrentExecutionLevel.
# error execution_context unimplemented
#endif
}
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,47 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_EXECUTION_CONTEXT_MJM012402_HPP
#define BOOST_EXECUTION_CONTEXT_MJM012402_HPP
namespace boost {
namespace threads {
namespace mac {
// utility functions for figuring out what context your code is executing in.
// Bear in mind that at_mp and in_blue are the only functions guarenteed by
// Apple to work. There is simply no way of being sure that you will not get
// false readings about task level at interrupt time in blue.
typedef enum {
k_eExecutionContextSystemTask,
k_eExecutionContextDeferredTask,
k_eExecutionContextMPTask,
k_eExecutionContextOther
} execution_context_t;
execution_context_t execution_context();
inline bool at_st()
{ return(execution_context() == k_eExecutionContextSystemTask); }
inline bool at_mp()
{ return(execution_context() == k_eExecutionContextMPTask); }
inline bool in_blue()
{ return(!at_mp()); }
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_EXECUTION_CONTEXT_MJM012402_HPP
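A short sketch of how these helpers are meant to be used; the guarded routine is illustrative only:

#include "execution_context.hpp"
#include <cassert>

// Only touch system-task-only services from the cooperative ("blue") environment.
void poke_system_services()
{
    using namespace boost::threads::mac;
    assert(in_blue());                                        // never from an MP task
    if (execution_context() == k_eExecutionContextSystemTask)
    {
        // safe to call routines that require system task time here
    }
}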

View File

@@ -1,58 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "init.hpp"
#include "remote_call_manager.hpp"
#include <boost/thread/detail/singleton.hpp>
#include <Multiprocessing.h>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
namespace {
// force these to get called by the end of static initialization time.
static bool g_bInitialized = (thread_init() && create_singletons());
}
bool thread_init()
{
static bool bResult = MPLibraryIsLoaded();
return(bResult);
}
bool create_singletons()
{
using ::boost::detail::thread::singleton;
singleton<remote_call_manager>::instance();
return(true);
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,34 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_INIT_MJM012402_HPP
#define BOOST_INIT_MJM012402_HPP
namespace boost {
namespace threads {
namespace mac {
namespace detail {
bool thread_init();
bool create_singletons();
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_INIT_MJM012402_HPP

View File

@@ -1,24 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include <cassert>
#include <cstdio>
#include <MacTypes.h>
#include "remote_calls.hpp"
// this function will be called when an assertion fails. We redirect the assertion
// to DebugStr (MacsBug under Mac OS 1.x-9.x, Console under Mac OS X).
void __assertion_failed(char const *pszAssertion, char const *pszFile, int nLine)
{
using std::snprintf;
unsigned char strlDebug[sizeof(Str255) + 1];
char *pszDebug = reinterpret_cast<char *>(&strlDebug[1]);
strlDebug[0] = snprintf(pszDebug, sizeof(Str255), "assertion failed: \"%s\", %s, line %d", pszAssertion, pszFile, nLine);
boost::threads::mac::dt_remote_call(DebugStr, static_cast<ConstStringPtr>(strlDebug));
}

View File

@@ -1,128 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
//
// includes
//
#include <abort_exit.h>
#include <console.h>
#include <console_io.h>
#include <misc_io.h>
#include <SIOUX.h>
#include "remote_calls.hpp"
//
// using declarations
//
using std::__file_handle;
using std::__idle_proc;
using std::__io_error;
using std::__no_io_error;
using std::size_t;
using boost::threads::mac::st_remote_call;
//
// prototypes
//
static bool check_console();
static int do_read_console(__file_handle ulHandle, unsigned char *pBuffer, size_t *pCount, __idle_proc pfnIdleProc);
static int do_write_console(__file_handle ulHandle, unsigned char *pBuffer, size_t *pCount, __idle_proc pfnIdleProc);
//
// MSL function replacements
//
// these two functions are called by cin and cout, respectively, as well as by (all?)
// other functions in MSL that do console I/O. All that they do is ask the remote
// call manager to ensure that their guts are called at system task time.
int __read_console(__file_handle handle, unsigned char * buffer, size_t * count, __idle_proc idle_proc)
{
return(st_remote_call(do_read_console, handle, buffer, count, idle_proc));
}
int __write_console(__file_handle handle, unsigned char * buffer, size_t * count, __idle_proc idle_proc)
{
return(st_remote_call(do_write_console, handle, buffer, count, idle_proc));
}
//
// implementations
//
static bool check_console()
{
static bool s_bHaveConsole(false);
static bool s_bWontHaveConsole(false);
if(s_bHaveConsole)
{
return(true);
}
if(s_bWontHaveConsole == false)
{
__stdio_atexit();
if(InstallConsole(0) != 0)
{
s_bWontHaveConsole = true;
return(false);
}
__console_exit = RemoveConsole;
s_bHaveConsole = true;
return(true);
}
return(false);
}
int do_read_console(__file_handle /*ulHandle*/, unsigned char *pBuffer, size_t *pCount, __idle_proc /*pfnIdleProc*/)
{
assert(pCount != NULL);
assert(pBuffer != NULL || *pCount == 0UL);
if(check_console() == false)
{
return(__io_error);
}
std::fflush(stdout);
long lCount = ReadCharsFromConsole(reinterpret_cast<char *>(pBuffer), static_cast<long>(*pCount));
*pCount = static_cast<size_t>(lCount);
if(lCount == -1L)
{
return(__io_error);
}
return(__no_io_error);
}
int do_write_console(__file_handle /*ulHandle*/, unsigned char *pBuffer, size_t *pCount, __idle_proc /*pfnIdleProc*/)
{
if(check_console() == false)
{
return(__io_error);
}
long lCount = WriteCharsToConsole(reinterpret_cast<char *>(pBuffer), static_cast<long>(*pCount));
*pCount = static_cast<size_t>(lCount);
if(lCount == -1L)
{
return(__io_error);
}
return(__no_io_error);
}

View File

@@ -1,52 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
//
// includes
//
#include <cstdlib>
#include <Multiprocessing.h>
//
// using declarations
//
using std::size_t;
extern "C" {
//
// prototypes
//
void *malloc(size_t ulSize);
void free(void *pBlock);
}
//
// MSL function replacements
//
// all allocation/deallocation currently goes through MPAllocateAligned/MPFree. This
// solution is sub-optimal at best, but will have to do for now.
void *malloc(size_t ulSize)
{
static bool bIgnored = MPLibraryIsLoaded();
return(MPAllocateAligned(ulSize, kMPAllocateDefaultAligned, 0UL));
}
void free(void *pBlock)
{
if(pBlock == NULL) return;
MPFree(pBlock);
}

View File

@@ -1,99 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
//
// includes
//
#include <new>
#include <Multiprocessing.h>
//
// using declarations
//
using std::size_t;
using std::bad_alloc;
using std::nothrow_t;
using std::nothrow;
//
// local utility functions
//
// all allocation/deallocation currently goes through MPAllocateAligned/MPFree. This
// solution is sub-optimal at best, but will have to do for now.
inline static void *allocate(size_t ulSize, const nothrow_t &)
{
static bool bIgnored = MPLibraryIsLoaded();
return(MPAllocateAligned(ulSize, kMPAllocateDefaultAligned, 0UL));
}
inline static void *allocate(size_t ulSize)
{
void *pBlock = allocate(ulSize, nothrow);
if(pBlock == NULL)
throw(bad_alloc());
return(pBlock);
}
inline static void deallocate(void *pBlock)
{
if(pBlock == NULL) return;
MPFree(pBlock);
}
//
// global operators
//
void *operator new(size_t ulSize)
{
return(allocate(ulSize));
}
void *operator new[](size_t ulSize)
{
return(allocate(ulSize));
}
void *operator new(size_t ulSize, const nothrow_t &rNoThrow)
{
return(allocate(ulSize, rNoThrow));
}
void *operator new[](size_t ulSize, const nothrow_t &rNoThrow)
{
return(allocate(ulSize, rNoThrow));
}
void operator delete(void *pBlock)
{
deallocate(pBlock);
}
void operator delete[](void *pBlock)
{
deallocate(pBlock);
}
void operator delete(void *pBlock, const nothrow_t &)
{
deallocate(pBlock);
}
void operator delete[](void *pBlock, const nothrow_t &)
{
deallocate(pBlock);
}

View File

@@ -1,150 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include <cassert>
// we include timesize.mac.h to get whether or not __TIMESIZE_DOUBLE__ is
// defined. This is not safe, given that __TIMESIZE_DOUBLE__ affects MSL
// at MSL's compile time, not ours, so be forgiving if you have changed it
// since you have built MSL.
#include <timesize.mac.h>
#include <time.h>
#include <boost/thread/detail/force_cast.hpp>
#include <boost/thread/xtime.hpp>
#include "execution_context.hpp"
#include <DriverServices.h>
extern "C"
{
clock_t __get_clock();
time_t __get_time();
int __to_gm_time(time_t *pTime);
int __is_dst();
}
static inline uint64_t get_nanoseconds()
{
using boost::detail::thread::force_cast;
return(force_cast<uint64_t>(AbsoluteToNanoseconds(UpTime())));
}
#ifdef __TIMESIZE_DOUBLE__
// return number of microseconds since startup as a double
clock_t __get_clock()
{
static const double k_dNanosecondsPerMicrosecond(1000.0);
return(get_nanoseconds() / k_dNanosecondsPerMicrosecond);
}
#else
// return number of ticks (60th of a second) since startup as a long
clock_t __get_clock()
{
static const uint64_t k_ullTicksPerSecond(60ULL);
static const uint64_t k_ullNanosecondsPerSecond(1000ULL * 1000ULL * 1000ULL);
static const uint64_t k_ullNanosecondsPerTick(k_ullNanosecondsPerSecond / k_ullTicksPerSecond);
return(get_nanoseconds() / k_ullNanosecondsPerTick);
}
#endif
// return number of seconds elapsed since Jan 1, 1970
time_t __get_time()
{
boost::xtime sTime;
int nType = boost::xtime_get(&sTime, boost::TIME_UTC);
assert(nType == boost::TIME_UTC);
return(static_cast<time_t>(sTime.sec));
}
static inline MachineLocation &read_location()
{
static MachineLocation s_sLocation;
assert(boost::threads::mac::at_st());
ReadLocation(&s_sLocation);
return(s_sLocation);
}
static inline MachineLocation &get_location()
{
static MachineLocation &s_rLocation(read_location());
return(s_rLocation);
}
// force the machine location to be cached at static initialization
static MachineLocation &g_rIgnored(get_location());
static inline long calculate_delta()
{
MachineLocation &rLocation(get_location());
// gmtDelta is a 24-bit, signed integer. We need to strip out the lower 24 bits,
// then sign-extend what we have.
long lDelta = rLocation.u.gmtDelta & 0x00ffffffL;
if((lDelta & 0x00800000L) != 0L)
{
lDelta |= 0xFF000000;
}
return(lDelta);
}
static inline bool check_if_location_is_broken()
{
MachineLocation &rLocation(get_location());
if(rLocation.latitude == 0 && rLocation.longitude == 0 && rLocation.u.gmtDelta == 0)
return(true);
return(false);
}
static inline bool location_is_broken()
{
static bool s_bLocationIsBroken(check_if_location_is_broken());
return(s_bLocationIsBroken);
}
// translate time to GMT
int __to_gm_time(time_t *pTime)
{
if(location_is_broken())
{
return(0);
}
static long s_lDelta(calculate_delta());
*pTime -= s_lDelta;
return(1);
}
static inline bool is_daylight_savings_time()
{
MachineLocation &rLocation(get_location());
return(rLocation.u.dlsDelta != 0);
}
// check if we're in daylight savings time
int __is_dst()
{
if(location_is_broken())
{
return(-1);
}
static bool bIsDaylightSavingsTime(is_daylight_savings_time());
return(static_cast<int>(bIsDaylightSavingsTime));
}
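For orientation, the 24-bit sign extension performed by calculate_delta() above works as in the sketch below; fixed-width types are used to keep the arithmetic unambiguous, and the sample value is illustrative:

#include <boost/cstdint.hpp>

// Extract the signed 24-bit gmtDelta field and sign-extend it to 32 bits.
inline boost::int32_t sign_extend_24(boost::uint32_t packed_gmt_delta)
{
    boost::uint32_t delta = packed_gmt_delta & 0x00FFFFFFu;   // keep the low 24 bits
    if (delta & 0x00800000u)                                  // bit 23 is the sign bit
        delta |= 0xFF000000u;                                 // propagate it into the top byte
    return static_cast<boost::int32_t>(delta);
}
// e.g. a stored gmtDelta of 0xFF8F80 (24-bit two's complement) yields -28800 seconds,
// i.e. an offset of eight hours west of GMT.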

View File

@@ -1,57 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "os.hpp"
#include <cassert>
#include <Gestalt.h>
namespace boost {
namespace threads {
namespace mac {
namespace os {
// read the OS version from Gestalt
static inline long get_version()
{
long lVersion;
OSErr nErr = Gestalt(gestaltSystemVersion, &lVersion);
assert(nErr == noErr);
return(lVersion);
}
// check if we're running under Mac OS X and cache that information
bool x()
{
static bool bX = (version() >= 0x1000);
return(bX);
}
// read the OS version and cache it
long version()
{
static long lVersion = get_version();
return(lVersion);
}
} // namespace os
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,37 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_OS_MJM012402_HPP
#define BOOST_OS_MJM012402_HPP
namespace boost {
namespace threads {
namespace mac {
namespace os {
// functions to determine the OS environment. With namespaces, you get a cute call:
// mac::os::x
bool x();
long version();
} // namespace os
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_OS_MJM012402_HPP

View File

@@ -1,46 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "ot_context.hpp"
#include "execution_context.hpp"
#include <cassert>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
ot_context::ot_context()
{
assert(at_st());
OSStatus lStatus = InitOpenTransportInContext(0UL, &m_pContext);
// TODO - throw on error
assert(lStatus == noErr);
}
ot_context::~ot_context()
{
CloseOpenTransportInContext(m_pContext);
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,58 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_OT_CONTEXT_MJM012402_HPP
#define BOOST_OT_CONTEXT_MJM012402_HPP
#include <OpenTransport.h>
#include <boost/utility.hpp>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// class ot_context is intended to be used only as a singleton. All that this class
// does is ask OpenTransport to create him an OTClientContextPtr, and then doles
// this out to anyone who wants it. ot_context should only be instantiated at
// system task time.
class ot_context: private noncopyable
{
protected:
ot_context();
~ot_context();
public:
OTClientContextPtr get_context();
private:
OTClientContextPtr m_pContext;
};
inline OTClientContextPtr ot_context::get_context()
{ return(m_pContext); }
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_OT_CONTEXT_MJM012402_HPP

View File

@@ -1,76 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_PACKAGE_MJM012402_HPP
#define BOOST_PACKAGE_MJM012402_HPP
namespace boost {
namespace threads {
namespace mac {
namespace detail {
class base_package: private noncopyable
{
public:
virtual void accept() = 0;
};
template<class R>
class package: public base_package
{
public:
inline package(function<R> &rFunctor):
m_rFunctor(rFunctor)
{ /* no-op */ }
inline ~package()
{ /* no-op */ }
virtual void accept()
{ m_oR = m_rFunctor(); }
inline R return_value()
{ return(m_oR); }
private:
function<R> &m_rFunctor;
R m_oR;
};
template<>
class package<void>: public base_package
{
public:
inline package(function<void> &rFunctor):
m_rFunctor(rFunctor)
{ /* no-op */ }
inline ~package()
{ /* no-op */ }
virtual void accept()
{ m_rFunctor(); }
inline void return_value()
{ return; }
private:
function<void> &m_rFunctor;
};
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_PACKAGE_MJM012402_HPP

View File

@@ -1,97 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_PERIODICAL_MJM012402_HPP
#define BOOST_PERIODICAL_MJM012402_HPP
#include <boost/function.hpp>
#include <boost/utility.hpp>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// class periodical inherits from its template parameter, which should follow the
// pattern set by classes dt_scheduler and st_scheduler. periodical knows how to
// call a boost::function, where the xx_scheduler classes only know how to call a
// member periodically.
template<class Scheduler>
class periodical: private noncopyable, private Scheduler
{
public:
periodical(function<void> &rFunction);
~periodical();
public:
void start();
void stop();
protected:
virtual void periodic_function();
private:
function<void> m_oFunction;
};
template<class Scheduler>
periodical<Scheduler>::periodical(function<void> &rFunction):
m_oFunction(rFunction)
{
// no-op
}
template<class Scheduler>
periodical<Scheduler>::~periodical()
{
stop();
}
template<class Scheduler>
void periodical<Scheduler>::start()
{
start_polling();
}
template<class Scheduler>
void periodical<Scheduler>::stop()
{
stop_polling();
}
template<class Scheduler>
inline void periodical<Scheduler>::periodic_function()
{
try
{
m_oFunction();
}
catch(...)
{
}
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_PERIODICAL_MJM012402_HPP

View File

@@ -1,9 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#define NDEBUG
#define TARGET_CARBON 1

View File

@@ -1,48 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "remote_call_manager.hpp"
#include <boost/bind.hpp>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
using detail::delivery_man;
remote_call_manager::remote_call_manager():
m_oDTDeliveryMan(),
m_oSTDeliveryMan(),
m_oDTFunction(bind(&delivery_man::accept_deliveries, &m_oDTDeliveryMan)),
m_oSTFunction(bind(&delivery_man::accept_deliveries, &m_oSTDeliveryMan)),
m_oDTPeriodical(m_oDTFunction),
m_oSTPeriodical(m_oSTFunction)
{
m_oDTPeriodical.start();
m_oSTPeriodical.start();
}
remote_call_manager::~remote_call_manager()
{
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,102 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_REMOTE_CALL_MANAGER_MJM012402_HPP
#define BOOST_REMOTE_CALL_MANAGER_MJM012402_HPP
#include <boost/utility.hpp>
#include "delivery_man.hpp"
#include "dt_scheduler.hpp"
#include "periodical.hpp"
#include "execution_context.hpp"
#include "st_scheduler.hpp"
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// class remote_call_manager is used by the remote call functions (dt_remote_call and
// st_remote_call) to execute functions in non-MP contexts.
class remote_call_manager: private noncopyable
{
protected:
remote_call_manager();
~remote_call_manager();
public:
template<class R>
R execute_at_dt(function<R> &rFunctor);
template<class R>
R execute_at_st(function<R> &rFunctor);
private:
template<class R>
static R execute_now(function<R> &rFunctor);
private:
delivery_man m_oDTDeliveryMan;
delivery_man m_oSTDeliveryMan;
function<void> m_oDTFunction;
function<void> m_oSTFunction;
periodical<dt_scheduler> m_oDTPeriodical;
periodical<st_scheduler> m_oSTPeriodical;
};
template<class R>
/*static*/ inline R remote_call_manager::execute_now(function<R> &rFunctor)
{
return(rFunctor());
}
template<>
/*static*/ inline void remote_call_manager::execute_now<void>(function<void> &rFunctor)
{
rFunctor();
}
template<class R>
inline R remote_call_manager::execute_at_dt(function<R> &rFunctor)
{
if(at_mp())
{
return(m_oDTDeliveryMan.deliver(rFunctor));
}
return(execute_now(rFunctor));
}
template<class R>
inline R remote_call_manager::execute_at_st(function<R> &rFunctor)
{
if(at_mp())
{
return(m_oSTDeliveryMan.deliver(rFunctor));
}
assert(at_st());
return(execute_now(rFunctor));
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_REMOTE_CALL_MANAGER_MJM012402_HPP

View File

@@ -1,157 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_REMOTE_CALLS_MJM012402_HPP
#define BOOST_REMOTE_CALLS_MJM012402_HPP
#include <boost/bind.hpp>
#include "remote_call_manager.hpp"
#include <boost/thread/detail/singleton.hpp>
// this file contains macros to generate functions with the signatures:
// ReturnType st_remote_call([pascal] ReturnType (*pfnFunction)(
// [Argument1Type[, Argument2Type[...]]])
// [, Argument1Type oArgument1[, Argument2Type oArgument2[...]]])
// and
// ReturnType dt_remote_call([pascal] ReturnType (*pfnFunction)(
// [Argument1Type[, Argument2Type[...]]])
// [, Argument1Type oArgument1[, Argument2Type oArgument2[...]]])
// in other words, identical to the function pointer versions of boost::bind, but
// with the return type returned. The purpose of these functions is to be able to
// request that a function be called at system task time or deferred task time, then
// sleep until it is called, and finally get back its return value.
#define BOOST_REMOTE_CALL_CLASS_LIST_0
#define BOOST_REMOTE_CALL_CLASS_LIST_1 BOOST_REMOTE_CALL_CLASS_LIST_0, class A1
#define BOOST_REMOTE_CALL_CLASS_LIST_2 BOOST_REMOTE_CALL_CLASS_LIST_1, class A2
#define BOOST_REMOTE_CALL_CLASS_LIST_3 BOOST_REMOTE_CALL_CLASS_LIST_2, class A3
#define BOOST_REMOTE_CALL_CLASS_LIST_4 BOOST_REMOTE_CALL_CLASS_LIST_3, class A4
#define BOOST_REMOTE_CALL_CLASS_LIST_5 BOOST_REMOTE_CALL_CLASS_LIST_4, class A5
#define BOOST_REMOTE_CALL_CLASS_LIST_6 BOOST_REMOTE_CALL_CLASS_LIST_5, class A6
#define BOOST_REMOTE_CALL_CLASS_LIST_7 BOOST_REMOTE_CALL_CLASS_LIST_6, class A7
#define BOOST_REMOTE_CALL_CLASS_LIST_8 BOOST_REMOTE_CALL_CLASS_LIST_7, class A8
#define BOOST_REMOTE_CALL_CLASS_LIST_9 BOOST_REMOTE_CALL_CLASS_LIST_8, class A9
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_0
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_1 BOOST_REMOTE_CALL_ARGUMENT_LIST_0 A1 oA1
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_2 BOOST_REMOTE_CALL_ARGUMENT_LIST_1, A2 oA2
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_3 BOOST_REMOTE_CALL_ARGUMENT_LIST_2, A3 oA3
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_4 BOOST_REMOTE_CALL_ARGUMENT_LIST_3, A4 oA4
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_5 BOOST_REMOTE_CALL_ARGUMENT_LIST_4, A5 oA5
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_6 BOOST_REMOTE_CALL_ARGUMENT_LIST_5, A6 oA6
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_7 BOOST_REMOTE_CALL_ARGUMENT_LIST_6, A7 oA7
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_8 BOOST_REMOTE_CALL_ARGUMENT_LIST_7, A8 oA8
#define BOOST_REMOTE_CALL_ARGUMENT_LIST_9 BOOST_REMOTE_CALL_ARGUMENT_LIST_8, A9 oA9
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_0
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_1 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_0, oA1
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_2 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_1, oA2
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_3 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_2, oA3
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_4 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_3, oA4
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_5 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_4, oA5
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_6 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_5, oA6
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_7 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_6, oA7
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_8 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_7, oA8
#define BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_9 BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_8, oA9
#define BOOST_REMOTE_CALL_COMMA_0
#define BOOST_REMOTE_CALL_COMMA_1 ,
#define BOOST_REMOTE_CALL_COMMA_2 ,
#define BOOST_REMOTE_CALL_COMMA_3 ,
#define BOOST_REMOTE_CALL_COMMA_4 ,
#define BOOST_REMOTE_CALL_COMMA_5 ,
#define BOOST_REMOTE_CALL_COMMA_6 ,
#define BOOST_REMOTE_CALL_COMMA_7 ,
#define BOOST_REMOTE_CALL_COMMA_8 ,
#define BOOST_REMOTE_CALL_COMMA_9 ,
// this is the macro that ties it all together. From here, we generate all forms of
// dt_remote_call and st_remote_call.
#define BOOST_REMOTE_CALL(context, stack, n) \
template<class R BOOST_REMOTE_CALL_CLASS_LIST_ ## n> \
inline R context ## _remote_call(stack R (*pfnF)( \
BOOST_REMOTE_CALL_ARGUMENT_LIST_ ## n) \
BOOST_REMOTE_CALL_COMMA_ ## n \
BOOST_REMOTE_CALL_ARGUMENT_LIST_ ## n) \
{ \
using ::boost::detail::thread::singleton; \
using detail::remote_call_manager; \
function<R> oFunc(bind(pfnF BOOST_REMOTE_CALL_FUNCTION_ARGUMENT_LIST_ ## n)); \
remote_call_manager &rManager(singleton<remote_call_manager>::instance()); \
return(rManager.execute_at_ ## context(oFunc)); \
}
namespace boost {
namespace threads {
namespace mac {
BOOST_REMOTE_CALL(st, , 0)
BOOST_REMOTE_CALL(st, , 1)
BOOST_REMOTE_CALL(st, , 2)
BOOST_REMOTE_CALL(st, , 3)
BOOST_REMOTE_CALL(st, , 4)
BOOST_REMOTE_CALL(st, , 5)
BOOST_REMOTE_CALL(st, , 6)
BOOST_REMOTE_CALL(st, , 7)
BOOST_REMOTE_CALL(st, , 8)
BOOST_REMOTE_CALL(st, , 9)
BOOST_REMOTE_CALL(dt, , 0)
BOOST_REMOTE_CALL(dt, , 1)
BOOST_REMOTE_CALL(dt, , 2)
BOOST_REMOTE_CALL(dt, , 3)
BOOST_REMOTE_CALL(dt, , 4)
BOOST_REMOTE_CALL(dt, , 5)
BOOST_REMOTE_CALL(dt, , 6)
BOOST_REMOTE_CALL(dt, , 7)
BOOST_REMOTE_CALL(dt, , 8)
BOOST_REMOTE_CALL(dt, , 9)
BOOST_REMOTE_CALL(st, pascal, 0)
BOOST_REMOTE_CALL(st, pascal, 1)
BOOST_REMOTE_CALL(st, pascal, 2)
BOOST_REMOTE_CALL(st, pascal, 3)
BOOST_REMOTE_CALL(st, pascal, 4)
BOOST_REMOTE_CALL(st, pascal, 5)
BOOST_REMOTE_CALL(st, pascal, 6)
BOOST_REMOTE_CALL(st, pascal, 7)
BOOST_REMOTE_CALL(st, pascal, 8)
BOOST_REMOTE_CALL(st, pascal, 9)
BOOST_REMOTE_CALL(dt, pascal, 0)
BOOST_REMOTE_CALL(dt, pascal, 1)
BOOST_REMOTE_CALL(dt, pascal, 2)
BOOST_REMOTE_CALL(dt, pascal, 3)
BOOST_REMOTE_CALL(dt, pascal, 4)
BOOST_REMOTE_CALL(dt, pascal, 5)
BOOST_REMOTE_CALL(dt, pascal, 6)
BOOST_REMOTE_CALL(dt, pascal, 7)
BOOST_REMOTE_CALL(dt, pascal, 8)
BOOST_REMOTE_CALL(dt, pascal, 9)
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_REMOTE_CALLS_MJM012402_HPP
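A minimal usage sketch (not part of the original sources; FlushMyCache is a hypothetical plain C function used only to illustrate the call form). From code running as an MP task, st_remote_call requests that the function run at system task time, blocks until it has run, and hands back its result:
OSErr FlushMyCache(short sRefNum);    // hypothetical; assumed to be defined elsewhere
OSErr lErr = boost::threads::mac::st_remote_call(&FlushMyCache, (short)0);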

View File

@@ -1,210 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include <DriverServices.h>
#include <Events.h>
#include <Multiprocessing.h>
#include <Threads.h>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/thread/detail/force_cast.hpp>
#include <limits>
#include "execution_context.hpp"
using boost::detail::thread::force_cast;
namespace boost {
namespace threads {
namespace mac {
namespace detail {
static OSStatus safe_wait(function<OSStatus, Duration> &rFunction, Duration lDuration);
// we call WaitNextEvent to give tasks that own the resource the blue task (the
// main application task) is waiting on some system task time, in case they are
// blocked on an ST remote call (or a memory allocation, for that matter).
static void idle()
{
if(at_st())
{
EventRecord sEvent;
bool bEvent = WaitNextEvent(0U, &sEvent, 0UL, NULL);
}
}
OSStatus safe_wait_on_semaphore(MPSemaphoreID pSemaphoreID, Duration lDuration)
{
function<OSStatus, Duration> oWaitOnSemaphore;
oWaitOnSemaphore = bind(MPWaitOnSemaphore, pSemaphoreID, _1);
return(safe_wait(oWaitOnSemaphore, lDuration));
}
OSStatus safe_enter_critical_region(MPCriticalRegionID pCriticalRegionID, Duration lDuration, MPCriticalRegionID pCriticalRegionCriticalRegionID/* = kInvalidID*/)
{
if(pCriticalRegionCriticalRegionID != kInvalidID)
{
if(at_mp())
{
// enter the critical region's critical region
OSStatus lStatus = noErr;
AbsoluteTime sExpiration;
if(lDuration != kDurationImmediate && lDuration != kDurationForever)
{
sExpiration = AddDurationToAbsolute(lDuration, UpTime());
}
lStatus = MPEnterCriticalRegion(pCriticalRegionCriticalRegionID, lDuration);
assert(lStatus == noErr || lStatus == kMPTimeoutErr);
if(lStatus == noErr)
{
// calculate a new duration
if(lDuration != kDurationImmediate && lDuration != kDurationForever)
{
// check if we have any time left
AbsoluteTime sUpTime(UpTime());
if(force_cast<uint64_t>(sExpiration) > force_cast<uint64_t>(sUpTime))
{
// reset our duration to our remaining time
lDuration = AbsoluteDeltaToDuration(sExpiration, sUpTime);
}
else
{
// no time left
lDuration = kDurationImmediate;
}
}
// if we entered the critical region, exit it again
lStatus = MPExitCriticalRegion(pCriticalRegionCriticalRegionID);
assert(lStatus == noErr);
}
else
{
// otherwise, give up
return(lStatus);
}
}
else
{
// if we're at system task time, try to enter the critical region's critical
// region until we succeed. MP tasks will block on this until we let it go.
OSStatus lStatus;
do
{
lStatus = MPEnterCriticalRegion(pCriticalRegionCriticalRegionID, kDurationImmediate);
} while(lStatus == kMPTimeoutErr);
assert(lStatus == noErr);
}
}
// try to enter the critical region
function<OSStatus, Duration> oEnterCriticalRegion;
oEnterCriticalRegion = bind(MPEnterCriticalRegion, pCriticalRegionID, _1);
OSStatus lStatus = safe_wait(oEnterCriticalRegion, lDuration);
// if we entered the critical region's critical region to get the critical region,
// exit the critical region's critical region.
if(pCriticalRegionCriticalRegionID != kInvalidID && at_mp() == false)
{
lStatus = MPExitCriticalRegion(pCriticalRegionCriticalRegionID);
assert(lStatus == noErr);
}
return(lStatus);
}
OSStatus safe_wait_on_queue(MPQueueID pQueueID, void **pParam1, void **pParam2, void **pParam3, Duration lDuration)
{
function<OSStatus, Duration> oWaitOnQueue;
oWaitOnQueue = bind(MPWaitOnQueue, pQueueID, pParam1, pParam2, pParam3, _1);
return(safe_wait(oWaitOnQueue, lDuration));
}
OSStatus safe_delay_until(AbsoluteTime *pWakeUpTime)
{
if(execution_context() == k_eExecutionContextMPTask)
{
return(MPDelayUntil(pWakeUpTime));
}
else
{
uint64_t ullWakeUpTime = force_cast<uint64_t>(*pWakeUpTime);
while(force_cast<uint64_t>(UpTime()) < ullWakeUpTime)
{
idle();
}
return(noErr);
}
}
OSStatus safe_wait(function<OSStatus, Duration> &rFunction, Duration lDuration)
{
if(execution_context() == k_eExecutionContextMPTask)
{
return(rFunction(lDuration));
}
else
{
uint64_t ullExpiration = 0ULL;
// get the expiration time in UpTime units
if(lDuration == kDurationForever)
{
ullExpiration = (::std::numeric_limits<uint64_t>::max)();
}
else if(lDuration == kDurationImmediate)
{
ullExpiration = force_cast<uint64_t>(UpTime());
}
else
{
AbsoluteTime sExpiration = AddDurationToAbsolute(lDuration, UpTime());
ullExpiration = force_cast<uint64_t>(sExpiration);
}
OSStatus lStatus;
bool bExpired = false;
do
{
lStatus = rFunction(kDurationImmediate);
// mm - "if" #if 0'd out to allow task time to threads blocked on I/O
#if 0
if(lStatus == kMPTimeoutErr)
#endif
{
idle();
}
if(lDuration != kDurationForever)
{
bExpired = (force_cast<uint64_t>(UpTime()) >= ullExpiration);
}
} while(lStatus == kMPTimeoutErr && bExpired == false);
return(lStatus);
}
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,41 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_SAFE_MJM012402_HPP
#define BOOST_SAFE_MJM012402_HPP
#include <Multiprocessing.h>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// these functions are used to wait in an execution context-independent manner. All of these
// functions are both MP- and ST-safe.
OSStatus safe_wait_on_semaphore(MPSemaphoreID pSemaphoreID, Duration lDuration);
OSStatus safe_enter_critical_region(MPCriticalRegionID pCriticalRegionID, Duration lDuration, MPCriticalRegionID pCriticalRegionCriticalRegionID = kInvalidID);
OSStatus safe_wait_on_queue(MPQueueID pQueueID, void **pParam1, void **pParam2, void **pParam3, Duration lDuration);
OSStatus safe_delay_until(AbsoluteTime *pWakeUpTime);
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_SAFE_MJM012402_HPP
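A brief usage sketch (illustrative, not from the original sources; pSemaphoreID is assumed to be an MPSemaphoreID created earlier with MPCreateSemaphore). The call blocks for up to 100 ms whether the caller is an MP task or running at system task time:
OSStatus lStatus = boost::threads::mac::detail::safe_wait_on_semaphore(pSemaphoreID, 100 * kDurationMillisecond);
if(lStatus == kMPTimeoutErr)
{
// the semaphore was not signalled within the timeout
}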

View File

@@ -1,47 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "scoped_critical_region.hpp"
#include "init.hpp"
#include <cassert>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
scoped_critical_region::scoped_critical_region():
m_pCriticalRegionID(kInvalidID)
{
static bool bIgnored = thread_init();
OSStatus lStatus = MPCreateCriticalRegion(&m_pCriticalRegionID);
if(lStatus != noErr || m_pCriticalRegionID == kInvalidID)
throw(thread_resource_error());
}
scoped_critical_region::~scoped_critical_region()
{
OSStatus lStatus = MPDeleteCriticalRegion(m_pCriticalRegionID);
assert(lStatus == noErr);
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,63 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_SCOPED_CRITICAL_REGION_MJM012402_HPP
#define BOOST_SCOPED_CRITICAL_REGION_MJM012402_HPP
#include <boost/thread/exceptions.hpp>
#include <Multiprocessing.h>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// class scoped_critical_region probably needs a new name. Although the current name
// is accurate, it can be read to mean that a critical region is entered for the
// current scope. In reality, a critical region is _created_ for the current scope.
// This class is intended as a replacement for MPCriticalRegionID that will
// automatically create and dispose of itself.
class scoped_critical_region
{
public:
scoped_critical_region();
~scoped_critical_region();
public:
operator const MPCriticalRegionID &() const;
const MPCriticalRegionID &get() const;
private:
MPCriticalRegionID m_pCriticalRegionID;
};
// these are inlined for speed.
inline scoped_critical_region::operator const MPCriticalRegionID &() const
{ return(m_pCriticalRegionID); }
inline const MPCriticalRegionID &scoped_critical_region::get() const
{ return(m_pCriticalRegionID); }
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_SCOPED_CRITICAL_REGION_MJM012402_HPP
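A short sketch of the intended use (illustrative only): the critical region is created in the constructor, entered and exited explicitly through the Multiprocessing API, and disposed of automatically when the wrapper leaves scope.
using boost::threads::mac::detail::scoped_critical_region;
scoped_critical_region oRegion;                          // MPCreateCriticalRegion in the ctor
OSStatus lStatus = MPEnterCriticalRegion(oRegion.get(), kDurationForever);
// ... work protected by the region ...
lStatus = MPExitCriticalRegion(oRegion.get());
assert(lStatus == noErr);
// MPDeleteCriticalRegion runs in the destructor when oRegion goes out of scope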

View File

@@ -1,85 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "st_scheduler.hpp"
#include <cassert>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
#if TARGET_CARBON
st_scheduler::st_scheduler():
m_uppTask(NULL),
m_pTimer(NULL)
{
m_uppTask = NewEventLoopTimerUPP(task_entry);
// TODO - throw on error
assert(m_uppTask != NULL);
}
st_scheduler::~st_scheduler()
{
DisposeEventLoopTimerUPP(m_uppTask);
m_uppTask = NULL;
}
void st_scheduler::start_polling()
{
assert(m_pTimer == NULL);
OSStatus lStatus = InstallEventLoopTimer(GetMainEventLoop(),
0 * kEventDurationSecond,
kEventDurationMillisecond,
m_uppTask,
this,
&m_pTimer);
// TODO - throw on error
assert(lStatus == noErr);
}
void st_scheduler::stop_polling()
{
assert(m_pTimer != NULL);
OSStatus lStatus = RemoveEventLoopTimer(m_pTimer);
assert(lStatus == noErr);
m_pTimer = NULL;
}
/*static*/ pascal void st_scheduler::task_entry(EventLoopTimerRef /*pTimer*/, void *pRefCon)
{
st_scheduler *pThis = reinterpret_cast<st_scheduler *>(pRefCon);
assert(pThis != NULL);
pThis->task();
}
void st_scheduler::task()
{
periodic_function();
}
#else
# error st_scheduler unimplemented!
#endif
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,67 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_ST_SCHEDULER_MJM012402_HPP
#define BOOST_ST_SCHEDULER_MJM012402_HPP
#include <CarbonEvents.h>
namespace boost {
namespace threads {
namespace mac {
namespace detail {
// class st_scheduler calls its pure-virtual periodic_function method periodically at
// system task time. This is generally 40Hz under Mac OS 9.
class st_scheduler
{
public:
st_scheduler();
virtual ~st_scheduler();
protected:
void start_polling();
void stop_polling();
private:
virtual void periodic_function() = 0;
#if TARGET_CARBON
// use event loop timers under Carbon
private:
static pascal void task_entry(EventLoopTimerRef pTimer, void *pRefCon);
void task();
private:
EventLoopTimerUPP m_uppTask;
EventLoopTimerRef m_pTimer;
#else
// this can be implemented using OT system tasks. This would be mostly a copy-and-
// paste of the dt_scheduler code, replacing DeferredTask with SystemTask and DT
// with ST.
# error st_scheduler unimplemented!
#endif
};
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_ST_SCHEDULER_MJM012402_HPP
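An illustrative subclass (heartbeat is hypothetical, not from the original sources) showing how the scheduler is meant to be used: derive from st_scheduler, implement periodic_function, and bracket the polling with start_polling/stop_polling.
class heartbeat: public boost::threads::mac::detail::st_scheduler
{
public:
heartbeat() { start_polling(); }
~heartbeat() { stop_polling(); }
private:
virtual void periodic_function() { /* runs periodically at system task time */ }
};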

View File

@@ -1,56 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#include "thread_cleanup.hpp"
namespace boost {
namespace threads {
namespace mac {
namespace detail {
namespace {
TaskStorageIndex g_ulIndex(0UL);
} // anonymous namespace
void do_thread_startup()
{
if(g_ulIndex == 0UL)
{
OSStatus lStatus = MPAllocateTaskStorageIndex(&g_ulIndex);
assert(lStatus == noErr);
}
set_thread_cleanup_task(NULL);
}
void do_thread_cleanup()
{
// retrieve the cleanup task registered for the current MP task and, if one was set, run it
void (*pfnTask)() = reinterpret_cast<void (*)()>(MPGetTaskStorageValue(g_ulIndex));
if(pfnTask != NULL)
{
(*pfnTask)();
}
}
void set_thread_cleanup_task(void (*pfnTask)())
{
OSStatus lStatus = MPSetTaskStorageValue(g_ulIndex, reinterpret_cast<TaskStorageValue>(pfnTask));
assert(lStatus == noErr);
}
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost

View File

@@ -1,36 +0,0 @@
// (C) Copyright Mac Murrett 2001.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for most recent version.
#ifndef BOOST_THREAD_CLEANUP_MJM012402_HPP
#define BOOST_THREAD_CLEANUP_MJM012402_HPP
namespace boost {
namespace threads {
namespace mac {
namespace detail {
void do_thread_startup();
void do_thread_cleanup();
void set_thread_cleanup_task(void (*pfnTask)());
} // namespace detail
} // namespace mac
} // namespace threads
} // namespace boost
#endif // BOOST_THREAD_CLEANUP_MJM012402_HPP
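A brief sketch (illustrative; release_my_task_state is a hypothetical function) of how these hooks fit together: thread startup calls do_thread_startup(), user code may register one cleanup function per MP task, and do_thread_cleanup() retrieves and runs it when the task finishes.
void release_my_task_state();    // hypothetical per-task cleanup
boost::threads::mac::detail::set_thread_cleanup_task(&release_my_task_state);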

View File

@@ -18,7 +18,7 @@
#elif defined(__APPLE__) || defined(__FreeBSD__)
#include <sys/types.h>
#include <sys/sysctl.h>
#elif defined BOOST_HAS_UNISTD_H
#elif defined(__sun) || defined(__CYGWIN__)
#include <unistd.h>
#endif
@@ -394,7 +394,7 @@ namespace boost
int count;
size_t size=sizeof(count);
return sysctlbyname("hw.ncpu",&count,&size,NULL,0)?0:count;
#elif defined(BOOST_HAS_UNISTD_H) && defined(_SC_NPROCESSORS_ONLN)
#elif defined(__sun) || defined(__CYGWIN__)
int const count=sysconf(_SC_NPROCESSORS_ONLN);
return (count>0)?count:0;
#else

View File

@@ -1,364 +0,0 @@
// Copyright (C) 2001-2003
// William E. Kempf
// Copyright (C) 2007 Anthony Williams
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/xtime.hpp>
#include <boost/thread/condition.hpp>
#include <boost/thread/locks.hpp>
#include <cassert>
#if defined(BOOST_HAS_WINTHREADS)
# include <windows.h>
# if !defined(BOOST_NO_THREADEX)
# include <process.h>
# endif
#elif defined(BOOST_HAS_MPTASKS)
# include <DriverServices.h>
# include "init.hpp"
# include "safe.hpp"
# include <boost/thread/tss.hpp>
#endif
#include "timeconv.inl"
#if defined(BOOST_HAS_WINTHREADS)
# include "boost/thread/detail/tss_hooks.hpp"
#endif
namespace {
#if defined(BOOST_HAS_WINTHREADS) && defined(BOOST_NO_THREADEX)
// Windows CE doesn't define _beginthreadex
struct ThreadProxyData
{
typedef unsigned (__stdcall* func)(void*);
func start_address_;
void* arglist_;
ThreadProxyData(func start_address,void* arglist) : start_address_(start_address), arglist_(arglist) {}
};
DWORD WINAPI ThreadProxy(LPVOID args)
{
ThreadProxyData* data=reinterpret_cast<ThreadProxyData*>(args);
DWORD ret=data->start_address_(data->arglist_);
delete data;
return ret;
}
inline unsigned _beginthreadex(void* security, unsigned stack_size, unsigned (__stdcall* start_address)(void*),
void* arglist, unsigned initflag,unsigned* thrdaddr)
{
DWORD threadID;
HANDLE hthread=CreateThread(static_cast<LPSECURITY_ATTRIBUTES>(security),stack_size,ThreadProxy,
new ThreadProxyData(start_address,arglist),initflag,&threadID);
if (hthread!=0)
*thrdaddr=threadID;
return reinterpret_cast<unsigned>(hthread);
}
#endif
class thread_param
{
public:
thread_param(const boost::function0<void>& threadfunc)
: m_threadfunc(threadfunc), m_started(false)
{
}
void wait()
{
boost::mutex::scoped_lock scoped_lock(m_mutex);
while (!m_started)
m_condition.wait(scoped_lock);
}
void started()
{
boost::mutex::scoped_lock scoped_lock(m_mutex);
m_started = true;
m_condition.notify_one();
}
boost::mutex m_mutex;
boost::condition m_condition;
const boost::function0<void>& m_threadfunc;
bool m_started;
};
} // unnamed namespace
extern "C" {
#if defined(BOOST_HAS_WINTHREADS)
unsigned __stdcall thread_proxy(void* param)
#elif defined(BOOST_HAS_PTHREADS)
static void* thread_proxy(void* param)
#elif defined(BOOST_HAS_MPTASKS)
static OSStatus thread_proxy(void* param)
#endif
{
thread_param* p = static_cast<thread_param*>(param);
boost::function0<void> threadfunc = p->m_threadfunc;
p->started();
threadfunc();
#if defined(BOOST_HAS_WINTHREADS)
on_thread_exit();
#endif
#if defined(BOOST_HAS_MPTASKS)
::boost::detail::thread_cleanup();
#endif
return 0;
}
}
namespace boost {
thread::thread()
: m_joinable(false)
{
#if defined(BOOST_HAS_WINTHREADS)
m_thread = reinterpret_cast<void*>(GetCurrentThread());
m_id = GetCurrentThreadId();
#elif defined(BOOST_HAS_PTHREADS)
m_thread = pthread_self();
#elif defined(BOOST_HAS_MPTASKS)
threads::mac::detail::thread_init();
threads::mac::detail::create_singletons();
m_pTaskID = MPCurrentTaskID();
m_pJoinQueueID = kInvalidID;
#endif
}
thread::thread(const function0<void>& threadfunc)
: m_joinable(true)
{
thread_param param(threadfunc);
#if defined(BOOST_HAS_WINTHREADS)
m_thread = reinterpret_cast<void*>(_beginthreadex(0, 0, &thread_proxy,
&param, 0, &m_id));
if (!m_thread)
throw thread_resource_error();
#elif defined(BOOST_HAS_PTHREADS)
int res = 0;
res = pthread_create(&m_thread, 0, &thread_proxy, &param);
if (res != 0)
throw thread_resource_error();
#elif defined(BOOST_HAS_MPTASKS)
threads::mac::detail::thread_init();
threads::mac::detail::create_singletons();
OSStatus lStatus = noErr;
m_pJoinQueueID = kInvalidID;
m_pTaskID = kInvalidID;
lStatus = MPCreateQueue(&m_pJoinQueueID);
if (lStatus != noErr) throw thread_resource_error();
lStatus = MPCreateTask(&thread_proxy, &param, 0UL, m_pJoinQueueID, NULL,
NULL, 0UL, &m_pTaskID);
if (lStatus != noErr)
{
lStatus = MPDeleteQueue(m_pJoinQueueID);
assert(lStatus == noErr);
throw thread_resource_error();
}
#endif
param.wait();
}
thread::~thread()
{
if (m_joinable)
{
#if defined(BOOST_HAS_WINTHREADS)
int res = 0;
res = CloseHandle(reinterpret_cast<HANDLE>(m_thread));
assert(res);
#elif defined(BOOST_HAS_PTHREADS)
pthread_detach(m_thread);
#elif defined(BOOST_HAS_MPTASKS)
assert(m_pJoinQueueID != kInvalidID);
OSStatus lStatus = MPDeleteQueue(m_pJoinQueueID);
assert(lStatus == noErr);
#endif
}
}
bool thread::operator==(const thread& other) const
{
#if defined(BOOST_HAS_WINTHREADS)
return other.m_id == m_id;
#elif defined(BOOST_HAS_PTHREADS)
return pthread_equal(m_thread, other.m_thread) != 0;
#elif defined(BOOST_HAS_MPTASKS)
return other.m_pTaskID == m_pTaskID;
#endif
}
bool thread::operator!=(const thread& other) const
{
return !operator==(other);
}
void thread::join()
{
assert(m_joinable); //See race condition comment below
int res = 0;
#if defined(BOOST_HAS_WINTHREADS)
res = WaitForSingleObject(reinterpret_cast<HANDLE>(m_thread), INFINITE);
assert(res == WAIT_OBJECT_0);
res = CloseHandle(reinterpret_cast<HANDLE>(m_thread));
assert(res);
#elif defined(BOOST_HAS_PTHREADS)
res = pthread_join(m_thread, 0);
assert(res == 0);
#elif defined(BOOST_HAS_MPTASKS)
OSStatus lStatus = threads::mac::detail::safe_wait_on_queue(
m_pJoinQueueID, NULL, NULL, NULL, kDurationForever);
assert(lStatus == noErr);
#endif
// This isn't a race condition since any race that could occur would
// have us in undefined behavior territory anyway.
m_joinable = false;
}
void thread::sleep(const xtime& xt)
{
for (int foo=0; foo < 5; ++foo)
{
#if defined(BOOST_HAS_WINTHREADS)
int milliseconds;
to_duration(xt, milliseconds);
Sleep(milliseconds);
#elif defined(BOOST_HAS_PTHREADS)
# if defined(BOOST_HAS_PTHREAD_DELAY_NP)
timespec ts;
to_timespec_duration(xt, ts);
int res = 0;
res = pthread_delay_np(&ts);
assert(res == 0);
# elif defined(BOOST_HAS_NANOSLEEP)
timespec ts;
to_timespec_duration(xt, ts);
// nanosleep takes a timespec that is an offset, not
// an absolute time.
nanosleep(&ts, 0);
# else
mutex mx;
mutex::scoped_lock lock(mx);
condition cond;
cond.timed_wait(lock, xt);
# endif
#elif defined(BOOST_HAS_MPTASKS)
int microseconds;
to_microduration(xt, microseconds);
Duration lMicroseconds(kDurationMicrosecond * microseconds);
AbsoluteTime sWakeTime(DurationToAbsolute(lMicroseconds));
threads::mac::detail::safe_delay_until(&sWakeTime);
#endif
xtime cur;
xtime_get(&cur, TIME_UTC);
if (xtime_cmp(xt, cur) <= 0)
return;
}
}
void thread::yield()
{
#if defined(BOOST_HAS_WINTHREADS)
Sleep(0);
#elif defined(BOOST_HAS_PTHREADS)
# if defined(BOOST_HAS_SCHED_YIELD)
int res = 0;
res = sched_yield();
assert(res == 0);
# elif defined(BOOST_HAS_PTHREAD_YIELD)
int res = 0;
res = pthread_yield();
assert(res == 0);
# else
xtime xt;
xtime_get(&xt, TIME_UTC);
sleep(xt);
# endif
#elif defined(BOOST_HAS_MPTASKS)
MPYield();
#endif
}
thread_group::thread_group()
{
}
thread_group::~thread_group()
{
// We shouldn't need a scoped_lock here, since referencing this object
// from another thread while we're deleting it in the current thread is
// going to lead to undefined behavior anyway.
for (std::list<thread*>::iterator it = m_threads.begin();
it != m_threads.end(); ++it)
{
delete (*it);
}
}
thread* thread_group::create_thread(const function0<void>& threadfunc)
{
// No scoped_lock required here since the only "shared data" that's
// modified here occurs inside add_thread which does scoped_lock.
std::auto_ptr<thread> thrd(new thread(threadfunc));
add_thread(thrd.get());
return thrd.release();
}
void thread_group::add_thread(thread* thrd)
{
mutex::scoped_lock scoped_lock(m_mutex);
// For now we'll simply ignore requests to add a thread object multiple
// times. Should we consider this an error and either throw or return an
// error value?
std::list<thread*>::iterator it = std::find(m_threads.begin(),
m_threads.end(), thrd);
assert(it == m_threads.end());
if (it == m_threads.end())
m_threads.push_back(thrd);
}
void thread_group::remove_thread(thread* thrd)
{
mutex::scoped_lock scoped_lock(m_mutex);
// For now we'll simply ignore requests to remove a thread object that's
// not in the group. Should we consider this an error and either throw or
// return an error value?
std::list<thread*>::iterator it = std::find(m_threads.begin(),
m_threads.end(), thrd);
assert(it != m_threads.end());
if (it != m_threads.end())
m_threads.erase(it);
}
void thread_group::join_all()
{
mutex::scoped_lock scoped_lock(m_mutex);
for (std::list<thread*>::iterator it = m_threads.begin();
it != m_threads.end(); ++it)
{
(*it)->join();
}
}
int thread_group::size() const
{
return m_threads.size();
}
} // namespace boost

View File

@@ -1,130 +0,0 @@
// Copyright (C) 2001-2003
// William E. Kempf
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
// boostinspect:nounnamed
namespace {
const int MILLISECONDS_PER_SECOND = 1000;
const int NANOSECONDS_PER_SECOND = 1000000000;
const int NANOSECONDS_PER_MILLISECOND = 1000000;
const int MICROSECONDS_PER_SECOND = 1000000;
const int NANOSECONDS_PER_MICROSECOND = 1000;
inline void to_time(int milliseconds, boost::xtime& xt)
{
int res = 0;
res = boost::xtime_get(&xt, boost::TIME_UTC);
assert(res == boost::TIME_UTC);
xt.sec += (milliseconds / MILLISECONDS_PER_SECOND);
xt.nsec += ((milliseconds % MILLISECONDS_PER_SECOND) *
NANOSECONDS_PER_MILLISECOND);
if (xt.nsec >= NANOSECONDS_PER_SECOND)
{
++xt.sec;
xt.nsec -= NANOSECONDS_PER_SECOND;
}
}
#if defined(BOOST_HAS_PTHREADS)
inline void to_timespec(const boost::xtime& xt, timespec& ts)
{
ts.tv_sec = static_cast<int>(xt.sec);
ts.tv_nsec = static_cast<int>(xt.nsec);
if(ts.tv_nsec >= NANOSECONDS_PER_SECOND)
{
ts.tv_sec += ts.tv_nsec / NANOSECONDS_PER_SECOND;
ts.tv_nsec %= NANOSECONDS_PER_SECOND;
}
}
inline void to_time(int milliseconds, timespec& ts)
{
boost::xtime xt;
to_time(milliseconds, xt);
to_timespec(xt, ts);
}
inline void to_timespec_duration(const boost::xtime& xt, timespec& ts)
{
boost::xtime cur;
int res = 0;
res = boost::xtime_get(&cur, boost::TIME_UTC);
assert(res == boost::TIME_UTC);
if (boost::xtime_cmp(xt, cur) <= 0)
{
ts.tv_sec = 0;
ts.tv_nsec = 0;
}
else
{
ts.tv_sec = xt.sec - cur.sec;
ts.tv_nsec = xt.nsec - cur.nsec;
if( ts.tv_nsec < 0 )
{
ts.tv_sec -= 1;
ts.tv_nsec += NANOSECONDS_PER_SECOND;
}
if(ts.tv_nsec >= NANOSECONDS_PER_SECOND)
{
ts.tv_sec += ts.tv_nsec / NANOSECONDS_PER_SECOND;
ts.tv_nsec %= NANOSECONDS_PER_SECOND;
}
}
}
#endif
inline void to_duration(boost::xtime xt, int& milliseconds)
{
boost::xtime cur;
int res = 0;
res = boost::xtime_get(&cur, boost::TIME_UTC);
assert(res == boost::TIME_UTC);
if (boost::xtime_cmp(xt, cur) <= 0)
milliseconds = 0;
else
{
if (cur.nsec > xt.nsec)
{
xt.nsec += NANOSECONDS_PER_SECOND;
--xt.sec;
}
milliseconds = (int)((xt.sec - cur.sec) * MILLISECONDS_PER_SECOND) +
(((xt.nsec - cur.nsec) + (NANOSECONDS_PER_MILLISECOND/2)) /
NANOSECONDS_PER_MILLISECOND);
}
}
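// Worked example (added for illustration, not in the original file): with
// xt.sec - cur.sec == 2 and xt.nsec - cur.nsec == 500000000, to_duration()
// yields (2 * 1000) + ((500000000 + 500000) / 1000000) = 2500 milliseconds;
// the NANOSECONDS_PER_MILLISECOND/2 term rounds to the nearest millisecond.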
inline void to_microduration(boost::xtime xt, int& microseconds)
{
boost::xtime cur;
int res = 0;
res = boost::xtime_get(&cur, boost::TIME_UTC);
assert(res == boost::TIME_UTC);
if (boost::xtime_cmp(xt, cur) <= 0)
microseconds = 0;
else
{
if (cur.nsec > xt.nsec)
{
xt.nsec += NANOSECONDS_PER_SECOND;
--xt.sec;
}
microseconds = (int)((xt.sec - cur.sec) * MICROSECONDS_PER_SECOND) +
(((xt.nsec - cur.nsec) + (NANOSECONDS_PER_MICROSECOND/2)) /
NANOSECONDS_PER_MICROSECOND);
}
}
}
// Change Log:
// 1 Jun 01 Initial creation.

View File

@@ -1,251 +0,0 @@
// Copyright (C) 2001-2003 William E. Kempf
// Copyright (C) 2006 Roland Schwarz
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#include <boost/thread/tss.hpp>
#ifndef BOOST_THREAD_NO_TSS_CLEANUP
#include <boost/thread/once.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/exceptions.hpp>
#include <vector>
#include <string>
#include <stdexcept>
#include <cassert>
#if defined(BOOST_HAS_WINTHREADS)
# include <windows.h>
# include <boost/thread/detail/tss_hooks.hpp>
#endif
namespace {
typedef std::vector<void*> tss_slots;
typedef std::vector<boost::function1<void, void*>*> tss_data_cleanup_handlers_type;
boost::once_flag tss_data_once = BOOST_ONCE_INIT;
boost::mutex* tss_data_mutex = 0;
tss_data_cleanup_handlers_type* tss_data_cleanup_handlers = 0;
#if defined(BOOST_HAS_WINTHREADS)
DWORD tss_data_native_key=TLS_OUT_OF_INDEXES;
#elif defined(BOOST_HAS_PTHREADS)
pthread_key_t tss_data_native_key;
#elif defined(BOOST_HAS_MPTASKS)
TaskStorageIndex tss_data_native_key;
#endif
int tss_data_use = 0;
void tss_data_inc_use(boost::mutex::scoped_lock& lk)
{
++tss_data_use;
}
void tss_data_dec_use(boost::mutex::scoped_lock& lk)
{
if (0 == --tss_data_use)
{
tss_data_cleanup_handlers_type::size_type i;
for (i = 0; i < tss_data_cleanup_handlers->size(); ++i)
{
delete (*tss_data_cleanup_handlers)[i];
}
delete tss_data_cleanup_handlers;
tss_data_cleanup_handlers = 0;
lk.unlock();
delete tss_data_mutex;
tss_data_mutex = 0;
#if defined(BOOST_HAS_WINTHREADS)
TlsFree(tss_data_native_key);
tss_data_native_key=TLS_OUT_OF_INDEXES;
#elif defined(BOOST_HAS_PTHREADS)
pthread_key_delete(tss_data_native_key);
#elif defined(BOOST_HAS_MPTASKS)
// Don't know what to put here.
// But MPTASKS isn't currently maintained anyway...
#endif
}
}
extern "C" void cleanup_slots(void* p)
{
tss_slots* slots = static_cast<tss_slots*>(p);
boost::mutex::scoped_lock lock(*tss_data_mutex);
for (tss_slots::size_type i = 0; i < slots->size(); ++i)
{
(*(*tss_data_cleanup_handlers)[i])((*slots)[i]);
(*slots)[i] = 0;
}
#if defined(BOOST_HAS_WINTHREADS)
TlsSetValue(tss_data_native_key,0);
#endif
tss_data_dec_use(lock);
delete slots;
}
void init_tss_data()
{
std::auto_ptr<tss_data_cleanup_handlers_type>
temp(new tss_data_cleanup_handlers_type);
std::auto_ptr<boost::mutex> temp_mutex(new boost::mutex);
if (temp_mutex.get() == 0)
throw boost::thread_resource_error();
#if defined(BOOST_HAS_WINTHREADS)
//Force the cleanup implementation library to be linked in
tss_cleanup_implemented();
//Allocate tls slot
tss_data_native_key = TlsAlloc();
if (tss_data_native_key == TLS_OUT_OF_INDEXES)
return;
#elif defined(BOOST_HAS_PTHREADS)
int res = pthread_key_create(&tss_data_native_key, &cleanup_slots);
if (res != 0)
return;
#elif defined(BOOST_HAS_MPTASKS)
OSStatus status = MPAllocateTaskStorageIndex(&tss_data_native_key);
if (status != noErr)
return;
#endif
// The lifetime of the cleanup handlers and the mutex is being
// managed by a reference counting technique.
// This avoids a memory leak by releasing the global data
// only after the last use, since the execution order of cleanup
// handlers is unspecified on any platform with regard to
// C++ destructor ordering rules.
tss_data_cleanup_handlers = temp.release();
tss_data_mutex = temp_mutex.release();
}
#if defined(BOOST_HAS_WINTHREADS)
tss_slots* get_slots(bool alloc);
void __cdecl tss_thread_exit()
{
tss_slots* slots = get_slots(false);
if (slots)
cleanup_slots(slots);
}
#endif
tss_slots* get_slots(bool alloc)
{
tss_slots* slots = 0;
#if defined(BOOST_HAS_WINTHREADS)
slots = static_cast<tss_slots*>(
TlsGetValue(tss_data_native_key));
#elif defined(BOOST_HAS_PTHREADS)
slots = static_cast<tss_slots*>(
pthread_getspecific(tss_data_native_key));
#elif defined(BOOST_HAS_MPTASKS)
slots = static_cast<tss_slots*>(
MPGetTaskStorageValue(tss_data_native_key));
#endif
if (slots == 0 && alloc)
{
std::auto_ptr<tss_slots> temp(new tss_slots);
#if defined(BOOST_HAS_WINTHREADS)
if (at_thread_exit(&tss_thread_exit) == -1)
return 0;
if (!TlsSetValue(tss_data_native_key, temp.get()))
return 0;
#elif defined(BOOST_HAS_PTHREADS)
if (pthread_setspecific(tss_data_native_key, temp.get()) != 0)
return 0;
#elif defined(BOOST_HAS_MPTASKS)
if (MPSetTaskStorageValue(tss_data_native_key, temp.get()) != noErr)
return 0;
#endif
{
boost::mutex::scoped_lock lock(*tss_data_mutex);
tss_data_inc_use(lock);
}
slots = temp.release();
}
return slots;
}
} // namespace
namespace boost {
namespace detail {
void tss::init(boost::function1<void, void*>* pcleanup)
{
boost::call_once(tss_data_once, &init_tss_data);
if (tss_data_cleanup_handlers == 0)
throw thread_resource_error();
boost::mutex::scoped_lock lock(*tss_data_mutex);
try
{
tss_data_cleanup_handlers->push_back(pcleanup);
m_slot = tss_data_cleanup_handlers->size() - 1;
tss_data_inc_use(lock);
}
catch (...)
{
throw thread_resource_error();
}
}
tss::~tss()
{
boost::mutex::scoped_lock lock(*tss_data_mutex);
tss_data_dec_use(lock);
}
void* tss::get() const
{
tss_slots* slots = get_slots(false);
if (!slots)
return 0;
if (m_slot >= slots->size())
return 0;
return (*slots)[m_slot];
}
void tss::set(void* value)
{
tss_slots* slots = get_slots(true);
if (!slots)
throw boost::thread_resource_error();
if (m_slot >= slots->size())
{
try
{
slots->resize(m_slot + 1);
}
catch (...)
{
throw boost::thread_resource_error();
}
}
(*slots)[m_slot] = value;
}
void tss::cleanup(void* value)
{
boost::mutex::scoped_lock lock(*tss_data_mutex);
(*(*tss_data_cleanup_handlers)[m_slot])(value);
}
} // namespace detail
} // namespace boost
#endif //BOOST_THREAD_NO_TSS_CLEANUP

View File

@@ -1,72 +0,0 @@
// (C) Copyright Michael Glassford 2004.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#if defined(BOOST_HAS_WINTHREADS) && defined(BOOST_THREAD_BUILD_DLL)
#include <boost/thread/detail/tss_hooks.hpp>
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#if defined(__BORLANDC__)
extern "C" BOOL WINAPI DllEntryPoint(HINSTANCE /*hInstance*/, DWORD dwReason, LPVOID /*lpReserved*/)
#elif defined(_WIN32_WCE)
extern "C" BOOL WINAPI DllMain(HANDLE /*hInstance*/, DWORD dwReason, LPVOID /*lpReserved*/)
#else
extern "C" BOOL WINAPI DllMain(HINSTANCE /*hInstance*/, DWORD dwReason, LPVOID /*lpReserved*/)
#endif
{
switch(dwReason)
{
case DLL_PROCESS_ATTACH:
{
on_process_enter();
on_thread_enter();
break;
}
case DLL_THREAD_ATTACH:
{
on_thread_enter();
break;
}
case DLL_THREAD_DETACH:
{
on_thread_exit();
break;
}
case DLL_PROCESS_DETACH:
{
on_thread_exit();
on_process_exit();
break;
}
}
return TRUE;
}
extern "C" void tss_cleanup_implemented(void)
{
/*
This function's sole purpose is to cause a link error in cases where
automatic tss cleanup is not implemented by Boost.Threads as a
reminder that user code is responsible for calling the necessary
functions at the appropriate times (and for implementing a
tss_cleanup_implemented() function to eliminate the linker's
missing symbol error).
If Boost.Threads later implements automatic tss cleanup in cases
where it currently doesn't (which is the plan), the duplicate
symbol error will warn the user that their custom solution is no
longer needed and can be removed.
*/
}
#endif //defined(BOOST_HAS_WINTHREADS) && defined(BOOST_THREAD_BUILD_DLL)

View File

@@ -1,212 +0,0 @@
// Copyright (C) 2004 Michael Glassford
// Copyright (C) 2006 Roland Schwarz
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#if defined(BOOST_HAS_WINTHREADS)
#include <boost/thread/detail/tss_hooks.hpp>
#include <boost/assert.hpp>
// #include <boost/thread/mutex.hpp>
#include <boost/thread/once.hpp>
#include <list>
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
namespace
{
class CScopedCSLock
{
public:
CScopedCSLock(LPCRITICAL_SECTION cs) : lk(true), cs(cs) {
::EnterCriticalSection(cs);
}
~CScopedCSLock() {
if (lk) ::LeaveCriticalSection(cs);
}
void Unlock() {
lk = false;
::LeaveCriticalSection(cs);
}
private:
bool lk;
LPCRITICAL_SECTION cs;
};
typedef std::list<thread_exit_handler> thread_exit_handlers;
boost::once_flag once_init_threadmon_mutex = BOOST_ONCE_INIT;
//boost::mutex* threadmon_mutex;
// We don't use boost::mutex here, to avoid a memory leak report,
// because we cannot delete it again easily.
CRITICAL_SECTION threadmon_mutex;
void init_threadmon_mutex(void)
{
//threadmon_mutex = new boost::mutex;
//if (!threadmon_mutex)
// throw boost::thread_resource_error();
::InitializeCriticalSection(&threadmon_mutex);
}
const DWORD invalid_tls_key = TLS_OUT_OF_INDEXES;
DWORD tls_key = invalid_tls_key;
unsigned long attached_thread_count = 0;
}
/*
Calls to DllMain() and tls_callback() are serialized by the OS;
however, calls to at_thread_exit are not, so it must be protected
by a mutex. Since we already need a mutex for at_thread_exit(),
and since there is no guarantee that on_process_enter(),
on_process_exit(), on_thread_enter(), and on_thread_exit()
will be called only from DllMain() or tls_callback(), it makes
sense to protect those, too.
*/
extern "C" BOOST_THREAD_DECL int at_thread_exit(
thread_exit_handler exit_handler
)
{
boost::call_once(once_init_threadmon_mutex, init_threadmon_mutex);
//boost::mutex::scoped_lock lock(*threadmon_mutex);
CScopedCSLock lock(&threadmon_mutex);
//Allocate a tls slot if necessary.
if (tls_key == invalid_tls_key)
tls_key = TlsAlloc();
if (tls_key == invalid_tls_key)
return -1;
//Get the exit handlers list for the current thread from tls.
thread_exit_handlers* exit_handlers =
static_cast<thread_exit_handlers*>(TlsGetValue(tls_key));
if (!exit_handlers)
{
//No exit handlers list was created yet.
try
{
//Attempt to create a new exit handlers list.
exit_handlers = new thread_exit_handlers;
if (!exit_handlers)
return -1;
//Attempt to store the list pointer in tls.
if (TlsSetValue(tls_key, exit_handlers))
++attached_thread_count;
else
{
delete exit_handlers;
return -1;
}
}
catch (...)
{
return -1;
}
}
//Like the C runtime library atexit() function,
//functions should be called in the reverse of
//the order they are added, so push them on the
//front of the list.
try
{
exit_handlers->push_front(exit_handler);
}
catch (...)
{
return -1;
}
//Like the atexit() function, a result of zero
//indicates success.
return 0;
}
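// Usage sketch (illustrative; my_thread_cleanup is a hypothetical handler whose
// signature is assumed to match thread_exit_handler, i.e. a plain void(void)
// function with C linkage):
//
//   extern "C" void my_thread_cleanup(void) { /* release per-thread state */ }
//   ...
//   if(at_thread_exit(&my_thread_cleanup) != 0)
//   {
//       // registration failed; the handler will not run at thread exit
//   }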
extern "C" BOOST_THREAD_DECL void on_process_enter(void)
{
boost::call_once(once_init_threadmon_mutex, init_threadmon_mutex);
// boost::mutex::scoped_lock lock(*threadmon_mutex);
CScopedCSLock lock(&threadmon_mutex);
BOOST_ASSERT(attached_thread_count == 0);
}
extern "C" BOOST_THREAD_DECL void on_process_exit(void)
{
boost::call_once(once_init_threadmon_mutex, init_threadmon_mutex);
// boost::mutex::scoped_lock lock(*threadmon_mutex);
CScopedCSLock lock(&threadmon_mutex);
BOOST_ASSERT(attached_thread_count == 0);
//Free the tls slot if one was allocated.
if (tls_key != invalid_tls_key)
{
TlsFree(tls_key);
tls_key = invalid_tls_key;
}
}
extern "C" BOOST_THREAD_DECL void on_thread_enter(void)
{
//boost::call_once(init_threadmon_mutex, once_init_threadmon_mutex);
//boost::mutex::scoped_lock lock(*threadmon_mutex);
}
extern "C" BOOST_THREAD_DECL void on_thread_exit(void)
{
boost::call_once(once_init_threadmon_mutex, init_threadmon_mutex);
// boost::mutex::scoped_lock lock(*threadmon_mutex);
CScopedCSLock lock(&threadmon_mutex);
//Get the exit handlers list for the current thread from tls.
if (tls_key == invalid_tls_key)
return;
thread_exit_handlers* exit_handlers =
static_cast<thread_exit_handlers*>(TlsGetValue(tls_key));
//If a handlers list was found, use it.
if (exit_handlers && TlsSetValue(tls_key, 0))
{
BOOST_ASSERT(attached_thread_count > 0);
--attached_thread_count;
//lock.unlock();
lock.Unlock();
//Call each handler and remove it from the list
while (!exit_handlers->empty())
{
if (thread_exit_handler exit_handler = *exit_handlers->begin())
(*exit_handler)();
exit_handlers->pop_front();
}
delete exit_handlers;
exit_handlers = 0;
}
}
#endif //defined(BOOST_HAS_WINTHREADS)

View File

@@ -1,179 +0,0 @@
// (C) Copyright Aaron W. LaFramboise, Roland Schwarz, Michael Glassford 2004.
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0. (See accompanying file
// LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/thread/detail/config.hpp>
#if defined(BOOST_HAS_WINTHREADS) && defined(BOOST_THREAD_BUILD_LIB) && defined(_MSC_VER)
#include <boost/thread/detail/tss_hooks.hpp>
#include <stdlib.h>
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
//Definitions required by implementation
#if (_MSC_VER < 1300) // 1300 == VC++ 7.0
typedef void (__cdecl *_PVFV)(void);
#define INIRETSUCCESS
#define PVAPI void
#else
typedef int (__cdecl *_PVFV)(void);
#define INIRETSUCCESS 0
#define PVAPI int
#endif
typedef void (NTAPI* _TLSCB)(HINSTANCE, DWORD, PVOID);
//Symbols for connection to the runtime environment
extern "C"
{
extern DWORD _tls_used; //the tls directory (located in .rdata segment)
extern _TLSCB __xl_a[], __xl_z[]; //tls initializers
}
namespace
{
//Forward declarations
static PVAPI on_tls_prepare(void);
static PVAPI on_process_init(void);
static PVAPI on_process_term(void);
static void NTAPI on_tls_callback(HINSTANCE, DWORD, PVOID);
//The .CRT$Xxx information is taken from Codeguru:
//http://www.codeguru.com/Cpp/misc/misc/threadsprocesses/article.php/c6945__2/
#if (_MSC_VER >= 1300) // 1300 == VC++ 7.0
# pragma data_seg(push, old_seg)
#endif
//Callback to run tls glue code first.
//I don't think it is necessary to run it
//at .CRT$XIB level, since we are only
//interested in thread detachment. But
//this could be changed easily if required.
#pragma data_seg(".CRT$XIU")
static _PVFV p_tls_prepare = on_tls_prepare;
#pragma data_seg()
//Callback after all global ctors.
#pragma data_seg(".CRT$XCU")
static _PVFV p_process_init = on_process_init;
#pragma data_seg()
//Callback for tls notifications.
#pragma data_seg(".CRT$XLB")
_TLSCB p_thread_callback = on_tls_callback;
#pragma data_seg()
//Callback for termination.
#pragma data_seg(".CRT$XTU")
static _PVFV p_process_term = on_process_term;
#pragma data_seg()
#if (_MSC_VER >= 1300) // 1300 == VC++ 7.0
# pragma data_seg(pop, old_seg)
#endif
PVAPI on_tls_prepare(void)
{
//The following line has an important side effect:
//if the TLS directory is not already there, it will
//be created by the linker. In other words, it forces a tls
//directory to be generated by the linker even when static tls
//(i.e. __declspec(thread)) is not used.
//The volatile should prevent the optimizer
//from removing the reference.
DWORD volatile dw = _tls_used;
#if (_MSC_VER < 1300) // 1300 == VC++ 7.0
_TLSCB* pfbegin = __xl_a;
_TLSCB* pfend = __xl_z;
_TLSCB* pfdst = pfbegin;
//pfdst = (_TLSCB*)_tls_used.AddressOfCallBacks;
//The following loop will merge the address pointers
//into a contiguous area, since the tlssup code seems
//to require this (at least on MSVC 6)
while (pfbegin < pfend)
{
if (*pfbegin != 0)
{
*pfdst = *pfbegin;
++pfdst;
}
++pfbegin;
}
*pfdst = 0;
#endif
return INIRETSUCCESS;
}
PVAPI on_process_init(void)
{
//Schedule on_thread_exit() to be called for the main
//thread before destructors of global objects have been
//called.
//It will not be run when 'quick' exiting the
//library; however, this is the standard behaviour
//for destructors of global objects, so that
//shouldn't be a problem.
atexit(on_thread_exit);
//Call Boost process entry callback here
on_process_enter();
return INIRETSUCCESS;
}
PVAPI on_process_term(void)
{
on_process_exit();
return INIRETSUCCESS;
}
void NTAPI on_tls_callback(HINSTANCE h, DWORD dwReason, PVOID pv)
{
switch (dwReason)
{
case DLL_THREAD_DETACH:
{
on_thread_exit();
break;
}
}
}
} //namespace
extern "C" void tss_cleanup_implemented(void)
{
/*
This function's sole purpose is to cause a link error in cases where
automatic tss cleanup is not implemented by Boost.Threads as a
reminder that user code is responsible for calling the necessary
functions at the appropriate times (and for implementing a
tss_cleanup_implemented() function to eliminate the linker's
missing symbol error).
If Boost.Threads later implements automatic tss cleanup in cases
where it currently doesn't (which is the plan), the duplicate
symbol error will warn the user that their custom solution is no
longer needed and can be removed.
*/
}
#endif //defined(BOOST_HAS_WINTHREADS) && defined(BOOST_THREAD_BUILD_LIB)

View File

@@ -29,26 +29,13 @@ namespace boost
void create_current_thread_tls_key()
{
tss_cleanup_implemented(); // if anyone uses TSS, we need the cleanup linked in
current_thread_tls_key=TlsAlloc();
BOOST_ASSERT(current_thread_tls_key!=TLS_OUT_OF_INDEXES);
}
void cleanup_tls_key()
{
if(current_thread_tls_key)
{
TlsFree(current_thread_tls_key);
current_thread_tls_key=0;
}
}
detail::thread_data_base* get_current_thread_data()
{
if(!current_thread_tls_key)
{
return 0;
}
boost::call_once(current_thread_tls_init_flag,create_current_thread_tls_key);
return (detail::thread_data_base*)TlsGetValue(current_thread_tls_key);
}
@@ -154,8 +141,8 @@ namespace boost
}
}
set_current_thread_data(0);
}
set_current_thread_data(0);
}
unsigned __stdcall thread_start_function(void* param)
@@ -557,6 +544,7 @@ namespace boost
void set_tss_data(void const* key,boost::shared_ptr<tss_cleanup_function> func,void* tss_data,bool cleanup_existing)
{
tss_cleanup_implemented(); // if anyone uses TSS, we need the cleanup linked in
if(tss_data_node* const current_node=find_tss_data(key))
{
if(cleanup_existing && current_node->func.get())
@@ -584,9 +572,7 @@ extern "C" BOOST_THREAD_DECL void on_thread_enter()
{}
extern "C" BOOST_THREAD_DECL void on_process_exit()
{
boost::cleanup_tls_key();
}
{}
extern "C" BOOST_THREAD_DECL void on_thread_exit()
{

Some files were not shown because too many files have changed in this diff