mirror of https://github.com/boostorg/mpi.git

Move back to library style for doc.

Put thread doc in its own file
Alain Miniussi
2018-09-10 12:36:23 +02:00
parent 85ae8ecee3
commit afb0d1460a
3 changed files with 37 additions and 38 deletions


@@ -1,4 +1,4 @@
-[book MPI
+[library Boost.MPI
[quickbook 1.6]
[authors [Gregor, Douglas], [Troyer, Matthias] ]
[copyright 2005 2006 2007 Douglas Gregor, Matthias Troyer, Trustees of Indiana University]
@@ -73,42 +73,6 @@ the amount of effort required to interface between Boost.MPI
and the C MPI library.
[endsect]
-[section:threading Threads]
-An increasing number of hybrid parallel applications mix distributed and
-shared memory parallelism. To support that model, one needs to know which
-level of threading support the MPI implementation guarantees. There are
-four ordered levels of threading support, described by
-[enumref boost::mpi::threading::level mpi::threading::level].
-At the lowest level, you should not use threads at all; at the highest
-level, any thread can perform MPI calls.
-If you want to use multi-threading in your MPI application, indicate your
-preferred threading support in the environment constructor. Then probe the
-level the library actually provided, and decide what you can do with it
-(possibly nothing, in which case aborting is a valid option):
-  #include <boost/mpi/environment.hpp>
-  #include <boost/mpi/communicator.hpp>
-  #include <iostream>
-
-  namespace mpi = boost::mpi;
-  namespace mt = mpi::threading;
-
-  int main()
-  {
-    mpi::environment env(mt::funneled);
-    if (env.thread_level() < mt::funneled) {
-      env.abort(-1);
-    }
-    mpi::communicator world;
-    std::cout << "I am process " << world.rank() << " of " << world.size()
-              << "." << std::endl;
-    return 0;
-  }
-[endsect:threading]
[section:performance Performance Evaluation]
Message-passing performance is crucial in high-performance distributed

doc/threading.qbk (new file)

@@ -0,0 +1,35 @@
+[section:threading Threads]
+An increasing number of hybrid parallel applications mix distributed and
+shared memory parallelism. To support that model, one needs to know which
+level of threading support the MPI implementation guarantees. There are
+four ordered levels of threading support, described by
+[enumref boost::mpi::threading::level mpi::threading::level].
+At the lowest level, you should not use threads at all; at the highest
+level, any thread can perform MPI calls.
+If you want to use multi-threading in your MPI application, indicate your
+preferred threading support in the environment constructor. Then probe the
+level the library actually provided, and decide what you can do with it
+(possibly nothing, in which case aborting is a valid option):
+  #include <boost/mpi/environment.hpp>
+  #include <boost/mpi/communicator.hpp>
+  #include <iostream>
+
+  namespace mpi = boost::mpi;
+  namespace mt = mpi::threading;
+
+  int main()
+  {
+    mpi::environment env(mt::funneled);
+    if (env.thread_level() < mt::funneled) {
+      env.abort(-1);
+    }
+    mpi::communicator world;
+    std::cout << "I am process " << world.rank() << " of " << world.size()
+              << "." << std::endl;
+    return 0;
+  }
+[endsect:threading]
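
As background for the new threading page: the four ordered levels of
mpi::threading::level correspond to the MPI standard's MPI_THREAD_SINGLE,
MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED and MPI_THREAD_MULTIPLE. The
sketch below is not part of this commit; it is a minimal illustration of
the funneled contract, assuming a C++11 compiler for std::thread: helper
threads may run while MPI is active, but every MPI call stays on the main
thread. The reduction uses boost::mpi::reduce from
<boost/mpi/collectives.hpp>; the worker/data layout is made up for the
example.

    #include <boost/mpi/environment.hpp>
    #include <boost/mpi/communicator.hpp>
    #include <boost/mpi/collectives.hpp>
    #include <functional>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    namespace mpi = boost::mpi;
    namespace mt = mpi::threading;

    int main()
    {
      mpi::environment env(mt::funneled);
      if (env.thread_level() < mt::funneled) {
        env.abort(-1);
      }
      mpi::communicator world;

      // Under 'funneled', threads may run concurrently with the MPI
      // library, but only the thread that initialized MPI makes MPI calls.
      std::vector<int> data(100);
      std::thread worker([&data] {
        std::iota(data.begin(), data.end(), 0);  // local work only, no MPI here
      });
      worker.join();

      // The main thread performs the communication.
      int local = std::accumulate(data.begin(), data.end(), 0);
      int total = 0;
      mpi::reduce(world, local, total, std::plus<int>(), 0);
      if (world.rank() == 0)
        std::cout << "global sum: " << total << std::endl;
      return 0;
    }

Requesting serialized or multiple instead would relax the restriction on
which thread may communicate, at the cost of whatever overhead the MPI
implementation incurs for the stronger guarantee.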


@@ -830,8 +830,8 @@ A communicator can be organised as a cartesian grid, here a basic example:
return 0;
}
[endsect:cartesian_communicator]
+[include threading.qbk]
[section:skeleton_and_content Separating structure from content]
When communicating data types over MPI that are not fundamental to MPI