\chapter{Parallel Computation}
This chapter explains how to launch a parallel computation. The
strategy adopted by \akantu is based on mesh partitioning: the
elements of the mesh are mapped to processors, and the resulting
partitions are then distributed to the available processors by
dedicated routines, as described below.
The sequence of additional operations to be performed by the user is:
\begin{itemize}
\item Initializing the parallel context
\item Partitioning the mesh
\item Distributing mesh partitions
\end{itemize}
After these steps, the \code{Model} object handles the interprocess
communications automatically, without the user having to deal with
them explicitly. In what follows, we show how this works for a
\code{SolidMechanics} model.
\section{Initializing the Parallel Context}
The user must initialize \akantu by forwarding the arguments passed to the
program with the function \code{initialize}, and close the \akantu instances
at the end of the program by calling the \code{finalize} function.
\note{This step is the same as in the sequential case described in
Section~\ref{sect:common:main}; it simply takes on additional importance in
the parallel/MPI context.}
The \code{initialize} function builds a \code{StaticCommunicator} object
responsible for handling the interprocess communications later on. The
\code{StaticCommunicator} can, for instance, be used to query the total
number of processors available for the computation as well as the rank of
the current process, through the functions \code{getNbProc} and
\code{whoAmI} respectively.
An example of the initializing sequence and basic usage of the
\code{StaticCommunicator} is:
\begin{cpp}
int main(int argc, char *argv[]) {
  initialize("material.dat", argc, argv);

  const auto & comm = Communicator::getStaticCommunicator();
  Int psize = comm.getNbProc();
  Int prank = comm.whoAmI();
  ...
  finalize();
}
\end{cpp}
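A common use of the rank obtained above is to restrict console output to a
single process. The following lines give a minimal sketch building on the
variables \code{prank} and \code{psize} from the previous listing:
\begin{cpp}
if (prank == 0) {
  // only the process of rank 0 prints, avoiding duplicated output
  std::cout << "Running on " << psize << " processes" << std::endl;
}
\end{cpp}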
\section{Partitioning and Distributing the Mesh}
The mesh is partitioned once the processes taking part in the computation
have been properly initialized. We assume that a \code{Mesh} object is
constructed as presented in Section~\ref{sect:common:mesh}; it can then be
distributed among the different processes. At the moment, the only
available partitioner is \code{MeshPartitionScotch}, which relies on the
\textbf{Scotch}\cite{scotch} library and is invoked by the function
\code{distribute}. This is achieved by the following code:
\begin{cpp}
Mesh mesh(spatial_dimension);
if(prank == 0) {
  mesh.read("my_mesh.msh");
}
mesh.distribute();
\end{cpp}
The algorithm that partitions the mesh relies on the generation of random
values. Therefore, in order to run a simulation several times with the same
mesh partition, the \emph{seed} has to be set manually. This can be done
either by adding the following line to the input file, \emph{outside} the
material parameter sections:
\begin{cpp}
seed = 1
\end{cpp}
where the value 1 can be replaced by any number, or by setting it
directly in the code with the command:
\begin{cpp}
RandGenerator::seed(1)
\end{cpp}
The same command with empty parentheses, \code{RandGenerator::seed()}, can
be used to check the value of the \emph{seed} used in the simulation.
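A minimal sketch combining both calls (assuming, as stated above, that the
parameterless call returns the seed currently in use):
\begin{cpp}
RandGenerator::seed(1);                    // fix the seed before distributing the mesh
auto current_seed = RandGenerator::seed(); // check which seed the simulation uses
\end{cpp}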
\note{Only the processor of rank $0$ should load the mesh file in order to
partition it. Nevertheless, the \code{Mesh} object must be declared on all
processors, since the mesh distribution stores the local mesh piece in that
object.}
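Once the mesh is distributed, the model is built and used as in the
sequential case; the interprocess communications mentioned at the beginning
of this chapter happen internally. The following lines give a minimal sketch
of this for an explicit \code{SolidMechanicsModel}, where \code{time\_step}
and \code{nb\_steps} are hypothetical placeholders:
\begin{cpp}
SolidMechanicsModel model(mesh);
model.initFull(_analysis_method = _explicit_lumped_mass);

// Boundary conditions and the time step are set exactly as in the
// sequential case; each process only sees its own mesh piece.
model.setTimeStep(time_step);

for (UInt s = 0; s < nb_steps; ++s) {
  // Ghost degrees of freedom are synchronized internally at each step.
  model.solveStep();
}
\end{cpp}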
An example of an explicit dynamic 2D bar in compression in a parallel
context can be found in \shellcode{\examplesdir/parallel\_2d}.
\section{Launching a Parallel Program}
With \textbf{MPI}, a parallel run can be launched from a shell with the
command
\begin{cpp}
mpirun -np #procs program_name parameter1 parameter2 ...
\end{cpp}
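For example, running the bar-compression example from
\shellcode{\examplesdir/parallel\_2d} on four processors could look as
follows (the executable name \shellcode{parallel\_2d} is an assumption
based on the example directory):
\begin{cpp}
mpirun -np 4 ./parallel_2d
\end{cpp}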
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "manual"
%%% End:
