<li>Px,Py,Pz = # of processors in each dimension of 3d grid overlaying the simulation domain</li>
<li>zero or more keyword/arg pairs may be appended</li>
<li>keyword = <em>grid</em> or <em>map</em> or <em>part</em> or <em>file</em></li>
</ul>
<pre class="literal-block">
<em>grid</em> arg = gstyle params ...
gstyle = <em>onelevel</em> or <em>twolevel</em> or <em>numa</em> or <em>custom</em>
onelevel params = none
twolevel params = Nc Cx Cy Cz
Nc = number of cores per node
Cx,Cy,Cz = # of cores in each dimension of 3d sub-grid assigned to each node
numa params = none
custom params = infile
infile = file containing grid layout
<em>map</em> arg = <em>cart</em> or <em>cart/reorder</em> or <em>xyz</em> or <em>xzy</em> or <em>yxz</em> or <em>yzx</em> or <em>zxy</em> or <em>zyx</em>
cart = use MPI_Cart() methods to map processors to 3d grid with reorder = 0
cart/reorder = use MPI_Cart() methods to map processors to 3d grid with reorder = 1
xyz,xzy,yxz,yzx,zxy,zyx = map processors to 3d grid in IJK ordering
<em>part</em> args = Psend Precv cstyle
Psend = partition # (1 to Np) which will send its processor layout
Precv = partition # (1 to Np) which will recv the processor layout
cstyle = <em>multiple</em>
<em>multiple</em> = Psend grid will be multiple of Precv grid in each dimension
<em>file</em> arg = outfile
outfile = name of file to write 3d grid of processors to
</pre>
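<p>As illustrative examples of the syntax above (the specific values
are arbitrary and would be tuned to your machine and processor count):</p>
<pre class="literal-block">
processors 2 4 4                     # fixed 2x4x4 grid of 32 processors
processors * * 8 map xyz             # Pz = 8, LAMMPS chooses Px,Py
processors * * * grid numa           # auto-detect cores per node
processors * * * part 1 2 multiple   # partition 1 sends its layout to partition 2
</pre>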
<pclass="last">This command only affects the initial regular 3d grid created
when the simulation box is first specified via a
<aclass="reference internal"href="create_box.html"><spanclass="doc">create_box</span></a> or <aclass="reference internal"href="read_data.html"><spanclass="doc">read_data</span></a> or
<aclass="reference internal"href="read_restart.html"><spanclass="doc">read_restart</span></a> command. Or if the simulation box is
re-created via the <aclass="reference internal"href="replicate.html"><spanclass="doc">replicate</span></a> command. The same
regular grid is initially created, regardless of which
<aclass="reference internal"href="comm_style.html"><spanclass="doc">comm_style</span></a> command is in effect.</p>
</div>
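<p>Because only the initial grid is affected, the processors command must
appear in the input script before the box is created. A minimal sketch
(the region bounds and number of atom types are placeholders):</p>
<pre class="literal-block">
processors 4 2 2                  # request a 4x2x2 grid of 16 processors
region box block 0 10 0 10 0 10   # placeholder box dimensions
create_box 1 box                  # the grid is built here, using the setting above
</pre>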
<p>If load-balancing is never invoked via the <a class="reference internal" href="balance.html"><span class="doc">balance</span></a> or
<a class="reference internal" href="fix_balance.html"><span class="doc">fix balance</span></a> commands, then the initial regular grid
will persist for all simulations. If balancing is performed, some of
the methods invoked by those commands retain the logical topology of
the initial 3d grid, and the mapping of processors to the grid
specified by the processors command. However, the grid spacings in
different dimensions may change, so that processors own sub-domains of
different sizes. If the <a class="reference internal" href="comm_style.html"><span class="doc">comm_style tiled</span></a> command is
used, methods invoked by the balancing commands may discard the 3d
grid of processors and tile the simulation domain with sub-domains of
different sizes and shapes, which no longer have a logical 3d
connectivity. If that occurs, all the information specified by the
processors command is ignored.</p>
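<p>For example, a recursive-bisection balance under the tiled
communication style may replace the regular grid entirely. A sketch of
that case (the 1.1 imbalance threshold is an arbitrary example):</p>
<pre class="literal-block">
comm_style tiled
balance 1.1 rcb    # RCB tiling may discard the regular 3d processor grid
</pre>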
<hr class="docutils"/>
<p>The <em>grid</em> keyword affects the factorization of P into Px,Py,Pz and it
can also affect how the P processor IDs are mapped to the 3d grid of
processors.</p>
<p>The <em>onelevel</em> style creates a 3d grid that is compatible with the
Px,Py,Pz settings, and which minimizes the surface-to-volume ratio of
each processor’s sub-domain, as described above. The mapping of
processors to the grid is determined by the <em>map</em> keyword setting.</p>
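<p>For example, a command like the following (values illustrative)
accepts the default one-level factorization but overrides the
rank-to-grid mapping:</p>
<pre class="literal-block">
processors * * * grid onelevel map xyz   # IJK ordering with x varying fastest
</pre>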
<p>The <em>twolevel</em> style can be used on machines with multicore nodes to
minimize off-node communication. It ensures that contiguous
sub-sections of the 3d grid are assigned to all the cores of a node.
For example if <em>Nc</em> is 4, then 2x2x1 or 2x1x2 or 1x2x2 sub-sections of
the 3d grid will correspond to the cores of each node. This affects
both the factorization and mapping steps.</p>
<p>The <em>Cx</em>, <em>Cy</em>, <em>Cz</em> settings are similar to the <em>Px</em>, <em>Py</em>, <em>Pz</em>
settings, except that their product should equal <em>Nc</em>. Any of the 3
parameters can be specified with an asterisk “*”, which means LAMMPS
will choose the number of cores in that dimension of the node’s
sub-grid. As with Px,Py,Pz, it will do this based on the size and
shape of the global simulation box so as to minimize the
surface-to-volume ratio of each processor’s sub-domain.</p>
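<p>As a concrete sketch for quad-core nodes (Nc = 4; the asterisks let
LAMMPS choose the shape of each node's sub-grid, while Cz is pinned
to 1):</p>
<pre class="literal-block">
processors * * * grid twolevel 4 * * 1   # 4 cores/node, force Cz = 1
</pre>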
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<pclass="last">For the <em>twolevel</em> style to work correctly, it assumes the MPI
ranks of processors LAMMPS is running on are ordered by core and then
by node. E.g. if you are running on 2 quad-core nodes, for a total of
8 processors, then it assumes processors 0,1,2,3 are on node 1, and
processors 4,5,6,7 are on node 2. This is the default rank ordering
for most MPI implementations, but some MPIs provide options for this
ordering, e.g. via environment variable settings.</p>
</div>
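<p>How ranks are ordered depends on the MPI launcher. As one hedged
illustration using Open MPI (the flags below are specific to that
implementation and may differ across versions):</p>
<pre class="literal-block">
# consecutive ranks fill each node first (the ordering twolevel assumes)
mpirun -np 8 --map-by core ./lmp -in in.script
# round-robin placement across nodes would violate that assumption
mpirun -np 8 --map-by node ./lmp -in in.script
</pre>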
<p>The <em>numa</em> style operates similarly to the <em>twolevel</em> keyword except
that it auto-detects which cores are running on which nodes.
Currently, it does this in only 2 levels, but it may be extended in
the future to account for socket topology and other non-uniform memory
access (NUMA) costs. It also uses a different algorithm than the
<em>twolevel</em> keyword for doing the two-level factorization of the
simulation box into a 3d processor grid to minimize off-node
communication, and it does its own MPI-based mapping of nodes and
cores to the regular 3d grid. Thus it may produce a different layout
of the processors than the <em>twolevel</em> options.</p>
<p>The <em>numa</em> style will give an error if the number of MPI processes is
not divisible by the number of cores used per node, or if any of the
Px, Py, or Pz values is greater than 1.</p>
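<p>A typical invocation therefore leaves all three dimensions
unspecified (sketch):</p>
<pre class="literal-block">
processors * * * grid numa   # all of Px,Py,Pz chosen by LAMMPS
</pre>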
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<pclass="last">Unlike the <em>twolevel</em> style, the <em>numa</em> style does not require
any particular ordering of MPI ranks i norder to work correctly. This
is because it auto-detects which processes are running on which nodes.</p>
</div>
<p>The <em>custom</em> style uses the file <em>infile</em> to define both the 3d
factorization and the mapping of processors to the grid.</p>
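<p>For example (the file name is a placeholder):</p>
<pre class="literal-block">
processors * * * grid custom gridfile.txt   # layout read from gridfile.txt
</pre>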
<p>The file should have the following format. Any number of initial
blank or comment lines (starting with a “#” character) can be present.