<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>2. Getting Started &mdash; LAMMPS documentation</title>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="stylesheet" href="_static/sphinxcontrib-images/LightBox2/lightbox2/css/lightbox.css" type="text/css" />
<link rel="top" title="LAMMPS documentation" href="index.html"/>
<link rel="next" title="3. Commands" href="Section_commands.html"/>
<link rel="prev" title="1. Introduction" href="Section_intro.html"/>
<script src="_static/js/modernizr.min.js"></script>
</head>
<body class="wy-body-for-nav" role="document">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-nav-search">
<a href="Manual.html" class="icon icon-home"> LAMMPS
</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="Section_intro.html">1. Introduction</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">2. Getting Started</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#what-s-in-the-lammps-distribution">2.1. What&#8217;s in the LAMMPS distribution</a></li>
<li class="toctree-l2"><a class="reference internal" href="#making-lammps">2.2. Making LAMMPS</a></li>
<li class="toctree-l2"><a class="reference internal" href="#making-lammps-with-optional-packages">2.3. Making LAMMPS with optional packages</a></li>
<li class="toctree-l2"><a class="reference internal" href="#building-lammps-via-the-make-py-tool">2.4. Building LAMMPS via the Make.py tool</a></li>
<li class="toctree-l2"><a class="reference internal" href="#building-lammps-as-a-library">2.5. Building LAMMPS as a library</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#static-library">2.5.1. <strong>Static library:</strong></a></li>
<li class="toctree-l3"><a class="reference internal" href="#shared-library">2.5.2. <strong>Shared library:</strong></a></li>
<li class="toctree-l3"><a class="reference internal" href="#additional-requirement-for-using-a-shared-library">2.5.3. <strong>Additional requirement for using a shared library:</strong></a></li>
<li class="toctree-l3"><a class="reference internal" href="#calling-the-lammps-library">2.5.4. <strong>Calling the LAMMPS library:</strong></a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#running-lammps">2.6. Running LAMMPS</a></li>
<li class="toctree-l2"><a class="reference internal" href="#command-line-options">2.7. Command-line options</a></li>
<li class="toctree-l2"><a class="reference internal" href="#lammps-screen-output">2.8. LAMMPS screen output</a></li>
<li class="toctree-l2"><a class="reference internal" href="#tips-for-users-of-previous-lammps-versions">2.9. Tips for users of previous LAMMPS versions</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="Section_commands.html">3. Commands</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_packages.html">4. Packages</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_accelerate.html">5. Accelerating LAMMPS performance</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_howto.html">6. How-to discussions</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_example.html">7. Example problems</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_perf.html">8. Performance &amp; scalability</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_tools.html">9. Additional tools</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_modify.html">10. Modifying &amp; extending LAMMPS</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_python.html">11. Python interface to LAMMPS</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_errors.html">12. Errors</a></li>
<li class="toctree-l1"><a class="reference internal" href="Section_history.html">13. Future and history</a></li>
</ul>
</div>
&nbsp;
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" role="navigation" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="Manual.html">LAMMPS</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="Manual.html">Docs</a> &raquo;</li>
<li>2. Getting Started</li>
<li class="wy-breadcrumbs-aside">
<a href="http://lammps.sandia.gov">Website</a>
<a href="Section_commands.html#comm">Commands</a>
</li>
</ul>
<hr/>
<div class="rst-footer-buttons" style="margin-bottom: 1em" role="navigation" aria-label="footer navigation">
<a href="Section_commands.html" class="btn btn-neutral float-right" title="3. Commands" accesskey="n">Next <span class="fa fa-arrow-circle-right"></span></a>
<a href="Section_intro.html" class="btn btn-neutral" title="1. Introduction" accesskey="p"><span class="fa fa-arrow-circle-left"></span> Previous</a>
</div>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="getting-started">
<h1>2. Getting Started</h1>
<p>This section describes how to build and run LAMMPS, for both new and
experienced users.</p>
<div class="line-block">
<div class="line">2.1 <a class="reference internal" href="#start-1"><span class="std std-ref">What&#8217;s in the LAMMPS distribution</span></a></div>
<div class="line">2.2 <a class="reference internal" href="#start-2"><span class="std std-ref">Making LAMMPS</span></a></div>
<div class="line">2.3 <a class="reference internal" href="#start-3"><span class="std std-ref">Making LAMMPS with optional packages</span></a></div>
<div class="line">2.4 <a class="reference internal" href="#start-4"><span class="std std-ref">Building LAMMPS via the Make.py script</span></a></div>
<div class="line">2.5 <a class="reference internal" href="#start-5"><span class="std std-ref">Building LAMMPS as a library</span></a></div>
<div class="line">2.6 <a class="reference internal" href="#start-6"><span class="std std-ref">Running LAMMPS</span></a></div>
<div class="line">2.7 <a class="reference internal" href="#start-7"><span class="std std-ref">Command-line options</span></a></div>
<div class="line">2.8 <a class="reference internal" href="#start-8"><span class="std std-ref">Screen output</span></a></div>
<div class="line">2.9 <a class="reference internal" href="#start-9"><span class="std std-ref">Tips for users of previous versions</span></a></div>
<div class="line"><br /></div>
</div>
<div class="section" id="what-s-in-the-lammps-distribution">
<span id="start-1"></span><h2>2.1. What&#8217;s in the LAMMPS distribution</h2>
<p>After downloading a LAMMPS tarball, place it in an appropriate
directory, then unzip and untar it with the following commands:</p>
<pre class="literal-block">
gunzip lammps*.tar.gz
tar xvf lammps*.tar
</pre>
<p>This will create a LAMMPS directory containing two files and several
sub-directories:</p>
<table border="1" class="docutils">
<colgroup>
<col width="21%" />
<col width="79%" />
</colgroup>
<tbody valign="top">
<tr class="row-odd"><td>README</td>
<td>text file</td>
</tr>
<tr class="row-even"><td>LICENSE</td>
<td>the GNU General Public License (GPL)</td>
</tr>
<tr class="row-odd"><td>bench</td>
<td>benchmark problems</td>
</tr>
<tr class="row-even"><td>doc</td>
<td>documentation</td>
</tr>
<tr class="row-odd"><td>examples</td>
<td>simple test problems</td>
</tr>
<tr class="row-even"><td>potentials</td>
<td>embedded atom method (EAM) potential files</td>
</tr>
<tr class="row-odd"><td>src</td>
<td>source files</td>
</tr>
<tr class="row-even"><td>tools</td>
<td>pre- and post-processing tools</td>
</tr>
</tbody>
</table>
<p>Note that the <a class="reference external" href="http://lammps.sandia.gov/download.html">download page</a> also has links to download
Windows executables and installers, as well as pre-built executables
for a few specific Linux distributions. It also has instructions for
how to download/install LAMMPS for Macs (via Homebrew), and to
download and update LAMMPS from SVN and Git repositories, which gives
you the same files that are in the download tarball.</p>
<p>The serial and parallel Windows and Linux executables include only
certain packages and bug-fixes/upgrades listed on <a class="reference external" href="http://lammps.sandia.gov/bug.html">this page</a> up to a certain date, as
stated on the download page. If you want an executable with
non-included packages or one that is more current, you&#8217;ll need to
build LAMMPS yourself, as discussed in the next section.</p>
<p>Skip to the <a class="reference internal" href="#start-6"><span class="std std-ref">Running LAMMPS</span></a> section for info on how to
launch a LAMMPS Windows executable on a Windows box.</p>
<hr class="docutils" />
</div>
<div class="section" id="making-lammps">
<span id="start-2"></span><h2>2.2. Making LAMMPS</h2>
<p>This section has the following sub-sections:</p>
<ul class="simple">
<li><a class="reference internal" href="#start-2-1"><span class="std std-ref">Read this first</span></a></li>
<li><a class="reference internal" href="#start-2-2"><span class="std std-ref">Steps to build a LAMMPS executable</span></a></li>
<li><a class="reference internal" href="#start-2-3"><span class="std std-ref">Common errors that can occur when making LAMMPS</span></a></li>
<li><a class="reference internal" href="#start-2-4"><span class="std std-ref">Additional build tips</span></a></li>
<li><a class="reference internal" href="#start-2-5"><span class="std std-ref">Building for a Mac</span></a></li>
<li><a class="reference internal" href="#start-2-6"><span class="std std-ref">Building for Windows</span></a></li>
</ul>
<hr class="docutils" />
<p id="start-2-1"><a href="#id1"><span class="problematic" id="id2">**</span></a><em>Read this first:</em>**</p>
<p>If you want to avoid building LAMMPS yourself, read the preceding
section about options available for downloading and installing
executables. Details are discussed on the <a class="reference external" href="http://lammps.sandia.gov/download.html">download</a> page.</p>
<p>Building LAMMPS can be simple or not-so-simple. If all you need are
the default packages installed in LAMMPS, and MPI is already installed
on your machine, or you just want to run LAMMPS in serial, then you
can typically use the Makefile.mpi or Makefile.serial files in
src/MAKE by typing one of these lines (from the src dir):</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">mpi</span>
<span class="n">make</span> <span class="n">serial</span>
</pre></div>
</div>
<p>Note that on a facility supercomputer, there are often &#8220;modules&#8221;
loaded in your environment that provide the compilers and MPI you
should use. In this case, the &#8220;mpicxx&#8221; compile/link command in
Makefile.mpi should just work by accessing those modules.</p>
<p>It may be the case that one of the other Makefile.machine files in the
src/MAKE sub-directories is a better match to your system (type &#8220;make&#8221;
to see a list). If so, you can use it as-is by typing (for example):</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">stampede</span>
</pre></div>
</div>
<p>If any of these builds (with an existing Makefile.machine) works on
your system, then you&#8217;re done!</p>
<p>If you want to do one of the following:</p>
<ul class="simple">
<li>use optional LAMMPS features that require additional libraries</li>
<li>use optional packages that require additional libraries</li>
<li>use optional accelerator packages that require special compiler/linker settings</li>
<li>run on a specialized platform that has its own compilers, settings, or other libs to use</li>
</ul>
<p>then building LAMMPS is more complicated. You may need to find where
auxiliary libraries exist on your machine or install them if they
don&#8217;t. You may need to build additional libraries that are part of
the LAMMPS package, before building LAMMPS. You may need to edit a
Makefile.machine file to make it compatible with your system.</p>
<p>Note that there is a Make.py tool in the src directory that automates
several of these steps, but you still have to know what you are doing.
<a class="reference internal" href="#start-4"><span class="std std-ref">Section 2.4</span></a> below describes the tool. It is a convenient
way to work with installing/un-installing various packages, the
Makefile.machine changes required by some packages, and the auxiliary
libraries some of them use.</p>
<p>Please read the following sections carefully. If you are not
comfortable with makefiles, or building codes on a Unix platform, or
running an MPI job on your machine, please find a local expert to help
you. Many compilation, linking, and run problems that users encounter
are not really LAMMPS issues - they are peculiar to the user&#8217;s
system, compilers, libraries, etc. Such questions are better answered
by a local expert.</p>
<p>If you have a build problem that you are convinced is a LAMMPS issue
(e.g. the compiler complains about a line of LAMMPS source code), then
please post the issue to the <a class="reference external" href="http://lammps.sandia.gov/mail.html">LAMMPS mail list</a>.</p>
<p>If you succeed in building LAMMPS on a new kind of machine, for which
there isn&#8217;t a similar machine Makefile included in the
src/MAKE/MACHINES directory, then send it to the developers and we can
include it in the LAMMPS distribution.</p>
<hr class="docutils" />
<p id="start-2-2"><a href="#id3"><span class="problematic" id="id4">**</span></a><em>Steps to build a LAMMPS executable:</em>**</p>
<p><strong>Step 0</strong></p>
<p>The src directory contains the C++ source and header files for LAMMPS.
It also contains a top-level Makefile and a MAKE sub-directory with
low-level Makefile.* files for many systems and machines. See the
src/MAKE/README file for a quick overview of what files are available
and what sub-directories they are in.</p>
<p>The src/MAKE dir has a few files that should work as-is on many
platforms. The src/MAKE/OPTIONS dir has more that invoke additional
compiler, MPI, and other setting options commonly used by LAMMPS, to
illustrate their syntax. The src/MAKE/MACHINES dir has many more that
have been tweaked or optimized for specific machines. These files are
all good starting points if you find you need to change them for your
machine. Put any file you edit into the src/MAKE/MINE directory and
it will never be touched by any LAMMPS updates.</p>
<p>&gt;From within the src directory, type &#8220;make&#8221; or &#8220;gmake&#8221;. You should see
a list of available choices from src/MAKE and all of its
sub-directories. If one of those has the options you want or is the
machine you want, you can type a command like:</p>
<pre class="literal-block">
make mpi
or
make serial_icc
or
gmake mac
</pre>
<p>Note that the corresponding Makefile.machine can exist in src/MAKE or
any of its sub-directories. If a file with the same name appears in
multiple places (not a good idea), the order they are used is as
follows: src/MAKE/MINE, src/MAKE, src/MAKE/OPTIONS, src/MAKE/MACHINES.
This gives preference to a file you have created/edited and put in
src/MAKE/MINE.</p>
<p>Note that on a multi-processor or multi-core platform you can launch a
parallel make, by using the &#8220;-j&#8221; switch with the make command, which
will build LAMMPS more quickly.</p>
<p>If you get no errors and an executable like lmp_mpi or lmp_g++_serial
or lmp_mac is produced, then you&#8217;re done; it&#8217;s your lucky day.</p>
<p>Note that by default only a few of LAMMPS optional packages are
installed. To build LAMMPS with optional packages, see <a class="reference internal" href="#start-3"><span class="std std-ref">this section</span></a> below.</p>
<p><strong>Step 1</strong></p>
<p>If Step 0 did not work, you will need to create a low-level Makefile
for your machine, like Makefile.foo. You should make a copy of an
existing Makefile.* in src/MAKE or one of its sub-directories as a
starting point. The only portions of the file you need to edit are
the first line, the &#8220;compiler/linker settings&#8221; section, and the
&#8220;LAMMPS-specific settings&#8221; section. When it works, put the edited
file in src/MAKE/MINE and it will not be altered by any future LAMMPS
updates.</p>
<p><strong>Step 2</strong></p>
<p>Change the first line of Makefile.foo to list the word &#8220;foo&#8221; after the
&#8220;#&#8221;, and whatever other options it will set. This is the line you
will see if you just type &#8220;make&#8221;.</p>
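<p>For example, the first line of a hypothetical Makefile.foo could read
as follows (the text after the machine name is a free-form description
shown by &#8220;make&#8221;):</p>
<pre class="literal-block">
# foo = Linux workstation, g++, OpenMPI, FFTW3
</pre>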
<p><strong>Step 3</strong></p>
<p>The &#8220;compiler/linker settings&#8221; section lists compiler and linker
settings for your C++ compiler, including optimization flags. You can
use g++, the open-source GNU compiler, which is available on all Unix
systems. You can also use mpicxx which will typically be available if
MPI is installed on your system, though you should check which actual
compiler it wraps. Vendor compilers often produce faster code. On
boxes with Intel CPUs, we suggest using the Intel icc compiler, which
can be downloaded from <a class="reference external" href="http://www.intel.com/software/products/noncom">Intel&#8217;s compiler site</a>.</p>
<p>If building a C++ code on your machine requires additional libraries,
then you should list them as part of the LIB variable. You should
not need to do this if you use mpicxx.</p>
<p>The DEPFLAGS setting is what triggers the C++ compiler to create a
dependency list for a source file. This speeds re-compilation when
source (*.cpp) or header (*.h) files are edited. Some compilers do
not support dependency file creation, or may use a different switch
than -D. GNU g++ and Intel icc work with -D. If your compiler can&#8217;t
create dependency files, then you&#8217;ll need to create a Makefile.foo
patterned after Makefile.storm, which uses different rules that do not
involve dependency files. Note that when you build LAMMPS for the
first time on a new platform, a long list of *.d files will be printed
out rapidly. This is not an error; it is the Makefile doing its
normal creation of dependencies.</p>
<p><strong>Step 4</strong></p>
<p>The &#8220;system-specific settings&#8221; section has several parts. Note that
if you change any -D setting in this section, you should do a full
re-compile, after typing &#8220;make clean&#8221; (which lists the available
clean options).</p>
<p>The LMP_INC variable is used to include options that turn on ifdefs
within the LAMMPS code. The options that are currently recognized are
listed below; a sample LMP_INC line follows the list:</p>
<ul class="simple">
<li>-DLAMMPS_GZIP</li>
<li>-DLAMMPS_JPEG</li>
<li>-DLAMMPS_PNG</li>
<li>-DLAMMPS_FFMPEG</li>
<li>-DLAMMPS_MEMALIGN</li>
<li>-DLAMMPS_XDR</li>
<li>-DLAMMPS_SMALLBIG</li>
<li>-DLAMMPS_BIGBIG</li>
<li>-DLAMMPS_SMALLSMALL</li>
<li>-DLAMMPS_LONGLONG_TO_LONG</li>
<li>-DLAMMPS_EXCEPTIONS</li>
<li>-DPACK_ARRAY</li>
<li>-DPACK_POINTER</li>
<li>-DPACK_MEMCPY</li>
</ul>
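<p>As an illustration only (the switches you need depend on your build),
an LMP_INC line that enables gzipped I/O, JPEG output, and 64-byte
memory alignment would look like:</p>
<pre class="literal-block">
LMP_INC = -DLAMMPS_GZIP -DLAMMPS_JPEG -DLAMMPS_MEMALIGN=64
</pre>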
<p>The read_data and dump commands will read/write gzipped files if you
compile with -DLAMMPS_GZIP. It requires that your machine supports
the &#8220;popen()&#8221; function in the standard runtime library and that a gzip
executable can be found by LAMMPS during a run.</p>
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">on some clusters with high-speed networks, using the fork()
library calls (required by popen()) can interfere with the fast
communication library and lead to simulations using compressed output
or input to hang or crash. For selected operations, compressed file
I/O is also available using a compression library instead, which are
provided in the COMPRESS package. From more details about compiling
LAMMPS with packages, please see below.</p>
</div>
<p>If you use -DLAMMPS_JPEG, the <a class="reference internal" href="dump_image.html"><span class="doc">dump image</span></a> command
will be able to write out JPEG image files. For JPEG files, you must
also link LAMMPS with a JPEG library, as described below. If you use
-DLAMMPS_PNG, the <a class="reference internal" href="dump.html"><span class="doc">dump image</span></a> command will be able to write
out PNG image files. For PNG files, you must also link LAMMPS with a
PNG library, as described below. If neither of those two defines is
used, LAMMPS will only be able to write out uncompressed PPM image
files.</p>
<p>If you use -DLAMMPS_FFMPEG, the <a class="reference internal" href="dump_image.html"><span class="doc">dump movie</span></a> command
will be available to support on-the-fly generation of rendered movies
without the need to store intermediate image files. It requires that your
machine supports the &#8220;popen&#8221; function in the standard runtime library
and that an FFmpeg executable can be found by LAMMPS during the run.</p>
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">Similar to the note above, this option can conflict with
high-speed networks, because it uses popen().</p>
</div>
<p>Using -DLAMMPS_MEMALIGN=&lt;bytes&gt; enables the use of the
posix_memalign() call instead of malloc() when large chunks of memory
are allocated by LAMMPS. This can help to make more efficient use of
vector instructions of modern CPUs, since dynamically allocated memory
has to be aligned on larger-than-default byte boundaries (e.g. 16
bytes instead of 8 bytes on x86-type platforms) for optimal
performance.</p>
<p>If you use -DLAMMPS_XDR, the build will include XDR compatibility
files for doing particle dumps in XTC format. This is only necessary
if your platform does not have its own XDR files available. See the
Restrictions section of the <a class="reference internal" href="dump.html"><span class="doc">dump</span></a> command for details.</p>
<p>Use at most one of the -DLAMMPS_SMALLBIG, -DLAMMPS_BIGBIG,
-DLAMMPS_SMALLSMALL settings. The default is -DLAMMPS_SMALLBIG. These
settings refer to use of 4-byte (small) vs 8-byte (big) integers
within LAMMPS, as specified in src/lmptype.h. The only reason to use
the BIGBIG setting is to enable simulation of huge molecular systems
(which store bond topology info) with more than 2 billion atoms, or to
track the image flags of moving atoms that wrap around a periodic box
more than 512 times. Normally, the only reason to use SMALLSMALL is
if your machine does not support 64-bit integers, though you can use
the SMALLSMALL setting if you are running in serial or on a desktop
machine or small cluster where you will never run large systems or for
long times (more than 2 billion atoms, more than 2 billion timesteps).
See the <a class="reference internal" href="#start-2-4"><span class="std std-ref">Additional build tips</span></a> section below for more
details on these settings.</p>
<p>Note that the USER-ATC package is not currently compatible with
-DLAMMPS_BIGBIG. Also the GPU package requires the lib/gpu library to
be compiled with the same setting, or the link will fail.</p>
<p>The -DLAMMPS_LONGLONG_TO_LONG setting may be needed if your system or
MPI version does not recognize &#8220;long long&#8221; data types. In this case a
&#8220;long&#8221; data type is likely already 64 bits, and this setting converts
&#8220;long long&#8221; quantities to that data type.</p>
<p>The -DLAMMPS_EXCEPTIONS setting can be used to activate alternative
versions of error handling inside of LAMMPS. This is useful when
external codes drive LAMMPS as a library. Using this option, LAMMPS
errors do not kill the caller. Instead, the call stack is unwound and
control returns to the caller. The library interface provides the
lammps_has_error() and lammps_get_last_error_message() functions to
detect and find out more about a LAMMPS error.</p>
<p>Using one of the -DPACK_ARRAY, -DPACK_POINTER, and -DPACK_MEMCPY
options can make for faster parallel FFTs (in the PPPM solver) on some
platforms. The -DPACK_ARRAY setting is the default. See the
<a class="reference internal" href="kspace_style.html"><span class="doc">kspace_style</span></a> command for info about PPPM. See
Step 6 below for info about building LAMMPS with an FFT library.</p>
<p><strong>Step 5</strong></p>
<p>The 3 MPI variables are used to specify an MPI library to build LAMMPS
with. Note that you do not need to set these if you use the MPI
compiler mpicxx for your CC and LINK setting in the section above.
The MPI wrapper knows where to find the needed files.</p>
<p>If you want LAMMPS to run in parallel, you must have an MPI library
installed on your platform. If MPI is installed on your system in the
usual place (under /usr/local), you also may not need to specify these
3 variables, assuming /usr/local is in your path. On some large
parallel machines which use &#8220;modules&#8221; for their compile/link
environments, you may simply need to include the correct module in
your build environment, before building LAMMPS. Or the parallel
machine may have a vendor-provided MPI which the compiler has no
trouble finding.</p>
<p>Failing this, these 3 variables can be used to specify where the mpi.h
file (MPI_INC) and the MPI library file (MPI_PATH) are found and the
name of the library file (MPI_LIB).</p>
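<p>As an illustration only (paths and library names depend on your system
and MPI installation), settings for an MPICH installed under /usr/local
might look like:</p>
<pre class="literal-block">
MPI_INC =  -I/usr/local/include
MPI_PATH = -L/usr/local/lib
MPI_LIB =  -lmpich -lpthread
</pre>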
<p>If you are installing MPI yourself, we recommend Argonne&#8217;s MPICH2
or OpenMPI. MPICH can be downloaded from the <a class="reference external" href="http://www.mcs.anl.gov/research/projects/mpich2/">Argonne MPI site</a>. OpenMPI can
be downloaded from the <a class="reference external" href="http://www.open-mpi.org">OpenMPI site</a>.
Other MPI packages should also work. If you are running on a big
parallel platform, your system people or the vendor should have
already installed a version of MPI, which is likely to be faster
than a self-installed MPICH or OpenMPI, so find out how to build
and link with it. If you use MPICH or OpenMPI, you will have to
configure and build it for your platform. The MPI configure script
should have compiler options to enable you to use the same compiler
you are using for the LAMMPS build, which can avoid problems that can
arise when linking LAMMPS to the MPI library.</p>
<p>If you just want to run LAMMPS on a single processor, you can use the
dummy MPI library provided in src/STUBS, since you don&#8217;t need a true
MPI library installed on your system. See src/MAKE/Makefile.serial
for how to specify the 3 MPI variables in this case. You will also
need to build the STUBS library for your platform before making LAMMPS
itself. Note that if you are building with src/MAKE/Makefile.serial,
e.g. by typing &#8220;make serial&#8221;, then the STUBS library is built for you.</p>
<p>To build the STUBS library from the src directory, type &#8220;make
mpi-stubs&#8221;, or from the src/STUBS dir, type &#8220;make&#8221;. This should
create a libmpi_stubs.a file suitable for linking to LAMMPS. If the
build fails, you will need to edit the STUBS/Makefile for your
platform.</p>
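<p>For example, a minimal serial build using the STUBS library can be done
from the src directory as follows (the explicit mpi-stubs step is
optional, since &#8220;make serial&#8221; builds the STUBS library for you):</p>
<pre class="literal-block">
make mpi-stubs      # builds src/STUBS/libmpi_stubs.a
make serial         # builds the lmp_serial executable
</pre>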
<p>The file STUBS/mpi.c provides a CPU timer function called MPI_Wtime()
that calls gettimeofday(). If your system doesn&#8217;t support
gettimeofday(), you&#8217;ll need to insert code to call another timer.
Note that the ANSI-standard function clock() rolls over after an hour
or so, and is therefore insufficient for timing long LAMMPS
simulations.</p>
<p><strong>Step 6</strong></p>
<p>The 3 FFT variables allow you to specify an FFT library which LAMMPS
uses (for performing 1d FFTs) when running the particle-particle
particle-mesh (PPPM) option for long-range Coulombics via the
<a class="reference internal" href="kspace_style.html"><span class="doc">kspace_style</span></a> command.</p>
<p>LAMMPS supports various open-source or vendor-supplied FFT libraries
for this purpose. If you leave these 3 variables blank, LAMMPS will
use the open-source <a class="reference external" href="http://kissfft.sf.net">KISS FFT library</a>, which is
included in the LAMMPS distribution. This library is portable to all
platforms and for typical LAMMPS simulations is almost as fast as FFTW
or vendor optimized libraries. If you are not including the KSPACE
package in your build, you can also leave the 3 variables blank.</p>
<p>Otherwise, select which kinds of FFTs to use as part of the FFT_INC
setting by a switch of the form -DFFT_XXX. Recommended values for XXX
are: MKL, SCSL, FFTW2, and FFTW3. Legacy options are: INTEL, SGI,
ACML, and T3E. For backward compatibility, using -DFFT_FFTW will use
the FFTW2 library. Using -DFFT_NONE will use the KISS library
described above.</p>
<p>You may also need to set the FFT_INC, FFT_PATH, and FFT_LIB variables,
so the compiler and linker can find the needed FFT header and library
files. Note that on some large parallel machines which use &#8220;modules&#8221;
for their compile/link environments, you may simply need to include
the correct module in your build environment. Or the parallel machine
may have a vendor-provided FFT library which the compiler has no
trouble finding.</p>
<p>FFTW is a fast, portable library that should also work on any
platform. You can download it from
<a class="reference external" href="http://www.fftw.org">www.fftw.org</a>. Both the legacy version 2.1.X and
the newer 3.X versions are supported as -DFFT_FFTW2 or -DFFT_FFTW3.
Building FFTW for your box should be as simple as ./configure; make.
Note that on some platforms FFTW2 has been pre-installed, and uses
renamed files indicating the precision it was compiled with,
e.g. sfftw.h, or dfftw.h instead of fftw.h. In this case, you can
specify an additional define variable for FFT_INC called -DFFTW_SIZE,
which will select the correct include file. In this case, for FFT_LIB
you must also manually specify the correct library, namely -lsfftw or
-ldfftw.</p>
<p>The FFT_INC variable also allows for a -DFFT_SINGLE setting that will
use single-precision FFTs with PPPM, which can speed-up long-range
calculations, particularly in parallel or on GPUs. Fourier transform
and related PPPM operations are somewhat insensitive to floating point
truncation errors and thus do not always need to be performed in
double precision. Using the -DFFT_SINGLE setting trades off a little
accuracy for reduced memory use and parallel communication costs for
transposing 3d FFT data. Note that single precision FFTs have only
been tested with the FFTW3, FFTW2, MKL, and KISS FFT options.</p>
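<p>As an illustration (the install prefix is a placeholder for your
system), a double-precision FFTW3 build might use:</p>
<pre class="literal-block">
FFT_INC =  -DFFT_FFTW3 -I/usr/local/include
FFT_PATH = -L/usr/local/lib
FFT_LIB =  -lfftw3
</pre>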
<p><strong>Step 7</strong></p>
<p>The 3 JPG variables allow you to specify a JPEG and/or PNG library
which LAMMPS uses when writing out JPEG or PNG files via the <a class="reference internal" href="dump_image.html"><span class="doc">dump image</span></a> command. These can be left blank if you do not
use the -DLAMMPS_JPEG or -DLAMMPS_PNG switches discussed above in Step
4, since in that case JPEG/PNG output will be disabled.</p>
<p>A standard JPEG library usually goes by the name libjpeg.a or
libjpeg.so and has an associated header file jpeglib.h. Whichever
JPEG library you have on your platform, you&#8217;ll need to set the
appropriate JPG_INC, JPG_PATH, and JPG_LIB variables, so that the
compiler and linker can find it.</p>
<p>A standard PNG library usually goes by the name libpng.a or libpng.so
and has an associated header file png.h. Whichever PNG library you
have on your platform, you&#8217;ll need to set the appropriate JPG_INC,
JPG_PATH, and JPG_LIB variables, so that the compiler and linker can
find it.</p>
<p>As before, if these header and library files are in the usual place on
your machine, you may not need to set these variables.</p>
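<p>For example (the paths are placeholders; libpng typically also needs
zlib), enabling both JPEG and PNG output might use:</p>
<pre class="literal-block">
JPG_INC =  -I/usr/local/include
JPG_PATH = -L/usr/local/lib
JPG_LIB =  -ljpeg -lpng -lz
</pre>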
<p><strong>Step 8</strong></p>
<p>Note that by default only a few of LAMMPS optional packages are
installed. To build LAMMPS with optional packages, see <a class="reference internal" href="#start-3"><span class="std std-ref">this section</span></a> below, before proceeding to Step 9.</p>
<p><strong>Step 9</strong></p>
<p>That&#8217;s it. Once you have a correct Makefile.foo, and you have
pre-built any other needed libraries (e.g. MPI, FFT, etc) all you need
to do from the src directory is type something like this:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">foo</span>
<span class="n">make</span> <span class="o">-</span><span class="n">j</span> <span class="n">N</span> <span class="n">foo</span>
<span class="n">gmake</span> <span class="n">foo</span>
<span class="n">gmake</span> <span class="o">-</span><span class="n">j</span> <span class="n">N</span> <span class="n">foo</span>
</pre></div>
</div>
<p>The -j or -j N switches perform a parallel build which can be much
faster, depending on how many cores your compilation machine has. N
is the number of cores the build runs on.</p>
<p>You should get the executable lmp_foo when the build is complete.</p>
<hr class="docutils" />
<p id="start-2-3"><a href="#id5"><span class="problematic" id="id6">**</span></a><em>Errors that can occur when making LAMMPS:</em>**</p>
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">If an error occurs when building LAMMPS, the compiler or linker
will state very explicitly what the problem is. The error message
should give you a hint as to which of the steps above has failed, and
what you need to do in order to fix it. Building a code with a
Makefile is a very logical process. The compiler and linker need to
find the appropriate files and those files need to be compatible with
LAMMPS source files. When a make fails, there is usually a very
simple reason, which you or a local expert will need to fix.</p>
</div>
<p>Here are two non-obvious errors that can occur:</p>
<p>(1) If the make command breaks immediately with errors that indicate
it can&#8217;t find files with a &#8220;*&#8221; in their names, this can be because
your machine&#8217;s native make doesn&#8217;t support wildcard expansion in a
makefile. Try gmake instead of make. If that doesn&#8217;t work, try using
a -f switch with your make command to use a pre-generated
Makefile.list which explicitly lists all the needed files, e.g.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">makelist</span>
<span class="n">make</span> <span class="o">-</span><span class="n">f</span> <span class="n">Makefile</span><span class="o">.</span><span class="n">list</span> <span class="n">linux</span>
<span class="n">gmake</span> <span class="o">-</span><span class="n">f</span> <span class="n">Makefile</span><span class="o">.</span><span class="n">list</span> <span class="n">mac</span>
</pre></div>
</div>
<p>The first &#8220;make&#8221; command will create a current Makefile.list with all
the file names in your src dir. The 2nd &#8220;make&#8221; command (make or
gmake) will use it to build LAMMPS. Note that you should
include/exclude any desired optional packages before using the &#8220;make
makelist&#8221; command.</p>
<p>(2) If you get an error that says something like &#8216;identifier &#8220;atoll&#8221;
is undefined&#8217;, then your machine does not support &#8220;long long&#8221;
integers. Try using the -DLAMMPS_LONGLONG_TO_LONG setting described
above in Step 4.</p>
<hr class="docutils" />
<p id="start-2-4"><a href="#id7"><span class="problematic" id="id8">**</span></a><em>Additional build tips:</em>**</p>
<ol class="arabic simple">
<li>Building LAMMPS for multiple platforms.</li>
</ol>
<p>You can make LAMMPS for multiple platforms from the same src
directory. Each target creates its own object sub-directory called
Obj_target where it stores the system-specific *.o files.</p>
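<p>For example, building two targets from the same src directory keeps
their object files in separate sub-directories:</p>
<pre class="literal-block">
make mpi        # objects stored in Obj_mpi
make serial     # objects stored in Obj_serial
</pre>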
<ol class="arabic simple" start="2">
<li>Cleaning up.</li>
</ol>
<p>Typing &#8220;make clean-all&#8221; or &#8220;make clean-machine&#8221; will delete *.o object
files created when LAMMPS is built, for either all builds or for a
particular machine.</p>
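<p>For example:</p>
<pre class="literal-block">
make clean-all      # delete *.o files for all builds
make clean-mpi      # delete *.o files for the mpi build only
</pre>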
<ol class="arabic simple" start="3">
<li>Changing the LAMMPS size limits via -DLAMMPS_SMALLBIG or
-DLAMMPS_BIGBIG or -DLAMMPS_SMALLSMALL.</li>
</ol>
<p>As explained above, any of these 3 settings can be specified on the
LMP_INC line in your low-level src/MAKE/Makefile.foo.</p>
<p>The default is -DLAMMPS_SMALLBIG which allows for systems with up to
2^63 atoms and 2^63 timesteps (about 9e18). The atom limit is for
atomic systems which do not store bond topology info and thus do not
require atom IDs. If you use atom IDs for atomic systems (which is
the default) or if you use a molecular model, which stores bond
topology info and thus requires atom IDs, the limit is 2^31 atoms
(about 2 billion). This is because the IDs are stored in 32-bit
integers.</p>
<p>Likewise, with this setting, the 3 image flags for each atom (see the
<a class="reference internal" href="dump.html"><span class="doc">dump</span></a> doc page for a discussion) are stored in a 32-bit
integer, which means the atoms can only wrap around a periodic box (in
each dimension) at most 512 times. If atoms move through the periodic
box more than this many times, the image flags will &#8220;roll over&#8221;,
e.g. from 511 to -512, which can cause diagnostics like the
mean-squared displacement, as calculated by the <a class="reference internal" href="compute_msd.html"><span class="doc">compute msd</span></a> command, to be faulty.</p>
<p>To allow for larger atomic systems with atom IDs or larger molecular
systems or larger image flags, compile with -DLAMMPS_BIGBIG. This
stores atom IDs and image flags in 64-bit integers. This enables
atomic or molecular systems with atom IDs of up to 2^63 atoms (about
9e18). And image flags will not &#8220;roll over&#8221; until they reach 2^20 =
1048576.</p>
<p>If your system does not support 8-byte integers, you will need to
compile with the -DLAMMPS_SMALLSMALL setting. This will restrict the
total number of atoms (for atomic or molecular systems) and timesteps
to 2^31 (about 2 billion). Image flags will roll over at 2^9 = 512.</p>
<p>Note that in src/lmptype.h there are definitions of all these data
types as well as the MPI data types associated with them. The MPI
types need to be consistent with the associated C data types, or else
LAMMPS will generate a run-time error. As far as we know, the
settings defined in src/lmptype.h are portable and work on every
current system.</p>
<p>In all cases, the size of problem that can be run on a per-processor
basis is limited by 4-byte integer storage to 2^31 atoms per processor
(about 2 billion). This should not normally be a limitation since such
a problem would have a huge per-processor memory footprint due to
neighbor lists and would run very slowly in terms of CPU secs/timestep.</p>
<hr class="docutils" />
<p id="start-2-5"><a href="#id9"><span class="problematic" id="id10">**</span></a><em>Building for a Mac:</em>**</p>
<p>OS X is BSD Unix, so it should just work. See the
src/MAKE/MACHINES/Makefile.mac and Makefile.mac_mpi files.</p>
<hr class="docutils" />
<p id="start-2-6"><a href="#id11"><span class="problematic" id="id12">**</span></a><em>Building for Windows:</em>**</p>
<p>The LAMMPS download page has an option to download both a serial and
parallel pre-built Windows executable. See the <a class="reference internal" href="#start-6"><span class="std std-ref">Running LAMMPS</span></a> section for instructions on running these executables
on a Windows box.</p>
<p>The pre-built executables hosted on the <a class="reference external" href="http://lammps.sandia.gov/download.html">LAMMPS download page</a> are built with a subset
of the available packages; see the download page for the list. These
are single executable files. No examples or documentation are
included. You will need to download the full source code package to
obtain those.</p>
<p>As an alternative, you can download &#8220;daily builds&#8221; (and some older
versions) of the installer packages from
<a class="reference external" href="http://rpm.lammps.org/windows.html">rpm.lammps.org/windows.html</a>.
These executables are built with most optional packages and the
download includes documentation, some tools and most examples.</p>
<p>If you want a Windows version with specific packages included and
excluded, you can build it yourself.</p>
<p>One way to do this is to install and use Cygwin to build LAMMPS with a
standard Unix-style make program, just as you would on a Linux box;
see src/MAKE/MACHINES/Makefile.cygwin.</p>
<hr class="docutils" />
</div>
<div class="section" id="making-lammps-with-optional-packages">
<span id="start-3"></span><h2>2.3. Making LAMMPS with optional packages</h2>
<p>This section has the following sub-sections:</p>
<ul class="simple">
<li><a class="reference internal" href="#start-3-1"><span class="std std-ref">Package basics</span></a></li>
<li><a class="reference internal" href="#start-3-2"><span class="std std-ref">Including/excluding packages</span></a></li>
<li><a class="reference internal" href="#start-3-3"><span class="std std-ref">Packages that require extra libraries</span></a></li>
<li><a class="reference internal" href="#start-3-4"><span class="std std-ref">Packages that require Makefile.machine settings</span></a></li>
</ul>
<p>Note that the following <a class="reference internal" href="#start-4"><span class="std std-ref">Section 2.4</span></a> describes the Make.py
tool which can be used to install/un-install packages and build the
auxiliary libraries which some of them use. It can also auto-edit a
Makefile.machine to add settings needed by some packages.</p>
<hr class="docutils" />
<p id="start-3-1"><a href="#id13"><span class="problematic" id="id14">**</span></a><em>Package basics:</em>**</p>
<p>The source code for LAMMPS is structured as a set of core files which
are always included, plus optional packages. Packages are groups of
files that enable a specific set of features. For example, force
fields for molecular systems or granular systems are in packages.</p>
<p><a class="reference internal" href="Section_packages.html"><span class="doc">Section packages</span></a> in the manual has details
about all the packages, including specific instructions for building
LAMMPS with each package, which are covered in a more general manner
below.</p>
<p>You can see the list of all packages by typing &#8220;make package&#8221; from
within the src directory of the LAMMPS distribution. This also lists
various make commands that can be used to manipulate packages.</p>
<p>If you use a command in a LAMMPS input script that is part of a
package, you must have built LAMMPS with that package, else you will
get an error that the style is invalid or the command is unknown.
Every command&#8217;s doc page specifies if it is part of a package. You can
also type</p>
<pre class="literal-block">
lmp_machine -h
</pre>
<p>to run your executable with the optional <a class="reference internal" href="#start-7"><span class="std std-ref">-h command-line switch</span></a> for &#8220;help&#8221;, which will simply list the styles and
commands known to your executable, and immediately exit.</p>
<p>There are two kinds of packages in LAMMPS, standard and user packages.
More information about the contents of standard and user packages is
given in <a class="reference internal" href="Section_packages.html"><span class="doc">Section_packages</span></a> of the manual. The
difference between standard and user packages is as follows:</p>
<p>Standard packages, such as molecule or kspace, are supported by the
LAMMPS developers and are written in a syntax and style consistent
with the rest of LAMMPS. This means we will answer questions about
them, debug and fix them if necessary, and keep them compatible with
future changes to LAMMPS.</p>
<p>User packages, such as user-atc or user-omp, have been contributed by
users, and always begin with the user prefix. If they are a single
command (single file), they are typically in the user-misc package.
Otherwise, they are a set of files grouped together which add a
specific functionality to the code.</p>
<p>User packages don&#8217;t necessarily meet the requirements of the standard
packages. If you have problems using a feature provided in a user
package, you may need to contact the contributor directly to get help.
Information on how to submit additions you make to LAMMPS as single
files or as a standard or user-contributed package is given in
<a class="reference internal" href="Section_modify.html#mod-15"><span class="std std-ref">this section</span></a> of the documentation.</p>
<hr class="docutils" />
<p id="start-3-2"><a href="#id15"><span class="problematic" id="id16">**</span></a><em>Including/excluding packages:</em>**</p>
<p>To use (or not use) a package you must include it (or exclude it)
before building LAMMPS. From the src directory, this is typically as
simple as:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">yes</span><span class="o">-</span><span class="n">colloid</span>
<span class="n">make</span> <span class="n">g</span><span class="o">++</span>
</pre></div>
</div>
<p>or</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">no</span><span class="o">-</span><span class="n">manybody</span>
<span class="n">make</span> <span class="n">g</span><span class="o">++</span>
</pre></div>
</div>
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">You should NOT include/exclude packages and build LAMMPS in a
single make command using multiple targets, e.g. make yes-colloid g++.
This is because the make procedure creates a list of source files that
will be out-of-date for the build if the package configuration changes
within the same command.</p>
</div>
<p>Some packages have individual files that depend on other packages
being included. LAMMPS checks for this and does the right thing.
I.e. individual files are only included if their dependencies are
already included. Likewise, if a package is excluded, other files
dependent on that package are also excluded.</p>
<p>If you will never run simulations that use the features in a
particular package, there is no reason to include it in your build.
For some packages, this will keep you from having to build auxiliary
libraries (see below), and will also produce a smaller executable
which may run a bit faster.</p>
<p>When you download a LAMMPS tarball, these packages are pre-installed
in the src directory: KSPACE, MANYBODY, and MOLECULE, because they are so
commonly used. When you download LAMMPS source files from the SVN or
Git repositories, no packages are pre-installed.</p>
<p>Packages are included or excluded by typing &#8220;make yes-name&#8221; or &#8220;make
no-name&#8221;, where &#8220;name&#8221; is the name of the package in lower-case, e.g.
name = kspace for the KSPACE package or name = user-atc for the
USER-ATC package. You can also type &#8220;make yes-standard&#8221;, &#8220;make
no-standard&#8221;, &#8220;make yes-std&#8221;, &#8220;make no-std&#8221;, &#8220;make yes-user&#8221;, &#8220;make
no-user&#8221;, &#8220;make yes-lib&#8221;, &#8220;make no-lib&#8221;, &#8220;make yes-all&#8221;, or &#8220;make
no-all&#8221; to include/exclude various sets of packages. Type &#8220;make
package&#8221; to see all of the package-related make options.</p>
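<p>For example, the following sequence installs the KSPACE package,
un-installs the USER-ATC package, and then lists all package-related
make options:</p>
<pre class="literal-block">
make yes-kspace
make no-user-atc
make package
</pre>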
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">Inclusion/exclusion of a package works by simply moving files
back and forth between the main src directory and sub-directories with
the package name (e.g. src/KSPACE, src/USER-ATC), so that the files
are seen or not seen when LAMMPS is built. After you have included or
excluded a package, you must re-build LAMMPS.</p>
</div>
<p>Additional package-related make options exist to help manage LAMMPS
files that exist in both the src directory and in package
sub-directories. You do not normally need to use these commands
unless you are editing LAMMPS files or have downloaded a patch from
the LAMMPS WWW site.</p>
<p>Typing &#8220;make package-update&#8221; or &#8220;make pu&#8221; will overwrite src files
with files from the package sub-directories if the package has been
included. It should be used after a patch is installed, since patches
only update the files in the package sub-directory, but not the src
files. Typing &#8220;make package-overwrite&#8221; will overwrite files in the
package sub-directories with src files.</p>
<p>Typing &#8220;make package-status&#8221; or &#8220;make ps&#8221; will show which packages are
currently included. For those that are included, it will list any
files that are different in the src directory and package
sub-directory. Typing &#8220;make package-diff&#8221; lists all differences
between these files. Again, type &#8220;make package&#8221; to see all of the
package-related make options.</p>
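<p>For example:</p>
<pre class="literal-block">
make ps                  # same as "make package-status"
make package-update      # copy files from installed package dirs into src (e.g. after a patch)
make package-diff        # list differences between src and package copies
</pre>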
<hr class="docutils" />
<p id="start-3-3"><a href="#id17"><span class="problematic" id="id18">**</span></a><em>Packages that require extra libraries:</em>**</p>
<p>A few of the standard and user packages require additional auxiliary
libraries. Many of them are provided with LAMMPS, in which case they
must be compiled first, before LAMMPS is built, if you wish to include
that package. If you get a LAMMPS build error about a missing
library, this is likely the reason. See the
<a class="reference internal" href="Section_packages.html"><span class="doc">Section_packages</span></a> doc page for a list of
packages that have these kinds of auxiliary libraries.</p>
<p>The lib directory in the distribution has sub-directories with package
names that correspond to the needed auxiliary libs, e.g. lib/gpu.
Each sub-directory has a README file that gives more details. Code
for most of the auxiliary libraries is included in that directory.
Examples are the USER-ATC and MEAM packages.</p>
<p>A few of the lib sub-directories do not include code, but do include
instructions (and sometimes scripts) that automate the process of
downloading the auxiliary library and installing it so LAMMPS can link
to it. Examples are the KIM, VORONOI, USER-MOLFILE, and USER-SMD
packages.</p>
<p>The lib/python directory (for the PYTHON package) contains only a
choice of Makefile.lammps.* files. This is because no auxiliary code
or libraries are needed, only the Python library and other system libs
that should already be available on your system. However, the
Makefile.lammps file is needed to tell LAMMPS which libs to use and
where to find them.</p>
<p>For libraries with provided code, the sub-directory README file
(e.g. lib/atc/README) has instructions on how to build that library.
This information is also summarized in <a class="reference internal" href="Section_packages.html"><span class="doc">Section packages</span></a>. Typically this is done by typing
something like:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="o">-</span><span class="n">f</span> <span class="n">Makefile</span><span class="o">.</span><span class="n">g</span><span class="o">++</span>
</pre></div>
</div>
<p>If one of the provided Makefiles is not appropriate for your system
you will need to edit or add one. Note that all the Makefiles have a
setting for EXTRAMAKE at the top that specifies a Makefile.lammps.*
file.</p>
<p>If the library build is successful, it will produce 2 files in the lib
directory:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">libpackage</span><span class="o">.</span><span class="n">a</span>
<span class="n">Makefile</span><span class="o">.</span><span class="n">lammps</span>
</pre></div>
</div>
<p>The Makefile.lammps file will typically be a copy of one of the
Makefile.lammps.* files in the library directory.</p>
<p>Note that you must ensure that the settings in Makefile.lammps are
appropriate for your system. If they are not, the LAMMPS build may
fail. To fix this, you can edit or create a new Makefile.lammps.*
file for your system, and copy it to Makefile.lammps.</p>
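<p>As a sketch (the Makefile suffix below is a placeholder; use one of the
Makefile.* files actually provided in the library&#8217;s sub-directory, or a
copy you have edited), building a provided library and selecting its
Makefile.lammps typically looks like this:</p>
<pre class="literal-block">
cd lib/meam
make -f Makefile.gfortran                    # produces libmeam.a; EXTRAMAKE copies a Makefile.lammps.* file
cp Makefile.lammps.gfortran Makefile.lammps  # only needed if the default Makefile.lammps does not suit your system
</pre>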
<p>As explained in the lib/package/README files, the settings in
Makefile.lammps are used to specify additional system libraries and
their locations so that LAMMPS can build with the auxiliary library.
For example, if the MEAM package is used, the auxiliary library
consists of F90 code, built with a Fortran compiler. Linking that
library with LAMMPS (a C++ code) via whatever C++ compiler LAMMPS is
built with typically requires additional Fortran-to-C libraries to be
included in the link. Other examples are the BLAS and LAPACK
libraries needed to use the USER-ATC or USER-AWPMD packages.</p>
<p>For libraries without provided code, the sub-directory README file has
information on where to download the library and how to build it,
e.g. lib/voronoi/README and lib/smd/README. The README files also
describe how you must either (a) create soft links, via the &#8220;ln&#8221;
command, in those directories to point to where you built or installed
the packages, or (b) check or edit the Makefile.lammps file in the
same directory to provide that information.</p>
<p>Some of the sub-directories, e.g. lib/voronoi, also have an install.py
script which can be used to automate the process of
downloading/building/installing the auxiliary library, and setting the
needed soft links. Type &#8220;python install.py&#8221; for further instructions.</p>
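<p>For example, from one of those sub-directories:</p>
<pre class="literal-block">
cd lib/voronoi
python install.py       # prints further instructions and available options
</pre>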
<p>As with the sub-directories containing library code, if the soft links
or settings in the lib/package/Makefile.lammps files are not correct,
the LAMMPS build will typically fail.</p>
<hr class="docutils" />
<p id="start-3-4"><a href="#id19"><span class="problematic" id="id20">**</span></a><em>Packages that require Makefile.machine settings</em>**</p>
<p>A few packages require specific settings in Makefile.machine, to
either build or use the package effectively. These are the
USER-INTEL, KOKKOS, USER-OMP, and OPT packages, used for accelerating
code performance on CPUs or other hardware, as discussed in <a class="reference internal" href="Section_accelerate.html"><span class="doc">Section accelerate</span></a>.</p>
<p>A summary of what Makefile.machine changes are needed for each of
these packages is given in <a class="reference internal" href="Section_packages.html"><span class="doc">Section packages</span></a>.
The details are given on the doc pages that describe each of these
accelerator packages in detail:</p>
<ul class="simple">
<li><a class="reference internal" href="accelerate_intel.html"><span class="doc">USER-INTEL package</span></a></li>
<li><a class="reference internal" href="accelerate_kokkos.html"><span class="doc">KOKKOS package</span></a></li>
<li><a class="reference internal" href="accelerate_omp.html"><span class="doc">USER-OMP package</span></a></li>
<li><a class="reference internal" href="accelerate_opt.html"><span class="doc">OPT package</span></a></li>
</ul>
<p>You can also look at the following machine Makefiles in
src/MAKE/OPTIONS, which include the changes. Note that the USER-INTEL
and KOKKOS packages allow for settings that build LAMMPS for different
hardware. The USER-INTEL package builds for CPUs and the Xeon Phi, while the
KOKKOS package builds for OpenMP, GPUs (CUDA), and the Xeon Phi.</p>
<ul class="simple">
<li>Makefile.intel_cpu</li>
<li>Makefile.intel_phi</li>
<li>Makefile.kokkos_omp</li>
<li>Makefile.kokkos_cuda</li>
<li>Makefile.kokkos_phi</li>
<li>Makefile.omp</li>
<li>Makefile.opt</li>
</ul>
<p>Also note that the Make.py tool, described in the next <a class="reference internal" href="#start-4"><span class="std std-ref">Section 2.4</span></a> can automatically add the needed info to an existing
machine Makefile, using simple command-line arguments.</p>
<hr class="docutils" />
</div>
<div class="section" id="building-lammps-via-the-make-py-tool">
<span id="start-4"></span><h2>2.4. Building LAMMPS via the Make.py tool</h2>
<p>The src directory includes a Make.py script, written in Python, which
can be used to automate various steps of the build process. It is
particularly useful for working with the accelerator packages, as well
as other packages which require auxiliary libraries to be built.</p>
<p>The goal of the Make.py tool is to allow any complex multi-step LAMMPS
build to be performed as a single Make.py command. You can also
archive the commands so that they can be re-invoked later via the -r
(redo) switch. If you find some LAMMPS build procedure that can&#8217;t be
done in a single Make.py command, let the developers know, and we&#8217;ll
see if we can augment the tool.</p>
<p>You can run Make.py from the src directory by typing either:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">Make</span><span class="o">.</span><span class="n">py</span> <span class="o">-</span><span class="n">h</span>
<span class="n">python</span> <span class="n">Make</span><span class="o">.</span><span class="n">py</span> <span class="o">-</span><span class="n">h</span>
</pre></div>
</div>
<p>which will give you help info about the tool. For the former to work,
you may need to edit the first line of Make.py to point to your local
Python. You may also need to insure the script is executable:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">chmod</span> <span class="o">+</span><span class="n">x</span> <span class="n">Make</span><span class="o">.</span><span class="n">py</span>
</pre></div>
</div>
<p>Here are examples of build tasks you can perform with Make.py:</p>
<table border="1" class="docutils">
<colgroup>
<col width="58%" />
<col width="42%" />
</colgroup>
<tbody valign="top">
<tr class="row-odd"><td>Install/uninstall packages</td>
<td>Make.py -p no-lib kokkos omp intel</td>
</tr>
<tr class="row-even"><td>Build specific auxiliary libs</td>
<td>Make.py -a lib-atc lib-meam</td>
</tr>
<tr class="row-odd"><td>Build libs for all installed packages</td>
<td>Make.py -p cuda gpu -gpu mode=double arch=31 -a lib-all</td>
</tr>
<tr class="row-even"><td>Create a Makefile from scratch with compiler and MPI settings</td>
<td>Make.py -m none -cc g++ -mpi mpich -a file</td>
</tr>
<tr class="row-odd"><td>Augment Makefile.serial with settings for installed packages</td>
<td>Make.py -p intel -intel cpu -m serial -a file</td>
</tr>
<tr class="row-even"><td>Add JPG and FFTW support to Makefile.mpi</td>
<td>Make.py -m mpi -jpg -fft fftw -a file</td>
</tr>
<tr class="row-odd"><td>Build LAMMPS with a parallel make using Makefile.mpi</td>
<td>Make.py -j 16 -m mpi -a exe</td>
</tr>
<tr class="row-even"><td>Build LAMMPS and libs it needs using Makefile.serial with accelerator settings</td>
<td>Make.py -p gpu intel -intel cpu -a lib-all file serial</td>
</tr>
</tbody>
</table>
<p>The bench and examples directories give Make.py commands that can be
used to build LAMMPS with the various packages and options needed to
run all the benchmark and example input scripts. See these files for
more details:</p>
<ul class="simple">
<li>bench/README</li>
<li>bench/FERMI/README</li>
<li>bench/KEPLER/README</li>
<li>bench/PHI/README</li>
<li>examples/README</li>
<li>examples/accelerate/README</li>
<li>examples/accelerate/make.list</li>
</ul>
<p>All of the Make.py options and syntax help can be accessed by using
the &#8220;-h&#8221; switch.</p>
<p>E.g. typing &#8220;Make.py -h&#8221; gives</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">Syntax</span><span class="p">:</span> <span class="n">Make</span><span class="o">.</span><span class="n">py</span> <span class="n">switch</span> <span class="n">args</span> <span class="o">...</span>
<span class="n">switches</span> <span class="n">can</span> <span class="n">be</span> <span class="n">listed</span> <span class="ow">in</span> <span class="nb">any</span> <span class="n">order</span>
<span class="n">help</span> <span class="n">switch</span><span class="p">:</span>
<span class="o">-</span><span class="n">h</span> <span class="n">prints</span> <span class="n">help</span> <span class="ow">and</span> <span class="n">syntax</span> <span class="k">for</span> <span class="nb">all</span> <span class="n">other</span> <span class="n">specified</span> <span class="n">switches</span>
<span class="n">switch</span> <span class="k">for</span> <span class="n">actions</span><span class="p">:</span>
<span class="o">-</span><span class="n">a</span> <span class="n">lib</span><span class="o">-</span><span class="nb">all</span><span class="p">,</span> <span class="n">lib</span><span class="o">-</span><span class="nb">dir</span><span class="p">,</span> <span class="n">clean</span><span class="p">,</span> <span class="n">file</span><span class="p">,</span> <span class="n">exe</span> <span class="ow">or</span> <span class="n">machine</span>
<span class="nb">list</span> <span class="n">one</span> <span class="ow">or</span> <span class="n">more</span> <span class="n">actions</span><span class="p">,</span> <span class="ow">in</span> <span class="nb">any</span> <span class="n">order</span>
<span class="n">machine</span> <span class="ow">is</span> <span class="n">a</span> <span class="n">Makefile</span><span class="o">.</span><span class="n">machine</span> <span class="n">suffix</span><span class="p">,</span> <span class="n">must</span> <span class="n">be</span> <span class="n">last</span> <span class="k">if</span> <span class="n">used</span>
<span class="n">one</span><span class="o">-</span><span class="n">letter</span> <span class="n">switches</span><span class="p">:</span>
<span class="o">-</span><span class="n">d</span> <span class="p">(</span><span class="nb">dir</span><span class="p">),</span> <span class="o">-</span><span class="n">j</span> <span class="p">(</span><span class="n">jmake</span><span class="p">),</span> <span class="o">-</span><span class="n">m</span> <span class="p">(</span><span class="n">makefile</span><span class="p">),</span> <span class="o">-</span><span class="n">o</span> <span class="p">(</span><span class="n">output</span><span class="p">),</span>
<span class="o">-</span><span class="n">p</span> <span class="p">(</span><span class="n">packages</span><span class="p">),</span> <span class="o">-</span><span class="n">r</span> <span class="p">(</span><span class="n">redo</span><span class="p">),</span> <span class="o">-</span><span class="n">s</span> <span class="p">(</span><span class="n">settings</span><span class="p">),</span> <span class="o">-</span><span class="n">v</span> <span class="p">(</span><span class="n">verbose</span><span class="p">)</span>
<span class="n">switches</span> <span class="k">for</span> <span class="n">libs</span><span class="p">:</span>
<span class="o">-</span><span class="n">atc</span><span class="p">,</span> <span class="o">-</span><span class="n">awpmd</span><span class="p">,</span> <span class="o">-</span><span class="n">colvars</span><span class="p">,</span> <span class="o">-</span><span class="n">cuda</span>
<span class="o">-</span><span class="n">gpu</span><span class="p">,</span> <span class="o">-</span><span class="n">meam</span><span class="p">,</span> <span class="o">-</span><span class="n">poems</span><span class="p">,</span> <span class="o">-</span><span class="n">qmmm</span><span class="p">,</span> <span class="o">-</span><span class="n">reax</span>
<span class="n">switches</span> <span class="k">for</span> <span class="n">build</span> <span class="ow">and</span> <span class="n">makefile</span> <span class="n">options</span><span class="p">:</span>
<span class="o">-</span><span class="n">intel</span><span class="p">,</span> <span class="o">-</span><span class="n">kokkos</span><span class="p">,</span> <span class="o">-</span><span class="n">cc</span><span class="p">,</span> <span class="o">-</span><span class="n">mpi</span><span class="p">,</span> <span class="o">-</span><span class="n">fft</span><span class="p">,</span> <span class="o">-</span><span class="n">jpg</span><span class="p">,</span> <span class="o">-</span><span class="n">png</span>
</pre></div>
</div>
<p>Using the &#8220;-h&#8221; switch with other switches and actions gives additional
info on all the other specified switches or actions. The &#8220;-h&#8221; can be
anywhere in the command-line and the other switches do not need their
arguments. E.g. typing &#8220;Make.py -h -d -atc -intel&#8221; will print:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">d</span> <span class="nb">dir</span>
<span class="nb">dir</span> <span class="o">=</span> <span class="n">LAMMPS</span> <span class="n">home</span> <span class="nb">dir</span>
<span class="k">if</span> <span class="o">-</span><span class="n">d</span> <span class="ow">not</span> <span class="n">specified</span><span class="p">,</span> <span class="n">working</span> <span class="nb">dir</span> <span class="n">must</span> <span class="n">be</span> <span class="n">lammps</span><span class="o">/</span><span class="n">src</span>
</pre></div>
</div>
<div class="highlight-default"><div class="highlight"><pre><span></span>-atc make=suffix lammps=suffix2
all args are optional and can be in any order
make = use Makefile.suffix (def = g++)
lammps = use Makefile.lammps.suffix2 (def = EXTRAMAKE in makefile)
</pre></div>
</div>
<div class="highlight-default"><div class="highlight"><pre><span></span>-intel mode
mode = cpu or phi (def = cpu)
build Intel package for CPU or Xeon Phi
</pre></div>
</div>
<p>Note that Make.py never overwrites an existing Makefile.machine.
Instead, it creates src/MAKE/MINE/Makefile.auto, which you can save or
rename if desired. Likewise it creates an executable named
src/lmp_auto, which you can rename using the -o switch if desired.</p>
<p>The most recently executed Make.py command is saved in
src/Make.py.last. You can use the &#8220;-r&#8221; switch (for redo) to re-invoke
the last command, or you can save a sequence of one or more Make.py
commands to a file and invoke the file of commands using &#8220;-r&#8221;. You
can also label the commands in the file and invoke one or more of them
by name.</p>
<p>A typical use of Make.py is to start with a valid Makefile.machine for
your system that works for a vanilla LAMMPS build, i.e. when optional
packages are not installed. You can then use Make.py to add various
settings (FFT, JPG, PNG) to the Makefile.machine, as well as change its
compiler and MPI options. You can also add further packages to the
build and build the needed supporting libraries.</p>
<p>You can also use Make.py to create a new Makefile.machine from
scratch, using the &#8220;-m none&#8221; switch, if you also specify what compiler
and MPI options to use, via the &#8220;-cc&#8221; and &#8220;-mpi&#8221; switches.</p>
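<p>As a sketch of that workflow, the switches from the table above can be
combined into a single command. The machine name and package choices
below are only illustrative:</p>
<pre class="literal-block">
Make.py -p intel -intel cpu -m mpi -jpg -fft fftw -j 16 -a file exe
</pre>
<p>This should augment Makefile.mpi with USER-INTEL, JPG, and FFTW
settings, producing src/MAKE/MINE/Makefile.auto and the src/lmp_auto
executable via a 16-way parallel make.</p>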
<hr class="docutils" />
</div>
<div class="section" id="building-lammps-as-a-library">
<span id="start-5"></span><h2>2.5. Building LAMMPS as a library</h2>
<p>LAMMPS can be built as either a static or shared library, which can
then be called from another application or a scripting language. See
<a class="reference internal" href="Section_howto.html#howto-10"><span class="std std-ref">this section</span></a> for more info on coupling
LAMMPS to other codes. See <a class="reference internal" href="Section_python.html"><span class="doc">this section</span></a> for
more info on wrapping and running LAMMPS from Python.</p>
<div class="section" id="static-library">
<h3>2.5.1. <strong>Static library:</strong></h3>
<p>To build LAMMPS as a static library (*.a file on Linux), type</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">foo</span> <span class="n">mode</span><span class="o">=</span><span class="n">lib</span>
</pre></div>
</div>
<p>where foo is the machine name. This kind of library is typically used
to statically link a driver application to LAMMPS, so that you can
insure all dependencies are satisfied at compile time. This will use
the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The build
will create the file liblammps_foo.a which another application can
link to. It will also create a soft link liblammps.a, which will
point to the most recently built static library.</p>
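<p>As a rough sketch, linking a driver code of your own against the static
library might look like the commands below. The driver name, compiler,
and paths are only placeholders, and depending on the settings in
Makefile.foo you may need to add further libraries (e.g. FFTW or JPEG)
to the link line:</p>
<pre class="literal-block">
mpicxx -c driver.cpp -I/path/to/lammps/src
mpicxx driver.o -L/path/to/lammps/src -llammps_foo -o driver
</pre>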
</div>
<div class="section" id="shared-library">
<h3>2.5.2. <strong>Shared library:</strong></h3>
<p>To build LAMMPS as a shared library (*.so file on Linux), which can be
dynamically loaded, e.g. from Python, type</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">make</span> <span class="n">foo</span> <span class="n">mode</span><span class="o">=</span><span class="n">shlib</span>
</pre></div>
</div>
<p>where foo is the machine name. This kind of library is required when
wrapping LAMMPS with Python; see <a class="reference internal" href="Section_python.html"><span class="doc">Section_python</span></a>
for details. This will use the SHFLAGS and SHLIBFLAGS settings in
src/MAKE/Makefile.foo and perform the build in the directory
Obj_shared_foo. This is so that each file can be compiled with the
-fPIC flag which is required for inclusion in a shared library. The
build will create the file liblammps_foo.so which another application
can link to dynamically. It will also create a soft link liblammps.so,
which will point to the most recently built shared library. This is
the file the Python wrapper loads by default.</p>
<p>Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
libraries. This will be the case for libraries included with LAMMPS,
such as the dummy MPI library in src/STUBS or any package libraries in
lib/packages, since they are always built as shared libraries using
the -fPIC switch. However, if a library like MPI or FFTW does not
exist as a shared library, the shared library build will generate an
error. This means you will need to install a shared library version
of the auxiliary library. The build instructions for the library
should tell you how to do this.</p>
<p>Here are examples of such errors, produced when the system FFTW or the
provided lib/colvars library has not been built as a shared library:</p>
<pre class="literal-block">
/usr/bin/ld: /usr/local/lib/libfftw3.a(mapflags.o): relocation
R_X86_64_32 against '.rodata' can not be used when making a shared
object; recompile with -fPIC
/usr/local/lib/libfftw3.a: could not read symbols: Bad value
</pre>
<pre class="literal-block">
/usr/bin/ld: ../../lib/colvars/libcolvars.a(colvarmodule.o):
relocation R_X86_64_32 against '__pthread_key_create' can not be used
when making a shared object; recompile with -fPIC
../../lib/colvars/libcolvars.a: error adding symbols: Bad value
</pre>
<p>As an example, here is how to build and install the <a class="reference external" href="http://www-unix.mcs.anl.gov/mpi">MPICH library</a>, a popular open-source version of MPI, distributed by
Argonne National Labs, as a shared library in the default
/usr/local/lib location:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">./</span><span class="n">configure</span> <span class="o">--</span><span class="n">enable</span><span class="o">-</span><span class="n">shared</span>
<span class="n">make</span>
<span class="n">make</span> <span class="n">install</span>
</pre></div>
</div>
<p>You may need to use &#8220;sudo make install&#8221; in place of the last line if
you do not have write privileges for /usr/local/lib. The end result
should be the file /usr/local/lib/libmpich.so.</p>
</div>
<div class="section" id="additional-requirement-for-using-a-shared-library">
<h3>2.5.3. <strong>Additional requirement for using a shared library:</strong></h3>
<p>The operating system finds shared libraries to load at run-time using
the environment variable LD_LIBRARY_PATH. So you may wish to copy the
file src/liblammps.so or src/liblammps_g++.so (for example) to a place
the system can find it by default, such as /usr/local/lib, or you may
wish to add the LAMMPS src directory to LD_LIBRARY_PATH, so that the
current version of the shared library is always available to programs
that use it.</p>
<p>For the csh or tcsh shells, you would add something like this to your
~/.cshrc file:</p>
<pre class="literal-block">
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src
</pre>
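<p>For bash-like shells, the equivalent setting in your ~/.bashrc file
would be something like this:</p>
<pre class="literal-block">
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src
</pre>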
</div>
<div class="section" id="calling-the-lammps-library">
<h3>2.5.4. <strong>Calling the LAMMPS library:</strong></h3>
<p>Either flavor of library (static or shared) allows one or more LAMMPS
objects to be instantiated from the calling program.</p>
<p>When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
within the calling code, as needed.</p>
<p>When used from a C or Fortran program or a scripting language like
Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.</p>
<p>See the sample codes in examples/COUPLE/simple for examples of C++,
C, and Fortran codes that invoke LAMMPS through its library interface.
There are other examples as well in the COUPLE directory which are
discussed in <a class="reference internal" href="Section_howto.html#howto-10"><span class="std std-ref">Section_howto 10</span></a> of the
manual. See <a class="reference internal" href="Section_python.html"><span class="doc">Section_python</span></a> of the manual for a
description of the Python wrapper provided with LAMMPS that operates
through the LAMMPS library interface.</p>
<p>The files src/library.cpp and library.h define the C-style API for
using LAMMPS as a library. See <a class="reference internal" href="Section_howto.html#howto-19"><span class="std std-ref">Section_howto 19</span></a> of the manual for a description of the
interface and how to extend it for your needs.</p>
<hr class="docutils" />
</div>
</div>
<div class="section" id="running-lammps">
<span id="start-6"></span><h2>2.6. Running LAMMPS</h2>
<p>By default, LAMMPS runs by reading commands from standard input. Thus
if you run the LAMMPS executable by itself, e.g.</p>
<pre class="literal-block">
lmp_linux
</pre>
<p>it will simply wait, expecting commands from the keyboard. Typically
you should put commands in an input script and use I/O redirection,
e.g.</p>
<pre class="literal-block">
lmp_linux &lt; in.file
</pre>
<p>For parallel environments this should also work. If it does not, use
the &#8216;-in&#8217; command-line switch, e.g.</p>
<pre class="literal-block">
lmp_linux -in in.file
</pre>
<p><a class="reference internal" href="Section_commands.html"><span class="doc">This section</span></a> describes how input scripts are
structured and what commands they contain.</p>
<p>You can test LAMMPS on any of the sample inputs provided in the
examples or bench directory. Input scripts are named in.* and sample
outputs are named log.*.name.P where name is a machine and P is the
number of processors it was run on.</p>
<p>Here is how you might run a standard Lennard-Jones benchmark on a
Linux box, using mpirun to launch a parallel job:</p>
<pre class="literal-block">
cd src
make linux
cp lmp_linux ../bench
cd ../bench
mpirun -np 4 lmp_linux -in in.lj
</pre>
<p>See <a class="reference external" href="http://lammps.sandia.gov/bench.html">this page</a> for timings for this and the other benchmarks on
various platforms. Note that some of the example scripts require
LAMMPS to be built with one or more of its optional packages.</p>
<hr class="docutils" />
<p>On a Windows box, you can skip making LAMMPS and simply download an
executable, as described above, though the pre-packaged executables
include only certain packages.</p>
<p>To run a LAMMPS executable on a Windows machine, first decide whether
you want to download the non-MPI (serial) or the MPI (parallel)
version of the executable. Download and save the version you have
chosen.</p>
<p>For the non-MPI version, follow these steps:</p>
<ul class="simple">
<li>Get a command prompt by going to Start-&gt;Run... ,
then typing &#8220;cmd&#8221;.</li>
<li>Move to the directory where you have saved lmp_win_no-mpi.exe
(e.g. by typing: cd &#8220;Documents&#8221;).</li>
<li>At the command prompt, type &#8220;lmp_win_no-mpi -in in.lj&#8221;, replacing in.lj
with the name of your LAMMPS input script.</li>
</ul>
<p>For the MPI version, which allows you to run LAMMPS under Windows on
multiple processors, follow these steps:</p>
<ul class="simple">
<li>Download and install
<a class="reference external" href="http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads">MPICH2</a>
for Windows.</li>
<li>You&#8217;ll need to use the mpiexec.exe and smpd.exe files from the MPICH2
package. Put them in same directory (or path) as the LAMMPS Windows
executable.</li>
<li>Get a command prompt by going to Start-&gt;Run... ,
then typing &#8220;cmd&#8221;.</li>
<li>Move to the directory where you have saved lmp_win_mpi.exe
(e.g. by typing: cd &#8220;Documents&#8221;).</li>
<li>Then type something like this: &#8220;mpiexec -localonly 4 lmp_win_mpi -in
in.lj&#8221;, replacing in.lj with the name of your LAMMPS input script.</li>
<li>Note that you may need to provide smpd with a passphrase (it doesn&#8217;t
matter what you type).</li>
<li>In this mode, output may not immediately show up on the screen, so if
your input script takes a long time to execute, you may need to be
patient before the output shows up. Alternatively, you can still
use this executable to run on a single processor by typing something
like: &#8220;lmp_win_mpi -in in.lj&#8221;.</li>
</ul>
<hr class="docutils" />
<p>The screen output from LAMMPS is described in a section below. As it
runs, LAMMPS also writes a log.lammps file with the same information.</p>
<p>Note that this sequence of commands copies the LAMMPS executable
(lmp_linux) to the directory with the input files. This may not be
necessary, but some versions of MPI reset the working directory to
where the executable is, rather than leave it as the directory where
you launch mpirun from (if you launch lmp_linux on its own and not
under mpirun). If that happens, LAMMPS will look for additional input
files and write its output files to the executable directory, rather
than your working directory, which is probably not what you want.</p>
<p>If LAMMPS encounters errors in the input script or while running a
simulation it will print an ERROR message and stop or a WARNING
message and continue. See <a class="reference internal" href="Section_errors.html"><span class="doc">Section_errors</span></a> for a
discussion of the various kinds of errors LAMMPS can or can&#8217;t detect,
a list of all ERROR and WARNING messages, and what to do about them.</p>
<p>LAMMPS can run a problem on any number of processors, including a
single processor. In theory you should get identical answers on any
number of processors and on any machine. In practice, numerical
round-off can cause slight differences and eventual divergence of
molecular dynamics phase space trajectories.</p>
<p>LAMMPS can run as large a problem as will fit in the physical memory
of one or more processors. If you run out of memory, you must run on
more processors or set up a smaller problem.</p>
<hr class="docutils" />
</div>
<div class="section" id="command-line-options">
<span id="start-7"></span><h2>2.7. Command-line options</h2>
<p>At run time, LAMMPS recognizes several optional command-line switches
which may be used in any order. Either the full word or a one-or-two
letter abbreviation can be used:</p>
<ul class="simple">
<li>-e or -echo</li>
<li>-h or -help</li>
<li>-i or -in</li>
<li>-k or -kokkos</li>
<li>-l or -log</li>
<li>-nc or -nocite</li>
<li>-pk or -package</li>
<li>-p or -partition</li>
<li>-pl or -plog</li>
<li>-ps or -pscreen</li>
<li>-r or -restart</li>
<li>-ro or -reorder</li>
<li>-sc or -screen</li>
<li>-sf or -suffix</li>
<li>-v or -var</li>
</ul>
<p>For example, lmp_ibm might be launched as follows:</p>
<pre class="literal-block">
mpirun -np 16 lmp_ibm -v f tmp.out -l my.log -sc none -in in.alloy
mpirun -np 16 lmp_ibm -var f tmp.out -log my.log -screen none -in in.alloy
</pre>
<p>Here are the details on the options:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">echo</span> <span class="n">style</span>
</pre></div>
</div>
<p>Set the style of command echoing. The style can be <em>none</em> or <em>screen</em>
or <em>log</em> or <em>both</em>. Depending on the style, each command read from
the input script will be echoed to the screen and/or logfile. This
can be useful to figure out which line of your script is causing an
input error. The default value is <em>log</em>. The echo style can also be
set by using the <a class="reference internal" href="echo.html"><span class="doc">echo</span></a> command in the input script itself.</p>
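<p>For example, to echo every input command to both the screen and the
logfile while debugging a script, you might launch LAMMPS as follows
(executable and script names are just placeholders):</p>
<pre class="literal-block">
lmp_linux -echo both -in in.file
</pre>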
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">help</span>
</pre></div>
</div>
<p>Print a brief help summary and a list of options compiled into this
executable for each LAMMPS style (atom_style, fix, compute,
pair_style, bond_style, etc). This can tell you if the command you
want to use was included via the appropriate package at compile time.
LAMMPS will print the info and immediately exit if this switch is
used.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="ow">in</span> <span class="n">file</span>
</pre></div>
</div>
<p>Specify a file to use as an input script. This is an optional switch
when running LAMMPS in one-partition mode. If it is not specified,
LAMMPS reads its script from standard input, typically from a script
via I/O redirection; e.g. lmp_linux &lt; in.run. I/O redirection should
also work in parallel, but if it does not (in the unlikely case that
an MPI implementation does not support it), then use the -in flag.
Note that this is a required switch when running LAMMPS in
multi-partition mode, since multiple processors cannot all read from
stdin.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">kokkos</span> <span class="n">on</span><span class="o">/</span><span class="n">off</span> <span class="n">keyword</span><span class="o">/</span><span class="n">value</span> <span class="o">...</span>
</pre></div>
</div>
<p>Explicitly enable or disable KOKKOS support, as provided by the KOKKOS
package. Even if LAMMPS is built with this package, as described
above in <a class="reference internal" href="#start-3"><span class="std std-ref">Section 2.3</span></a>, this switch must be set to enable
running with the KOKKOS-enabled styles the package provides. If the
switch is not set (the default), LAMMPS will operate as if the KOKKOS
package were not installed; i.e. you can run standard LAMMPS or with
the GPU or USER-OMP packages, for testing or benchmarking purposes.</p>
<p>Additional optional keyword/value pairs can be specified which
determine how Kokkos will use the underlying hardware on your
platform. These settings apply to each MPI task you launch via the
&#8220;mpirun&#8221; or &#8220;mpiexec&#8221; command. You may choose to run one or more MPI
tasks per physical node. Note that if you are running on a desktop
machine, you typically have one physical node. On a cluster or
supercomputer there may be dozens or 1000s of physical nodes.</p>
<p>Either the full word or an abbreviation can be used for the keywords.
Note that the keywords do not use a leading minus sign. I.e. the
keyword is &#8220;t&#8221;, not &#8220;-t&#8221;. Also note that each of the keywords has a
default setting. Examples of when to use these options and what
settings to use on different platforms are given in <a class="reference internal" href="Section_accelerate.html#acc-3"><span class="std std-ref">Section 5.8</span></a>.</p>
<ul class="simple">
<li>d or device</li>
<li>g or gpus</li>
<li>t or threads</li>
<li>n or numa</li>
</ul>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">device</span> <span class="n">Nd</span>
</pre></div>
</div>
<p>This option is only relevant if you built LAMMPS with CUDA=yes, you
have more than one GPU per node, and if you are running with only one
MPI task per node. The Nd setting is the ID of the GPU on the node to
run on. By default Nd = 0. If you have multiple GPUs per node, they
have consecutive IDs numbered as 0,1,2,etc. This setting allows you
to launch multiple independent jobs on the node, each with a single
MPI task per node, and assign each job to run on a different GPU.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">gpus</span> <span class="n">Ng</span> <span class="n">Ns</span>
</pre></div>
</div>
<p>This option is only relevant if you built LAMMPS with CUDA=yes, you
have more than one GPU per node, and you are running with multiple MPI
tasks per node (up to one per GPU). The Ng setting is how many GPUs
you will use. The Ns setting is optional. If set, it is the ID of a
GPU to skip when assigning MPI tasks to GPUs. This may be useful if
your desktop system reserves one GPU to drive the screen and the rest
are intended for computational work like running LAMMPS. By default
Ng = 1 and Ns is not set.</p>
<p>Depending on which flavor of MPI you are running, LAMMPS will look for
one of these 3 environment variables</p>
<pre class="literal-block">
SLURM_LOCALID (various MPI variants compiled with SLURM support)
MV2_COMM_WORLD_LOCAL_RANK (Mvapich)
OMPI_COMM_WORLD_LOCAL_RANK (OpenMPI)
</pre>
<p>which are initialized by the &#8220;srun&#8221;, &#8220;mpirun&#8221; or &#8220;mpiexec&#8221; commands.
The environment variable setting for each MPI rank is used to assign a
unique GPU ID to the MPI task.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">threads</span> <span class="n">Nt</span>
</pre></div>
</div>
<p>This option assigns Nt number of threads to each MPI task for
performing work when Kokkos is executing in OpenMP or pthreads mode.
The default is Nt = 1, which essentially runs in MPI-only mode. If
there are Np MPI tasks per physical node, you generally want Np*Nt =
the number of physical cores per node, to use your available hardware
optimally. This also sets the number of threads used by the host when
LAMMPS is compiled with CUDA=yes.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">numa</span> <span class="n">Nm</span>
</pre></div>
</div>
<p>This option is only relevant when using pthreads with hwloc support.
In this case Nm defines the number of NUMA regions (typically sockets)
on a node which will be utilized by a single MPI rank. By default Nm
= 1. If this option is used, the total number of worker-threads per
MPI rank is threads*numa. Currently it is almost always better to
assign at least one MPI rank per NUMA region, and leave numa set to
its default value of 1. This is because letting a single process span
multiple NUMA regions induces a significant amount of cross-NUMA data
traffic, which is slow.</p>
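<p>As a hedged example, a KOKKOS-enabled run in OpenMP mode on a 16-core
node might be launched with 4 MPI tasks and 4 threads per task as shown
below. The executable name and input script are placeholders, and the
-sf kk switch is described further down under the -suffix option:</p>
<pre class="literal-block">
mpirun -np 4 lmp_kokkos_omp -k on t 4 -sf kk -in in.lj
</pre>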
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">log</span> <span class="n">file</span>
</pre></div>
</div>
<p>Specify a log file for LAMMPS to write status information to. In
one-partition mode, if the switch is not used, LAMMPS writes to the
file log.lammps. If this switch is used, LAMMPS writes to the
specified file. In multi-partition mode, if the switch is not used, a
log.lammps file is created with hi-level status information. Each
partition also writes to a log.lammps.N file where N is the partition
ID. If the switch is specified in multi-partition mode, the hi-level
logfile is named &#8220;file&#8221; and each partition also logs information to a
file.N. For both one-partition and multi-partition mode, if the
specified file is &#8220;none&#8221;, then no log files are created. Using a
<a class="reference internal" href="log.html"><span class="doc">log</span></a> command in the input script will override this setting.
Option -plog will override the name of the partition log files file.N.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">nocite</span>
</pre></div>
</div>
<p>Disable writing the log.cite file which is normally written to list
references for specific cite-able features used during a LAMMPS run.
See the <a class="reference external" href="http://lammps.sandia.gov/cite.html">citation page</a> for more
details.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">package</span> <span class="n">style</span> <span class="n">args</span> <span class="o">....</span>
</pre></div>
</div>
<p>Invoke the <a class="reference internal" href="package.html"><span class="doc">package</span></a> command with style and args. The
syntax is the same as if the command appeared at the top of the input
script. For example &#8220;-package gpu 2&#8221; or &#8220;-pk gpu 2&#8221; is the same as
<a class="reference internal" href="package.html"><span class="doc">package gpu 2</span></a> in the input script. The possible styles
and args are documented on the <a class="reference internal" href="package.html"><span class="doc">package</span></a> doc page. This
switch can be used multiple times, e.g. to set options for the
USER-INTEL and USER-OMP packages which can be used together.</p>
<p>Along with the &#8220;-suffix&#8221; command-line switch, this is a convenient
mechanism for invoking accelerator packages and their options without
having to edit an input script.</p>
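<p>For example, a GPU package run using 2 GPUs per node could be launched
without editing the input script by combining the two switches, roughly
as follows (executable and script names are placeholders):</p>
<pre class="literal-block">
mpirun -np 8 lmp_machine -sf gpu -pk gpu 2 -in in.script
</pre>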
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">partition</span> <span class="mi">8</span><span class="n">x2</span> <span class="mi">4</span> <span class="mi">5</span> <span class="o">...</span>
</pre></div>
</div>
<p>Invoke LAMMPS in multi-partition mode. When LAMMPS is run on P
processors and this switch is not used, LAMMPS runs in one partition,
i.e. all P processors run a single simulation. If this switch is
used, the P processors are split into separate partitions and each
partition runs its own simulation. The arguments to the switch
specify the number of processors in each partition. Arguments of the
form MxN mean M partitions, each with N processors. Arguments of the
form N mean a single partition with N processors. The sum of
processors in all partitions must equal P. Thus the command
&#8220;-partition 8x2 4 5&#8221; has 10 partitions and runs on a total of 25
processors.</p>
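<p>Putting that together, the example above could be launched roughly like
this (executable and script names are placeholders):</p>
<pre class="literal-block">
mpirun -np 25 lmp_machine -partition 8x2 4 5 -in in.script
</pre>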
<p>Running with multiple partitions can be useful for running
<a class="reference internal" href="Section_howto.html#howto-5"><span class="std std-ref">multi-replica simulations</span></a>, where each
replica runs on one or a few processors. Note that with MPI
installed on a machine (e.g. your desktop), you can run on more
(virtual) processors than you have physical processors.</p>
<p>To run multiple independent simulations from one input script, using
multiple partitions, see <a class="reference internal" href="Section_howto.html#howto-4"><span class="std std-ref">Section_howto 4</span></a>
of the manual. World- and universe-style <a class="reference internal" href="variable.html"><span class="doc">variables</span></a>
are useful in this context.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">plog</span> <span class="n">file</span>
</pre></div>
</div>
<p>Specify the base name for the partition log files, so partition N
writes log information to file.N. If file is none, then no partition
log files are created. This overrides the filename specified in the
-log command-line option. This option is useful when working with
large numbers of partitions, allowing the partition log files to be
suppressed (-plog none) or placed in a sub-directory (-plog
replica_files/log.lammps). If this option is not used, the log file for
partition N is log.lammps.N or whatever is specified by the -log
command-line option.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">pscreen</span> <span class="n">file</span>
</pre></div>
</div>
<p>Specify the base name for the partition screen file, so partition N
writes screen information to file.N. If file is none, then no
partition screen files are created. This overrides the filename
specified in the -screen command-line option. This option is useful
when working with large numbers of partitions, allowing the partition
screen files to be suppressed (-pscreen none) or placed in a
sub-directory (-pscreen replica_files/screen). If this option is not
used, the screen file for partition N is screen.N or whatever is
specified by the -screen command-line option.</p>
<pre class="literal-block">
-restart restartfile <em>remap</em> datafile keyword value ...
</pre>
<p>Convert the restart file into a data file and immediately exit. This
is the same operation as if the following 2-line input script were
run:</p>
<pre class="literal-block">
read_restart restartfile <em>remap</em>
write_data datafile keyword value ...
</pre>
<p>Note that the specified restartfile and datafile can have wild-card
characters (&#8220;*&#8221; or &#8220;%&#8221;) as described by the
<a class="reference internal" href="read_restart.html"><span class="doc">read_restart</span></a> and <a class="reference internal" href="write_data.html"><span class="doc">write_data</span></a>
commands. But a filename such as file.* will need to be enclosed in
quotes to avoid shell expansion of the &#8220;*&#8221; character.</p>
<p>Note that following restartfile, the optional flag <em>remap</em> can be
used. This has the same effect as adding it to the
<a class="reference internal" href="read_restart.html"><span class="doc">read_restart</span></a> command, as explained on its doc
page. This is only useful if the reading of the restart file triggers
an error that atoms have been lost. In that case, use of the remap
flag should allow the data file to still be produced.</p>
<p>Also note that following datafile, the same optional keyword/value
pairs can be listed as used by the <a class="reference internal" href="write_data.html"><span class="doc">write_data</span></a>
command.</p>
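<p>For example, a minimal conversion with placeholder file names would look
like this:</p>
<pre class="literal-block">
lmp_linux -restart restart.equil data.equil
</pre>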
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">reorder</span> <span class="n">nth</span> <span class="n">N</span>
<span class="o">-</span><span class="n">reorder</span> <span class="n">custom</span> <span class="n">filename</span>
</pre></div>
</div>
<p>Reorder the processors in the MPI communicator used to instantiate
LAMMPS, in one of several ways. The original MPI communicator ranks
all P processors from 0 to P-1. The mapping of these ranks to
physical processors is done by MPI before LAMMPS begins. It may be
useful in some cases to alter the rank order. E.g. to insure that
cores within each node are ranked in a desired order. Or when using
the <a class="reference internal" href="run_style.html"><span class="doc">run_style verlet/split</span></a> command with 2 partitions
to insure that a specific Kspace processor (in the 2nd partition) is
matched up with a specific set of processors in the 1st partition.
See the <a class="reference internal" href="Section_accelerate.html"><span class="doc">Section_accelerate</span></a> doc pages for
more details.</p>
<p>If the keyword <em>nth</em> is used with a setting <em>N</em>, then it means every
Nth processor will be moved to the end of the ranking. This is useful
when using the <a class="reference internal" href="run_style.html"><span class="doc">run_style verlet/split</span></a> command with 2
partitions via the -partition command-line switch. The first set of
processors will be in the first partition, the 2nd set in the 2nd
partition. The -reorder command-line switch can alter this so that
the 1st N procs in the 1st partition and one proc in the 2nd partition
will be ordered consecutively, e.g. as the cores on one physical node.
This can boost performance. For example, if you use &#8220;-reorder nth 4&#8221;
and &#8220;-partition 9 3&#8221; and you are running on 12 processors, the
processors will be reordered from</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="mi">0</span> <span class="mi">1</span> <span class="mi">2</span> <span class="mi">3</span> <span class="mi">4</span> <span class="mi">5</span> <span class="mi">6</span> <span class="mi">7</span> <span class="mi">8</span> <span class="mi">9</span> <span class="mi">10</span> <span class="mi">11</span>
</pre></div>
</div>
<p>to</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="mi">0</span> <span class="mi">1</span> <span class="mi">2</span> <span class="mi">4</span> <span class="mi">5</span> <span class="mi">6</span> <span class="mi">8</span> <span class="mi">9</span> <span class="mi">10</span> <span class="mi">3</span> <span class="mi">7</span> <span class="mi">11</span>
</pre></div>
</div>
<p>so that the processors in each partition will be</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="mi">0</span> <span class="mi">1</span> <span class="mi">2</span> <span class="mi">4</span> <span class="mi">5</span> <span class="mi">6</span> <span class="mi">8</span> <span class="mi">9</span> <span class="mi">10</span>
<span class="mi">3</span> <span class="mi">7</span> <span class="mi">11</span>
</pre></div>
</div>
<p>See the &#8220;processors&#8221; command for how to insure that processors from each
partition can then be grouped optimally for quad-core nodes.</p>
<p>If the keyword is <em>custom</em>, then a file that specifies a permutation
of the processor ranks is also specified. The format of the reorder
file is as follows. Any number of initial blank or comment lines
(starting with a &#8220;#&#8221; character) can be present. These should be
followed by P lines of the form:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">I</span> <span class="n">J</span>
</pre></div>
</div>
<p>where P is the number of processors LAMMPS was launched with. Note
that if running in multi-partition mode (see the -partition switch
above) P is the total number of processors in all partitions. The I
and J values describe a permutation of the P processors. Every I and
J should be values from 0 to P-1 inclusive. In the set of P I values,
every proc ID should appear exactly once. Ditto for the set of P J
values. A single I,J pairing means that the physical processor with
rank I in the original MPI communicator will have rank J in the
reordered communicator.</p>
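<p>As a small illustration, this reorder file for a run on P = 4 processors
swaps ranks 1 and 2 and leaves ranks 0 and 3 unchanged:</p>
<pre class="literal-block">
# example reorder file for P = 4
0 0
1 2
2 1
3 3
</pre>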
<p>Note that rank ordering can also be specified by many MPI
implementations, either by environment variables that specify how to
order physical processors, or by config files that specify what
physical processors to assign to each MPI rank. The -reorder switch
simply gives you a portable way to do this without relying on MPI
itself. See the <a class="reference external" href="processors">processors out</a> command for how to output
info on the final assignment of physical processors to the LAMMPS
simulation domain.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">screen</span> <span class="n">file</span>
</pre></div>
</div>
<p>Specify a file for LAMMPS to write its screen information to. In
one-partition mode, if the switch is not used, LAMMPS writes to the
screen. If this switch is used, LAMMPS writes to the specified file
instead and you will see no screen output. In multi-partition mode,
if the switch is not used, hi-level status information is written to
the screen. Each partition also writes to a screen.N file where N is
the partition ID. If the switch is specified in multi-partition mode,
the hi-level screen dump is named &#8220;file&#8221; and each partition also
writes screen information to a file.N. For both one-partition and
multi-partition mode, if the specified file is &#8220;none&#8221;, then no screen
output is performed. Option -pscreen will override the name of the
partition screen files file.N.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">suffix</span> <span class="n">style</span> <span class="n">args</span>
</pre></div>
</div>
<p>Use variants of various styles if they exist. The specified style can
be <em>cuda</em>, <em>gpu</em>, <em>intel</em>, <em>kk</em>, <em>omp</em>, <em>opt</em>, or <em>hybrid</em>. These
refer to optional packages that LAMMPS can be built with, as described
above in <a class="reference internal" href="#start-3"><span class="std std-ref">Section 2.3</span></a>. The &#8220;gpu&#8221; style corresponds to the
GPU package, the &#8220;intel&#8221; style to the USER-INTEL package, the &#8220;kk&#8221;
style to the KOKKOS package, the &#8220;opt&#8221; style to the OPT package, and
the &#8220;omp&#8221; style to the USER-OMP package. The hybrid style is the only
style that accepts arguments. It allows for two packages to be
specified. The first package specified is the default and will be used
if it is available. If no style is available for the first package,
the style for the second package will be used if available. For
example, &#8220;-suffix hybrid intel omp&#8221; will use styles from the
USER-INTEL package if they are installed and available, but styles for
the USER-OMP package otherwise.</p>
<p>Along with the &#8220;-package&#8221; command-line switch, this is a convenient
mechanism for invoking accelerator packages and their options without
having to edit an input script.</p>
<p>As an example, all of the packages provide a <a class="reference internal" href="pair_lj.html"><span class="doc">pair_style lj/cut</span></a> variant, with style names lj/cut/gpu,
lj/cut/intel, lj/cut/kk, lj/cut/omp, and lj/cut/opt. A variant style
can be specified explicitly in your input script, e.g. pair_style
lj/cut/gpu. If the -suffix switch is used the specified suffix
(gpu,intel,kk,omp,opt) is automatically appended whenever your input
script command creates a new <a class="reference internal" href="atom_style.html"><span class="doc">atom</span></a>,
<a class="reference internal" href="pair_style.html"><span class="doc">pair</span></a>, <a class="reference internal" href="fix.html"><span class="doc">fix</span></a>, <a class="reference internal" href="compute.html"><span class="doc">compute</span></a>, or
<a class="reference internal" href="run_style.html"><span class="doc">run</span></a> style. If the variant version does not exist,
the standard version is created.</p>
<p>For the GPU package, using this command-line switch also invokes the
default GPU settings, as if the command &#8220;package gpu 1&#8221; were used at
the top of your input script. These settings can be changed by using
the &#8220;-package gpu&#8221; command-line switch or the <a class="reference internal" href="package.html"><span class="doc">package gpu</span></a> command in your script.</p>
<p>For the USER-INTEL package, using this command-line switch also
invokes the default USER-INTEL settings, as if the command &#8220;package
intel 1&#8221; were used at the top of your input script. These settings
can be changed by using the &#8220;-package intel&#8221; command-line switch or
the <a class="reference internal" href="package.html"><span class="doc">package intel</span></a> command in your script. If the
USER-OMP package is also installed, the hybrid style with &#8220;intel omp&#8221;
arguments can be used to make the omp suffix a second choice, if a
requested style is not available in the USER-INTEL package. It will
also invoke the default USER-OMP settings, as if the command &#8220;package
omp 0&#8221; were used at the top of your input script. These settings can
be changed by using the &#8220;-package omp&#8221; command-line switch or the
<a class="reference internal" href="package.html"><span class="doc">package omp</span></a> command in your script.</p>
<p>For the KOKKOS package, using this command-line switch also invokes
the default KOKKOS settings, as if the command &#8220;package kokkos&#8221; were
used at the top of your input script. These settings can be changed
by using the &#8220;-package kokkos&#8221; command-line switch or the <a class="reference internal" href="package.html"><span class="doc">package kokkos</span></a> command in your script.</p>
<p>For the OMP package, using this command-line switch also invokes the
default OMP settings, as if the command &#8220;package omp 0&#8221; were used at
the top of your input script. These settings can be changed by using
the &#8220;-package omp&#8221; command-line switch or the <a class="reference internal" href="package.html"><span class="doc">package omp</span></a> command in your script.</p>
<p>The <a class="reference internal" href="suffix.html"><span class="doc">suffix</span></a> command can also be used within an input
script to set a suffix, or to turn off or back on any suffix setting
made via the command line.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="o">-</span><span class="n">var</span> <span class="n">name</span> <span class="n">value1</span> <span class="n">value2</span> <span class="o">...</span>
</pre></div>
</div>
<p>Specify a variable that will be defined for substitution purposes when
the input script is read. This switch can be used multiple times to
define multiple variables. &#8220;Name&#8221; is the variable name which can be a
single character (referenced as $x in the input script) or a full
string (referenced as ${abc}). An <a class="reference internal" href="variable.html"><span class="doc">index-style variable</span></a> will be created and populated with the
subsequent values, e.g. a set of filenames. Using this command-line
option is equivalent to putting the line &#8220;variable name index value1
value2 ...&#8221; at the beginning of the input script. Defining an index
variable as a command-line argument overrides any setting for the same
index variable in the input script, since index variables cannot be
re-defined. See the <a class="reference internal" href="variable.html"><span class="doc">variable</span></a> command for more info on
defining index and other kinds of variables and <a class="reference internal" href="Section_commands.html#cmd-2"><span class="std std-ref">this section</span></a> for more info on using variables
in input scripts.</p>
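<p>For example, the command below (with placeholder file names) defines an
index variable named abc holding two file names, which the input script
can reference as ${abc}:</p>
<pre class="literal-block">
lmp_linux -var abc file.1 file.2 -in in.script
</pre>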
<div class="admonition note">
<p class="first admonition-title">Note</p>
<p class="last">Currently, the command-line parser looks for arguments that
start with &#8220;-&#8221; to indicate new switches. Thus you cannot specify
multiple variable values if any of them start with a &#8220;-&#8221;, e.g. a
negative numeric value. It is OK if the first value1 starts with a
&#8220;-&#8221;, since it is automatically skipped.</p>
</div>
<hr class="docutils" />
</div>
<div class="section" id="lammps-screen-output">
<span id="start-8"></span><h2>2.8. LAMMPS screen output</h2>
<p>As LAMMPS reads an input script, it prints information to both the
screen and a log file about significant actions it takes to setup a
simulation. When the simulation is ready to begin, LAMMPS performs
various initializations and prints the amount of memory (in MBytes per
processor) that the simulation requires. It also prints details of
the initial thermodynamic state of the system. During the run itself,
thermodynamic information is printed periodically, every few
timesteps. When the run concludes, LAMMPS prints the final
thermodynamic state and a total run time for the simulation. It then
appends statistics about the CPU time and storage requirements for the
simulation. An example set of statistics is shown here:</p>
<p>Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">Performance</span><span class="p">:</span> <span class="mf">18.436</span> <span class="n">ns</span><span class="o">/</span><span class="n">day</span> <span class="mf">1.302</span> <span class="n">hours</span><span class="o">/</span><span class="n">ns</span> <span class="mf">106.689</span> <span class="n">timesteps</span><span class="o">/</span><span class="n">s</span>
<span class="mf">97.0</span><span class="o">%</span> <span class="n">CPU</span> <span class="n">use</span> <span class="k">with</span> <span class="mi">4</span> <span class="n">MPI</span> <span class="n">tasks</span> <span class="n">x</span> <span class="n">no</span> <span class="n">OpenMP</span> <span class="n">threads</span>
</pre></div>
</div>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">MPI</span> <span class="n">task</span> <span class="n">timings</span> <span class="n">breakdown</span><span class="p">:</span>
<span class="n">Section</span> <span class="o">|</span> <span class="nb">min</span> <span class="n">time</span> <span class="o">|</span> <span class="n">avg</span> <span class="n">time</span> <span class="o">|</span> <span class="nb">max</span> <span class="n">time</span> <span class="o">|%</span><span class="n">varavg</span><span class="o">|</span> <span class="o">%</span><span class="n">total</span>
<span class="o">---------------------------------------------------------------</span>
<span class="n">Pair</span> <span class="o">|</span> <span class="mf">1.9808</span> <span class="o">|</span> <span class="mf">2.0134</span> <span class="o">|</span> <span class="mf">2.0318</span> <span class="o">|</span> <span class="mf">1.4</span> <span class="o">|</span> <span class="mf">71.60</span>
<span class="n">Bond</span> <span class="o">|</span> <span class="mf">0.0021894</span> <span class="o">|</span> <span class="mf">0.0060319</span> <span class="o">|</span> <span class="mf">0.010058</span> <span class="o">|</span> <span class="mf">4.7</span> <span class="o">|</span> <span class="mf">0.21</span>
<span class="n">Kspace</span> <span class="o">|</span> <span class="mf">0.3207</span> <span class="o">|</span> <span class="mf">0.3366</span> <span class="o">|</span> <span class="mf">0.36616</span> <span class="o">|</span> <span class="mf">3.1</span> <span class="o">|</span> <span class="mf">11.97</span>
<span class="n">Neigh</span> <span class="o">|</span> <span class="mf">0.28411</span> <span class="o">|</span> <span class="mf">0.28464</span> <span class="o">|</span> <span class="mf">0.28516</span> <span class="o">|</span> <span class="mf">0.1</span> <span class="o">|</span> <span class="mf">10.12</span>
<span class="n">Comm</span> <span class="o">|</span> <span class="mf">0.075732</span> <span class="o">|</span> <span class="mf">0.077018</span> <span class="o">|</span> <span class="mf">0.07883</span> <span class="o">|</span> <span class="mf">0.4</span> <span class="o">|</span> <span class="mf">2.74</span>
<span class="n">Output</span> <span class="o">|</span> <span class="mf">0.00030518</span> <span class="o">|</span> <span class="mf">0.00042665</span> <span class="o">|</span> <span class="mf">0.00078821</span> <span class="o">|</span> <span class="mf">1.0</span> <span class="o">|</span> <span class="mf">0.02</span>
<span class="n">Modify</span> <span class="o">|</span> <span class="mf">0.086606</span> <span class="o">|</span> <span class="mf">0.086631</span> <span class="o">|</span> <span class="mf">0.086668</span> <span class="o">|</span> <span class="mf">0.0</span> <span class="o">|</span> <span class="mf">3.08</span>
<span class="n">Other</span> <span class="o">|</span> <span class="o">|</span> <span class="mf">0.007178</span> <span class="o">|</span> <span class="o">|</span> <span class="o">|</span> <span class="mf">0.26</span>
</pre></div>
</div>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">Nlocal</span><span class="p">:</span> <span class="mi">501</span> <span class="n">ave</span> <span class="mi">508</span> <span class="nb">max</span> <span class="mi">490</span> <span class="nb">min</span>
<span class="n">Histogram</span><span class="p">:</span> <span class="mi">1</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">1</span> <span class="mi">1</span> <span class="mi">0</span> <span class="mi">1</span>
<span class="n">Nghost</span><span class="p">:</span> <span class="mf">6586.25</span> <span class="n">ave</span> <span class="mi">6628</span> <span class="nb">max</span> <span class="mi">6548</span> <span class="nb">min</span>
<span class="n">Histogram</span><span class="p">:</span> <span class="mi">1</span> <span class="mi">0</span> <span class="mi">1</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">1</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">1</span>
<span class="n">Neighs</span><span class="p">:</span> <span class="mi">177007</span> <span class="n">ave</span> <span class="mi">180562</span> <span class="nb">max</span> <span class="mi">170212</span> <span class="nb">min</span>
<span class="n">Histogram</span><span class="p">:</span> <span class="mi">1</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">0</span> <span class="mi">1</span> <span class="mi">1</span> <span class="mi">1</span>
</pre></div>
</div>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">Total</span> <span class="c1"># of neighbors = 708028</span>
<span class="n">Ave</span> <span class="n">neighs</span><span class="o">/</span><span class="n">atom</span> <span class="o">=</span> <span class="mf">353.307</span>
<span class="n">Ave</span> <span class="n">special</span> <span class="n">neighs</span><span class="o">/</span><span class="n">atom</span> <span class="o">=</span> <span class="mf">2.34032</span>
<span class="n">Neighbor</span> <span class="nb">list</span> <span class="n">builds</span> <span class="o">=</span> <span class="mi">26</span>
<span class="n">Dangerous</span> <span class="n">builds</span> <span class="o">=</span> <span class="mi">0</span>
</pre></div>
</div>
<p>The first section provides a global loop timing summary. The loop time
is the total wall time for the section. The <em>Performance</em> line is
provided for convenience, to help predict the number of loop
continuations required and to compare performance with other,
similar MD codes. The CPU use line reports the CPU utilization per
MPI task; it should be close to 100% times the number of OpenMP
threads in use (or 1, if no threading is used). Lower numbers indicate
delays due to file I/O or insufficient thread utilization.</p>
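<p>For example, with 4 OpenMP threads per MPI task (an illustrative value), a fully
utilized task would report close to 400% CPU use; a value near 200% would indicate
that roughly half of the available thread time was lost to file I/O or idle threads.</p>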
<p>The MPI task section gives the breakdown of the CPU run time (in
seconds) into major categories:</p>
<ul class="simple">
<li><em>Pair</em> stands for all non-bonded force computation</li>
<li><em>Bond</em> stands for bonded interactions: bonds, angles, dihedrals, impropers</li>
<li><em>Kspace</em> stands for reciprocal space interactions: Ewald, PPPM, MSM</li>
<li><em>Neigh</em> stands for neighbor list construction</li>
<li><em>Comm</em> stands for communicating atoms and their properties</li>
<li><em>Output</em> stands for writing dumps and thermo output</li>
<li><em>Modify</em> stands for fixes and computes called by them</li>
<li><em>Other</em> is the remaining time</li>
</ul>
<p>For each category, there is a breakdown of the minimum, average, and
maximum wall time any processor spent in that section, together with the
variation from the average time. Together these numbers allow you to gauge
the amount of load imbalance in this part of the calculation. Ideally,
the differences between minimum, maximum, and average are small, and the
variation from the average is thus close to zero. The final column shows
the percentage of the total loop time that was spent in this section.</p>
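<p>In the example above, for instance, the <em>Pair</em> section dominates with about 72% of the
loop time, and its minimum (1.98 s) and maximum (2.03 s) differ by only a few percent, so the
pair work is well balanced across the MPI tasks. If a category shows a much larger spread, one
common remedy is to rebalance the domain decomposition with the
<a class="reference internal" href="balance.html"><span class="doc">balance</span></a> command, e.g. (the threshold
and iteration settings below are illustrative values only, not a recommendation):</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># example only: shift processor sub-domain boundaries to reduce the
# max/avg atom count per processor, using up to 10 iterations
balance 1.1 shift xyz 10 1.1
</pre></div>
</div>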
<p>When using the <a class="reference internal" href="timer.html"><span class="doc">timer full</span></a> setting, an additional column
is present that also prints the CPU utilization in percent. In
addition, when both <em>timer full</em> and the <a class="reference internal" href="package.html"><span class="doc">package omp</span></a>
command are active, a similar timing summary of the time spent in threaded
regions is provided, to monitor thread utilization and load balance. A
new entry is the <em>Reduce</em> section, which lists the time spent
reducing the per-thread data elements into the storage used for non-threaded
computation. These thread timings are taken from the first MPI rank
only; since the breakdown can change from MPI rank to MPI rank, it may
look quite different for other ranks. Here is an example output for
this section:</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">Thread</span> <span class="n">timings</span> <span class="n">breakdown</span> <span class="p">(</span><span class="n">MPI</span> <span class="n">rank</span> <span class="mi">0</span><span class="p">):</span>
<span class="n">Total</span> <span class="n">threaded</span> <span class="n">time</span> <span class="mf">0.6846</span> <span class="o">/</span> <span class="mf">90.6</span><span class="o">%</span>
<span class="n">Section</span> <span class="o">|</span> <span class="nb">min</span> <span class="n">time</span> <span class="o">|</span> <span class="n">avg</span> <span class="n">time</span> <span class="o">|</span> <span class="nb">max</span> <span class="n">time</span> <span class="o">|%</span><span class="n">varavg</span><span class="o">|</span> <span class="o">%</span><span class="n">total</span>
<span class="o">---------------------------------------------------------------</span>
<span class="n">Pair</span> <span class="o">|</span> <span class="mf">0.5127</span> <span class="o">|</span> <span class="mf">0.5147</span> <span class="o">|</span> <span class="mf">0.5167</span> <span class="o">|</span> <span class="mf">0.3</span> <span class="o">|</span> <span class="mf">75.18</span>
<span class="n">Bond</span> <span class="o">|</span> <span class="mf">0.0043139</span> <span class="o">|</span> <span class="mf">0.0046779</span> <span class="o">|</span> <span class="mf">0.0050418</span> <span class="o">|</span> <span class="mf">0.5</span> <span class="o">|</span> <span class="mf">0.68</span>
<span class="n">Kspace</span> <span class="o">|</span> <span class="mf">0.070572</span> <span class="o">|</span> <span class="mf">0.074541</span> <span class="o">|</span> <span class="mf">0.07851</span> <span class="o">|</span> <span class="mf">1.5</span> <span class="o">|</span> <span class="mf">10.89</span>
<span class="n">Neigh</span> <span class="o">|</span> <span class="mf">0.084778</span> <span class="o">|</span> <span class="mf">0.086969</span> <span class="o">|</span> <span class="mf">0.089161</span> <span class="o">|</span> <span class="mf">0.7</span> <span class="o">|</span> <span class="mf">12.70</span>
<span class="n">Reduce</span> <span class="o">|</span> <span class="mf">0.0036485</span> <span class="o">|</span> <span class="mf">0.003737</span> <span class="o">|</span> <span class="mf">0.0038254</span> <span class="o">|</span> <span class="mf">0.1</span> <span class="o">|</span> <span class="mf">0.55</span>
</pre></div>
</div>
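<p>This thread-level summary only appears when both settings are enabled. Assuming LAMMPS was
built with the USER-OMP package, a minimal input-script fragment to turn them on could look as
follows (the thread count of 4 and the <em>omp</em> suffix are example choices, not requirements):</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># request 4 OpenMP threads per MPI task and select the threaded (omp) styles
package omp 4
suffix omp
# enable the detailed timing output, including the per-thread breakdown
timer full
</pre></div>
</div>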
<p>The third section lists the number of owned atoms (Nlocal), ghost atoms
(Nghost), and pair-wise neighbors stored per processor. The max and min
values give the spread of these values across processors with a 10-bin
histogram showing the distribution. The total number of histogram counts
is equal to the number of processors.</p>
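<p>In the example above, each histogram sums to 4, i.e. the statistics were collected from
4 MPI tasks; with an average of 501 owned atoms per task, this corresponds to a system of
roughly 2000 atoms.</p>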
<p>The last section gives aggregate statistics for pair-wise neighbors
and special neighbors that LAMMPS keeps track of (see the
<a class="reference internal" href="special_bonds.html"><span class="doc">special_bonds</span></a> command). The number of times
neighbor lists were rebuilt during the run is given as well as the
number of potentially &#8220;dangerous&#8221; rebuilds. If atom movement
triggered neighbor list rebuilding (see the
<a class="reference internal" href="neigh_modify.html"><span class="doc">neigh_modify</span></a> command), then dangerous
reneighborings are those that were triggered on the first timestep
atom movement was checked for. If this count is non-zero, you may wish
to reduce the delay factor to ensure that no force interactions are missed
by atoms moving beyond the neighbor skin distance before a rebuild
takes place.</p>
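<p>For example, the following <a class="reference internal" href="neigh_modify.html"><span class="doc">neigh_modify</span></a>
settings remove the rebuild delay entirely and check for required rebuilds every step
(the values are illustrative only):</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># no delay after the previous rebuild, check atom displacements every step
neigh_modify delay 0 every 1 check yes
</pre></div>
</div>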
<p>If an energy minimization was performed via the
<a class="reference internal" href="minimize.html"><span class="doc">minimize</span></a> command, additional information is printed,
e.g.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">Minimization</span> <span class="n">stats</span><span class="p">:</span>
<span class="n">Stopping</span> <span class="n">criterion</span> <span class="o">=</span> <span class="n">linesearch</span> <span class="n">alpha</span> <span class="ow">is</span> <span class="n">zero</span>
<span class="n">Energy</span> <span class="n">initial</span><span class="p">,</span> <span class="nb">next</span><span class="o">-</span><span class="n">to</span><span class="o">-</span><span class="n">last</span><span class="p">,</span> <span class="n">final</span> <span class="o">=</span>
<span class="o">-</span><span class="mf">6372.3765206</span> <span class="o">-</span><span class="mf">8328.46998942</span> <span class="o">-</span><span class="mf">8328.46998942</span>
<span class="n">Force</span> <span class="n">two</span><span class="o">-</span><span class="n">norm</span> <span class="n">initial</span><span class="p">,</span> <span class="n">final</span> <span class="o">=</span> <span class="mf">1059.36</span> <span class="mf">5.36874</span>
<span class="n">Force</span> <span class="nb">max</span> <span class="n">component</span> <span class="n">initial</span><span class="p">,</span> <span class="n">final</span> <span class="o">=</span> <span class="mf">58.6026</span> <span class="mf">1.46872</span>
<span class="n">Final</span> <span class="n">line</span> <span class="n">search</span> <span class="n">alpha</span><span class="p">,</span> <span class="nb">max</span> <span class="n">atom</span> <span class="n">move</span> <span class="o">=</span> <span class="mf">2.7842e-10</span> <span class="mf">4.0892e-10</span>
<span class="n">Iterations</span><span class="p">,</span> <span class="n">force</span> <span class="n">evaluations</span> <span class="o">=</span> <span class="mi">701</span> <span class="mi">1516</span>
</pre></div>
</div>
<p>The first line prints the criterion that determined the minimization
to be complete. The <em>Energy</em> line lists the initial and final energy,
as well as the energy on the next-to-last iteration. The next 2 lines
give a measure of the gradient of the energy (the force on all atoms):
the 2-norm is the &#8220;length&#8221; of this force vector; the inf-norm is its
largest component. This is followed by information about the final line
search step and by statistics on how many iterations and force evaluations
the minimizer required. Multiple force evaluations are typically done at
each iteration to perform a 1d line minimization in the search direction.</p>
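<p>For reference, output like the example above is produced by the
<a class="reference internal" href="minimize.html"><span class="doc">minimize</span></a> command itself; a typical
invocation might look as follows (the minimizer style, tolerances, and iteration limits are
illustrative values only):</p>
<div class="highlight-default"><div class="highlight"><pre><span></span># conjugate-gradient minimization: energy tol, force tol, max iterations, max force evaluations
min_style cg
minimize 1.0e-4 1.0e-6 1000 10000
</pre></div>
</div>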
<p>If a <a class="reference internal" href="kspace_style.html"><span class="doc">kspace_style</span></a> long-range Coulombics solve was
performed during the run (PPPM, Ewald), then additional information is
printed, e.g.</p>
<div class="highlight-default"><div class="highlight"><pre><span></span><span class="n">FFT</span> <span class="n">time</span> <span class="p">(</span><span class="o">%</span> <span class="n">of</span> <span class="n">Kspce</span><span class="p">)</span> <span class="o">=</span> <span class="mf">0.200313</span> <span class="p">(</span><span class="mf">8.34477</span><span class="p">)</span>
<span class="n">FFT</span> <span class="n">Gflps</span> <span class="mi">3</span><span class="n">d</span> <span class="mi">1</span><span class="n">d</span><span class="o">-</span><span class="n">only</span> <span class="o">=</span> <span class="mf">2.31074</span> <span class="mf">9.19989</span>
</pre></div>
</div>
<p>The first line gives the time spent doing 3d FFTs (4 per timestep) and
the fraction it represents of the total KSpace time (listed above).
Each 3d FFT requires computation (3 sets of 1d FFTs) and communication
(transposes). The total flops performed is 5N log<sub>2</sub>(N), where N is the
number of points in the 3d grid. The FFTs are timed with and without
the communication and a Gflop rate is computed. The 3d rate is with
communication; the 1d rate is without (just the 1d FFTs). Thus you
can estimate what fraction of your FFT time was spent in
communication, roughly 75% in the example above.</p>
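<p>This estimate follows because the two rates count the same flops: the fraction of the FFT
time spent in the 1d FFTs themselves is the ratio of the 3d rate to the 1d-only rate, here
2.31/9.20 or roughly 25%, leaving roughly 75% of the FFT time for communication.</p>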
<hr class="docutils" />
</div>
<div class="section" id="tips-for-users-of-previous-lammps-versions">
<span id="start-9"></span><h2>2.9. Tips for users of previous LAMMPS versions</h2>
<p>The current C++ version of LAMMPS began with a complete rewrite of LAMMPS 2001, which
was written in F90. Features of earlier versions of LAMMPS are listed
in <a class="reference internal" href="Section_history.html"><span class="doc">Section_history</span></a>. The F90 and F77 versions
(2001 and 99) are also freely distributed as open-source codes; check
the <a class="reference external" href="http://lammps.sandia.gov">LAMMPS WWW Site</a> for distribution information if you prefer
those versions. The 99 and 2001 versions are no longer under active
development; they do not have all the features of C++ LAMMPS.</p>
<p>If you are a previous user of LAMMPS 2001, these are the most
significant changes you will notice in C++ LAMMPS:</p>
<p>(1) The names and arguments of many input script commands have
changed. All commands are now a single word (e.g. read_data instead
of read data).</p>
<p>(2) All the functionality of LAMMPS 2001 is included in C++ LAMMPS,
but you may need to specify the relevant commands in different ways.</p>
<p>(3) The format of the data file can be streamlined for some problems.
See the <a class="reference internal" href="read_data.html"><span class="doc">read_data</span></a> command for details. The data file
section &#8220;Nonbond Coeff&#8221; has been renamed to &#8220;Pair Coeff&#8221; in C++ LAMMPS.</p>
<p>(4) Binary restart files written by LAMMPS 2001 cannot be read by C++
LAMMPS with a <a class="reference internal" href="read_restart.html"><span class="doc">read_restart</span></a> command. This is
because they were written by F90, which uses a different binary
format than C or C++ reads or writes. Use the <em>restart2data</em> tool
provided with LAMMPS 2001 to convert the 2001 restart file to a text
data file. Then edit the data file as necessary before using the C++
LAMMPS <a class="reference internal" href="read_data.html"><span class="doc">read_data</span></a> command to read it in.</p>
<p>(5) There are numerous small numerical changes in C++ LAMMPS that mean
you will not get identical answers when comparing to a 2001 run.
However, your initial thermodynamic energy and MD trajectory should be
close if you have set up the problem the same way in both codes.</p>
</div>
</div>
</div>
</div>
<footer>
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
<a href="Section_commands.html" class="btn btn-neutral float-right" title="3. Commands" accesskey="n">Next <span class="fa fa-arrow-circle-right"></span></a>
<a href="Section_intro.html" class="btn btn-neutral" title="1. Introduction" accesskey="p"><span class="fa fa-arrow-circle-left"></span> Previous</a>
</div>
<hr/>
<div role="contentinfo">
<p>
&copy; Copyright 2013 Sandia Corporation.
</p>
</div>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT:'./',
VERSION:'',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'.html',
HAS_SOURCE: true
};
</script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="_static/sphinxcontrib-images/LightBox2/lightbox2/js/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="_static/sphinxcontrib-images/LightBox2/lightbox2/js/lightbox.min.js"></script>
<script type="text/javascript" src="_static/sphinxcontrib-images/LightBox2/lightbox2-customize/jquery-noconflict.js"></script>
<script type="text/javascript" src="_static/js/theme.js"></script>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.StickyNav.enable();
});
</script>
</body>
</html>
