diff --git a/doc/Section_python.html b/doc/Section_python.html
index 0b60c2510..9b69fe6fd 100644
--- a/doc/Section_python.html
+++ b/doc/Section_python.html
@@ -1,612 +1,630 @@
 <HTML>
 <CENTER><A HREF = "Section_modify.html">Previous Section</A> - <A HREF = "http://lammps.sandia.gov">LAMMPS WWW Site</A> - <A HREF = "Manual.html">LAMMPS Documentation</A> - <A HREF = "Section_commands.html#comm">LAMMPS Commands</A> - <A HREF = "Section_errors.html">Next Section</A> 
 </CENTER>
 
 
 
 
 
 
 <HR>
 
 <H3>11. Python interface to LAMMPS 
 </H3>
 <P>This section describes how to build and use LAMMPS via a Python
 interface.
 </P>
-<UL><LI>11.1 <A HREF = "#py_1">Setting necessary environment variables</A>
-<LI>11.2 <A HREF = "#py_2">Building LAMMPS as a shared library</A>
+<UL><LI>11.1 <A HREF = "#py_1">Building LAMMPS as a shared library</A>
+<LI>11.2 <A HREF = "#py_2">Installing the Python wrapper into Python</A>
 <LI>11.3 <A HREF = "#py_3">Extending Python with MPI to run in parallel</A>
 <LI>11.4 <A HREF = "#py_4">Testing the Python-LAMMPS interface</A>
 <LI>11.5 <A HREF = "#py_5">Using LAMMPS from Python</A>
 <LI>11.6 <A HREF = "#py_6">Example Python scripts that use LAMMPS</A> 
 </UL>
 <P>The LAMMPS distribution includes the file python/lammps.py which wraps
 the library interface to LAMMPS.  This file makes it possible to
 run LAMMPS, invoke LAMMPS commands or give it an input script, extract
 LAMMPS results, and modify internal LAMMPS variables, either from a
 Python script or interactively from a Python prompt.  You can do the
 former in serial or parallel.  Running Python interactively in
 parallel does not generally work, unless you have a package installed
 that extends your Python to enable multiple instances of Python to
 read what you type.
 </P>
 <P><A HREF = "http://www.python.org">Python</A> is a powerful scripting and programming
 language which can be used to wrap software like LAMMPS and other
 packages.  It can be used to glue multiple pieces of software
 together, e.g. to run a coupled or multiscale model.  See <A HREF = "Section_howto.html#howto_10">Section_howto
 10</A> of the manual and the couple
 directory of the distribution for more ideas about coupling LAMMPS to
 other codes.  See <A HREF = "Section_start.html#start_5">Section_start 5</A> about
 how to build LAMMPS as a library, and <A HREF = "Section_howto.html#howto_19">Section_howto
 19</A> for a description of the library
 interface provided in src/library.cpp and src/library.h and how to
 extend it for your needs.  As described below, that interface is what
 is exposed to Python.  It is designed to be easy to add functions to,
 which can extend the Python interface as well.  See details
 below.
 </P>
 <P>By using the Python interface, LAMMPS can also be coupled with a GUI
 or other visualization tools that display graphs or animations in real
 time as LAMMPS runs.  Examples of such scripts are included in the
 python directory.
 </P>
 <P>Two advantages of using Python are how concise the language is, and
 that it can be run interactively, enabling rapid development and
 debugging of programs.  If you mostly use it to invoke costly
 operations within LAMMPS, such as running a simulation for a
 reasonable number of timesteps, then the overhead cost of invoking
 LAMMPS through Python will be negligible.
 </P>
 <P>Before using LAMMPS from a Python script, you have to do two things.
 You need to build LAMMPS as a dynamic shared library, so it can be
 loaded by Python, and you need to make that library and the Python
 wrapper findable by Python.  Both these steps are discussed below.  If
 you wish to run LAMMPS in parallel from Python, you also need to
 extend your Python with MPI.  This is also discussed below.
 </P>
 <P>The Python wrapper for LAMMPS uses the amazing and magical (to me)
 "ctypes" package in Python, which auto-generates the interface code
 needed between Python and a set of C interface routines for a library.
 Ctypes is part of standard Python for versions 2.5 and later.  You can
 check which version of Python you have installed by simply typing
 "python" at a shell prompt.
 </P>
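+<P>For example, a quick way to check both the Python version and that
+the ctypes package is available (a hypothetical one-liner, not part of
+LAMMPS):
+</P>
+<PRE>% python -c "import sys, ctypes; print sys.version" 
+</PRE>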
 <HR>
 
 <HR>
 
-<A NAME = "py_1"></A><H4>11.1 Setting necessary environment variables 
-</H4>
-<P>For Python to use the LAMMPS interface, it needs to find two files.
-The paths to these files need to be added to two environment variables
-that Python checks.
-</P>
-<P>The first is the environment variable PYTHONPATH.  It needs
-to include the directory where the python/lammps.py file is.
-</P>
-<P>For the csh or tcsh shells, add something like this to your ~/.cshrc
-file:
-</P>
-<PRE>setenv PYTHONPATH $<I>PYTHONPATH</I>:/home/sjplimp/lammps/python 
-</PRE>
-<P>The second is the environment variable LD_LIBRARY_PATH, which is used
-by the operating system to find dynamic shared libraries when it loads
-them.  See the discussion in <A HREF = "Section_start.html#start_5">Section_start
-5</A> of the manual about building LAMMPS as a
-shared library, for instructions on how to set the LD_LIBRARY_PATH
-variable appropriately.
-</P>
-<P>If your LAMMPS build is not using any auxiliary libraries which are in
-non-default directories where the system cannot find them, you
-typically just need to add something like this to your ~/.cshrc file:
-</P>
-<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src 
-</PRE>
-<HR>
-
-<A NAME = "py_2"></A><H4>11.2 Building LAMMPS as a shared library 
+<A NAME = "py_1"></A><H4>11.1 Building LAMMPS as a shared library 
 </H4>
 <P>Instructions on how to build LAMMPS as a shared library are given in
 <A HREF = "Section_start.html#start_5">Section_start 5</A>.  A shared library is one
 that is dynamically loadable, which is what Python requires.  On Linux
 this is a library file that ends in ".so", not ".a".
 </P>
 <P>From the src directory, type
 </P>
 <PRE>make makeshlib
 make -f Makefile.shlib foo 
 </PRE>
 <P>where foo is the machine target name, such as linux or g++ or serial.
 This should create the file liblmp_foo.so in the src directory, as
-well as a soft link liblmp.so which is what the Python wrapper will
+well as a soft link liblmp.so, which is what the Python wrapper will
 load by default.  Note that if you are building multiple machine
 versions of the shared library, the soft link is always set to the
 most recently built version.
 </P>
 <P>If this fails, see <A HREF = "Section_start.html#start_5">Section_start 5</A> for
-more details, especially if your LAMMPS build uses auxiliary
-libraries, e.g. ones required by certain packages and found in the
-lib/package directories.
+more details, especially if your LAMMPS build uses auxiliary libraries
+like MPI or FFTW which may not be built as shared libraries on your
+system.
+</P>
+<HR>
+
+<A NAME = "py_2"></A><H4>11.2 Installing the Python wrapper into Python 
+</H4>
+<P>For Python to invoke LAMMPS, it needs to be able to find two files:
+</P>
+<UL><LI>python/lammps.py
+<LI>src/liblmp.so 
+</UL>
+<P>The lammps.py file is the Python wrapper on the LAMMPS library
+interface, and liblmp.so is the shared LAMMPS library that Python
+loads, as described above.
+</P>
+<P>You can ensure Python can find these files in one of two ways:
+</P>
+<UL><LI>set two environment variables
+<LI>run the python/install.py script 
+</UL>
+<P>If you set the paths to these files as environment variables, you only
+have to do it once.  For the csh or tcsh shells, add something like
+this to your ~/.cshrc file, one line for each of the two files:
+</P>
+<PRE>setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python
+setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src 
+</PRE>
+<P>If you run the python/install.py script, you need to rerun it every
+time you rebuild LAMMPS (as a shared library) or make changes to the
+python/lammps.py file.
+</P>
+<P>You can invoke install.py from the python directory as
+</P>
+<PRE>% python install.py 
+</PRE>
+<P>Prefix this command with "sudo" if you do not have permission to copy
+files into the Python site-packages directory.  If you do this, make
+sure that the Python run by root is the same as the Python you run.
+E.g. you may need to do something like
+</P>
+<PRE>% sudo /usr/local/bin/python install.py 
+</PRE>
+<P>You can also invoke install.py from the src directory as
+</P>
+<PRE>% make install-python 
+</PRE>
+<P>Again, you may need to prefix this with "sudo".  In this mode you
+cannot control which Python interpreter root invokes.
 </P>
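+<P>Either way, a quick check that Python can now locate the wrapper (a
+hypothetical sanity check, not a required step):
+</P>
+<PRE>% python -c "import lammps; print lammps.__file__" 
+</PRE>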
 <HR>
 
 <A NAME = "py_3"></A><H4>11.3 Extending Python with MPI to run in parallel 
 </H4>
 <P>If you wish to run LAMMPS in parallel from Python, you need to extend
 your Python with an interface to MPI.  This also allows you to
 make MPI calls directly from Python in your script, if you desire.
 </P>
 <P>There are several Python packages available that purport to wrap MPI
 as a library and allow MPI functions to be called from Python.
 </P>
 <P>These include
 </P>
 <UL><LI><A HREF = "http://pympi.sourceforge.net/">pyMPI</A>
 <LI><A HREF = "http://code.google.com/p/maroonmpi/">maroonmpi</A>
 <LI><A HREF = "http://code.google.com/p/mpi4py/">mpi4py</A>
 <LI><A HREF = "http://nbcr.sdsc.edu/forum/viewtopic.php?t=89&sid=c997fefc3933bd66204875b436940f16">myMPI</A>
 <LI><A HREF = "http://code.google.com/p/pypar">Pypar</A> 
 </UL>
 <P>All of these except pyMPI work by wrapping the MPI library and
 exposing (some portion of) its interface to your Python script.  This
 means Python cannot be used interactively in parallel, since they do
 not address the issue of interactive input to multiple instances of
 Python running on different processors.  The one exception is pyMPI,
 which alters the Python interpreter to address this issue, and (I
 believe) creates a new alternate executable (in place of "python"
 itself) as a result.
 </P>
 <P>In principle any of these Python/MPI packages should work to invoke
 LAMMPS in parallel, and to make MPI calls, from a Python script which
 is itself running in parallel.  However, when I downloaded and looked
 at a few of them, their documentation was incomplete and I had trouble
 with their installation.  It's not clear if some of the packages are
 still being actively developed and supported.
 </P>
 <P>The one I recommend, since I have successfully used it with LAMMPS, is
 Pypar.  Pypar requires that the ubiquitous <A HREF = "http://numpy.scipy.org">Numpy
 package</A> be installed in your Python.  After
 launching python, type
 </P>
 <PRE>import numpy 
 </PRE>
 <P>to see if it is installed.  If not, here is how to install it (version
 1.3.0b1 as of April 2009).  Unpack the numpy tarball and from its
 top-level directory, type
 </P>
 <PRE>python setup.py build
 sudo python setup.py install 
 </PRE>
 <P>The "sudo" is only needed if required to copy Numpy files into your
 Python distribution's site-packages directory.
 </P>
 <P>To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it
 and from its "source" directory, type
 </P>
 <PRE>python setup.py build
 sudo python setup.py install 
 </PRE>
 <P>Again, the "sudo" is only needed if required to copy Pypar files into
 your Python distribution's site-packages directory.
 </P>
 <P>If you have successfully installed Pypar, you should be able to run
 Python and type
 </P>
 <PRE>import pypar 
 </PRE>
 <P>without error.  You should also be able to run python in parallel
 on a simple test script
 </P>
 <PRE>% mpirun -np 4 python test.py 
 </PRE>
 <P>where test.py contains the lines
 </P>
 <PRE>import pypar
 print "Proc %d out of %d procs" % (pypar.rank(),pypar.size()) 
 </PRE>
 <P>and see one line of output for each processor you run on.
 </P>
 <P>IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you
 must ensure both are using the same version of MPI.  If you only have
 one MPI installed on your system, this is not an issue, but it can be
 if you have multiple MPIs.  Your LAMMPS build is explicit about which
 MPI it is using, since you specify the details in your low-level
 src/MAKE/Makefile.foo file.  Pypar uses the "mpicc" command to find
 information about the MPI it uses to build against.  And it tries to
 load "libmpi.so" from the LD_LIBRARY_PATH.  This may or may not find
 the MPI library that LAMMPS is using.  If you have problems running
 both Pypar and LAMMPS together, this is an issue you may need to
 address, e.g. by moving other MPI installations so that Pypar finds
 the right one.
 </P>
 <HR>
 
 <A NAME = "py_4"></A><H4>11.4 Testing the Python-LAMMPS interface 
 </H4>
 <P>To test if LAMMPS is callable from Python, launch Python interactively
 and type:
 </P>
 <PRE>>>> from lammps import lammps
 >>> lmp = lammps() 
 </PRE>
 <P>If you get no errors, you're ready to use LAMMPS from Python.
 If the load fails, the most common error to see is
 </P>
 <PRE>OSError: Could not load LAMMPS dynamic library 
 </PRE>
 <P>which means Python was unable to load the LAMMPS shared library.  This
 typically occurs if the system can't find the LAMMPS shared library
 or one of the auxiliary shared libraries it depends on.
 </P>
 <P>Python (actually the operating system) isn't verbose about telling you
 why the load failed, so carefully go through the steps above regarding
 environment variables, and the instructions in <A HREF = "Section_start.html#start_5">Section_start
 5</A> about building a shared library and
 about setting the LD_LIBRARY_PATH environment variable.
 </P>
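+<P>One way to get a more informative error message is to try loading the
+shared library directly via ctypes (a hypothetical debugging step;
+adjust the path to wherever your liblmp.so resides):
+</P>
+<PRE>>>> from ctypes import CDLL
+>>> CDLL("/home/sjplimp/lammps/src/liblmp.so") 
+</PRE>
+<P>If the load fails, the OSError that ctypes raises includes the
+operating system's reason, e.g. a missing dependent library.
+</P>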
 <H5><B>Test LAMMPS and Python in serial:</B> 
 </H5>
 <P>To run a LAMMPS test in serial, type these lines into Python
 interactively from the bench directory:
 </P>
 <PRE>>>> from lammps import lammps
 >>> lmp = lammps()
 >>> lmp.file("in.lj") 
 </PRE>
 <P>Or put the same lines in the file test.py and run it as
 </P>
 <PRE>% python test.py 
 </PRE>
 <P>Either way, you should see the results of running the in.lj benchmark
 on a single processor appear on the screen, the same as if you had
 typed something like:
 </P>
 <PRE>lmp_g++ < in.lj 
 </PRE>
 <H5><B>Test LAMMPS and Python in parallel:</B> 
 </H5>
 <P>To run LAMMPS in parallel, assuming you have installed the
 <A HREF = "http://datamining.anu.edu.au/~ole/pypar">Pypar</A> package as discussed
 above, create a test.py file containing these lines:
 </P>
 <PRE>import pypar
 from lammps import lammps
 lmp = lammps()
 lmp.file("in.lj")
 print "Proc %d out of %d procs has" % (pypar.rank(),pypar.size()),lmp
 pypar.finalize() 
 </PRE>
 <P>You can then run it in parallel as:
 </P>
 <PRE>% mpirun -np 4 python test.py 
 </PRE>
 <P>and you should see the same output as if you had typed
 </P>
 <PRE>% mpirun -np 4 lmp_g++ < in.lj 
 </PRE>
 <P>Note that if you leave out the 3 lines from test.py that specify Pypar
 commands you will instantiate and run LAMMPS independently on each of
 the 4 processors specified in the mpirun command.  In this case you
 should get 4 sets of output, each showing that a LAMMPS run was made
 on a single processor, instead of one set of output showing that
 LAMMPS ran on 4 processors.  If the 1-processor outputs occur, it
 means that Pypar is not working correctly.
 </P>
 <P>Also note that once you import the Pypar module, Pypar initializes MPI
 for you, and you can use MPI calls directly in your Python script, as
 described in the Pypar documentation.  The last line of your Python
 script should be pypar.finalize(), to ensure MPI is shut down
 correctly.
 </P>
 <H5><B>Running Python scripts:</B> 
 </H5>
 <P>Note that any Python script (not just for LAMMPS) can be invoked in
 one of several ways:
 </P>
 <PRE>% python foo.script
 % python -i foo.script
 % foo.script 
 </PRE>
 <P>The last command requires that the first line of the script be
 something like this:
 </P>
 <PRE>#!/usr/local/bin/python 
 #!/usr/local/bin/python -i 
 </PRE>
 <P>where the path points to where you have Python installed, and that you
 have made the script file executable:
 </P>
 <PRE>% chmod +x foo.script 
 </PRE>
 <P>Without the "-i" flag, Python will exit when the script finishes.
 With the "-i" flag, you will be left in the Python interpreter when
 the script finishes, so you can type subsequent commands.  As
 mentioned above, you can only run Python interactively when running
 Python on a single processor, not in parallel.
 </P>
 <HR>
 
 <HR>
 
 <A NAME = "py_5"></A><H4>11.5 Using LAMMPS from Python 
 </H4>
 <P>The Python interface to LAMMPS consists of a Python "lammps" module,
 the source code for which is in python/lammps.py, which creates a
 "lammps" object, with a set of methods that can be invoked on that
 object.  The sample Python code below assumes you have first imported
 the "lammps" module in your Python script, as follows:
 </P>
 <PRE>from lammps import lammps 
 </PRE>
 <P>These are the methods defined by the lammps module.  If you look
 at the file src/library.cpp you will see that they correspond
 one-to-one with calls you can make to the LAMMPS library from a C++ or
 C or Fortran program.
 </P>
 <PRE>lmp = lammps()           # create a LAMMPS object using the default liblmp.so library
 lmp = lammps("g++")      # create a LAMMPS object using the liblmp_g++.so library
 lmp = lammps("",list)    # ditto, with command-line args, e.g. list = ["-echo","screen"]
 lmp = lammps("g++",list) 
 </PRE>
 <PRE>lmp.close()              # destroy a LAMMPS object 
 </PRE>
 <PRE>lmp.file(file)           # run an entire input script, file = "in.lj"
 lmp.command(cmd)         # invoke a single LAMMPS command, cmd = "run 100" 
 </PRE>
 <PRE>xlo = lmp.extract_global(name,type)  # extract a global quantity
                                      # name = "boxxlo", "nlocal", etc
 				     # type = 0 = int
 				     #        1 = double 
 </PRE>
 <PRE>coords = lmp.extract_atom(name,type)      # extract a per-atom quantity
                                           # name = "x", "type", etc
 				          # type = 0 = vector of ints
 				          #        1 = array of ints
 				          #        2 = vector of doubles
 				          #        3 = array of doubles 
 </PRE>
 <PRE>eng = lmp.extract_compute(id,style,type)  # extract value(s) from a compute
 v3 = lmp.extract_fix(id,style,type,i,j)   # extract value(s) from a fix
                                           # id = ID of compute or fix
 					  # style = 0 = global data
 					  #	    1 = per-atom data
 					  #         2 = local data
 					  # type = 0 = scalar
 					  #	   1 = vector
 					  #        2 = array
 					  # i,j = indices of value in global vector or array 
 </PRE>
 <PRE>var = lmp.extract_variable(name,group,flag)  # extract value(s) from a variable
 	                                     # name = name of variable
 					     # group = group ID (ignored for equal-style variables)
 					     # flag = 0 = equal-style variable
 					     #        1 = atom-style variable 
 </PRE>
 <PRE>natoms = lmp.get_natoms()                 # total # of atoms as int
 data = lmp.gather_atoms(name,type,count)  # return atom attribute of all atoms gathered into data, ordered by atom ID
                                           # name = "x", "charge", "type", etc
                                           # count = # of per-atom values, 1 or 3, etc
 lmp.scatter_atoms(name,type,count,data)   # scatter atom attribute of all atoms from data, ordered by atom ID
                                           # name = "x", "charge", "type", etc
                                           # count = # of per-atom values, 1 or 3, etc 
 </PRE>
 <HR>
 
 <P>IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
 lammps.py does not take an MPI communicator as an argument.  There
 should be a way to do this, so that the LAMMPS instance runs on a
 subset of processors if desired, but I don't know how to do it from
 Pypar.  So for now, it runs with MPI_COMM_WORLD, which is all the
 processors.  If someone figures out how to do this with one or more of
 the Python wrappers for MPI, like Pypar, please let us know and we
 will amend these doc pages.
 </P>
 <P>Note that you can create multiple LAMMPS objects in your Python
 script, and coordinate and run multiple simulations, e.g.
 </P>
 <PRE>from lammps import lammps
 lmp1 = lammps()
 lmp2 = lammps()
 lmp1.file("in.file1")
 lmp2.file("in.file2") 
 </PRE>
 <P>The file() and command() methods allow an input script or single
 commands to be invoked.
 </P>
 <P>The extract_global(), extract_atom(), extract_compute(),
 extract_fix(), and extract_variable() methods return values or
 pointers to data structures internal to LAMMPS.
 </P>
 <P>For extract_global() see the src/library.cpp file for the list of
 valid names.  New names could easily be added.  A double or integer is
 returned.  You need to specify the appropriate data type via the type
 argument.
 </P>
 <P>For extract_atom(), a pointer to internal LAMMPS atom-based data is
 returned, which you can use via normal Python subscripting.  See the
 extract() method in the src/atom.cpp file for a list of valid names.
 Again, new names could easily be added.  A pointer to a vector of
 doubles or integers, or a pointer to an array of doubles (double **)
 or integers (int **) is returned.  You need to specify the appropriate
 data type via the type argument.
 </P>
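+<P>For example, a minimal sketch of reading and modifying per-atom
+coordinates via extract_atom(), assuming a LAMMPS object lmp with a
+system already defined (the name "x" and type 3 follow the table
+above):
+</P>
+<PRE>x = lmp.extract_atom("x",3)     # double ** to per-atom coords
+print x[0][0],x[0][1],x[0][2]   # coords of the first local atom
+x[0][0] = 1.0                   # assigns directly into LAMMPS memory 
+</PRE>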
 <P>For extract_compute() and extract_fix(), the global, per-atom, or
 local data calculated by the compute or fix can be accessed.  What is
 returned depends on whether the compute or fix calculates a scalar or
 vector or array.  For a scalar, a single double value is returned.  If
 the compute or fix calculates a vector or array, a pointer to the
 internal LAMMPS data is returned, which you can use via normal Python
 subscripting.  The one exception is that for a fix that calculates a
 global vector or array, a single double value from the vector or array
 is returned, indexed by I (vector) or I and J (array).  I,J are
 zero-based indices.  The I,J arguments can be left out if not needed.
 See <A HREF = "Section_howto.html#howto_15">Section_howto 15</A> of the manual for a
 discussion of global, per-atom, and local data, and of scalar, vector,
 and array data types.  See the doc pages for individual
 <A HREF = "compute.html">computes</A> and <A HREF = "fix.html">fixes</A> for a description of what
 they calculate and store.
 </P>
 <P>For extract_variable(), an <A HREF = "variable.html">equal-style or atom-style
 variable</A> is evaluated and its result returned.
 </P>
 <P>For equal-style variables a single double value is returned and the
 group argument is ignored.  For atom-style variables, a vector of
 doubles is returned, one value per atom, which you can use via normal
 Python subscripting. The values will be zero for atoms not in the
 specified group.
 </P>
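+<P>For example, a sketch of evaluating a thermo-style quantity via an
+equal-style variable (the variable name "t" and the preceding variable
+command are hypothetical, chosen for illustration):
+</P>
+<PRE>lmp.command("variable t equal temp")
+t = lmp.extract_variable("t","all",0)   # single double 
+</PRE>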
 <P>The get_natoms() method returns the total number of atoms in the
 simulation, as an int.
 </P>
 <P>The gather_atoms() method returns a ctypes vector of ints or doubles
 as specified by type, of length count*natoms, for the property of all
 the atoms in the simulation specified by name, ordered by count and
 then by atom ID.  The vector can be used via normal Python
 subscripting.  If atom IDs are not consecutively ordered within
 LAMMPS, None is returned to indicate an error.
 </P>
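+<P>A minimal sketch of gathering all atom coordinates (assuming a LAMMPS
+object lmp with a system already defined):
+</P>
+<PRE>natoms = lmp.get_natoms()
+x = lmp.gather_atoms("x",1,3)   # ctypes vector of 3*natoms doubles
+print x[0],x[1],x[2]            # coords of the atom with ID 1 
+</PRE>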
 <P>Note that the data structure gather_atoms("x") returns is different
 from the data structure returned by extract_atom("x") in four ways.
 (1) Gather_atoms() returns a vector which you index as x[i];
 extract_atom() returns an array which you index as x[i][j].  (2)
 Gather_atoms() orders the atoms by atom ID while extract_atom() does
 not.  (3) Gather_atoms() returns a list of all atoms in the
 simulation; extract_atom() returns just the atoms local to each
 processor.  (4) Finally, the gather_atoms() data structure is a copy
 of the atom coords stored internally in LAMMPS, whereas extract_atom()
 returns an array that effectively points directly to the internal
 data.  This means you can change values inside LAMMPS from Python by
 assigning new values to the extract_atom() array.  To do this with
 the gather_atoms() vector, you need to change values in the vector,
 then invoke the scatter_atoms() method.
 </P>
 <P>The scatter_atoms() method takes a vector of ints or doubles as
 specified by type, of length count*natoms, for the property of all the
 atoms in the simulation specified by name, ordered by count and then
 by atom ID.  It uses the vector of data to overwrite the corresponding
 properties for each atom inside LAMMPS.  This requires LAMMPS to have
 its "map" option enabled; see the <A HREF = "atom_modify.html">atom_modify</A>
 command for details.  If it is not, or if atom IDs are not
 consecutively ordered, no coordinates are reset.
 </P>
 <P>The array of coordinates passed to scatter_atoms() must be a ctypes
 vector of ints or doubles, allocated and initialized something like
 this:
 </P>
 <PRE>from ctypes import *
 natoms = lmp.get_natoms()
 n3 = 3*natoms
 x = (n3*c_double)()
 x[0] = x coord of atom with ID 1
 x[1] = y coord of atom with ID 1
 x[2] = z coord of atom with ID 1
 x[3] = x coord of atom with ID 2
 ...
 x[n3-1] = z coord of atom with ID natoms
 lmp.scatter_atoms("x",1,3,x) 
 </PRE>
 <P>Alternatively, you can just change values in the vector returned by
 gather_atoms("x",1,3), since it is a ctypes vector of doubles.
 </P>
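+<P>For example, a sketch of a gather/modify/scatter round trip (a
+hypothetical displacement, assuming the atom_modify map option is
+enabled as described above):
+</P>
+<PRE>x = lmp.gather_atoms("x",1,3)
+x[0] += 0.1                     # displace atom with ID 1 in x
+lmp.scatter_atoms("x",1,3,x) 
+</PRE>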
 <HR>
 
 <P>As noted above, these Python class methods correspond one-to-one with
 the functions in the LAMMPS library interface in src/library.cpp and
 library.h.  This means you can extend the Python wrapper via the
 following steps:
 </P>
 <UL><LI>Add a new interface function to src/library.cpp and
 src/library.h. 
 
 <LI>Rebuild LAMMPS as a shared library. 
 
 <LI>Add a wrapper method to python/lammps.py for this interface
 function (see the sketch below). 
 
 <LI>You should now be able to invoke the new interface function from a
 Python script.  Isn't ctypes amazing? 
 </UL>
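+<P>As a sketch of the third step, a wrapper method for a hypothetical
+library function "int lammps_get_nlocal(void *)" added to
+src/library.cpp might look like this inside the lammps class in
+python/lammps.py (the function name is made up for illustration):
+</P>
+<PRE>def get_nlocal(self):
+  return self.lib.lammps_get_nlocal(self.lmp) 
+</PRE>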
 <HR>
 
 <HR>
 
 <A NAME = "py_6"></A><H4>11.6 Example Python scripts that use LAMMPS 
 </H4>
 <P>These are the Python scripts included as demos in the python/examples
 directory of the LAMMPS distribution, to illustrate the kinds of
 things that are possible when Python wraps LAMMPS.  If you create your
 own scripts, send them to us and we can include them in the LAMMPS
 distribution.
 </P>
 <DIV ALIGN=center><TABLE  BORDER=1 >
 <TR><TD >trivial.py</TD><TD > read/run a LAMMPS input script through Python</TD></TR>
 <TR><TD >demo.py</TD><TD > invoke various LAMMPS library interface routines</TD></TR>
 <TR><TD >simple.py</TD><TD > mimic operation of couple/simple/simple.cpp in Python</TD></TR>
 <TR><TD >gui.py</TD><TD > GUI go/stop/temperature-slider to control LAMMPS</TD></TR>
 <TR><TD >plot.py</TD><TD > real-time temperature plot with GnuPlot via Pizza.py</TD></TR>
 <TR><TD >viz_tool.py</TD><TD > real-time viz via some viz package</TD></TR>
 <TR><TD >vizplotgui_tool.py</TD><TD > combination of viz_tool.py and plot.py and gui.py 
 </TD></TR></TABLE></DIV>
 
 <HR>
 
 <P>For the viz_tool.py and vizplotgui_tool.py scripts, replace "tool"
 with "gl" or "atomeye" or "pymol" or "vmd", depending on what
 visualization package you have installed. 
 </P>
 <P>Note that for GL, you need to be able to run the Pizza.py GL tool,
 which is included in the pizza sub-directory.  See the <A HREF = "http://www.sandia.gov/~sjplimp/pizza.html">Pizza.py doc
 pages</A> for more info:
 </P>
 
 
 <P>Note that for AtomEye, you need version 3, and there is a line in the
 scripts that specifies the path and name of the executable.  See the
 AtomEye WWW pages <A HREF = "http://mt.seas.upenn.edu/Archive/Graphics/A">here</A> or <A HREF = "http://mt.seas.upenn.edu/Archive/Graphics/A3/A3.html">here</A> for more details:
 </P>
 <PRE>http://mt.seas.upenn.edu/Archive/Graphics/A
 http://mt.seas.upenn.edu/Archive/Graphics/A3/A3.html 
 </PRE>
 
 
 
 
 <P>The latter link is to AtomEye 3, which has the scripting
 capability needed by these Python scripts.
 </P>
 <P>Note that for PyMol, you need to have built and installed the
 open-source version of PyMol in your Python, so that you can import it
 from a Python script.  See the PyMol WWW pages <A HREF = "http://www.pymol.org">here</A> or
 <A HREF = "http://sourceforge.net/scm/?type=svn&group_id=4546">here</A> for more details:
 </P>
 <PRE>http://www.pymol.org
 http://sourceforge.net/scm/?type=svn&group_id=4546 
 </PRE>
 
 
 
 
 <P>The latter link is to the open-source version.
 </P>
 <P>Note that for VMD, you need a fairly current version (1.8.7 works for
 me) and there are some lines in the pizza/vmd.py script for 4 PIZZA
 variables that have to match the VMD installation on your system.
 </P>
 <HR>
 
 <P>See the python/README file for instructions on how to run them and the
 source code for individual scripts for comments about what they do.
 </P>
 <P>Here are screenshots of the vizplotgui_tool.py script in action for
 different visualization package options.  Click to see larger images:
 </P>
 <A HREF = "JPG/screenshot_gl.jpg"><IMG SRC = "JPG/screenshot_gl_small.jpg"></A>
 
 <A HREF = "JPG/screenshot_atomeye.jpg"><IMG SRC = "JPG/screenshot_atomeye_small.jpg"></A>
 
 <A HREF = "JPG/screenshot_pymol.jpg"><IMG SRC = "JPG/screenshot_pymol_small.jpg"></A>
 
 <A HREF = "JPG/screenshot_vmd.jpg"><IMG SRC = "JPG/screenshot_vmd_small.jpg"></A>
 
 </HTML>
diff --git a/doc/Section_python.txt b/doc/Section_python.txt
index 8041bf10c..db9f034e3 100644
--- a/doc/Section_python.txt
+++ b/doc/Section_python.txt
@@ -1,597 +1,615 @@
 "Previous Section"_Section_modify.html - "LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc - "Next Section"_Section_errors.html :c
 
 :link(lws,http://lammps.sandia.gov)
 :link(ld,Manual.html)
 :link(lc,Section_commands.html#comm)
 
 :line
 
 11. Python interface to LAMMPS :h3
 
 This section describes how to build and use LAMMPS via a Python
 interface.
 
-11.1 "Setting necessary environment variables"_#py_1
-11.2 "Building LAMMPS as a shared library"_#py_2
+11.1 "Building LAMMPS as a shared library"_#py_1
+11.2 "Installing the Python wrapper into Python"_#py_2
 11.3 "Extending Python with MPI to run in parallel"_#py_3
 11.4 "Testing the Python-LAMMPS interface"_#py_4
 11.5 "Using LAMMPS from Python"_#py_5
 11.6 "Example Python scripts that use LAMMPS"_#py_6 :ul
 
 The LAMMPS distribution includes the file python/lammps.py which wraps
 the library interface to LAMMPS.  This file makes it possible to
 run LAMMPS, invoke LAMMPS commands or give it an input script, extract
 LAMMPS results, and modify internal LAMMPS variables, either from a
 Python script or interactively from a Python prompt.  You can do the
 former in serial or parallel.  Running Python interactively in
 parallel does not generally work, unless you have a package installed
 that extends your Python to enable multiple instances of Python to
 read what you type.
 
 "Python"_http://www.python.org is a powerful scripting and programming
 language which can be used to wrap software like LAMMPS and other
 packages.  It can be used to glue multiple pieces of software
 together, e.g. to run a coupled or multiscale model.  See "Section_howto
 10"_Section_howto.html#howto_10 of the manual and the couple
 directory of the distribution for more ideas about coupling LAMMPS to
 other codes.  See "Section_start 5"_Section_start.html#start_5 about
 how to build LAMMPS as a library, and "Section_howto
 19"_Section_howto.html#howto_19 for a description of the library
 interface provided in src/library.cpp and src/library.h and how to
 extend it for your needs.  As described below, that interface is what
 is exposed to Python.  It is designed to be easy to add functions to,
 which can extend the Python interface as well.  See details
 below.
 
 By using the Python interface, LAMMPS can also be coupled with a GUI
 or other visualization tools that display graphs or animations in real
 time as LAMMPS runs.  Examples of such scripts are included in the
 python directory.
 
 Two advantages of using Python are how concise the language is, and
 that it can be run interactively, enabling rapid development and
 debugging of programs.  If you mostly use it to invoke costly
 operations within LAMMPS, such as running a simulation for a
 reasonable number of timesteps, then the overhead cost of invoking
 LAMMPS through Python will be negligible.
 
 Before using LAMMPS from a Python script, you have to do two things.
 You need to build LAMMPS as a dynamic shared library, so it can be
 loaded by Python, and you need to make that library and the Python
 wrapper findable by Python.  Both these steps are discussed below.  If
 you wish to run LAMMPS in parallel from Python, you also need to
 extend your Python with MPI.  This is also discussed below.
 
 The Python wrapper for LAMMPS uses the amazing and magical (to me)
 "ctypes" package in Python, which auto-generates the interface code
 needed between Python and a set of C interface routines for a library.
 Ctypes is part of standard Python for versions 2.5 and later.  You can
 check which version of Python you have installed by simply typing
 "python" at a shell prompt.
 
 :line
 :line
 
-11.1 Setting necessary environment variables :link(py_1),h4
-
-For Python to use the LAMMPS interface, it needs to find two files.
-The paths to these files need to be added to two environment variables
-that Python checks.
-
-The first is the environment variable PYTHONPATH.  It needs
-to include the directory where the python/lammps.py file is.
-
-For the csh or tcsh shells, add something like this to your ~/.cshrc
-file:
-
-setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python :pre
-
-The second is the environment variable LD_LIBRARY_PATH, which is used
-by the operating system to find dynamic shared libraries when it loads
-them.  See the discussion in "Section_start
-5"_Section_start.html#start_5 of the manual about building LAMMPS as a
-shared library, for instructions on how to set the LD_LIBRARY_PATH
-variable appropriately.
-
-If your LAMMPS build is not using any auxiliary libraries which are in
-non-default directories where the system cannot find them, you
-typically just need to add something like this to your ~/.cshrc file:
-
-setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
-
-:line
-
-11.2 Building LAMMPS as a shared library :link(py_2),h4
+11.1 Building LAMMPS as a shared library :link(py_1),h4
 
 Instructions on how to build LAMMPS as a shared library are given in
 "Section_start 5"_Section_start.html#start_5.  A shared library is one
 that is dynamically loadable, which is what Python requires.  On Linux
 this is a library file that ends in ".so", not ".a".
 
 From the src directory, type
 
 make makeshlib
 make -f Makefile.shlib foo :pre
 
 where foo is the machine target name, such as linux or g++ or serial.
 This should create the file liblmp_foo.so in the src directory, as
-well as a soft link liblmp.so which is what the Python wrapper will
+well as a soft link liblmp.so, which is what the Python wrapper will
 load by default.  Note that if you are building multiple machine
 versions of the shared library, the soft link is always set to the
 most recently built version.
 
 If this fails, see "Section_start 5"_Section_start.html#start_5 for
-more details, especially if your LAMMPS build uses auxiliary
-libraries, e.g. ones required by certain packages and found in the
-lib/package directories.
+more details, especially if your LAMMPS build uses auxiliary libraries
+like MPI or FFTW which may not be built as shared libraries on your
+system.
+
+:line
+
+11.2 Installing the Python wrapper into Python :link(py_2),h4
+
+For Python to invoke LAMMPS, it needs to be able to find two files:
+
+python/lammps.py
+src/liblmp.so :ul
+
+The lammps.py file is the Python wrapper on the LAMMPS library
+interface, and liblmp.so is the shared LAMMPS library that Python
+loads, as described above.
+
+You can ensure Python can find these files in one of two ways:
+
+set two environment variables
+run the python/install.py script :ul
+
+If you set the paths to these files as environment variables, you only
+have to do it once.  For the csh or tcsh shells, add something like
+this to your ~/.cshrc file, one line for each of the two files:
+
+setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python
+setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
+
+If you run the python/install.py script, you need to rerun it every
+time you rebuild LAMMPS (as a shared library) or make changes to the
+python/lammps.py file.
+
+You can invoke install.py from the python directory as
+
+% python install.py :pre
+
+Prefix this command with "sudo" if you do not have permission to copy
+files into the Python site-packages directory.  If you do this, make
+sure that the Python run by root is the same as the Python you run.
+E.g. you may need to do something like
+
+% sudo /usr/local/bin/python install.py :pre
+
+You can also invoke install.py from the src directory as
+
+% make install-python :pre
+
+Again, you may need to prefix this with "sudo".  In this mode you
+cannot control which Python interpreter root invokes.
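+
+Either way, a quick check that Python can now locate the wrapper (a
+hypothetical sanity check, not a required step):
+
+% python -c "import lammps; print lammps.__file__" :pre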
 
 :line
 
 11.3 Extending Python with MPI to run in parallel :link(py_3),h4
 
 If you wish to run LAMMPS in parallel from Python, you need to extend
 your Python with an interface to MPI.  This also allows you to
 make MPI calls directly from Python in your script, if you desire.
 
 There are several Python packages available that purport to wrap MPI
 as a library and allow MPI functions to be called from Python.
 
 These include
 
 "pyMPI"_http://pympi.sourceforge.net/
 "maroonmpi"_http://code.google.com/p/maroonmpi/
 "mpi4py"_http://code.google.com/p/mpi4py/
 "myMPI"_http://nbcr.sdsc.edu/forum/viewtopic.php?t=89&sid=c997fefc3933bd66204875b436940f16
 "Pypar"_http://code.google.com/p/pypar :ul
 
 All of these except pyMPI work by wrapping the MPI library and
 exposing (some portion of) its interface to your Python script.  This
 means Python cannot be used interactively in parallel, since they do
 not address the issue of interactive input to multiple instances of
 Python running on different processors.  The one exception is pyMPI,
 which alters the Python interpreter to address this issue, and (I
 believe) creates a new alternate executable (in place of "python"
 itself) as a result.
 
 In principle any of these Python/MPI packages should work to invoke
 LAMMPS in parallel, and to make MPI calls, from a Python script which
 is itself running in parallel.  However, when I downloaded and looked
 at a few of them, their documentation was incomplete and I had trouble
 with their installation.  It's not clear if some of the packages are
 still being actively developed and supported.
 
 The one I recommend, since I have successfully used it with LAMMPS, is
 Pypar.  Pypar requires that the ubiquitous "Numpy
 package"_http://numpy.scipy.org be installed in your Python.  After
 launching python, type
 
 import numpy :pre
 
 to see if it is installed.  If not, here is how to install it (version
 1.3.0b1 as of April 2009).  Unpack the numpy tarball and from its
 top-level directory, type
 
 python setup.py build
 sudo python setup.py install :pre
 
 The "sudo" is only needed if required to copy Numpy files into your
 Python distribution's site-packages directory.
 
 To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it
 and from its "source" directory, type
 
 python setup.py build
 sudo python setup.py install :pre
 
 Again, the "sudo" is only needed if required to copy Pypar files into
 your Python distribution's site-packages directory.
 
 If you have successfully installed Pypar, you should be able to run
 Python and type
 
 import pypar :pre
 
 without error.  You should also be able to run python in parallel
 on a simple test script
 
 % mpirun -np 4 python test.py :pre
 
 where test.py contains the lines
 
 import pypar
 print "Proc %d out of %d procs" % (pypar.rank(),pypar.size()) :pre
 
 and see one line of output for each processor you run on.
 
 IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you
 must ensure both are using the same version of MPI.  If you only have
 one MPI installed on your system, this is not an issue, but it can be
 if you have multiple MPIs.  Your LAMMPS build is explicit about which
 MPI it is using, since you specify the details in your low-level
 src/MAKE/Makefile.foo file.  Pypar uses the "mpicc" command to find
 information about the MPI it uses to build against.  And it tries to
 load "libmpi.so" from the LD_LIBRARY_PATH.  This may or may not find
 the MPI library that LAMMPS is using.  If you have problems running
 both Pypar and LAMMPS together, this is an issue you may need to
 address, e.g. by moving other MPI installations so that Pypar finds
 the right one.
 
 :line
 
 11.4 Testing the Python-LAMMPS interface :link(py_4),h4
 
 To test if LAMMPS is callable from Python, launch Python interactively
 and type:
 
 >>> from lammps import lammps
 >>> lmp = lammps() :pre
 
 If you get no errors, you're ready to use LAMMPS from Python.
 If the load fails, the most common error to see is
 
 OSError: Could not load LAMMPS dynamic library :pre
 
 which means Python was unable to load the LAMMPS shared library.  This
 typically occurs if the system can't find the LAMMPS shared library
 or one of the auxiliary shared libraries it depends on.
 
 Python (actually the operating system) isn't verbose about telling you
 why the load failed, so carefully go through the steps above regarding
 environment variables, and the instructions in "Section_start
 5"_Section_start.html#start_5 about building a shared library and
 about setting the LD_LIBRARY_PATH environment variable.
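+
+One way to get a more informative error message is to try loading the
+shared library directly via ctypes (a hypothetical debugging step;
+adjust the path to wherever your liblmp.so resides):
+
+>>> from ctypes import CDLL
+>>> CDLL("/home/sjplimp/lammps/src/liblmp.so") :pre
+
+If the load fails, the OSError that ctypes raises includes the
+operating system's reason, e.g. a missing dependent library.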
 
 [Test LAMMPS and Python in serial:] :h5
 
 To run a LAMMPS test in serial, type these lines into Python
 interactively from the bench directory:
 
 >>> from lammps import lammps
 >>> lmp = lammps()
 >>> lmp.file("in.lj") :pre
 
 Or put the same lines in the file test.py and run it as
 
 % python test.py :pre
 
 Either way, you should see the results of running the in.lj benchmark
 on a single processor appear on the screen, the same as if you had
 typed something like:
 
 lmp_g++ < in.lj :pre
 
 [Test LAMMPS and Python in parallel:] :h5
 
 To run LAMMPS in parallel, assuming you have installed the
 "Pypar"_http://datamining.anu.edu.au/~ole/pypar package as discussed
 above, create a test.py file containing these lines:
 
 import pypar
 from lammps import lammps
 lmp = lammps()
 lmp.file("in.lj")
 print "Proc %d out of %d procs has" % (pypar.rank(),pypar.size()),lmp
 pypar.finalize() :pre
 
 You can then run it in parallel as:
 
 % mpirun -np 4 python test.py :pre
 
 and you should see the same output as if you had typed
 
 % mpirun -np 4 lmp_g++ < in.lj :pre
 
 Note that if you leave out the 3 lines from test.py that specify Pypar
 commands you will instantiate and run LAMMPS independently on each of
 the 4 processors specified in the mpirun command.  In this case you
 should get 4 sets of output, each showing that a LAMMPS run was made
 on a single processor, instead of one set of output showing that
 LAMMPS ran on 4 processors.  If the 1-processor outputs occur, it
 means that Pypar is not working correctly.
 
 Also note that once you import the Pypar module, Pypar initializes MPI
 for you, and you can use MPI calls directly in your Python script, as
 described in the Pypar documentation.  The last line of your Python
 script should be pypar.finalize(), to ensure MPI is shut down
 correctly.
 
 [Running Python scripts:] :h5
 
 Note that any Python script (not just for LAMMPS) can be invoked in
 one of several ways:
 
 % python foo.script
 % python -i foo.script
 % foo.script :pre
 
 The last command requires that the first line of the script be
 something like this:
 
 #!/usr/local/bin/python 
 #!/usr/local/bin/python -i :pre
 
 where the path points to where you have Python installed, and that you
 have made the script file executable:
 
 % chmod +x foo.script :pre
 
 Without the "-i" flag, Python will exit when the script finishes.
 With the "-i" flag, you will be left in the Python interpreter when
 the script finishes, so you can type subsequent commands.  As
 mentioned above, you can only run Python interactively when running
 Python on a single processor, not in parallel.
 
 :line
 :line
 
 11.5 Using LAMMPS from Python :link(py_5),h4
 
 The Python interface to LAMMPS consists of a Python "lammps" module,
 the source code for which is in python/lammps.py, which creates a
 "lammps" object, with a set of methods that can be invoked on that
 object.  The sample Python code below assumes you have first imported
 the "lammps" module in your Python script, as follows:
 
 from lammps import lammps :pre
 
 These are the methods defined by the lammps module.  If you look
 at the file src/library.cpp you will see that they correspond
 one-to-one with calls you can make to the LAMMPS library from a C++ or
 C or Fortran program.
 
 lmp = lammps()           # create a LAMMPS object using the default liblmp.so library
 lmp = lammps("g++")      # create a LAMMPS object using the liblmp_g++.so library
 lmp = lammps("",list)    # ditto, with command-line args, e.g. list = \["-echo","screen"\]
 lmp = lammps("g++",list) :pre
 
 lmp.close()              # destroy a LAMMPS object :pre
 
 lmp.file(file)           # run an entire input script, file = "in.lj"
 lmp.command(cmd)         # invoke a single LAMMPS command, cmd = "run 100" :pre
 
 xlo = lmp.extract_global(name,type)  # extract a global quantity
                                      # name = "boxxlo", "nlocal", etc
 				     # type = 0 = int
 				     #        1 = double :pre
 
 coords = lmp.extract_atom(name,type)      # extract a per-atom quantity
                                           # name = "x", "type", etc
 				          # type = 0 = vector of ints
 				          #        1 = array of ints
 				          #        2 = vector of doubles
 				          #        3 = array of doubles :pre
 
 eng = lmp.extract_compute(id,style,type)  # extract value(s) from a compute
 v3 = lmp.extract_fix(id,style,type,i,j)   # extract value(s) from a fix
                                           # id = ID of compute or fix
 					  # style = 0 = global data
 					  #	    1 = per-atom data
 					  #         2 = local data
 					  # type = 0 = scalar
 					  #	   1 = vector
 					  #        2 = array
 					  # i,j = indices of value in global vector or array :pre
 
 var = lmp.extract_variable(name,group,flag)  # extract value(s) from a variable
 	                                     # name = name of variable
 					     # group = group ID (ignored for equal-style variables)
 					     # flag = 0 = equal-style variable
 					     #        1 = atom-style variable :pre
 
 natoms = lmp.get_natoms()                 # total # of atoms as int
 data = lmp.gather_atoms(name,type,count)  # return atom attribute of all atoms gathered into data, ordered by atom ID
                                           # name = "x", "charge", "type", etc
                                           # count = # of per-atom values, 1 or 3, etc
 lmp.scatter_atoms(name,type,count,data)   # scatter atom attribute of all atoms from data, ordered by atom ID
                                           # name = "x", "charge", "type", etc
                                           # count = # of per-atom values, 1 or 3, etc :pre
 
 :line
 
 IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
 lammps.py does not take an MPI communicator as an argument.  There
 should be a way to do this, so that the LAMMPS instance runs on a
 subset of processors if desired, but I don't know how to do it from
 Pypar.  So for now, it runs with MPI_COMM_WORLD, which is all the
 processors.  If someone figures out how to do this with one or more of
 the Python wrappers for MPI, like Pypar, please let us know and we
 will amend these doc pages.
 
 Note that you can create multiple LAMMPS objects in your Python
 script, and coordinate and run multiple simulations, e.g.
 
 from lammps import lammps
 lmp1 = lammps()
 lmp2 = lammps()
 lmp1.file("in.file1")
 lmp2.file("in.file2") :pre
 
 The file() and command() methods allow an input script or single
 commands to be invoked.
 
 The extract_global(), extract_atom(), extract_compute(),
 extract_fix(), and extract_variable() methods return values or
 pointers to data structures internal to LAMMPS.
 
 For extract_global() see the src/library.cpp file for the list of
 valid names.  New names could easily be added.  A double or integer is
 returned.  You need to specify the appropriate data type via the type
 argument.
 
 For extract_atom(), a pointer to internal LAMMPS atom-based data is
 returned, which you can use via normal Python subscripting.  See the
 extract() method in the src/atom.cpp file for a list of valid names.
 Again, new names could easily be added.  A pointer to a vector of
 doubles or integers, or a pointer to an array of doubles (double **)
 or integers (int **) is returned.  You need to specify the appropriate
 data type via the type argument.
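+
+For example, a minimal sketch of reading and modifying per-atom
+coordinates via extract_atom(), assuming a LAMMPS object lmp with a
+system already defined (the name "x" and type 3 follow the table
+above):
+
+x = lmp.extract_atom("x",3)     # double ** to per-atom coords
+print x[0][0],x[0][1],x[0][2]   # coords of the first local atom
+x[0][0] = 1.0                   # assigns directly into LAMMPS memory :pre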
 
 For extract_compute() and extract_fix(), the global, per-atom, or
 local data calculated by the compute or fix can be accessed.  What is
 returned depends on whether the compute or fix calculates a scalar or
 vector or array.  For a scalar, a single double value is returned.  If
 the compute or fix calculates a vector or array, a pointer to the
 internal LAMMPS data is returned, which you can use via normal Python
 subscripting.  The one exception is that for a fix that calculates a
 global vector or array, a single double value from the vector or array
 is returned, indexed by I (vector) or I and J (array).  I,J are
 zero-based indices.  The I,J arguments can be left out if not needed.
 See "Section_howto 15"_Section_howto.html#howto_15 of the manual for a
 discussion of global, per-atom, and local data, and of scalar, vector,
 and array data types.  See the doc pages for individual
 "computes"_compute.html and "fixes"_fix.html for a description of what
 they calculate and store.
 
 For extract_variable(), an "equal-style or atom-style
 variable"_variable.html is evaluated and its result returned.
 
 For equal-style variables a single double value is returned and the
 group argument is ignored.  For atom-style variables, a vector of
 doubles is returned, one value per atom, which you can use via normal
 Python subscripting. The values will be zero for atoms not in the
 specified group.
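+
+For example, a sketch of evaluating a thermo-style quantity via an
+equal-style variable (the variable name "t" and the preceding variable
+command are hypothetical, chosen for illustration):
+
+lmp.command("variable t equal temp")
+t = lmp.extract_variable("t","all",0)   # single double :pre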
 
 The get_natoms() method returns the total number of atoms in the
 simulation, as an int.
 
 The gather_atoms() method returns a ctypes vector of ints or doubles
 as specified by type, of length count*natoms, for the property of all
 the atoms in the simulation specified by name, ordered by count and
 then by atom ID.  The vector can be used via normal Python
 subscripting.  If atom IDs are not consecutively ordered within
 LAMMPS, None is returned to indicate an error.
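+
+A minimal sketch of gathering all atom coordinates (assuming a LAMMPS
+object lmp with a system already defined):
+
+natoms = lmp.get_natoms()
+x = lmp.gather_atoms("x",1,3)   # ctypes vector of 3*natoms doubles
+print x[0],x[1],x[2]            # coords of the atom with ID 1 :pre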
 
 Note that the data structure gather_atoms("x") returns is different
 from the data structure returned by extract_atom("x") in four ways.
 (1) Gather_atoms() returns a vector which you index as x\[i\];
 extract_atom() returns an array which you index as x\[i\]\[j\].  (2)
 Gather_atoms() orders the atoms by atom ID while extract_atom() does
 not.  (3) Gather_atoms() returns a list of all atoms in the
 simulation; extract_atom() returns just the atoms local to each
 processor.  (4) Finally, the gather_atoms() data structure is a copy
 of the atom coords stored internally in LAMMPS, whereas extract_atom()
 returns an array that effectively points directly to the internal
 data.  This means you can change values inside LAMMPS from Python by
 assigning new values to the extract_atom() array.  To do this with
 the gather_atoms() vector, you need to change values in the vector,
 then invoke the scatter_atoms() method.
 
 The scatter_atoms() method takes a vector of ints or doubles as
 specified by type, of length count*natoms, for the property of all the
 atoms in the simulation specified by name, ordered by count and then
 by atom ID.  It uses the vector of data to overwrite the corresponding
 properties for each atom inside LAMMPS.  This requires LAMMPS to have
 its "map" option enabled; see the "atom_modify"_atom_modify.html
 command for details.  If it is not, or if atom IDs are not
 consecutively ordered, no coordinates are reset.
 
 The array of coordinates passed to scatter_atoms() must be a ctypes
 vector of ints or doubles, allocated and initialized something like
 this:
 
 from ctypes import *
 natoms = lmp.get_natoms()
 n3 = 3*natoms
 x = (n3*c_double)()
 x[0] = x coord of atom with ID 1
 x[1] = y coord of atom with ID 1
 x[2] = z coord of atom with ID 1
 x[3] = x coord of atom with ID 2
 ...
 x[n3-1] = z coord of atom with ID natoms
 lmp.scatter_atoms("x",1,3,x) :pre
 
 Alternatively, you can just change values in the vector returned by
 gather_atoms("x",1,3), since it is a ctypes vector of doubles.
 
 :line 
 
 As noted above, these Python class methods correspond one-to-one with
 the functions in the LAMMPS library interface in src/library.cpp and
 library.h.  This means you can extend the Python wrapper via the
 following steps:
 
 Add a new interface function to src/library.cpp and
 src/library.h. :ulb,l
 
 Rebuild LAMMPS as a shared library. :l
 
 Add a wrapper method to python/lammps.py for this interface
 function (see the sketch below). :l
 
 You should now be able to invoke the new interface function from a
 Python script.  Isn't ctypes amazing? :l,ule
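+
+As a sketch of the third step, a wrapper method for a hypothetical
+library function "int lammps_get_nlocal(void *)" added to
+src/library.cpp might look like this inside the lammps class in
+python/lammps.py (the function name is made up for illustration):
+
+def get_nlocal(self):
+  return self.lib.lammps_get_nlocal(self.lmp) :pre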
 
 :line
 :line
 
 11.6 Example Python scripts that use LAMMPS :link(py_6),h4
 
 These are the Python scripts included as demos in the python/examples
 directory of the LAMMPS distribution, to illustrate the kinds of
 things that are possible when Python wraps LAMMPS.  If you create your
 own scripts, send them to us and we can include them in the LAMMPS
 distribution.
 
 trivial.py, read/run a LAMMPS input script through Python,
 demo.py, invoke various LAMMPS library interface routines,
 simple.py, mimic operation of couple/simple/simple.cpp in Python,
 gui.py, GUI go/stop/temperature-slider to control LAMMPS,
 plot.py, real-time temperature plot with GnuPlot via Pizza.py,
 viz_tool.py, real-time viz via some viz package,
 vizplotgui_tool.py, combination of viz_tool.py and plot.py and gui.py :tb(c=2)
 
 :line
 
 For the viz_tool.py and vizplotgui_tool.py scripts, replace "tool"
 with "gl" or "atomeye" or "pymol" or "vmd", depending on what
 visualization package you have installed. 
 
 Note that for GL, you need to be able to run the Pizza.py GL tool,
 which is included in the pizza sub-directory.  See the "Pizza.py doc
 pages"_pizza for more info:
 
 :link(pizza,http://www.sandia.gov/~sjplimp/pizza.html)
 
 Note that for AtomEye, you need version 3, and there is a line in the
 scripts that specifies the path and name of the executable.  See the
 AtomEye WWW pages "here"_atomeye or "here"_atomeye3 for more details:
 
 http://mt.seas.upenn.edu/Archive/Graphics/A
 http://mt.seas.upenn.edu/Archive/Graphics/A3/A3.html :pre
 
 :link(atomeye,http://mt.seas.upenn.edu/Archive/Graphics/A)
 :link(atomeye3,http://mt.seas.upenn.edu/Archive/Graphics/A3/A3.html)
 
 The latter link is to AtomEye 3, which has the scripting
 capability needed by these Python scripts.
 
 Note that for PyMol, you need to have built and installed the
 open-source version of PyMol in your Python, so that you can import it
 from a Python script.  See the PyMol WWW pages "here"_pymol or
 "here"_pymolopen for more details:
 
 http://www.pymol.org
 http://sourceforge.net/scm/?type=svn&group_id=4546 :pre
 
 :link(pymol,http://www.pymol.org)
 :link(pymolopen,http://sourceforge.net/scm/?type=svn&group_id=4546)
 
 The latter link is to the open-source version.
 
 Note that for VMD, you need a fairly current version (1.8.7 works for
 me) and there are some lines in the pizza/vmd.py script for 4 PIZZA
 variables that have to match the VMD installation on your system.
 
 :line
 
 See the python/README file for instructions on how to run them and the
 source code for individual scripts for comments about what they do.
 
 Here are screenshots of the vizplotgui_tool.py script in action for
 different visualization package options.  Click to see larger images:
 
 :image(JPG/screenshot_gl_small.jpg,JPG/screenshot_gl.jpg)
 :image(JPG/screenshot_atomeye_small.jpg,JPG/screenshot_atomeye.jpg)
 :image(JPG/screenshot_pymol_small.jpg,JPG/screenshot_pymol.jpg)
 :image(JPG/screenshot_vmd_small.jpg,JPG/screenshot_vmd.jpg)
diff --git a/doc/Section_start.html b/doc/Section_start.html
index 8fcdf944c..e0150bf57 100644
--- a/doc/Section_start.html
+++ b/doc/Section_start.html
@@ -1,1440 +1,1401 @@
 <HTML>
 <CENTER><A HREF = "Section_intro.html">Previous Section</A> - <A HREF = "http://lammps.sandia.gov">LAMMPS WWW Site</A> - <A HREF = "Manual.html">LAMMPS Documentation</A> - <A HREF = "Section_commands.html#comm">LAMMPS Commands</A> - <A HREF = "Section_commands.html">Next Section</A> 
 </CENTER>
 
 
 
 
 
 
 <HR>
 
 <H3>2. Getting Started 
 </H3>
 <P>This section describes how to build and run LAMMPS, for both new and
 experienced users.
 </P>
 2.1 <A HREF = "#start_1">What's in the LAMMPS distribution</A><BR>
 2.2 <A HREF = "#start_2">Making LAMMPS</A><BR>
 2.3 <A HREF = "#start_3">Making LAMMPS with optional packages</A><BR>
 2.4 <A HREF = "#start_4">Building LAMMPS via the Make.py script</A><BR>
 2.5 <A HREF = "#start_5">Building LAMMPS as a library</A><BR>
 2.6 <A HREF = "#start_6">Running LAMMPS</A><BR>
 2.7 <A HREF = "#start_7">Command-line options</A><BR>
 2.8 <A HREF = "#start_8">Screen output</A><BR>
 2.9 <A HREF = "#start_9">Tips for users of previous versions</A> <BR>
 
 <HR>
 
 <HR>
 
 <H4><A NAME = "start_1"></A>2.1 What's in the LAMMPS distribution 
 </H4>
 <P>When you download LAMMPS you will need to unzip and untar the
 downloaded file with the following commands, after placing the file in
 an appropriate directory.
 </P>
 <PRE>gunzip lammps*.tar.gz 
 tar xvf lammps*.tar 
 </PRE>
 <P>This will create a LAMMPS directory containing two files and several
 sub-directories:
 </P>
 <DIV ALIGN=center><TABLE  BORDER=1 >
 <TR><TD >README</TD><TD > text file</TD></TR>
 <TR><TD >LICENSE</TD><TD > the GNU General Public License (GPL)</TD></TR>
 <TR><TD >bench</TD><TD > benchmark problems</TD></TR>
 <TR><TD >doc</TD><TD > documentation</TD></TR>
 <TR><TD >examples</TD><TD > simple test problems</TD></TR>
 <TR><TD >potentials</TD><TD > embedded atom method (EAM) potential files</TD></TR>
 <TR><TD >src</TD><TD > source files</TD></TR>
 <TR><TD >tools</TD><TD > pre- and post-processing tools 
 </TD></TR></TABLE></DIV>
 
 <P>If you download one of the Windows executables from the download page,
 then you just get a single file:
 </P>
 <PRE>lmp_windows.exe 
 </PRE>
 <P>Skip to the <A HREF = "#start_6">Running LAMMPS</A> section for info on how to
 launch these executables on a Windows box.
 </P>
 <P>The Windows executables for serial or parallel only include certain
 packages and bug-fixes/upgrades listed on <A HREF = "http://lammps.sandia.gov/bug.html">this
 page</A> up to a certain date, as
 stated on the download page.  If you want something with more packages
 or that is more current, you'll have to download the source tarball
 and build it yourself from source code using Microsoft Visual Studio,
 as described in the next section.
 </P>
 <HR>
 
 <H4><A NAME = "start_2"></A>2.2 Making LAMMPS 
 </H4>
 <P>This section has the following sub-sections:
 </P>
 <UL><LI><A HREF = "#start_2_1">Read this first</A>
 <LI><A HREF = "#start_2_2">Steps to build a LAMMPS executable</A>
 <LI><A HREF = "#start_2_3">Common errors that can occur when making LAMMPS</A>
 <LI><A HREF = "#start_2_4">Additional build tips</A>
 <LI><A HREF = "#start_2_5">Building for a Mac</A>
 <LI><A HREF = "#start_2_6">Building for Windows</A> 
 </UL>
 <HR>
 
 <A NAME = "start_2_1"></A><B><I>Read this first:</I></B> 
 
 <P>Building LAMMPS can be non-trivial.  You may need to edit a makefile,
 there are compiler options to consider, additional libraries can be
 used (MPI, FFT, JPEG), LAMMPS packages may be included or excluded,
 some of these packages use auxiliary libraries which need to be
 pre-built, etc.
 </P>
 <P>Please read this section carefully.  If you are not comfortable with
 makefiles, or building codes on a Unix platform, or running an MPI job
 on your machine, please find a local expert to help you.  Many
 compiling, linking, and run problems that users have are not
 LAMMPS issues - they are peculiar to the user's system, compilers,
 libraries, etc.  Such questions are better answered by a local expert.
 </P>
 <P>If you have a build problem that you are convinced is a LAMMPS issue
 (e.g. the compiler complains about a line of LAMMPS source code), then
 please post a question to the <A HREF = "http://lammps.sandia.gov/mail.html">LAMMPS mail
 list</A>.
 </P>
 <P>If you succeed in building LAMMPS on a new kind of machine, for which
 there isn't a similar Makefile in the src/MAKE directory, send it
 to the developers and we can include it in the LAMMPS distribution.
 </P>
 <HR>
 
 <A NAME = "start_2_2"></A><B><I>Steps to build a LAMMPS executable:</I></B> 
 
 <P><B>Step 0</B>
 </P>
 <P>The src directory contains the C++ source and header files for LAMMPS.
 It also contains a top-level Makefile and a MAKE sub-directory with
 low-level Makefile.* files for many machines.  From within the src
 directory, type "make" or "gmake".  You should see a list of available
 choices.  If one of those is the machine and options you want, you can
 type a command like:
 </P>
 <PRE>make linux
 or
 gmake mac 
 </PRE>
 <P>Note that on a multi-processor or multi-core platform you can launch a
 parallel make, by using the "-j" switch with the make command, which
 will build LAMMPS more quickly.
 </P>
 <P>If you get no errors and an executable like lmp_linux or lmp_mac is
 produced, you're done; it's your lucky day.
 </P>
 <P>Note that by default only a few of LAMMPS optional packages are
 installed.  To build LAMMPS with optional packages, see <A HREF = "#start_3">this
 section</A> below.
 </P>
 <P><B>Step 1</B>
 </P>
 <P>If Step 0 did not work, you will need to create a low-level Makefile
 for your machine, like Makefile.foo.  You should make a copy of an
 existing src/MAKE/Makefile.* as a starting point.  The only portions
 of the file you need to edit are the first line, the "compiler/linker
 settings" section, and the "LAMMPS-specific settings" section.
 </P>
 <P><B>Step 2</B>
 </P>
 <P>Change the first line of src/MAKE/Makefile.foo to list the word "foo"
 after the "#", followed by a brief description of the machine and the
 build options it sets.  This is the line you will see if you just type
 "make".
 </P>
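 <P>For example, the first line of a hypothetical Makefile.foo might read:
 </P>
 <PRE># foo = my Linux box, g++, MPICH2, FFTW 
 </PRE>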
 <P><B>Step 3</B>
 </P>
 <P>The "compiler/linker settings" section lists compiler and linker
 settings for your C++ compiler, including optimization flags.  You can
 use g++, the open-source GNU compiler, which is available on all Unix
 systems.  You can also use mpicc which will typically be available if
 MPI is installed on your system, though you should check which actual
 compiler it wraps.  Vendor compilers often produce faster code.  On
 boxes with Intel CPUs, we suggest using the commercial Intel icc
 compiler, which can be downloaded from <A HREF = "http://www.intel.com/software/products/noncom">Intel's compiler site</A>.
 </P>
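 <P>As a sketch, the compiler/linker settings in a hypothetical g++-based
 Makefile.foo might begin like this:
 </P>
 <PRE>CC = g++
 CCFLAGS = -g -O3
 LINK = g++
 LINKFLAGS = -g -O 
 </PRE>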
 
 
 <P>If building a C++ code on your machine requires additional libraries,
 then you should list them as part of the LIB variable.
 </P>
 <P>The DEPFLAGS setting is what triggers the C++ compiler to create a
 dependency list for a source file.  This speeds re-compilation when
 source (*.cpp) or header (*.h) files are edited.  Some compilers do
 not support dependency file creation, or may use a different switch
 than -M.  GNU g++ works with -M.  If your compiler can't create
 dependency files, then you'll need to create a Makefile.foo patterned
 after Makefile.storm, which uses different rules that do not involve
 dependency files.  Note that when you build LAMMPS for the first time
 on a new platform, a long list of *.d files will be printed out
 rapidly.  This is not an error; it is the Makefile doing its normal
 creation of dependencies.
 </P>
 <P><B>Step 4</B>
 </P>
 <P>The "system-specific settings" section has several parts.  Note that
 if you change any -D setting in this section, you should do a full
 re-compile, after typing "make clean", which will list the available
 clean options.
 </P>
 <P>The LMP_INC variable is used to include options that turn on ifdefs
 within the LAMMPS code.  The options that are currently recognized are:
 </P>
 <UL><LI>-DLAMMPS_GZIP
 <LI>-DLAMMPS_JPEG
 <LI>-DLAMMPS_MEMALIGN
 <LI>-DLAMMPS_XDR
 <LI>-DLAMMPS_SMALLBIG
 <LI>-DLAMMPS_BIGBIG
 <LI>-DLAMMPS_SMALLSMALL
 <LI>-DLAMMPS_LONGLONG_TO_LONG
 <LI>-DPACK_ARRAY
 <LI>-DPACK_POINTER
 <LI>-DPACK_MEMCPY 
 </UL>
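 <P>These options are listed, space-separated, in the LMP_INC variable of
 your Makefile.foo.  A hypothetical example:
 </P>
 <PRE>LMP_INC = -DLAMMPS_GZIP -DLAMMPS_JPEG -DLAMMPS_MEMALIGN=64 
 </PRE>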
 <P>The read_data and dump commands will read/write gzipped files if you
 compile with -DLAMMPS_GZIP.  It requires that your Unix support the
 popen() function.
 </P>
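 <P>For example, with this setting enabled, an input script can read a
 gzipped data file directly (the file name here is hypothetical):
 </P>
 <PRE>read_data data.polymer.gz 
 </PRE>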
 <P>If you use -DLAMMPS_JPEG, the <A HREF = "dump.html">dump image</A> command will be
 able to write out JPEG image files.  If not, it will only be able to
 write out text-based PPM image files.  For JPEG files, you must also
 link LAMMPS with a JPEG library, as described below.
 </P>
 <P>Using -DLAMMPS_MEMALIGN=<bytes> enables the use of the
 posix_memalign() call instead of malloc() when large chunks of memory
 are allocated by LAMMPS.  This can help to make more efficient use of
 the vector instructions of modern CPUs, since dynamically allocated
 memory has to be aligned on larger-than-default byte boundaries
 (e.g. 16 bytes instead of 8 bytes on x86 platforms) for optimal
 performance.
 </P>
 <P>If you use -DLAMMPS_XDR, the build will include XDR compatibility
 files for doing particle dumps in XTC format.  This is only necessary
 if your platform does not have its own XDR files available.  See the
 Restrictions section of the <A HREF = "dump.html">dump</A> command for details.
 </P>
 <P>Use at most one of the -DLAMMPS_SMALLBIG, -DLAMMPS_BIGBIG, and
 -DLAMMPS_SMALLSMALL settings.  The default is -DLAMMPS_SMALLBIG.
 These settings refer to use of 4-byte (small) vs 8-byte (big) integers
 within LAMMPS, as specified in src/lmptype.h.  The only reason to use
 the BIGBIG setting is to enable simulation of huge molecular systems
 with more than 2 billion atoms or to allow moving atoms to wrap back
 through a periodic box more than 512 times.  The only reason to use
 the SMALLSMALL setting is if your machine does not support 64-bit
 integers.  See the <A HREF = "#start_2_4">Additional build tips</A> section below
 for more details.
 </P>
 <P>The -DLAMMPS_LONGLONG_TO_LONG setting may be needed if your system or
 MPI version does not recognize "long long" data types.  In this case a
 "long" data type is likely already 64-bits, and this setting converts
 "long long" quantities to "long".
 </P>
 <P>Using one of the -DPACK_ARRAY, -DPACK_POINTER, and -DPACK_MEMCPY
 options can make for faster parallel FFTs (in the PPPM solver) on some
 platforms.  The -DPACK_ARRAY setting is the default.  See the
 <A HREF = "kspace_style.html">kspace_style</A> command for info about PPPM.  See
 Step 6 below for info about building LAMMPS with an FFT library.
 </P>
 <P><B>Step 5</B>
 </P>
 <P>The 3 MPI variables are used to specify an MPI library to build LAMMPS
 with. 
 </P>
 <P>If you want LAMMPS to run in parallel, you must have an MPI library
 installed on your platform.  If you use an MPI-wrapped compiler, such
 as "mpicc" to build LAMMPS, you should be able to leave these 3
 variables blank; the MPI wrapper knows where to find the needed files.
 If not, and MPI is installed on your system in the usual place (under
 /usr/local), you also may not need to specify these 3 variables.  On
 some large parallel machines which use "modules" for their
 compile/link environments, you may simply need to include the correct
 module in your build environment.  Or the parallel machine may have a
 vendor-provided MPI which the compiler has no trouble finding.
 </P>
 <P>Failing this, with these 3 variables you can specify where the mpi.h
 file (MPI_INC) and the MPI library file (MPI_PATH) are found and the
 name of the library file (MPI_LIB).
 </P>
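 <P>As a sketch, for an MPICH2 installed under /usr/local, the 3 variables
 might be set as follows (the paths and library name are hypothetical;
 check your own installation):
 </P>
 <PRE>MPI_INC = -I/usr/local/include
 MPI_PATH = -L/usr/local/lib
 MPI_LIB = -lmpich 
 </PRE>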
 <P>If you are installing MPI yourself, we recommend Argonne's MPICH2
 or OpenMPI.  MPICH2 can be downloaded from the <A HREF = "http://www.mcs.anl.gov/research/projects/mpich2/">Argonne MPI
 site</A>.  OpenMPI can
 be downloaded from the <A HREF = "http://www.open-mpi.org">OpenMPI site</A>.
 Other MPI packages should also work. If you are running on a big
 parallel platform, your system people or the vendor should have
 already installed a version of MPI, which is likely to be faster
 than a self-installed MPICH or OpenMPI, so find out how to build
 and link with it.  If you use MPICH or OpenMPI, you will have to
 configure and build it for your platform.  The MPI configure script
 should have compiler options to enable you to use the same compiler
 you are using for the LAMMPS build, which can avoid problems that can
 arise when linking LAMMPS to the MPI library.
 </P>
 <P>If you just want to run LAMMPS on a single processor, you can use the
 dummy MPI library provided in src/STUBS, since you don't need a true
 MPI library installed on your system.  See the
 src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
 in this case.  You will also need to build the STUBS library for your
-platform before making LAMMPS itself.  To build it as a static
-library, from the src directory, type "make stubs", or from the STUBS
-dir, type "make" and it should create a libmpi_stubs.a suitable for
-linking to LAMMPS.  To build it as a shared library, from the STUBS
-dir, type "make shlib" and it should create a libmpi_stubs.so suitable
-for dynamically loading when LAMMPS runs.  If either of these builds
-fail, you will need to edit the STUBS/Makefile for your platform.
+platform before making LAMMPS itself.  To build from the src
+directory, type "make stubs", or from the STUBS dir, type "make".
+This should create a libmpi_stubs.a file suitable for linking to
+LAMMPS.  If the build fails, you will need to edit the STUBS/Makefile
+for your platform.
 </P>
 <P>The file STUBS/mpi.cpp provides a CPU timer function called
 MPI_Wtime() that calls gettimeofday().  If your system doesn't
 support gettimeofday(), you'll need to insert code to call another
 timer.  Note that the ANSI-standard function clock() rolls over after
 an hour or so, and is therefore insufficient for timing long LAMMPS
 simulations.
 </P>
 <P><B>Step 6</B>
 </P>
 <P>The 3 FFT variables allow you to specify an FFT library which LAMMPS
 uses (for performing 1d FFTs) when running the particle-particle
 particle-mesh (PPPM) option for long-range Coulombics via the
 <A HREF = "kspace_style.html">kspace_style</A> command.
 </P>
 <P>LAMMPS supports various open-source or vendor-supplied FFT libraries
 for this purpose.  If you leave these 3 variables blank, LAMMPS will
 use the open-source <A HREF = "http://kissfft.sf.net">KISS FFT library</A>, which is
 included in the LAMMPS distribution.  This library is portable to all
 platforms and for typical LAMMPS simulations is almost as fast as FFTW
 or vendor optimized libraries.  If you are not including the KSPACE
 package in your build, you can also leave the 3 variables blank.
 </P>
 <P>Otherwise, select which kinds of FFTs to use as part of the FFT_INC
 setting by a switch of the form -DFFT_XXX.  Recommended values for XXX
 are: MKL, SCSL, FFTW2, and FFTW3.  Legacy options are: INTEL, SGI,
 ACML, and T3E.  For backward compatibility, using -DFFT_FFTW will use
 the FFTW2 library.  Using -DFFT_NONE will use the KISS library
 described above.
 </P>
 <P>You may also need to set the FFT_INC, FFT_PATH, and FFT_LIB variables,
 so the compiler and linker can find the needed FFT header and library
 files.  Note that on some large parallel machines which use "modules"
 for their compile/link environments, you may simply need to include
 the correct module in your build environment.  Or the parallel machine
 may have a vendor-provided FFT library which the compiler has no
 trouble finding.
 </P>
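 <P>As a sketch, a hypothetical FFTW3 installation under /usr/local might
 use these settings:
 </P>
 <PRE>FFT_INC = -DFFT_FFTW3 -I/usr/local/include
 FFT_PATH = -L/usr/local/lib
 FFT_LIB = -lfftw3 
 </PRE>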
 <P>FFTW is a fast, portable library that should also work on any
 platform.  You can download it from
 <A HREF = "http://www.fftw.org">www.fftw.org</A>.  Both the legacy version 2.1.X and
 the newer 3.X versions are supported as -DFFT_FFTW2 or -DFFT_FFTW3.
 Building FFTW for your box should be as simple as ./configure; make.
 Note that on some platforms FFTW2 has been pre-installed, and uses
 renamed files indicating the precision it was compiled with,
 e.g. sfftw.h, or dfftw.h instead of fftw.h.  In this case, you can
 specify an additional define variable for FFT_INC called -DFFTW_SIZE,
 which will select the correct include file.  In this case, for FFT_LIB
 you must also manually specify the correct library, namely -lsfftw or
 -ldfftw.
 </P>
 <P>The FFT_INC variable also allows for a -DFFT_SINGLE setting that will
 use single-precision FFTs with PPPM, which can speed up long-range
 calculations, particularly in parallel or on GPUs.  Fourier transform
 and related PPPM operations are somewhat insensitive to floating point
 truncation errors and thus do not always need to be performed in
 double precision.  Using the -DFFT_SINGLE setting trades off a little
 accuracy for reduced memory use and parallel communication costs for
 transposing 3d FFT data.  Note that single precision FFTs have only
 been tested with the FFTW3, FFTW2, MKL, and KISS FFT options.
 </P>
 <P><B>Step 7</B>
 </P>
 <P>The 3 JPG variables allow you to specify a JPEG library which LAMMPS
 uses when writing out JPEG files via the <A HREF = "dump_image.html">dump image</A>
 command.  These can be left blank if you do not use the -DLAMMPS_JPEG
 switch discussed above in Step 4, since in that case JPEG output will
 be disabled.
 </P>
 <P>A standard JPEG library usually goes by the name libjpeg.a and has an
 associated header file jpeglib.h.  Whichever JPEG library you have on
 your platform, you'll need to set the appropriate JPG_INC, JPG_PATH,
 and JPG_LIB variables, so that the compiler and linker can find it.
 </P>
 <P>As before, if these header and library files are in the usual place on
 your machine, you may not need to set these variables.
 </P>
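 <P>As a sketch, for a libjpeg installed under /usr/local, the settings
 might be (paths are hypothetical):
 </P>
 <PRE>JPG_INC = -I/usr/local/include
 JPG_PATH = -L/usr/local/lib
 JPG_LIB = -ljpeg 
 </PRE>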
 <P><B>Step 8</B>
 </P>
 <P>Note that by default only a few of LAMMPS optional packages are
 installed.  To build LAMMPS with optional packages, see <A HREF = "#start_3">this
 section</A> below, before proceeding to Step 9.
 </P>
 <P><B>Step 9</B>
 </P>
 <P>That's it.  Once you have a correct Makefile.foo, you have installed
 the optional LAMMPS packages you want to include in your build, and
 you have pre-built any other needed libraries (e.g. MPI, FFT, package
 libraries), all you need to do from the src directory is type
 something like this:
 </P>
 <PRE>make foo
 or
 gmake foo 
 </PRE>
 <P>You should get the executable lmp_foo when the build is complete.
 </P>
 <HR>
 
 <A NAME = "start_2_3"></A><B><I>Errors that can occur when making LAMMPS:</I></B> 
 
 <P>IMPORTANT NOTE: If an error occurs when building LAMMPS, the compiler
 or linker will state very explicitly what the problem is.  The error
 message should give you a hint as to which of the steps above has
 failed, and what you need to do in order to fix it.  Building a code
 with a Makefile is a very logical process.  The compiler and linker
 need to find the appropriate files and those files need to be
 compatible with LAMMPS source files.  When a make fails, there is
 usually a very simple reason, which you or a local expert will need to
 fix.
 </P>
 <P>Here are two non-obvious errors that can occur:
 </P>
 <P>(1) If the make command breaks immediately with errors that indicate
 it can't find files with a "*" in their names, this can be because
 your machine's native make doesn't support wildcard expansion in a
 makefile.  Try gmake instead of make.  If that doesn't work, try using
 a -f switch with your make command to use a pre-generated
 Makefile.list which explicitly lists all the needed files, e.g.
 </P>
 <PRE>make makelist
 make -f Makefile.list linux
 gmake -f Makefile.list mac 
 </PRE>
 <P>The first "make" command will create a current Makefile.list with all
 the file names in your src dir.  The 2nd "make" command (make or
 gmake) will use it to build LAMMPS.  Note that you should
 include/exclude any desired optional packages before using the "make
 makelist" command.
 </P>
 <P>(2) If you get an error that says something like 'identifier "atoll"
 is undefined', then your machine does not support "long long"
 integers.  Try using the -DLAMMPS_LONGLONG_TO_LONG setting described
 above in Step 4.
 </P>
 <HR>
 
 <A NAME = "start_2_4"></A><B><I>Additional build tips:</I></B> 
 
 <P>(1) Building LAMMPS for multiple platforms.
 </P>
 <P>You can make LAMMPS for multiple platforms from the same src
 directory.  Each target creates its own object sub-directory called
 Obj_target where it stores the system-specific *.o files.
 </P>
 <P>(2) Cleaning up.
 </P>
 <P>Typing "make clean-all" or "make clean-foo" will delete *.o object
 files created when LAMMPS is built, for either all builds or for a
 particular machine.
 </P>
 <P>(3) Changing the LAMMPS size limits via -DLAMMPS_SMALLBIG or
 -DLAMMPS_BIGBIG or -DLAMMPS_SMALLSMALL
 </P>
 <P>As explained above, any of these 3 settings can be specified on the
 LMP_INC line in your low-level src/MAKE/Makefile.foo.
 </P>
 <P>The default is -DLAMMPS_SMALLBIG which allows for systems with up to
 2^63 atoms and timesteps (about 9 billion billion).  The atom limit is
 for atomic systems that do not require atom IDs.  For molecular
 models, which require atom IDs, the limit is 2^31 atoms (about 2
 billion).  With this setting, image flags are stored in 32-bit
 integers, which means for 3 dimensions that atoms can only wrap around
 a periodic box at most 512 times.  If atoms move through the periodic
 box more than this limit, the image flags will "roll over", e.g. from
 511 to -512, which can cause diagnostics like the mean-squared
 displacement, as calculated by the <A HREF = "compute_msd.html">compute msd</A>
 command, to be faulty.
 </P>
 <P>To allow for larger molecular systems or larger image flags, compile
 with -DLAMMPS_BIGBIG.  This enables molecular systems with up to 2^63
 atoms (about 9 billion billion).  And image flags will not "roll over"
 until they reach 2^20 = 1048576.
 </P>
 <P>IMPORTANT NOTE: As of 6/2012, the BIGBIG setting does not yet enable
 molecular systems to grow as large as 2^63.  Only the image flag roll
 over is currently affected by this compile option.
 </P>
 <P>If your system does not support 8-byte integers, you will need to
 compile with the -DLAMMPS_SMALLSMALL setting.  This will restrict your
 total number of atoms (for atomic or molecular models) and timesteps
 to 2^31 (about 2 billion).  Image flags will roll over at 2^9 = 512.
 </P>
 <P>Note that in src/lmptype.h there are also settings for the MPI data
 types associated with the integers that store atom IDs and total
 system sizes.  These need to be consistent with the associated C data
 types, or else LAMMPS will generate a run-time error.
 </P>
 <P>In all cases, the size of problem that can be run on a per-processor
 basis is limited by 4-byte integer storage to 2^31 atoms per processor
 (about 2 billion).  This should not normally be a restriction since
 such a problem would have a huge per-processor memory footprint due to
 neighbor lists and would run very slowly in terms of CPU
 secs/timestep.
 </P>
 <HR>
 
 <A NAME = "start_2_5"></A><B><I>Building for a Mac:</I></B> 
 
 <P>OS X is BSD Unix, so it should just work.  See the
 src/MAKE/Makefile.mac file.
 </P>
 <HR>
 
 <A NAME = "start_2_6"></A><B><I>Building for Windows:</I></B> 
 
 <P>The LAMMPS download page has an option to download both a serial and
 parallel pre-built Windows executable.  See the <A HREF = "#start_6">Running
 LAMMPS</A> section for instructions for running these
 executables on a Windows box.
 </P>
 <P>The pre-built executables are built with a subset of the available
 packages; see the download page for the list.  If you want
 a Windows version with specific packages included and excluded,
 you can build it yourself.
 </P>
 <P>One way to do this is to install and use Cygwin to build LAMMPS with a
 standard Linux make, just as you would on any Linux box; see
 src/MAKE/Makefile.cygwin.
 </P>
 <P>The other way to do this is using Visual Studio and project files.
 See the src/WINDOWS directory and its README.txt file for instructions
 on both a basic build and a customized build with packages you select.
 </P>
 <HR>
 
 <H4><A NAME = "start_3"></A>2.3 Making LAMMPS with optional packages 
 </H4>
 <P>This section has the following sub-sections:
 </P>
 <UL><LI><A HREF = "#start_3_1">Package basics</A>
 <LI><A HREF = "#start_3_2">Including/excluding packages</A>
 <LI><A HREF = "#start_3_3">Packages that require extra libraries</A>
 <LI><A HREF = "#start_3_4">Additional Makefile settings for extra libraries</A> 
 </UL>
 <HR>
 
 <A NAME = "start_3_1"></A><B><I>Package basics:</I></B> 
 
 <P>The source code for LAMMPS is structured as a set of core files which
 are always included, plus optional packages.  Packages are groups of
 files that enable a specific set of features.  For example, force
 fields for molecular systems or granular systems are in packages.  You
 can see the list of all packages by typing "make package" from within
 the src directory of the LAMMPS distribution.
 </P>
 <P>If you use a command in a LAMMPS input script that is specific to a
 particular package, you must have built LAMMPS with that package, else
 you will get an error that the style is invalid or the command is
 unknown.  Every command's doc page specifies if it is part of a
 package.  You can also type
 </P>
 <PRE>lmp_machine -h 
 </PRE>
 <P>to run your executable with the optional <A HREF = "#start_7">-h command-line
 switch</A> for "help", which will list the styles and commands
 known to your executable.
 </P>
 <P>There are two kinds of packages in LAMMPS, standard and user packages.
 More information about the contents of standard and user packages is
 given in <A HREF = "Section_packages.html">Section_packages</A> of the manual.  The
 difference between standard and user packages is as follows:
 </P>
 <P>Standard packages are supported by the LAMMPS developers and are
 written in a syntax and style consistent with the rest of LAMMPS.
 This means we will answer questions about them, debug and fix them if
 necessary, and keep them compatible with future changes to LAMMPS.
 </P>
 <P>User packages have been contributed by users, and always begin with
 the user prefix.  If they are a single command (single file), they are
 typically in the user-misc package.  Otherwise, they are a set of
 files grouped together which add a specific functionality to the code.
 </P>
 <P>User packages don't necessarily meet the requirements of the standard
 packages.  If you have problems using a feature provided in a user
 package, you will likely need to contact the contributor directly to
 get help.  Information on how to submit additions you make to LAMMPS
 as a user-contributed package is given in <A HREF = "Section_modify.html#mod_14">this
 section</A> of the documentation.
 </P>
 <HR>
 
 <A NAME = "start_3_2"></A><B><I>Including/excluding packages:</I></B> 
 
 <P>To use or not use a package you must include or exclude it before
 building LAMMPS.  From the src directory, this is typically as simple
 as:
 </P>
 <PRE>make yes-colloid
 make g++ 
 </PRE>
 <P>or
 </P>
 <PRE>make no-manybody
 make g++ 
 </PRE>
 <P>Some packages have individual files that depend on other packages
 being included.  LAMMPS checks for this and does the right thing.
 I.e. individual files are only included if their dependencies are
 already included.  Likewise, if a package is excluded, other files
 dependent on that package are also excluded.
 </P>
 <P>The reason to exclude packages is if you will never run certain kinds
 of simulations.  For some packages, this will keep you from having to
 build auxiliary libraries (see below), and will also produce a smaller
 executable which may run a bit faster.
 </P>
 <P>When you download a LAMMPS tarball, these packages are pre-installed
 in the src directory: KSPACE, MANYBODY, MOLECULE.  When you download
 LAMMPS source files from the SVN or Git repositories, no packages are
 pre-installed.
 </P>
 <P>Packages are included or excluded by typing "make yes-name" or "make
 no-name", where "name" is the name of the package in lower-case, e.g.
 name = kspace for the KSPACE package or name = user-atc for the
 USER-ATC package.  You can also type "make yes-standard", "make
 no-standard", "make yes-user", "make no-user", "make yes-all" or "make
 no-all" to include/exclude various sets of packages.  Type "make
 package" to see the all of the package-related make options.
 </P>
 <P>IMPORTANT NOTE: Inclusion/exclusion of a package works by simply
 moving files back and forth between the main src directory and
 sub-directories with the package name (e.g. src/KSPACE, src/USER-ATC),
 so that the files are seen or not seen when LAMMPS is built.  After
 you have included or excluded a package, you must re-build LAMMPS.
 </P>
 <P>Additional package-related make options exist to help manage LAMMPS
 files that exist in both the src directory and in package
 sub-directories.  You do not normally need to use these commands
 unless you are editing LAMMPS files or have downloaded a patch from
 the LAMMPS WWW site.
 </P>
 <P>Typing "make package-update" will overwrite src files with files from
 the package sub-directories if the package has been included.  It
 should be used after a patch is installed, since patches only update
 the files in the package sub-directory, but not the src files.  Typing
 "make package-overwrite" will overwrite files in the package
 sub-directories with src files.
 </P>
 <P>Typing "make package-status" will show which packages are currently
 included. Of those that are included, it will list files that are
 different in the src directory and package sub-directory.  Typing
 "make package-diff" lists all differences between these files.  Again,
 type "make package" to see all of the package-related make options.
 </P>
 <HR>
 
 <A NAME = "start_3_3"></A><B><I>Packages that require extra libraries:</I></B> 
 
 <P>A few of the standard and user packages require additional auxiliary
 libraries to be compiled first.  If you get a LAMMPS build error about
 a missing library, this is likely the reason.  The source code or
 hooks to these libraries is included in the LAMMPS distribution under
 the "lib" directory.  Look at the lib/README file for a list of these
 or see <A HREF = "Section_packages.html">Section_packages</A> of the doc pages.
 </P>
 <P>Each lib directory has a README file (e.g. lib/reax/README) with
 instructions on how to build that library.  Typically this is done 
 in this manner:
 </P>
 <PRE>make -f Makefile.g++ 
 </PRE>
 <P>in the appropriate directory, e.g. in lib/reax.  However, some of the
 libraries do not build this way.  Again, see the library README file
 for details.
 </P>
 <P>If you are building the library, you will need to use a Makefile that
 is a match for your system.  If one of the provided Makefiles is not
 appropriate for your system you will need to edit or add one.  For
 example, in the case of Fortran-based libraries, your system must have
 a Fortran compiler, the settings for which will need to be listed in
 the Makefile.
 </P>
 <P>When you have built one of these libraries, there are 2 things to
 check:
 </P>
 <P>(1) The file libname.a should now exist in lib/name.
 E.g. lib/reax/libreax.a.  This is the library file LAMMPS will link
 against.  One exception is the lib/cuda library which produces the
 file liblammpscuda.a, because there is already a system library
 libcuda.a.
 </P>
 <P>(2) The file Makefile.lammps should exist in lib/name.  E.g.
 lib/cuda/Makefile.lammps.  This file may be auto-generated by the
 build of the library, or you may need to make a copy of the
 appropriate provided file (e.g. lib/meam/Makefile.lammps.gfortran).
 Either way you should insure that the settings in this file are
 appropriate for your system.
 </P>
 <P>There are typically 3 settings in the Makefile.lammps file (unless
 some are blank or not needed): a SYSINC, SYSPATH, and SYSLIB setting,
 specific to this package.  These are settings the LAMMPS build will
 import when compiling the LAMMPS package files (not the library
 files), and linking to the auxiliary library.  They typically list any
 other system libraries needed to support the package and where to find
 them.  An example is the BLAS and LAPACK libraries needed by the
 USER-ATC package.  Or the system libraries that support calling
 Fortran from C++, as the MEAM and REAX packages do.
 </P>
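 <P>As a sketch, a hypothetical Makefile.lammps for a Fortran-based
 package library might contain little more than the system library
 needed to call Fortran from C++:
 </P>
 <PRE>meam_SYSINC =
 meam_SYSPATH =
 meam_SYSLIB = -lgfortran 
 </PRE>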
 <P>One exception to these rules is the lib/linalg directory, which is
 simply BLAS and LAPACK files used by the USER-ATC package (and
 possibly other packages in the future).  If you do not have these
 libraries on your system, you can use one of the Makefiles in this
 directory (which you may need to modify) to build a dummy BLAS and
 LAPACK library.  It can then be included in the
 lib/atc/Makefile.lammps file as part of the SYSPATH and SYSLIB lines
 so that LAMMPS will build properly with the USER-ATC package.
 </P>
 <P>Note that if the Makefile.lammps settings are not correct for your
 box, the LAMMPS build will likely fail.
 </P>
 <P>There are also a few packages, like KIM and USER-MOLFILE, that use
 additional auxiliary libraries which are not provided with LAMMPS.  In
 these cases, there is no corresponding sub-directory under the lib
 directory.  You are expected to download and install these libraries
 yourself before building LAMMPS with the package installed, if they
 are not already on your system.
 </P>
 <P>However there is still a Makefile.lammps file with settings used when
 building LAMMPS with the package installed, as in (2) above.  It is
 found in the package directory itself, e.g. src/KIM/Makefile.lammps.
 This file contains the same 3 settings described above for SYSINC,
 SYSPATH, and SYSLIB.  The Makefile.lammps file contains instructions
 on how to specify these settings for your system.  You need to specify
 the settings before building LAMMPS with one of those packages
 installed, else the LAMMPS build will likely fail.
 </P>
 <HR>
 
 <H4><A NAME = "start_4"></A>2.4 Building LAMMPS via the Make.py script 
 </H4>
 <P>The src directory includes a Make.py script, written
 in Python, which can be used to automate various steps
 of the build process.
 </P>
 <P>You can run the script from the src directory by typing either:
 </P>
 <PRE>Make.py
 python Make.py 
 </PRE>
 <P>which will give you info about the tool.  For the former to work, you
 may need to edit the 1st line of the script to point to your local
 Python.  And you may need to insure the script is executable:
 </P>
 <PRE>chmod +x Make.py 
 </PRE>
 <P>The following options are supported as switches:
 </P>
 <UL><LI>-i file1 file2 ...
 <LI>-p package1 package2 ...
 <LI>-u package1 package2 ...
 <LI>-e package1 arg1 arg2 package2 ...
 <LI>-o dir
 <LI>-b machine
 <LI>-s suffix1 suffix2 ...
 <LI>-l dir
 <LI>-j N
 <LI>-h switch1 switch2 ... 
 </UL>
 <P>Help on any switch can be listed by using -h, e.g.
 </P>
 <PRE>Make.py -h -i -p 
 </PRE>
 <P>At a high level, these are the kinds of package management
 and build tasks that can be performed easily, using
 the Make.py tool:
 </P>
 <UL><LI>install/uninstall packages and build the associated external libs (use -p and -u and -e)
 <LI>install packages needed for one or more input scripts (use -i and -p)
 <LI>build LAMMPS, either in the src dir or new dir (use -b)
 <LI>create a new dir with only the source code needed for one or more input scripts (use -i and -o) 
 </UL>
 <P>The last bullet can be useful when you wish to build a stripped-down
 version of LAMMPS to run a specific script or scripts.  Or when you wish to
 move the minimal amount of files to another platform for a remote
 LAMMPS build.
 </P>
 <P>Note that using Make.py is not a substitute for insuring you have a
 valid src/MAKE/Makefile.foo for your system, or that external library
 Makefiles in any lib/* directories you use are also valid for your
 system.  But once you have done that, you can use Make.py to quickly
 include/exclude the packages and external libraries needed by your
 input scripts.
 </P>
 <HR>
 
 <H4><A NAME = "start_5"></A>2.5 Building LAMMPS as a library 
 </H4>
 <P>LAMMPS can be built as either a static or shared library, which can
 then be called from another application or a scripting language.  See
 <A HREF = "Section_howto.html#howto_10">this section</A> for more info on coupling
 LAMMPS to other codes.  See <A HREF = "Section_python.html">this section</A> for
 more info on wrapping and running LAMMPS from Python.
 </P>
 <H5><B>Static library:</B> 
 </H5>
 <P>To build LAMMPS as a static library (*.a file on Linux), type
 </P>
 <PRE>make makelib
 make -f Makefile.lib foo 
 </PRE>
 <P>where foo is the machine name.  This kind of library is typically used
 to statically link a driver application to LAMMPS, so that you can
 insure all dependencies are satisfied at compile time.  Note that
 inclusion or exclusion of any desired optional packages should be done
 before typing "make makelib".  The first "make" command will create a
 current Makefile.lib with all the file names in your src dir.  The
 second "make" command will use it to build LAMMPS as a static library,
 using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo.  The
 build will create the file liblmp_foo.a which another application can
 link to.
 </P>
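 <P>A driver application can then link against this file in the usual
 way.  A hypothetical link line, which must also list whatever MPI,
 FFT, or other auxiliary libraries your LAMMPS build used:
 </P>
 <PRE>g++ caller.o -L/home/sjplimp/lammps/src -llmp_foo -lfftw -lmpich -o caller 
 </PRE>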
 <H5><B>Shared library:</B> 
 </H5>
 <P>To build LAMMPS as a shared library (*.so file on Linux), which can be
 dynamically loaded, e.g. from Python, type
 </P>
 <PRE>make makeshlib
 make -f Makefile.shlib foo 
 </PRE>
 <P>where foo is the machine name.  This kind of library is required when
 wrapping LAMMPS with Python; see <A HREF = "Section_python.html">Section_python</A>
 for details.  Again, note that inclusion or exclusion of any desired
 optional packages should be done before typing "make makeshlib".  The
 first "make" command will create a current Makefile.shlib with all the
 file names in your src dir.  The second "make" command will use it to
 build LAMMPS as a shared library, using the SHFLAGS and SHLIBFLAGS
 settings in src/MAKE/Makefile.foo.  The build will create the file
-liblmp_foo.so which another application can link to dyamically, as
-well as a soft link liblmp.so, which the Python wrapper uses by
-default.
+liblmp_foo.so which another application can link to dynamically.  It
+will also create a soft link liblmp.so, which the Python wrapper uses
+by default.
 </P>
-<H5><B>Additional requirements for building a shared library:</B> 
-</H5>
 <P>Note that for a shared library to be usable by a calling program, all
 the auxiliary libraries it depends on must also exist as shared
-libraries, and the operating system must be able to find them.  For
-LAMMPS, this includes all libraries needed by main LAMMPS (e.g. MPI or
-FFTW or JPEG), system libraries needed by main LAMMPS (e.g. extra libs
-needed by MPI), any packages you have installed that require libraries
-provided with LAMMPS (e.g. the USER-ATC package require
-lib/atc/libatc.so), and any system libraries (e.g. BLAS or
-Fortran-to-C libraries) listed in the lib/package/Makefile.lammps
-file.
-</P>
-<P>If one of these auxiliary libraries does not exist as a shared
-library, the second make command should generate a build error.  If a
-needed library is simply missing from the link list, this will not
-generate an error at build time, but will generate a run-time error
-when the library is loaded, so be sure all needed libraries are
-listed, just as they are when building LAMMPS as a stand-alone code.
-</P>
-<P>Note that if you install them yourself, some libraries, such as MPI,
-may not build by default as shared libraries.  The build instructions
-for the library should tell you how to do this.
+libraries.  This will be the case for libraries included with LAMMPS,
+such as the dummy MPI library in src/STUBS or any package libraries in
+lib/packages, since they are always built as shared libraries with the
+-fPIC switch.  However, if a library like MPI or FFTW does not exist
+as a shared library, the second make command will generate an error.
+This means you will need to install a shared library version of the
+package.  The build instructions for the library should tell you how
+to do this.
 </P>
 <P>As an example, here is how to build and install the <A HREF = "http://www-unix.mcs.anl.gov/mpi">MPICH
 library</A>, a popular open-source version of MPI, distributed by
 Argonne National Labs, as a shared library in the default
 /usr/local/lib location:
 </P>
 
 
 <PRE>./configure --enable-shared
 make
 make install 
 </PRE>
 <P>You may need to use "sudo make install" in place of the last line if
 you do not have write privileges for /usr/local/lib.  The end result
 should be the file /usr/local/lib/libmpich.so.
 </P>
-<P>Also note that not all of the auxiliary libraries provided with LAMMPS
-include Makefiles in their lib directories suitable for building them
-as shared libraries.  Typically this simply requires 3 steps: (a)
-adding a -fPIC switch when files are compiled, (b) adding "-fPIC
--shared" switches when the library is linked with a C++ (or Fortran)
-compiler, and (c) creating an output target that ends in ".so", like
-libatc.o.  As we or others create and contribute these Makefiles, we
-will add them to the LAMMPS distribution.
-</P>
-<H5><B>Additional requirements for using a shared library:</B> 
+<H5><B>Additional requirement for using a shared library:</B> 
 </H5>
 <P>The operating system finds shared libraries to load at run-time using
-the environment variable LD_LIBRARY_PATH.  So at a minimum you
-must set it to include the lammps src directory where the LAMMPS
-shared library file is created.
+the environment variable LD_LIBRARY_PATH.  So you may wish
+to copy the file src/liblmp.so or src/liblmp_g++.so (for example)
+to a place the system can find it by default, such as /usr/local/lib,
+or you may wish to add the lammps src directory to LD_LIBRARY_PATH.
 </P>
 <P>For the csh or tcsh shells, you could add something like this to your
 ~/.cshrc file:
 </P>
 <PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src 
 </PRE>
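 <P>For the bash shell, the equivalent line in your ~/.bashrc file would
 be:
 </P>
 <PRE>export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/sjplimp/lammps/src 
 </PRE>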
-<P>If any auxiliary libraries, used by LAMMPS, are not in default places
-where the operating system can find them, then you also have to add
-their paths to the LD_LIBRARY_PATH environment variable.
-</P>
-<P>For example, if you are using the dummy MPI library provided in
-src/STUBS, and have built the file libmpi_stubs.so, you would add
-something like this to your ~/.cshrc file:
-</P>
-<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src/STUBS 
-</PRE>
-<P>If you are using the LAMMPS USER-ATC package, and have built the file
-lib/atc/libatc.so, you would add something like this to your ~/.cshrc
-file:
-</P>
-<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/lib/atc 
-</PRE>
 <H5><B>Calling the LAMMPS library:</B> 
 </H5>
 <P>Either flavor of library (static or shared) allows one or more LAMMPS
 objects to be instantiated from the calling program.
 </P>
 <P>When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
 namespace; you can safely use any of its classes and methods from
 within the calling code, as needed.
 </P>
 <P>When used from a C or Fortran program or a scripting language like
 Python, the library has a simple function-style interface, provided in
 src/library.cpp and src/library.h.
 </P>
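 <P>As a minimal sketch of the C-style interface, the following driver
 creates a LAMMPS instance, runs an input script, issues one more
 command, and shuts down.  The functions used are declared in
 src/library.h; the input script name is hypothetical.  Compile and
 link it against the LAMMPS library and MPI as described above.
 </P>
 <PRE>#include "mpi.h"
 #include "library.h"    /* LAMMPS C-style library interface */
 
 int main(int argc, char **argv)
 {
   void *lmp;            /* opaque pointer to a LAMMPS instance */
   MPI_Init(&argc, &argv);
   lammps_open(0, NULL, MPI_COMM_WORLD, &lmp);  /* instantiate LAMMPS */
   lammps_file(lmp, "in.lj");                   /* read an input script */
   lammps_command(lmp, "run 100");              /* issue one more command */
   lammps_close(lmp);                           /* destroy the instance */
   MPI_Finalize();
   return 0;
 } 
 </PRE>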
 <P>See the sample codes in examples/COUPLE/simple for examples of C++ and
 C and Fortran codes that invoke LAMMPS thru its library interface.
 There are other examples as well in the COUPLE directory which are
 discussed in <A HREF = "Section_howto.html#howto_10">Section_howto 10</A> of the
 manual.  See <A HREF = "Section_python.html">Section_python</A> of the manual for a
 description of the Python wrapper provided with LAMMPS that operates
 through the LAMMPS library interface.
 </P>
 <P>The files src/library.cpp and library.h define the C-style API for
 using LAMMPS as a library.  See <A HREF = "Section_howto.html#howto_19">Section_howto
 19</A> of the manual for a description of the
 interface and how to extend it for your needs.
 </P>
 <HR>
 
 <H4><A NAME = "start_6"></A>2.6 Running LAMMPS 
 </H4>
 <P>By default, LAMMPS runs by reading commands from stdin; e.g. lmp_linux
 < in.file.  This means you first create an input script (e.g. in.file)
 containing the desired commands.  <A HREF = "Section_commands.html">This section</A>
 describes how input scripts are structured and what commands they
 contain.
 </P>
 <P>You can test LAMMPS on any of the sample inputs provided in the
 examples or bench directory.  Input scripts are named in.* and sample
 outputs are named log.*.name.P where name is a machine and P is the
 number of processors it was run on.
 </P>
 <P>Here is how you might run a standard Lennard-Jones benchmark on a
 Linux box, using mpirun to launch a parallel job:
 </P>
 <PRE>cd src
 make linux
 cp lmp_linux ../bench
 cd ../bench
 mpirun -np 4 lmp_linux < in.lj 
 </PRE>
 <P>See <A HREF = "http://lammps.sandia.gov/bench.html">this page</A> for timings for this and the other benchmarks
 on various platforms.
 </P>
 
 
 <HR>
 
 <P>On a Windows box, you can skip making LAMMPS and simply download an
 executable, as described above, though the pre-packaged executables
 include only certain packages.
 </P>
 <P>To run a LAMMPS executable on a Windows machine, first decide whether
 you want to download the non-MPI (serial) or the MPI (parallel)
 version of the executable. Download and save the version you have
 chosen.
 </P>
 <P>For the non-MPI version, follow these steps:
 </P>
 <UL><LI>Get a command prompt by going to Start->Run... , 
 then typing "cmd". 
 
 <LI>Move to the directory where you have saved lmp_win_no-mpi.exe
 (e.g. by typing: cd "Documents"). 
 
 <LI>At the command prompt, type "lmp_win_no-mpi -in in.lj", replacing in.lj
 with the name of your LAMMPS input script. 
 </UL>
 <P>For the MPI version, which allows you to run LAMMPS under Windows on 
 multiple processors, follow these steps:
 </P>
 <UL><LI>Download and install 
 <A HREF = "http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads">MPICH2</A>
 for Windows. 
 
 <LI>You'll need to use the mpiexec.exe and smpd.exe files from the MPICH2 package. Put them in the 
 same directory (or path) as the LAMMPS Windows executable. 
 
 <LI>Get a command prompt by going to Start->Run... , 
 then typing "cmd". 
 
 <LI>Move to the directory where you have saved lmp_win_mpi.exe
 (e.g. by typing: cd "Documents"). 
 
 <LI>Then type something like this: "mpiexec -np 4 -localonly lmp_win_mpi -in in.lj", 
 replacing in.lj with the name of your LAMMPS input script. 
 
 <LI>Note that you may need to provide smpd with a passphrase --- it doesn't matter what you 
 type. 
 
 <LI>In this mode, output may not immediately show up on the screen, so 
 if your input script takes a long time to execute, you may need to be 
 patient before the output shows up. 
 
 <LI>Alternatively, you can still use this executable to run on a single processor by
 typing something like: "lmp_win_mpi -in in.lj". 
 </UL>
 <HR>
 
 <P>The screen output from LAMMPS is described in the next section.  As it
 runs, LAMMPS also writes a log.lammps file with the same information.
 </P>
 <P>Note that the sequence of commands in the earlier Linux example copies
 the LAMMPS executable (lmp_linux) to the directory with the input
 files.  This may not be necessary (e.g. if you launch lmp_linux on its
 own and not under mpirun), but some versions of MPI reset the working
 directory to where the executable is, rather than leave it as the
 directory where you launch mpirun from.  If that happens, LAMMPS will
 look for additional input files and write its output files to the
 executable directory, rather than your working directory, which is
 probably not what you want.
 </P>
 <P>If LAMMPS encounters errors in the input script or while running a
 simulation it will print an ERROR message and stop or a WARNING
 message and continue.  See <A HREF = "Section_errors.html">Section_errors</A> for a
 discussion of the various kinds of errors LAMMPS can or can't detect,
 a list of all ERROR and WARNING messages, and what to do about them.
 </P>
 <P>LAMMPS can run a problem on any number of processors, including a
 single processor.  In theory you should get identical answers on any
 number of processors and on any machine.  In practice, numerical
 round-off can cause slight differences and eventual divergence of
 molecular dynamics phase space trajectories.
 </P>
 <P>LAMMPS can run as large a problem as will fit in the physical memory
 of one or more processors.  If you run out of memory, you must run on
 more processors or set up a smaller problem.
 </P>
 <HR>
 
 <H4><A NAME = "start_7"></A>2.7 Command-line options 
 </H4>
 <P>At run time, LAMMPS recognizes several optional command-line switches
 which may be used in any order.  Either the full word or a one-or-two
 letter abbreviation can be used:
 </P>
 <UL><LI>-c or -cuda
 <LI>-e or -echo
 <LI>-i or -in
 <LI>-h or -help
 <LI>-l or -log
 <LI>-p or -partition
 <LI>-pl or -plog
 <LI>-ps or -pscreen
 <LI>-r or -reorder
 <LI>-sc or -screen
 <LI>-sf or -suffix
 <LI>-v or -var 
 </UL>
 <P>For example, lmp_ibm might be launched as follows:
 </P>
 <PRE>mpirun -np 16 lmp_ibm -v f tmp.out -l my.log -sc none < in.alloy
 mpirun -np 16 lmp_ibm -var f tmp.out -log my.log -screen none < in.alloy 
 </PRE>
 <P>Here are the details on the options:
 </P>
 <PRE>-cuda on/off 
 </PRE>
 <P>Explicitly enable or disable CUDA support, as provided by the
 USER-CUDA package.  If LAMMPS is built with this package, as described
 above in <A HREF = "#start_3">Section 2.3</A>, then by default LAMMPS will run in
 CUDA mode.  If this switch is set to "off", then it will not, even if
 it was built with the USER-CUDA package, which means you can run
 standard LAMMPS or with the GPU package for testing or benchmarking
 purposes.  The only reason to set the switch to "on" is to check if
 LAMMPS was built with the USER-CUDA package, since an error will be
 generated if it was not.
 </P>
 <PRE>-echo style 
 </PRE>
 <P>Set the style of command echoing.  The style can be <I>none</I> or <I>screen</I>
 or <I>log</I> or <I>both</I>.  Depending on the style, each command read from
 the input script will be echoed to the screen and/or logfile.  This
 can be useful to figure out which line of your script is causing an
 input error.  The default value is <I>log</I>.  The echo style can also be
 set by using the <A HREF = "echo.html">echo</A> command in the input script itself.
 </P>
 <PRE>-in file 
 </PRE>
 <P>Specify a file to use as an input script.  This is an optional switch
 when running LAMMPS in one-partition mode.  If it is not specified,
 LAMMPS reads its input script from stdin - e.g. lmp_linux < in.run.
 This is a required switch when running LAMMPS in multi-partition mode,
 since multiple processors cannot all read from stdin.
 </P>
 <PRE>-help 
 </PRE>
 <P>Print a list of options compiled into this executable for each LAMMPS
 style (atom_style, fix, compute, pair_style, bond_style, etc).  This
 can help you know if the command you want to use was included via the
 appropriate package.  LAMMPS will print the info and immediately exit
 if this switch is used.
 </P>
 <PRE>-log file 
 </PRE>
 <P>Specify a log file for LAMMPS to write status information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 file log.lammps.  If this switch is used, LAMMPS writes to the
 specified file.  In multi-partition mode, if the switch is not used, a
 log.lammps file is created with high-level status information.  Each
 partition also writes to a log.lammps.N file where N is the partition
 ID.  If the switch is specified in multi-partition mode, the high-level
 logfile is named "file" and each partition also logs information to a
 file.N.  For both one-partition and multi-partition mode, if the
 specified file is "none", then no log files are created.  Using a
 <A HREF = "log.html">log</A> command in the input script will override this setting.
 Option -plog will override the name of the partition log files file.N.
 </P>
 <PRE>-partition 8x2 4 5 ... 
 </PRE>
 <P>Invoke LAMMPS in multi-partition mode.  When LAMMPS is run on P
 processors and this switch is not used, LAMMPS runs in one partition,
 i.e. all P processors run a single simulation.  If this switch is
 used, the P processors are split into separate partitions and each
 partition runs its own simulation.  The arguments to the switch
 specify the number of processors in each partition.  Arguments of the
 form MxN mean M partitions, each with N processors.  Arguments of the
 form N mean a single partition with N processors.  The sum of
 processors in all partitions must equal P.  Thus the command
 "-partition 8x2 4 5" has 10 partitions and runs on a total of 25
 processors.
 </P>
 <P>Running with multiple partitions can be useful for running
 <A HREF = "Section_howto.html#howto_5">multi-replica simulations</A>, where each
 replica runs on one or a few processors.  Note that with MPI
 installed on a machine (e.g. your desktop), you can run on more
 (virtual) processors than you have physical processors.
 </P>
 <P>To run multiple independent simulations from one input script, using
 multiple partitions, see <A HREF = "Section_howto.html#howto_4">Section_howto 4</A>
 of the manual.  World- and universe-style <A HREF = "variable.html">variables</A>
 are useful in this context.
 </P>
 <PRE>-plog file 
 </PRE>
 <P>Specify the base name for the partition log files, so partition N
 writes log information to file.N. If file is none, then no partition
 log files are created.  This overrides the filename specified in the
 -log command-line option.  This option is useful when working with
 large numbers of partitions, allowing the partition log files to be
 suppressed (-plog none) or placed in a sub-directory (-plog
 replica_files/log.lammps).  If this option is not used, the log file for
 partition N is log.lammps.N or whatever is specified by the -log
 command-line option.
 </P>
 <PRE>-pscreen file 
 </PRE>
 <P>Specify the base name for the partition screen file, so partition N
 writes screen information to file.N. If file is none, then no
 partition screen files are created.  This overrides the filename
 specified in the -screen command-line option.  This option is useful
 when working with large numbers of partitions, allowing the partition
 screen files to be suppressed (-pscreen none) or placed in a
 sub-directory (-pscreen replica_files/screen).  If this option is not
 used, the screen file for partition N is screen.N or whatever is
 specified by the -screen command-line option.
 </P>
 <PRE>-reorder nth N
 -reorder custom filename 
 </PRE>
 <P>Reorder the processors in the MPI communicator used to instantiate
 LAMMPS, in one of several ways.  The original MPI communicator ranks
 all P processors from 0 to P-1.  The mapping of these ranks to
 physical processors is done by MPI before LAMMPS begins.  It may be
 useful in some cases to alter the rank order.  E.g. to insure that
 cores within each node are ranked in a desired order.  Or when using
 the <A HREF = "run_style.html">run_style verlet/split</A> command with 2 partitions
 to insure that a specific Kspace processor (in the 2nd partition) is
 matched up with a specific set of processors in the 1st partition.
 See the <A HREF = "Section_accelerate.html">Section_accelerate</A> doc pages for
 more details.
 </P>
 <P>If the keyword <I>nth</I> is used with a setting <I>N</I>, then it means every
 Nth processor will be moved to the end of the ranking.  This is useful
 when using the <A HREF = "run_style.html">run_style verlet/split</A> command with 2
 partitions via the -partition command-line switch.  The first set of
 processors will be in the first partition, the 2nd set in the 2nd
 partition.  The -reorder command-line switch can alter this so that
 the 1st N procs in the 1st partition and one proc in the 2nd partition
 will be ordered consecutively, e.g. as the cores on one physical node.
 This can boost performance.  For example, if you use "-reorder nth 4"
 and "-partition 9 3" and you are running on 12 processors, the
 processors will be reordered from
 </P>
 <PRE>0 1 2 3 4 5 6 7 8 9 10 11 
 </PRE>
 <P>to
 </P>
 <PRE>0 1 2 4 5 6 8 9 10 3 7 11 
 </PRE>
 <P>so that the processors in each partition will be
 </P>
 <PRE>0 1 2 4 5 6 8 9 10 
 3 7 11 
 </PRE>
 <P>See the "processors" command for how to insure processors from each
 partition could then be grouped optimally for quad-core nodes.
 </P>
 <P>If the keyword is <I>custom</I>, then a file that specifies a permutation
 of the processor ranks is also specified.  The format of the reorder
 file is as follows.  Any number of initial blank or comment lines
 (starting with a "#" character) can be present.  These should be
 followed by P lines of the form:
 </P>
 <PRE>I J 
 </PRE>
 <P>where P is the number of processors LAMMPS was launched with.  Note
 that if running in multi-partition mode (see the -partition switch
 above) P is the total number of processors in all partitions.  The I
 and J values describe a permutation of the P processors.  Every I and
 J should be values from 0 to P-1 inclusive.  In the set of P I values,
 every proc ID should appear exactly once.  Ditto for the set of P J
 values.  A single I,J pairing means that the physical processor with
 rank I in the original MPI communicator will have rank J in the
 reordered communicator.
 </P>
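 <P>For example, a made-up reorder file for P = 4 processors that swaps
 the ranks of the middle two processors would look like this:
 </P>
 <PRE># swap ranks 1 and 2
 0 0
 1 2
 2 1
 3 3 
 </PRE>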
 <P>Note that rank ordering can also be specified by many MPI
 implementations, either by environment variables that specify how to
 order physical processors, or by config files that specify what
 physical processors to assign to each MPI rank.  The -reorder switch
 simply gives you a portable way to do this without relying on MPI
 itself.  See the <A HREF = "processors.html">processors out</A> command for how to output
 info on the final assignment of physical processors to the LAMMPS
 simulation domain.
 </P>
 <PRE>-screen file 
 </PRE>
 <P>Specify a file for LAMMPS to write its screen information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 screen.  If this switch is used, LAMMPS writes to the specified file
 instead and you will see no screen output.  In multi-partition mode,
 if the switch is not used, high-level status information is written to
 the screen.  Each partition also writes to a screen.N file where N is
 the partition ID.  If the switch is specified in multi-partition mode,
 the high-level screen dump is named "file" and each partition also
 writes screen information to a file.N.  For both one-partition and
 multi-partition mode, if the specified file is "none", then no screen
 output is performed.  The -pscreen option (see above) overrides the
 name of the partition screen files file.N.
 </P>
 <PRE>-suffix style 
 </PRE>
 <P>Use variants of various styles if they exist.  The specified style can
 be <I>opt</I>, <I>omp</I>, <I>gpu</I>, or <I>cuda</I>.  These refer to optional packages that
 LAMMPS can be built with, as described above in <A HREF = "#start_3">Section
 2.3</A>.  The "opt" style corresponds to the OPT package, the
 "omp" style to the USER-OMP package, the "gpu" style to the GPU 
 package, and the "cuda" style to the USER-CUDA package.
 </P>
 <P>As an example, all of the packages provide a <A HREF = "pair_lj.html">pair_style
 lj/cut</A> variant, with style names lj/cut/opt, lj/cut/omp,
 lj/cut/gpu, or lj/cut/cuda.  A variant style can be specified
 explicitly in your input script, e.g. pair_style lj/cut/gpu.  If the
 -suffix switch is used, you do not need to modify your input script.
 The specified suffix (opt, omp, gpu, cuda) is automatically appended
 whenever your input script command creates a new
 <A HREF = "atom_style.html">atom</A>, <A HREF = "pair_style.html">pair</A>, <A HREF = "fix.html">fix</A>,
 <A HREF = "compute.html">compute</A>, or <A HREF = "run_style.html">run</A> style.  If the variant
 version does not exist, the standard version is created.
 </P>
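 <P>For example, either of these hypothetical launch commands runs an
 unmodified input script with the GPU variants of styles substituted
 wherever they exist:
 </P>
 <PRE>lmp_machine -suffix gpu -in in.script
 mpirun -np 8 lmp_machine -sf gpu -in in.script 
 </PRE>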
 <P>For the GPU package, using this command-line switch also invokes the
 default GPU settings, as if the command "package gpu force/neigh 0 0
 1" were used at the top of your input script.  These settings can be
 changed by using the <A HREF = "package.html">package gpu</A> command in your script
 if desired.
 </P>
 <P>For the OMP package, using this command-line switch also invokes the
 default OMP settings, as if the command "package omp *" were used at
 the top of your input script.  These settings can be changed by using
 the <A HREF = "package.html">package omp</A> command in your script if desired.
 </P>
 <P>The <A HREF = "suffix.html">suffix</A> command can also set a suffix and it can also
 turn off/on any suffix setting made via the command line.
 </P>
 <PRE>-var name value1 value2 ... 
 </PRE>
 <P>Specify a variable that will be defined for substitution purposes when
 the input script is read.  "Name" is the variable name which can be a
 single character (referenced as $x in the input script) or a full
 string (referenced as ${abc}).  An <A HREF = "variable.html">index-style
 variable</A> will be created and populated with the
 subsequent values, e.g. a set of filenames.  Using this command-line
 option is equivalent to putting the line "variable name index value1
 value2 ..."  at the beginning of the input script.  Defining an index
 variable as a command-line argument overrides any setting for the same
 index variable in the input script, since index variables cannot be
 re-defined.  See the <A HREF = "variable.html">variable</A> command for more info on
 defining index and other kinds of variables and <A HREF = "Section_commands.html#cmd_2">this
 section</A> for more info on using variables
 in input scripts.
 </P>
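 <P>For example, this hypothetical launch command defines an index
 variable named f with two values:
 </P>
 <PRE>lmp_machine -var f melt1.data melt2.data -in in.script 
 </PRE>
 <P>which is equivalent to putting the line "variable f index melt1.data
 melt2.data" at the top of in.script, where the current value can then
 be referenced as $f.
 </P>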
 <P>NOTE: Currently, the command-line parser looks for arguments that
 start with "-" to indicate new switches.  Thus you cannot specify
 multiple variable values if any of them start with a "-", e.g. a
 negative numeric value.  It is OK if the first value1 starts with a
 "-", since it is automatically skipped.
 </P>
 <HR>
 
 <H4><A NAME = "start_8"></A>2.8 LAMMPS screen output 
 </H4>
 <P>As LAMMPS reads an input script, it prints information to both the
 screen and a log file about significant actions it takes to setup a
 simulation.  When the simulation is ready to begin, LAMMPS performs
 various initializations and prints the amount of memory (in MBytes per
 processor) that the simulation requires.  It also prints details of
 the initial thermodynamic state of the system.  During the run itself,
 thermodynamic information is printed periodically, every few
 timesteps.  When the run concludes, LAMMPS prints the final
 thermodynamic state and a total run time for the simulation.  It then
 appends statistics about the CPU time and storage requirements for the
 simulation.  An example set of statistics is shown here:
 </P>
 <PRE>Loop time of 49.002 on 2 procs for 2004 atoms 
 </PRE>
 <PRE>Pair   time (%) = 35.0495 (71.5267)
 Bond   time (%) = 0.092046 (0.187841)
 Kspce  time (%) = 6.42073 (13.103)
 Neigh  time (%) = 2.73485 (5.5811)
 Comm   time (%) = 1.50291 (3.06703)
 Outpt  time (%) = 0.013799 (0.0281601)
 Other  time (%) = 2.13669 (4.36041) 
 </PRE>
 <PRE>Nlocal:    1002 ave, 1015 max, 989 min
 Histogram: 1 0 0 0 0 0 0 0 0 1 
 Nghost:    8720 ave, 8724 max, 8716 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1
 Neighs:    354141 ave, 361422 max, 346860 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1 
 </PRE>
 <PRE>Total # of neighbors = 708282
 Ave neighs/atom = 353.434
 Ave special neighs/atom = 2.34032
 Number of reneighborings = 42
 Dangerous reneighborings = 2 
 </PRE>
 <P>The first section gives the breakdown of the CPU run time (in seconds)
 into major categories.  The second section lists the number of owned
 atoms (Nlocal), ghost atoms (Nghost), and pair-wise neighbors stored
 per processor.  The max and min values give the spread of these values
 across processors with a 10-bin histogram showing the distribution.
 The total number of histogram counts is equal to the number of
 processors.
 </P>
 <P>The last section gives aggregate statistics for pair-wise neighbors
 and special neighbors that LAMMPS keeps track of (see the
 <A HREF = "special_bonds.html">special_bonds</A> command).  The number of times
 neighbor lists were rebuilt during the run is given as well as the
 number of potentially "dangerous" rebuilds.  If atom movement
 triggered neighbor list rebuilding (see the
 <A HREF = "neigh_modify.html">neigh_modify</A> command), then dangerous
 reneighborings are those that were triggered on the first timestep
 atom movement was checked for.  If this count is non-zero you may wish
 to reduce the delay factor to insure no force interactions are missed
 by atoms moving beyond the neighbor skin distance before a rebuild
 takes place.
 </P>
 <P>If an energy minimization was performed via the
 <A HREF = "minimize.html">minimize</A> command, additional information is printed,
 e.g.
 </P>
 <PRE>Minimization stats:
   E initial, next-to-last, final = -0.895962 -2.94193 -2.94342
   Gradient 2-norm init/final= 1920.78 20.9992
   Gradient inf-norm init/final= 304.283 9.61216
   Iterations = 36
   Force evaluations = 177 
 </PRE>
 <P>The first line lists the initial and final energy, as well as the
 energy on the next-to-last iteration.  The next 2 lines give a measure
 of the gradient of the energy (force on all atoms).  The 2-norm is the
 "length" of this force vector; the inf-norm is the largest component.
 The last 2 lines are statistics on how many iterations and
 force-evaluations the minimizer required.  Multiple force evaluations
 are typically done at each iteration to perform a 1d line minimization
 in the search direction.
 </P>
 <P>If a <A HREF = "kspace_style.html">kspace_style</A> long-range Coulombics solve was
 performed during the run (PPPM, Ewald), then additional information is
 printed, e.g.
 </P>
 <PRE>FFT time (% of Kspce) = 0.200313 (8.34477)
 FFT Gflps 3d 1d-only = 2.31074 9.19989 
 </PRE>
 <P>The first line gives the time spent doing 3d FFTs (4 per timestep) and
 the fraction it represents of the total KSpace time (listed above).
 Each 3d FFT requires computation (3 sets of 1d FFTs) and communication
 (transposes).  The total flops performed is 5Nlog_2(N), where N is the
 number of points in the 3d grid.  The FFTs are timed with and without
 the communication and a Gflop rate is computed.  The 3d rate is with
 communication; the 1d rate is without (just the 1d FFTs).  Thus you
 can estimate what fraction of your FFT time was spent in
 communication, roughly 75% in the example above.
 </P>
 <HR>
 
 <H4><A NAME = "start_9"></A>2.9 Tips for users of previous LAMMPS versions 
 </H4>
 <P>The current C++ version began with a complete rewrite of LAMMPS 2001, which
 was written in F90.  Features of earlier versions of LAMMPS are listed
 in <A HREF = "Section_history.html">Section_history</A>.  The F90 and F77 versions
 (2001 and 99) are also freely distributed as open-source codes; check
 the <A HREF = "http://lammps.sandia.gov">LAMMPS WWW Site</A> for distribution information if you prefer
 those versions.  The 99 and 2001 versions are no longer under active
 development; they do not have all the features of C++ LAMMPS.
 </P>
 <P>If you are a previous user of LAMMPS 2001, these are the most
 significant changes you will notice in C++ LAMMPS:
 </P>
 <P>(1) The names and arguments of many input script commands have
 changed.  All commands are now a single word (e.g. read_data instead
 of read data).
 </P>
 <P>(2) All the functionality of LAMMPS 2001 is included in C++ LAMMPS,
 but you may need to specify the relevant commands in different ways.
 </P>
 <P>(3) The format of the data file can be streamlined for some problems.
 See the <A HREF = "read_data.html">read_data</A> command for details.  The data file
 section "Nonbond Coeff" has been renamed to "Pair Coeff" in C++ LAMMPS.
 </P>
 <P>(4) Binary restart files written by LAMMPS 2001 cannot be read by C++
 LAMMPS with a <A HREF = "read_restart.html">read_restart</A> command.  This is
 because they were written by F90, which uses a different binary
 format than C or C++.  Use the <I>restart2data</I> tool
 provided with LAMMPS 2001 to convert the 2001 restart file to a text
 data file.  Then edit the data file as necessary before using the C++
 LAMMPS <A HREF = "read_data.html">read_data</A> command to read it in.
 </P>
 <P>(5) There are numerous small numerical changes in C++ LAMMPS that mean
 you will not get identical answers when comparing to a 2001 run.
 However, your initial thermodynamic energy and MD trajectory should be
 close if you have set up the problem the same way for both codes.
 </P>
 </HTML>
diff --git a/doc/Section_start.txt b/doc/Section_start.txt
index e3c7400c2..74afbd7ce 100644
--- a/doc/Section_start.txt
+++ b/doc/Section_start.txt
@@ -1,1429 +1,1390 @@
 "Previous Section"_Section_intro.html - "LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc - "Next Section"_Section_commands.html :c
 
 :link(lws,http://lammps.sandia.gov)
 :link(ld,Manual.html)
 :link(lc,Section_commands.html#comm)
 
 :line
 
 2. Getting Started :h3
 
 This section describes how to build and run LAMMPS, for both new and
 experienced users.
 
 2.1 "What's in the LAMMPS distribution"_#start_1
 2.2 "Making LAMMPS"_#start_2
 2.3 "Making LAMMPS with optional packages"_#start_3
 2.4 "Building LAMMPS via the Make.py script"_#start_4
 2.5 "Building LAMMPS as a library"_#start_5
 2.6 "Running LAMMPS"_#start_6
 2.7 "Command-line options"_#start_7
 2.8 "Screen output"_#start_8
 2.9 "Tips for users of previous versions"_#start_9 :all(b)
 
 :line
 :line
 
 2.1 What's in the LAMMPS distribution :h4,link(start_1)
 
 When you download LAMMPS you will need to unzip and untar the
 downloaded file with the following commands, after placing the file in
 an appropriate directory.
 
 gunzip lammps*.tar.gz 
 tar xvf lammps*.tar :pre
 
 This will create a LAMMPS directory containing two files and several
 sub-directories:
     
 README: text file
 LICENSE: the GNU General Public License (GPL)
 bench: benchmark problems
 doc: documentation
 examples: simple test problems
 potentials: embedded atom method (EAM) potential files
 src: source files
 tools: pre- and post-processing tools :tb(s=:)
 
 If you download one of the Windows executables from the download page,
 then you just get a single file:
 
 lmp_windows.exe :pre
 
 Skip to the "Running LAMMPS"_#start_6 section for info on how to
 launch these executables on a Windows box.
 
 The Windows executables for serial or parallel only include certain
 packages and bug-fixes/upgrades listed on "this
 page"_http://lammps.sandia.gov/bug.html up to a certain date, as
 stated on the download page.  If you want something with more packages
 or that is more current, you'll have to download the source tarball
 and build it yourself from source code using Microsoft Visual Studio,
 as described in the next section.
 
 :line
 
 2.2 Making LAMMPS :h4,link(start_2)
 
 This section has the following sub-sections:
 
 "Read this first"_#start_2_1
 "Steps to build a LAMMPS executable"_#start_2_2
 "Common errors that can occur when making LAMMPS"_#start_2_3
 "Additional build tips"_#start_2_4
 "Building for a Mac"_#start_2_5
 "Building for Windows"_#start_2_6 :ul
 
 :line
 
 [{Read this first:}] :link(start_2_1)
 
 Building LAMMPS can be non-trivial.  You may need to edit a makefile,
 there are compiler options to consider, additional libraries can be
 used (MPI, FFT, JPEG), LAMMPS packages may be included or excluded,
 some of these packages use auxiliary libraries which need to be
 pre-built, etc.
 
 Please read this section carefully.  If you are not comfortable with
 makefiles, or building codes on a Unix platform, or running an MPI job
 on your machine, please find a local expert to help you.  Many
 compiling, linking, and run problems that users have are often not
 LAMMPS issues - they are peculiar to the user's system, compilers,
 libraries, etc.  Such questions are better answered by a local expert.
 
 If you have a build problem that you are convinced is a LAMMPS issue
 (e.g. the compiler complains about a line of LAMMPS source code), then
 please post a question to the "LAMMPS mail
 list"_http://lammps.sandia.gov/mail.html.
 
 If you succeed in building LAMMPS on a new kind of machine, for which
 there isn't a similar Makefile in the src/MAKE directory, send it
 to the developers and we can include it in the LAMMPS distribution.
 
 :line
 
 [{Steps to build a LAMMPS executable:}] :link(start_2_2)
 
 [Step 0]
 
 The src directory contains the C++ source and header files for LAMMPS.
 It also contains a top-level Makefile and a MAKE sub-directory with
 low-level Makefile.* files for many machines.  From within the src
 directory, type "make" or "gmake".  You should see a list of available
 choices.  If one of those is the machine and options you want, you can
 type a command like:
 
 make linux
 or
 gmake mac :pre
 
 Note that on a multi-processor or multi-core platform you can launch a
 parallel make, by using the "-j" switch with the make command, which
 will build LAMMPS more quickly.
 
 If you get no errors and an executable like lmp_linux or lmp_mac is
 produced, you're done; it's your lucky day.
 
 Note that by default only a few of LAMMPS optional packages are
 installed.  To build LAMMPS with optional packages, see "this
 section"_#start_3 below.
 
 [Step 1]
 
 If Step 0 did not work, you will need to create a low-level Makefile
 for your machine, like Makefile.foo.  You should make a copy of an
 existing src/MAKE/Makefile.* as a starting point.  The only portions
 of the file you need to edit are the first line, the "compiler/linker
 settings" section, and the "LAMMPS-specific settings" section.
 
 [Step 2]
 
 Change the first line of src/MAKE/Makefile.foo to list the word "foo"
 after the "#", and whatever other options it will set.  This is the
 line you will see if you just type "make".
 
 [Step 3]
 
 The "compiler/linker settings" section lists compiler and linker
 settings for your C++ compiler, including optimization flags.  You can
 use g++, the open-source GNU compiler, which is available on all Unix
 systems.  You can also use mpicc which will typically be available if
 MPI is installed on your system, though you should check which actual
 compiler it wraps.  Vendor compilers often produce faster code.  On
 boxes with Intel CPUs, we suggest using the commercial Intel icc
 compiler, which can be downloaded from "Intel's compiler site"_intel.
 
 :link(intel,http://www.intel.com/software/products/noncom)
 
 If building a C++ code on your machine requires additional libraries,
 then you should list them as part of the LIB variable.
 
 The DEPFLAGS setting is what triggers the C++ compiler to create a
 dependency list for a source file.  This speeds re-compilation when
 source (*.cpp) or header (*.h) files are edited.  Some compilers do
 not support dependency file creation, or may use a different switch
 than -M.  GNU g++ works with -M.  If your compiler can't create
 dependency files, then you'll need to create a Makefile.foo patterned
 after Makefile.storm, which uses different rules that do not involve
 dependency files.  Note that when you build LAMMPS for the first time
 on a new platform, a long list of *.d files will be printed out
 rapidly.  This is not an error; it is the Makefile doing its normal
 creation of dependencies.
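 
 As a concrete illustration (a sketch only; the flags will vary with
 your compiler and platform), the compiler/linker settings for a Linux
 box with g++ might look like this:
 
 CC =        g++
 CCFLAGS =   -O2
 DEPFLAGS =  -M
 LINK =      g++
 LINKFLAGS = -O2
 LIB = :pre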
 
 [Step 4]
 
 The "system-specific settings" section has several parts.  Note that
 if you change any -D setting in this section, you should do a full
 re-compile, after typing "make clean" (typing "make clean" by itself
 lists the different clean options).
 
 The LMP_INC variable is used to include options that turn on ifdefs
 within the LAMMPS code.  The options that are currently recognized are:
 
 -DLAMMPS_GZIP
 -DLAMMPS_JPEG
 -DLAMMPS_MEMALIGN
 -DLAMMPS_XDR
 -DLAMMPS_SMALLBIG
 -DLAMMPS_BIGBIG
 -DLAMMPS_SMALLSMALL
 -DLAMMPS_LONGLONG_TO_LONG
 -DPACK_ARRAY
 -DPACK_POINTER
 -DPACK_MEMCPY :ul
 
 The read_data and dump commands will read/write gzipped files if you
 compile with -DLAMMPS_GZIP.  It requires that your Unix support the
 "popen" command.
 
 If you use -DLAMMPS_JPEG, the "dump image"_dump.html command will be
 able to write out JPEG image files.  If not, it will only be able to
 write out text-based PPM image files.  For JPEG files, you must also
 link LAMMPS with a JPEG library, as described below.
 
 Using -DLAMMPS_MEMALIGN=<bytes> enables the use of the
 posix_memalign() call instead of malloc() when large chunks of memory
 are allocated by LAMMPS.  This can help to make more efficient use of
 vector instructions of modern CPUs, since dynamically allocated memory
 has to be aligned on larger than default byte boundaries (e.g. 16
 bytes instead of 8 bytes on x86 type platforms) for optimal
 performance.
 
 If you use -DLAMMPS_XDR, the build will include XDR compatibility
 files for doing particle dumps in XTC format.  This is only necessary
 if your platform does not have its own XDR files available.  See the
 Restrictions section of the "dump"_dump.html command for details.
 
 Use at most one of the -DLAMMPS_SMALLBIG, -DLAMMPS_BIGBIG, or
 -DLAMMPS_SMALLSMALL settings.  The default is -DLAMMPS_SMALLBIG.
 These settings refer to use of 4-byte (small) vs 8-byte (big) integers
 within LAMMPS, as specified in src/lmptype.h.  The only reason to use
 the BIGBIG setting is to enable simulation of huge molecular systems
 with more than 2 billion atoms or to allow moving atoms to wrap back
 through a periodic box more than 512 times.  The only reason to use
 the SMALLSMALL setting is if your machine does not support 64-bit
 integers.  See the "Additional build tips"_#start_2_4 section below
 for more details.
 
 The -DLAMMPS_LONGLONG_TO_LONG setting may be needed if your system or
 MPI version does not recognize "long long" data types.  In this case a
 "long" data type is likely already 64-bits, in which case this setting
 will convert to that data type.
 
 Using one of the -DPACK_ARRAY, -DPACK_POINTER, and -DPACK_MEMCPY
 options can make for faster parallel FFTs (in the PPPM solver) on some
 platforms.  The -DPACK_ARRAY setting is the default.  See the
 "kspace_style"_kspace_style.html command for info about PPPM.  See
 Step 6 below for info about building LAMMPS with an FFT library.
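 
 Putting some of this together, a hypothetical LMP_INC line that
 enables gzipped files and JPEG output might be:
 
 LMP_INC =   -DLAMMPS_GZIP -DLAMMPS_JPEG :pre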
 
 [Step 5]
 
 The 3 MPI variables are used to specify an MPI library to build LAMMPS
 with. 
 
 If you want LAMMPS to run in parallel, you must have an MPI library
 installed on your platform.  If you use an MPI-wrapped compiler, such
 as "mpicc" to build LAMMPS, you should be able to leave these 3
 variables blank; the MPI wrapper knows where to find the needed files.
 If not, and MPI is installed on your system in the usual place (under
 /usr/local), you also may not need to specify these 3 variables.  On
 some large parallel machines which use "modules" for their
 compile/link environments, you may simply need to include the correct
 module in your build environment.  Or the parallel machine may have a
 vendor-provided MPI which the compiler has no trouble finding.
 
 Failing this, with these 3 variables you can specify where the mpi.h
 file (MPI_INC) and the MPI library file (MPI_PATH) are found and the
 name of the library file (MPI_LIB).
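 
 For example, if MPICH is installed under /usr/local (an assumption;
 adjust the paths and library name for your system), the 3 variables
 might be set as follows:
 
 MPI_INC =   -I/usr/local/include
 MPI_PATH =  -L/usr/local/lib
 MPI_LIB =   -lmpich :pre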
 
 If you are installing MPI yourself, we recommend Argonne's MPICH2
 or OpenMPI.  MPICH can be downloaded from the "Argonne MPI
 site"_http://www.mcs.anl.gov/research/projects/mpich2/.  OpenMPI can
 be downloaded from the "OpenMPI site"_http://www.open-mpi.org.
 Other MPI packages should also work. If you are running on a big
 parallel platform, your system people or the vendor should have
 already installed a version of MPI, which is likely to be faster
 than a self-installed MPICH or OpenMPI, so find out how to build
 and link with it.  If you use MPICH or OpenMPI, you will have to
 configure and build it for your platform.  The MPI configure script
 should have compiler options to enable you to use the same compiler
 you are using for the LAMMPS build, which can avoid problems that can
 arise when linking LAMMPS to the MPI library.
 
 If you just want to run LAMMPS on a single processor, you can use the
 dummy MPI library provided in src/STUBS, since you don't need a true
 MPI library installed on your system.  See the
 src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
 in this case.  You will also need to build the STUBS library for your
-platform before making LAMMPS itself.  To build it as a static
-library, from the src directory, type "make stubs", or from the STUBS
-dir, type "make" and it should create a libmpi_stubs.a suitable for
-linking to LAMMPS.  To build it as a shared library, from the STUBS
-dir, type "make shlib" and it should create a libmpi_stubs.so suitable
-for dynamically loading when LAMMPS runs.  If either of these builds
-fail, you will need to edit the STUBS/Makefile for your platform.
+platform before making LAMMPS itself.  To build from the src
+directory, type "make stubs", or from the STUBS dir, type "make".
+This should create a libmpi_stubs.a file suitable for linking to
+LAMMPS.  If the build fails, you will need to edit the STUBS/Makefile
+for your platform.
 
 The file STUBS/mpi.cpp provides a CPU timer function called
 MPI_Wtime() that calls gettimeofday() .  If your system doesn't
 support gettimeofday() , you'll need to insert code to call another
 timer.  Note that the ANSI-standard function clock() rolls over after
 an hour or so, and is therefore insufficient for timing long LAMMPS
 simulations.
 
 [Step 6]
 
 The 3 FFT variables allow you to specify an FFT library which LAMMPS
 uses (for performing 1d FFTs) when running the particle-particle
 particle-mesh (PPPM) option for long-range Coulombics via the
 "kspace_style"_kspace_style.html command.
 
 LAMMPS supports various open-source or vendor-supplied FFT libraries
 for this purpose.  If you leave these 3 variables blank, LAMMPS will
 use the open-source "KISS FFT library"_http://kissfft.sf.net, which is
 included in the LAMMPS distribution.  This library is portable to all
 platforms and for typical LAMMPS simulations is almost as fast as FFTW
 or vendor optimized libraries.  If you are not including the KSPACE
 package in your build, you can also leave the 3 variables blank.
 
 Otherwise, select which kinds of FFTs to use as part of the FFT_INC
 setting by a switch of the form -DFFT_XXX.  Recommended values for XXX
 are: MKL, SCSL, FFTW2, and FFTW3.  Legacy options are: INTEL, SGI,
 ACML, and T3E.  For backward compatibility, using -DFFT_FFTW will use
 the FFTW2 library.  Using -DFFT_NONE will use the KISS library
 described above.
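 
 For example, to use a FFTW3 library installed in a standard system
 location (an assumption), the settings might look like this:
 
 FFT_INC =   -DFFT_FFTW3
 FFT_PATH =
 FFT_LIB =   -lfftw3 :pre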
 
 You may also need to set the FFT_INC, FFT_PATH, and FFT_LIB variables,
 so the compiler and linker can find the needed FFT header and library
 files.  Note that on some large parallel machines which use "modules"
 for their compile/link environments, you may simply need to include
 the correct module in your build environment.  Or the parallel machine
 may have a vendor-provided FFT library which the compiler has no
 trouble finding.
 
 FFTW is a fast, portable library that should also work on any
 platform.  You can download it from
 "www.fftw.org"_http://www.fftw.org.  Both the legacy version 2.1.X and
 the newer 3.X versions are supported as -DFFT_FFTW2 or -DFFT_FFTW3.
 Building FFTW for your box should be as simple as ./configure; make.
 Note that on some platforms FFTW2 has been pre-installed, and uses
 renamed files indicating the precision it was compiled with,
 e.g. sfftw.h, or dfftw.h instead of fftw.h.  In this case, you can
 specify an additional define variable for FFT_INC called -DFFTW_SIZE,
 which will select the correct include file.  In this case, for FFT_LIB
 you must also manually specify the correct library, namely -lsfftw or
 -ldfftw.
 
 The FFT_INC variable also allows for a -DFFT_SINGLE setting that will
 use single-precision FFTs with PPPM, which can speed up long-range
 calculations, particularly in parallel or on GPUs.  Fourier transform
 and related PPPM operations are somewhat insensitive to floating point
 truncation errors and thus do not always need to be performed in
 double precision.  Using the -DFFT_SINGLE setting trades off a little
 accuracy for reduced memory use and parallel communication costs for
 transposing 3d FFT data.  Note that single precision FFTs have only
 been tested with the FFTW3, FFTW2, MKL, and KISS FFT options.
 
 [Step 7]
 
 The 3 JPG variables allow you to specify a JPEG library which LAMMPS
 uses when writing out JPEG files via the "dump image"_dump_image.html
 command.  These can be left blank if you do not use the -DLAMMPS_JPEG
 switch discussed above in Step 4, since in that case JPEG output will
 be disabled.
 
 A standard JPEG library usually goes by the name libjpeg.a and has an
 associated header file jpeglib.h.  Whichever JPEG library you have on
 your platform, you'll need to set the appropriate JPG_INC, JPG_PATH,
 and JPG_LIB variables, so that the compiler and linker can find it.
 
 As before, if these header and library files are in the usual place on
 your machine, you may not need to set these variables.
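 
 For example, with libjpeg installed under /usr (an assumption; adjust
 for your system), the settings might be:
 
 JPG_INC =   -I/usr/include
 JPG_PATH =  -L/usr/lib
 JPG_LIB =   -ljpeg :pre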
 
 [Step 8]
 
 Note that by default only a few of LAMMPS optional packages are
 installed.  To build LAMMPS with optional packages, see "this
 section"_#start_3 below, before proceeding to Step 9.
 
 [Step 9]
 
 That's it.  Once you have a correct Makefile.foo, you have installed
 the optional LAMMPS packages you want to include in your build, and
 you have pre-built any other needed libraries (e.g. MPI, FFT, package
 libraries), all you need to do from the src directory is type
 something like this:
 
 make foo
 or
 gmake foo :pre
 
 You should get the executable lmp_foo when the build is complete.
 
 :line
 
 [{Errors that can occur when making LAMMPS:}] :link(start_2_3)
 
 IMPORTANT NOTE: If an error occurs when building LAMMPS, the compiler
 or linker will state very explicitly what the problem is.  The error
 message should give you a hint as to which of the steps above has
 failed, and what you need to do in order to fix it.  Building a code
 with a Makefile is a very logical process.  The compiler and linker
 need to find the appropriate files and those files need to be
 compatible with LAMMPS source files.  When a make fails, there is
 usually a very simple reason, which you or a local expert will need to
 fix.
 
 Here are two non-obvious errors that can occur:
 
 (1) If the make command breaks immediately with errors that indicate
 it can't find files with a "*" in their names, this can be because
 your machine's native make doesn't support wildcard expansion in a
 makefile.  Try gmake instead of make.  If that doesn't work, try using
 a -f switch with your make command to use a pre-generated
 Makefile.list which explicitly lists all the needed files, e.g.
 
 make makelist
 make -f Makefile.list linux
 gmake -f Makefile.list mac :pre
 
 The first "make" command will create a current Makefile.list with all
 the file names in your src dir.  The 2nd "make" command (make or
 gmake) will use it to build LAMMPS.  Note that you should
 include/exclude any desired optional packages before using the "make
 makelist" command.
 
 (2) If you get an error that says something like 'identifier "atoll"
 is undefined', then your machine does not support "long long"
 integers.  Try using the -DLAMMPS_LONGLONG_TO_LONG setting described
 above in Step 4.
 
 :line
 
 [{Additional build tips:}] :link(start_2_4)
 
 (1) Building LAMMPS for multiple platforms.
 
 You can make LAMMPS for multiple platforms from the same src
 directory.  Each target creates its own object sub-directory called
 Obj_target where it stores the system-specific *.o files.
 
 (2) Cleaning up.
 
 Typing "make clean-all" or "make clean-foo" will delete *.o object
 files created when LAMMPS is built, for either all builds or for a
 particular machine.
 
 (3) Changing the LAMMPS size limits via -DLAMMPS_SMALLBIG or
 -DLAMMPS_BIGBIG or -DLAMMPS_SMALLSMALL
 
 As explained above, any of these 3 settings can be specified on the
 LMP_INC line in your low-level src/MAKE/Makefile.foo.
 
 The default is -DLAMMPS_SMALLBIG which allows for systems with up to
 2^63 atoms and timesteps (about 9 billion billion).  The atom limit is
 for atomic systems that do not require atom IDs.  For molecular
 models, which require atom IDs, the limit is 2^31 atoms (about 2
 billion).  With this setting, image flags are stored in 32-bit
 integers, which means for 3 dimensions that atoms can only wrap around
 a periodic box at most 512 times.  If atoms move through the periodic
 box more than this limit, the image flags will "roll over", e.g. from
 511 to -512, which can cause diagnostics like the mean-squared
 displacement, as calculated by the "compute msd"_compute_msd.html
 command, to be faulty.
 
 To allow for larger molecular systems or larger image flags, compile
 with -DLAMMPS_BIGBIG.  This enables molecular systems with up to 2^63
 atoms (about 9 billion billion).  And image flags will not "roll over"
 until they reach 2^20 = 1048576.
 
 IMPORTANT NOTE: As of 6/2012, the BIGBIG setting does not yet enable
 molecular systems to grow as large as 2^63.  Only the image flag roll
 over is currently affected by this compile option.
 
 If your system does not support 8-byte integers, you will need to
 compile with the -DLAMMPS_SMALLSMALL setting.  This will restrict your
 total number of atoms (for atomic or molecular models) and timesteps
 to 2^31 (about 2 billion).  Image flags will roll over at 2^9 = 512.
 
 Note that in src/lmptype.h there are also settings for the MPI data
 types associated with the integers that store atom IDs and total
 system sizes.  These need to be consistent with the associated C data
 types, or else LAMMPS will generate a run-time error.
 
 In all cases, the size of problem that can be run on a per-processor
 basis is limited by 4-byte integer storage to 2^31 atoms per processor
 (about 2 billion).  This should not normally be a restriction since
 such a problem would have a huge per-processor memory footprint due to
 neighbor lists and would run very slowly in terms of CPU
 secs/timestep.
 
 :line
 
 [{Building for a Mac:}] :link(start_2_5)
 
 OS X is BSD Unix, so it should just work.  See the
 src/MAKE/Makefile.mac file.
 
 :line
 
 [{Building for Windows:}] :link(start_2_6)
 
 The LAMMPS download page has an option to download both a serial and
 parallel pre-built Windows executable.  See the "Running
 LAMMPS"_#start_6 section for instructions for running these
 executables on a Windows box.
 
 The pre-built executables are built with a subset of the available
 packages; see the download page for the list.  If you want
 a Windows version with specific packages included and excluded,
 you can build it yourself.
 
 One way to do this is to install and use cygwin to build LAMMPS with a
 standard Linux make, just as you would on any Linux box; see
 src/MAKE/Makefile.cygwin.
 
 The other way to do this is using Visual Studio and project files.
 See the src/WINDOWS directory and its README.txt file for instructions
 on both a basic build and a customized build with packages you select.
 
 :line
 
 2.3 Making LAMMPS with optional packages :h4,link(start_3)
 
 This section has the following sub-sections:
 
 "Package basics"_#start_3_1
 "Including/excluding packages"_#start_3_2
 "Packages that require extra libraries"_#start_3_3
 "Additional Makefile settings for extra libraries"_#start_3_4 :ul
 
 :line
 
 [{Package basics:}] :link(start_3_1)
 
 The source code for LAMMPS is structured as a set of core files which
 are always included, plus optional packages.  Packages are groups of
 files that enable a specific set of features.  For example, force
 fields for molecular systems or granular systems are in packages.  You
 can see the list of all packages by typing "make package" from within
 the src directory of the LAMMPS distribution.
 
 If you use a command in a LAMMPS input script that is specific to a
 particular package, you must have built LAMMPS with that package, else
 you will get an error that the style is invalid or the command is
 unknown.  Every command's doc page specifies if it is part of a
 package.  You can also type
 
 lmp_machine -h :pre
 
 to run your executable with the optional "-h command-line
 switch"_#start_7 for "help", which will list the styles and commands
 known to your executable.
 
 There are two kinds of packages in LAMMPS, standard and user packages.
 More information about the contents of standard and user packages is
 given in "Section_packages"_Section_packages.html of the manual.  The
 difference between standard and user packages is as follows:
 
 Standard packages are supported by the LAMMPS developers and are
 written in a syntax and style consistent with the rest of LAMMPS.
 This means we will answer questions about them, debug and fix them if
 necessary, and keep them compatible with future changes to LAMMPS.
 
 User packages have been contributed by users, and always begin with
 the user prefix.  If they are a single command (single file), they are
 typically in the user-misc package.  Otherwise, they are a set of
 files grouped together which add a specific functionality to the code.
 
 User packages don't necessarily meet the requirements of the standard
 packages.  If you have problems using a feature provided in a user
 package, you will likely need to contact the contributor directly to
 get help.  Information on how to submit additions you make to LAMMPS
 as a user-contributed package is given in "this
 section"_Section_modify.html#mod_14 of the documentation.
 
 :line
 
 [{Including/excluding packages:}] :link(start_3_2)
 
 To use or not use a package you must include or exclude it before
 building LAMMPS.  From the src directory, this is typically as simple
 as:
 
 make yes-colloid
 make g++ :pre
 
 or
 
 make no-manybody
 make g++ :pre
 
 Some packages have individual files that depend on other packages
 being included.  LAMMPS checks for this and does the right thing.
 I.e. individual files are only included if their dependencies are
 already included.  Likewise, if a package is excluded, other files
 dependent on that package are also excluded.
 
 The reason to exclude packages is if you will never run certain kinds
 of simulations.  For some packages, this will keep you from having to
 build auxiliary libraries (see below), and will also produce a smaller
 executable which may run a bit faster.
 
 When you download a LAMMPS tarball, these packages are pre-installed
 in the src directory: KSPACE, MANYBODY, MOLECULE.  When you download
 LAMMPS source files from the SVN or Git repositories, no packages are
 pre-installed.
 
 Packages are included or excluded by typing "make yes-name" or "make
 no-name", where "name" is the name of the package in lower-case, e.g.
 name = kspace for the KSPACE package or name = user-atc for the
 USER-ATC package.  You can also type "make yes-standard", "make
 no-standard", "make yes-user", "make no-user", "make yes-all" or "make
 no-all" to include/exclude various sets of packages.  Type "make
 package" to see the all of the package-related make options.
 
 IMPORTANT NOTE: Inclusion/exclusion of a package works by simply
 moving files back and forth between the main src directory and
 sub-directories with the package name (e.g. src/KSPACE, src/USER-ATC),
 so that the files are seen or not seen when LAMMPS is built.  After
 you have included or excluded a package, you must re-build LAMMPS.
 
 Additional package-related make options exist to help manage LAMMPS
 files that exist in both the src directory and in package
 sub-directories.  You do not normally need to use these commands
 unless you are editing LAMMPS files or have downloaded a patch from
 the LAMMPS WWW site.
 
 Typing "make package-update" will overwrite src files with files from
 the package sub-directories if the package has been included.  It
 should be used after a patch is installed, since patches only update
 the files in the package sub-directory, but not the src files.  Typing
 "make package-overwrite" will overwrite files in the package
 sub-directories with src files.
 
 Typing "make package-status" will show which packages are currently
 included. Of those that are included, it will list files that are
 different in the src directory and package sub-directory.  Typing
 "make package-diff" lists all differences between these files.  Again,
 type "make package" to see all of the package-related make options.
 
 :line
 
 [{Packages that require extra libraries:}] :link(start_3_3)
 
 A few of the standard and user packages require additional auxiliary
 libraries to be compiled first.  If you get a LAMMPS build error about
 a missing library, this is likely the reason.  The source code or
 hooks to these libraries is included in the LAMMPS distribution under
 the "lib" directory.  Look at the lib/README file for a list of these
 or see "Section_packages"_Section_packages.html of the doc pages.
 
 Each lib directory has a README file (e.g. lib/reax/README) with
 instructions on how to build that library.  Typically this is done 
 in this manner:
 
 make -f Makefile.g++ :pre
 
 in the appropriate directory, e.g. in lib/reax.  However, some of the
 libraries do not build this way.  Again, see the library README file
 for details.
 
 If you are building the library, you will need to use a Makefile that
 is a match for your system.  If one of the provided Makefiles is not
 appropriate for your system you will need to edit or add one.  For
 example, in the case of Fortran-based libraries, your system must have
 a Fortran compiler, the settings for which will need to be listed in
 the Makefile.
 
 When you have built one of these libraries, there are a few things
 to check:
 
 (1) The file libname.a should now exist in lib/name.
 E.g. lib/reax/libreax.a.  This is the library file LAMMPS will link
 against.  One exception is the lib/cuda library which produces the
 file liblammpscuda.a, because there is already a system library
 libcuda.a.
 
 (2) The file Makefile.lammps should exist in lib/name.  E.g.
 lib/cuda/Makefile.lammps.  This file may be auto-generated by the
 build of the library, or you may need to make a copy of the
 appropriate provided file (e.g. lib/meam/Makefile.lammps.gfortran).
 Either way you should insure that the settings in this file are
 appropriate for your system.
 
 There are typically 3 settings in the Makefile.lammps file (unless
 some are blank or not needed): a SYSINC, SYSPATH, and SYSLIB setting,
 specific to this package.  These are settings the LAMMPS build will
 import when compiling the LAMMPS package files (not the library
 files), and linking to the auxiliary library.  They typically list any
 other system libraries needed to support the package and where to find
 them.  An example is the BLAS and LAPACK libraries needed by the
 USER-ATC package.  Or the system libraries that support calling
 Fortran from C++, as the MEAM and REAX packages do.
 
 (3) One exception to these rules is the lib/linalg directory, which is
 simply BLAS and LAPACK files used by the USER-ATC package (and
 possibly other packages in the future).  If you do not have these
 libraries on your system, you can use one of the Makefiles in this
 directory (which you may need to modify) to build a dummy BLAS and
 LAPACK library.  It can then be included in the
 lib/atc/Makefile.lammps file as part of the SYSPATH and SYSLIB lines
 so that LAMMPS will build properly with the USER-ATC package.
 
 Note that if the Makefile.lammps settings are not correct for your
 box, the LAMMPS build will likely fail.
 
 There are also a few packages, like KIM and USER-MOLFILE, that use
 additional auxiliary libraries which are not provided with LAMMPS.  In
 these cases, there is no corresponding sub-directory under the lib
 directory.  You are expected to download and install these libraries
 yourself before building LAMMPS with the package installed, if they
 are not already on your system.
 
 However, there is still a Makefile.lammps file with settings used when
 building LAMMPS with the package installed, as in (2) above.  It is
 found in the package directory itself, e.g. src/KIM/Makefile.lammps.
 This file contains the same 3 settings described above for SYSINC,
 SYSPATH, and SYSLIB.  The Makefile.lammps file contains instructions
 on how to specify these settings for your system.  You need to specify
 the settings before building LAMMPS with one of those packages
 installed, else the LAMMPS build will likely fail.
 
 :line
 
 2.4 Building LAMMPS via the Make.py script :h4,link(start_4)
 
 The src directory includes a Make.py script, written
 in Python, which can be used to automate various steps
 of the build process.
 
 You can run the script from the src directory by typing either:
 
 Make.py
 python Make.py :pre
 
 which will give you info about the tool.  For the former to work, you
 may need to edit the 1st line of the script to point to your local
 Python.  And you may need to insure the script is executable:
 
 chmod +x Make.py :pre
 
 The following options are supported as switches:
 
 -i file1 file2 ...
 -p package1 package2 ...
 -u package1 package2 ...
 -e package1 arg1 arg2 package2 ...
 -o dir
 -b machine
 -s suffix1 suffix2 ...
 -l dir
 -j N
 -h switch1 switch2 ... :ul
 
 Help on any switch can be listed by using -h, e.g.
 
 Make.py -h -i -p :pre
 
 At a high level, these are the kinds of package management
 and build tasks that can be performed easily, using
 the Make.py tool:
 
 install/uninstall packages and build the associated external libs (use -p and -u and -e)
 install packages needed for one or more input scripts (use -i and -p)
 build LAMMPS, either in the src dir or new dir (use -b)
 create a new dir with only the source code needed for one or more input scripts (use -i and -o) :ul
 
 The last bullet can be useful when you wish to build a stripped-down
 version of LAMMPS to run a specific script(s).  Or when you wish to
 move the minimal amount of files to another platform for a remote
 LAMMPS build.
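 
 For example, this hypothetical invocation would install the packages
 needed by the input script in.polymer and then build LAMMPS in the
 src dir; use "Make.py -h" to check the exact syntax of each switch:
 
 Make.py -i in.polymer -p -b g++ :pre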
 
 Note that using Make.py is not a substitute for insuring you have a
 valid src/MAKE/Makefile.foo for your system, or that external library
 Makefiles in any lib/* directories you use are also valid for your
 system.  But once you have done that, you can use Make.py to quickly
 include/exclude the packages and external libraries needed by your
 input scripts.
 
 :line
 
 2.5 Building LAMMPS as a library :h4,link(start_5)
 
 LAMMPS can be built as either a static or shared library, which can
 then be called from another application or a scripting language.  See
 "this section"_Section_howto.html#howto_10 for more info on coupling
 LAMMPS to other codes.  See "this section"_Section_python.html for
 more info on wrapping and running LAMMPS from Python.
 
 [Static library:] :h5
 
 To build LAMMPS as a static library (*.a file on Linux), type
 
 make makelib
 make -f Makefile.lib foo :pre
 
 where foo is the machine name.  This kind of library is typically used
 to statically link a driver application to LAMMPS, so that you can
 insure all dependencies are satisfied at compile time.  Note that
 inclusion or exclusion of any desired optional packages should be done
 before typing "make makelib".  The first "make" command will create a
 current Makefile.lib with all the file names in your src dir.  The
 second "make" command will use it to build LAMMPS as a static library,
 using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo.  The
 build will create the file liblmp_foo.a which another application can
 link to.
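 
 A hypothetical link line for such a driver application might look
 like this; you must also list whatever MPI, FFT, and other auxiliary
 libraries your LAMMPS build used:
 
 g++ driver.o -L/home/sjplimp/lammps/src -llmp_foo -o driver :pre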
 
 [Shared library:] :h5
 
 To build LAMMPS as a shared library (*.so file on Linux), which can be
 dynamically loaded, e.g. from Python, type
 
 make makeshlib
 make -f Makefile.shlib foo :pre
 
 where foo is the machine name.  This kind of library is required when
 wrapping LAMMPS with Python; see "Section_python"_Section_python.html
 for details.  Again, note that inclusion or exclusion of any desired
 optional packages should be done before typing "make makeshlib".  The
 first "make" command will create a current Makefile.shlib with all the
 file names in your src dir.  The second "make" command will use it to
 build LAMMPS as a shared library, using the SHFLAGS and SHLIBFLAGS
 settings in src/MAKE/Makefile.foo.  The build will create the file
-liblmp_foo.so which another application can link to dyamically, as
-well as a soft link liblmp.so, which the Python wrapper uses by
-default.
-
-[Additional requirements for building a shared library:] :h5
+liblmp_foo.so which another application can link to dynamically.  It
+will also create a soft link liblmp.so, which the Python wrapper uses
+by default.
 
 Note that for a shared library to be usable by a calling program, all
 the auxiliary libraries it depends on must also exist as shared
-libraries, and the operating system must be able to find them.  For
-LAMMPS, this includes all libraries needed by main LAMMPS (e.g. MPI or
-FFTW or JPEG), system libraries needed by main LAMMPS (e.g. extra libs
-needed by MPI), any packages you have installed that require libraries
-provided with LAMMPS (e.g. the USER-ATC package require
-lib/atc/libatc.so), and any system libraries (e.g. BLAS or
-Fortran-to-C libraries) listed in the lib/package/Makefile.lammps
-file.
-
-If one of these auxiliary libraries does not exist as a shared
-library, the second make command should generate a build error.  If a
-needed library is simply missing from the link list, this will not
-generate an error at build time, but will generate a run-time error
-when the library is loaded, so be sure all needed libraries are
-listed, just as they are when building LAMMPS as a stand-alone code.
-
-Note that if you install them yourself, some libraries, such as MPI,
-may not build by default as shared libraries.  The build instructions
-for the library should tell you how to do this.
+libraries.  This will be the case for libraries included with LAMMPS,
+such as the dummy MPI library in src/STUBS or any package libraries in
+lib/packages, since they are always built as shared libraries with the
+-fPIC switch.  However, if a library like MPI or FFTW does not exist
+as a shared library, the second make command will generate an error.
+This means you will need to install a shared library version of the
+package.  The build instructions for the library should tell you how
+to do this.
 
 As an example, here is how to build and install the "MPICH
 library"_mpich, a popular open-source version of MPI, distributed by
 Argonne National Labs, as a shared library in the default
 /usr/local/lib location:
 
 :link(mpich,http://www-unix.mcs.anl.gov/mpi)
 
 ./configure --enable-shared
 make
 make install :pre
 
 You may need to use "sudo make install" in place of the last line if
 you do not have write privileges for /usr/local/lib.  The end result
 should be the file /usr/local/lib/libmpich.so.
 
-Also note that not all of the auxiliary libraries provided with LAMMPS
-include Makefiles in their lib directories suitable for building them
-as shared libraries.  Typically this simply requires 3 steps: (a)
-adding a -fPIC switch when files are compiled, (b) adding "-fPIC
--shared" switches when the library is linked with a C++ (or Fortran)
-compiler, and (c) creating an output target that ends in ".so", like
-libatc.o.  As we or others create and contribute these Makefiles, we
-will add them to the LAMMPS distribution.
-
-[Additional requirements for using a shared library:] :h5
+[Additional requirement for using a shared library:] :h5
 
 The operating system finds shared libraries to load at run-time using
-the environment variable LD_LIBRARY_PATH.  So at a minimum you
-must set it to include the lammps src directory where the LAMMPS
-shared library file is created.
+the environment variable LD_LIBRARY_PATH.  So you may wish
+to copy the file src/liblmp.so or src/liblmp_g++.so (for example)
+to a place the system can find it by default, such as /usr/local/lib,
+or you may wish to add the lammps src directory to LD_LIBRARY_PATH.
 
 For the csh or tcsh shells, you could add something like this to your
 ~/.cshrc file:
 
 setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
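 
 For the bash shell, the equivalent line in your ~/.bashrc file would
 be:
 
 export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre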
 
-If any auxiliary libraries, used by LAMMPS, are not in default places
-where the operating system can find them, then you also have to add
-their paths to the LD_LIBRARY_PATH environment variable.
-
-For example, if you are using the dummy MPI library provided in
-src/STUBS, and have built the file libmpi_stubs.so, you would add
-something like this to your ~/.cshrc file:
-
-setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src/STUBS :pre
-
-If you are using the LAMMPS USER-ATC package, and have built the file
-lib/atc/libatc.so, you would add something like this to your ~/.cshrc
-file:
-
-setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/lib/atc :pre
-
 [Calling the LAMMPS library:] :h5
 
 Either flavor of library (static or shared) allows one or more LAMMPS
 objects to be instantiated from the calling program.
 
 When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
 namespace; you can safely use any of its classes and methods from
 within the calling code, as needed.
 
 When used from a C or Fortran program or a scripting language like
 Python, the library has a simple function-style interface, provided in
 src/library.cpp and src/library.h.
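 
 As a minimal sketch of how this works (assuming the function names
 and signatures found in src/library.h; check that file for the exact
 interface), a C driver might look like this:
 
 #include "mpi.h"
 #include "library.h"
 int main(int argc, char **argv)
 {
   void *lmp;                                /* handle to a LAMMPS instance */
   MPI_Init(&argc,&argv);
   lammps_open(0,NULL,MPI_COMM_WORLD,&lmp);  /* create LAMMPS on all procs */
   lammps_file(lmp,"in.lj");                 /* read and run an input script */
   lammps_command(lmp,"run 100");            /* issue one more command */
   lammps_close(lmp);                        /* destroy the instance */
   MPI_Finalize();
   return 0;
 } :pre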
 
 See the sample codes in examples/COUPLE/simple for examples of C++ and
 C and Fortran codes that invoke LAMMPS thru its library interface.
 There are other examples as well in the COUPLE directory which are
 discussed in "Section_howto 10"_Section_howto.html#howto_10 of the
 manual.  See "Section_python"_Section_python.html of the manual for a
 description of the Python wrapper provided with LAMMPS that operates
 through the LAMMPS library interface.
 
 The files src/library.cpp and library.h define the C-style API for
 using LAMMPS as a library.  See "Section_howto
 19"_Section_howto.html#howto_19 of the manual for a description of the
 interface and how to extend it for your needs.
 
 :line
 
 2.6 Running LAMMPS :h4,link(start_6)
 
 By default, LAMMPS runs by reading commands from stdin; e.g. lmp_linux
 < in.file.  This means you first create an input script (e.g. in.file)
 containing the desired commands.  "This section"_Section_commands.html
 describes how input scripts are structured and what commands they
 contain.
 
 You can test LAMMPS on any of the sample inputs provided in the
 examples or bench directory.  Input scripts are named in.* and sample
 outputs are named log.*.name.P where name is a machine and P is the
 number of processors it was run on.
 
 Here is how you might run a standard Lennard-Jones benchmark on a
 Linux box, using mpirun to launch a parallel job:
 
 cd src
 make linux
 cp lmp_linux ../bench
 cd ../bench
 mpirun -np 4 lmp_linux < in.lj :pre
 
 See "this page"_bench for timings for this and the other benchmarks
 on various platforms.
 
 :link(bench,http://lammps.sandia.gov/bench.html)
 
 :line
 
 On a Windows box, you can skip making LAMMPS and simply download an
 executable, as described above, though the pre-packaged executables
 include only certain packages.
 
 To run a LAMMPS executable on a Windows machine, first decide whether
 you want to download the non-MPI (serial) or the MPI (parallel)
 version of the executable. Download and save the version you have
 chosen.
 
 For the non-MPI version, follow these steps:
 
 Get a command prompt by going to Start->Run... , 
 then typing "cmd". :ulb,l
 
 Move to the directory where you have saved lmp_win_no-mpi.exe
 (e.g. by typing: cd "Documents"). :l
 
 At the command prompt, type "lmp_win_no-mpi -in in.lj", replacing in.lj
 with the name of your LAMMPS input script. :l,ule
 
 For the MPI version, which allows you to run LAMMPS under Windows on 
 multiple processors, follow these steps:
 
 Download and install 
 "MPICH2"_http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads
 for Windows. :ulb,l
 
 You'll need to use the mpiexec.exe and smpd.exe files from the MPICH2 package.  Put them in
 the same directory (or path) as the LAMMPS Windows executable. :l
 
 Get a command prompt by going to Start->Run... , 
 then typing "cmd". :l
 
 Move to the directory where you have saved lmp_win_mpi.exe
 (e.g. by typing: cd "Documents"). :l
 
 Then type something like this: "mpiexec -np 4 -localonly lmp_win_mpi -in in.lj", 
 replacing in.lj with the name of your LAMMPS input script. :l
 Note that you may need to provide smpd with a passphrase --- it doesn't matter what you 
 type. :l
 In this mode, output may not immediately show up on the screen, so 
 if your input script takes a long time to execute, you may need to be 
 patient before the output shows up. :l
 Alternatively, you can still use this executable to run on a single processor by
 typing something like: "lmp_win_mpi -in in.lj". :l,ule
 
 :line
 
 The screen output from LAMMPS is described in the next section.  As it
 runs, LAMMPS also writes a log.lammps file with the same information.
 
 Note that this sequence of commands copies the LAMMPS executable
 (lmp_linux) to the directory with the input files.  This may not be
 necessary, but some versions of MPI reset the working directory to
 where the executable is, rather than leaving it as the directory from
 which you launched mpirun.  If that happens, LAMMPS will look for additional input
 files and write its output files to the executable directory, rather
 than your working directory, which is probably not what you want.
 
 If LAMMPS encounters errors in the input script or while running a
 simulation it will print an ERROR message and stop or a WARNING
 message and continue.  See "Section_errors"_Section_errors.html for a
 discussion of the various kinds of errors LAMMPS can or can't detect,
 a list of all ERROR and WARNING messages, and what to do about them.
 
 LAMMPS can run a problem on any number of processors, including a
 single processor.  In theory you should get identical answers on any
 number of processors and on any machine.  In practice, numerical
 round-off can cause slight differences and eventual divergence of
 molecular dynamics phase space trajectories.
 
LAMMPS can run as large a problem as will fit in the physical memory
of one or more processors.  If you run out of memory, you must run on
more processors or set up a smaller problem.
 
 :line
 
 2.7 Command-line options :h4,link(start_7)
 
 At run time, LAMMPS recognizes several optional command-line switches
 which may be used in any order.  Either the full word or a one-or-two
 letter abbreviation can be used:
 
 -c or -cuda
 -e or -echo
 -i or -in
 -h or -help
 -l or -log
 -p or -partition
 -pl or -plog
 -ps or -pscreen
 -r or -reorder
 -sc or -screen
 -sf or -suffix
 -v or -var :ul
 
 For example, lmp_ibm might be launched as follows:
 
 mpirun -np 16 lmp_ibm -v f tmp.out -l my.log -sc none < in.alloy
 mpirun -np 16 lmp_ibm -var f tmp.out -log my.log -screen none < in.alloy :pre
 
 Here are the details on the options:
 
 -cuda on/off :pre
 
 Explicitly enable or disable CUDA support, as provided by the
 USER-CUDA package.  If LAMMPS is built with this package, as described
 above in "Section 2.3"_#start_3, then by default LAMMPS will run in
 CUDA mode.  If this switch is set to "off", then it will not, even if
 it was built with the USER-CUDA package, which means you can run
 standard LAMMPS or with the GPU package for testing or benchmarking
purposes.  The only reason to set the switch to "on" is to check
whether LAMMPS was built with the USER-CUDA package, since an error
will be generated if it was not.
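
For example, on a machine where LAMMPS was built with the USER-CUDA
package (the executable name lmp_linux and input in.lj are
placeholders), a run like this disables CUDA mode for benchmarking:

mpirun -np 4 lmp_linux -cuda off < in.lj :pre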
 
 -echo style :pre
 
 Set the style of command echoing.  The style can be {none} or {screen}
 or {log} or {both}.  Depending on the style, each command read from
 the input script will be echoed to the screen and/or logfile.  This
 can be useful to figure out which line of your script is causing an
 input error.  The default value is {log}.  The echo style can also be
 set by using the "echo"_echo.html command in the input script itself.
 
 -in file :pre
 
 Specify a file to use as an input script.  This is an optional switch
 when running LAMMPS in one-partition mode.  If it is not specified,
 LAMMPS reads its input script from stdin - e.g. lmp_linux < in.run.
 This is a required switch when running LAMMPS in multi-partition mode,
 since multiple processors cannot all read from stdin.
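
For example, a hypothetical 16-processor run split into 8
two-processor partitions must use -in rather than stdin redirection:

mpirun -np 16 lmp_linux -partition 8x2 -in in.run :pre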
 
 -help :pre
 
 Print a list of options compiled into this executable for each LAMMPS
 style (atom_style, fix, compute, pair_style, bond_style, etc).  This
 can help you know if the command you want to use was included via the
 appropriate package.  LAMMPS will print the info and immediately exit
 if this switch is used.
 
 -log file :pre
 
 Specify a log file for LAMMPS to write status information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 file log.lammps.  If this switch is used, LAMMPS writes to the
specified file.  In multi-partition mode, if the switch is not used, a
log.lammps file is created with high-level status information.  Each
partition also writes to a log.lammps.N file where N is the partition
ID.  If the switch is specified in multi-partition mode, the
high-level logfile is named "file" and each partition also logs
information to a file.N.  For both one-partition and multi-partition
mode, if the specified file is "none", then no log files are created.
Using a "log"_log.html command in the input script will override this
setting.  The -plog option (described below) overrides the name of
the partition log files file.N.
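
As a hypothetical multi-partition example, the following run writes
high-level status to replica.log and per-partition information to
files replica.log.N, one per partition:

mpirun -np 8 lmp_linux -partition 4x2 -log replica.log -in in.run :pre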
 
 -partition 8x2 4 5 ... :pre
 
 Invoke LAMMPS in multi-partition mode.  When LAMMPS is run on P
 processors and this switch is not used, LAMMPS runs in one partition,
 i.e. all P processors run a single simulation.  If this switch is
 used, the P processors are split into separate partitions and each
 partition runs its own simulation.  The arguments to the switch
 specify the number of processors in each partition.  Arguments of the
 form MxN mean M partitions, each with N processors.  Arguments of the
 form N mean a single partition with N processors.  The sum of
 processors in all partitions must equal P.  Thus the command
 "-partition 8x2 4 5" has 10 partitions and runs on a total of 25
 processors.
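
For instance, the 25-processor example above would be launched
something like this (executable and script names are placeholders);
note that multi-partition mode requires the -in switch:

mpirun -np 25 lmp_linux -partition 8x2 4 5 -in in.run :pre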
 
Running with multiple partitions can be useful for running
"multi-replica simulations"_Section_howto.html#howto_5, where each
replica runs on one or a few processors.  Note that with MPI
installed on a machine (e.g. your desktop), you can run on more
(virtual) processors than you have physical processors.
 
To run multiple independent simulations from one input script, using
multiple partitions, see "Section_howto 4"_Section_howto.html#howto_4
 of the manual.  World- and universe-style "variables"_variable.html
 are useful in this context.
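
As a minimal sketch of the world-style case (the temperature and
damping values here are arbitrary), an input-script fragment like
this gives each of 4 partitions its own thermostat temperature:

variable t world 250.0 300.0 350.0 400.0
fix 1 all nvt temp $t $t 0.5 :pre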
 
 -plog file :pre
  
 Specify the base name for the partition log files, so partition N
 writes log information to file.N. If file is none, then no partition
 log files are created.  This overrides the filename specified in the
 -log command-line option.  This option is useful when working with
 large numbers of partitions, allowing the partition log files to be
suppressed (-plog none) or placed in a sub-directory (-plog
replica_files/log.lammps).  If this option is not used, the log file
for partition N is log.lammps.N or whatever is specified by the -log
command-line option.
 
 -pscreen file :pre 
 
Specify the base name for the partition screen files, so partition N
 writes screen information to file.N. If file is none, then no
 partition screen files are created.  This overrides the filename
 specified in the -screen command-line option.  This option is useful
 when working with large numbers of partitions, allowing the partition
 screen files to be suppressed (-pscreen none) or placed in a
 sub-directory (-pscreen replica_files/screen).  If this option is not
 used the screen file for partition N is screen.N or whatever is
 specified by the -screen command-line option.
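
For example, a many-partition run might suppress all per-partition
screen files and collect the per-partition logs in a sub-directory
(which must already exist):

mpirun -np 32 lmp_linux -partition 32x1 -pscreen none -plog replica_files/log.lammps -in in.run :pre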
 
 -reorder nth N
 -reorder custom filename :pre
 
 Reorder the processors in the MPI communicator used to instantiate
 LAMMPS, in one of several ways.  The original MPI communicator ranks
 all P processors from 0 to P-1.  The mapping of these ranks to
physical processors is done by MPI before LAMMPS begins.  It may be
useful in some cases to alter the rank order, e.g. to ensure that
cores within each node are ranked in a desired order, or when using
the "run_style verlet/split"_run_style.html command with 2 partitions
to ensure that a specific Kspace processor (in the 2nd partition) is
matched up with a specific set of processors in the 1st partition.
 See the "Section_accelerate"_Section_accelerate.html doc pages for
 more details.
 
 If the keyword {nth} is used with a setting {N}, then it means every
 Nth processor will be moved to the end of the ranking.  This is useful
 when using the "run_style verlet/split"_run_style.html command with 2
 partitions via the -partition command-line switch.  The first set of
 processors will be in the first partition, the 2nd set in the 2nd
partition.  The -reorder command-line switch can alter this so that
the first N-1 procs in the 1st partition and one proc in the 2nd
partition will be ordered consecutively, e.g. as the cores on one
physical node.
 This can boost performance.  For example, if you use "-reorder nth 4"
 and "-partition 9 3" and you are running on 12 processors, the
 processors will be reordered from
 
 0 1 2 3 4 5 6 7 8 9 10 11 :pre
 
 to
 
 0 1 2 4 5 6 8 9 10 3 7 11 :pre
 
 so that the processors in each partition will be
 
 0 1 2 4 5 6 8 9 10 
 3 7 11 :pre
 
 See the "processors" command for how to insure processors from each
 partition could then be grouped optimally for quad-core nodes.
 
 If the keyword is {custom", then a file that specifies a permutation
 of the processor ranks is also specified.  The format of the reorder
 file is as follows.  Any number of initial blank or comment lines
 (starting with a "#" character) can be present.  These should be
 followed by P lines of the form:
 
 I J :pre
 
 where P is the number of processors LAMMPS was launched with.  Note
 that if running in multi-partition mode (see the -partition switch
 above) P is the total number of processors in all partitions.  The I
 and J values describe a permutation of the P processors.  Every I and
 J should be values from 0 to P-1 inclusive.  In the set of P I values,
 every proc ID should appear exactly once.  Ditto for the set of P J
 values.  A single I,J pairing means that the physical processor with
 rank I in the original MPI communicator will have rank J in the
 reordered communicator.
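
Here is a sketch of such a file for a hypothetical 4-processor run;
the permutation chosen is arbitrary and simply swaps ranks 1 and 2:

# example reorder file for P = 4
# I = original rank, J = new rank
0 0
1 2
2 1
3 3 :pre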
 
 Note that rank ordering can also be specified by many MPI
 implementations, either by environment variables that specify how to
 order physical processors, or by config files that specify what
physical processors to assign to each MPI rank.  The -reorder switch
simply gives you a portable way to do this without relying on MPI
itself.  See the "processors out"_processors.html command for how to
output info on the final assignment of physical processors to the
LAMMPS simulation domain.
 
 -screen file :pre
 
 Specify a file for LAMMPS to write its screen information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 screen.  If this switch is used, LAMMPS writes to the specified file
instead and you will see no screen output.  In multi-partition mode,
if the switch is not used, high-level status information is written
to the screen.  Each partition also writes to a screen.N file where N
is the partition ID.  If the switch is specified in multi-partition
mode, the high-level screen output is written to "file" and each
partition also writes screen information to a file.N.  For both
one-partition and multi-partition mode, if the specified file is
"none", then no screen output is performed.  The -pscreen option
(described above) overrides the name of the partition screen files
file.N.
 
 -suffix style :pre
 
 Use variants of various styles if they exist.  The specified style can
 be {opt}, {omp}, {gpu}, or {cuda}.  These refer to optional packages that
 LAMMPS can be built with, as described above in "Section
 2.3"_#start_3.  The "opt" style corrsponds to the OPT package, the
 "omp" style to the USER-OMP package, the "gpu" style to the GPU 
 package, and the "cuda" style to the USER-CUDA package.
 
 As an example, all of the packages provide a "pair_style
 lj/cut"_pair_lj.html variant, with style names lj/cut/opt, lj/cut/omp,
lj/cut/gpu, or lj/cut/cuda.  A variant style can be specified
 explicitly in your input script, e.g. pair_style lj/cut/gpu.  If the
 -suffix switch is used, you do not need to modify your input script.
 The specified suffix (opt,omp,gpu,cuda) is automatically appended
 whenever your input script command creates a new
 "atom"_atom_style.html, "pair"_pair_style.html, "fix"_fix.html,
 "compute"_compute.html, or "run"_run_style.html style.  If the variant
 version does not exist, the standard version is created.
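
For example, with a GPU-enabled executable (names here are
placeholders), this run uses pair_style lj/cut/gpu wherever the input
script simply says pair_style lj/cut:

mpirun -np 4 lmp_linux -sf gpu -in in.lj :pre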
 
 For the GPU package, using this command-line switch also invokes the
 default GPU settings, as if the command "package gpu force/neigh 0 0
 1" were used at the top of your input script.  These settings can be
 changed by using the "package gpu"_package.html command in your script
 if desired.
 
 For the OMP package, using this command-line switch also invokes the
 default OMP settings, as if the command "package omp *" were used at
 the top of your input script.  These settings can be changed by using
 the "package omp"_package.html command in your script if desired.
 
 The "suffix"_suffix.html command can also set a suffix and it can also
 turn off/on any suffix setting made via the command line.
 
 -var name value1 value2 ... :pre
 
 Specify a variable that will be defined for substitution purposes when
 the input script is read.  "Name" is the variable name which can be a
 single character (referenced as $x in the input script) or a full
 string (referenced as $\{abc\}).  An "index-style
 variable"_variable.html will be created and populated with the
 subsequent values, e.g. a set of filenames.  Using this command-line
 option is equivalent to putting the line "variable name index value1
 value2 ..."  at the beginning of the input script.  Defining an index
 variable as a command-line argument overrides any setting for the same
 index variable in the input script, since index variables cannot be
 re-defined.  See the "variable"_variable.html command for more info on
 defining index and other kinds of variables and "this
 section"_Section_commands.html#cmd_2 for more info on using variables
 in input scripts.
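
For example, this command line (the filenames are placeholders)
defines an index variable f with three values, exactly as if the
input script began with the line "variable f index data.1 data.2
data.3":

lmp_linux -var f data.1 data.2 data.3 < in.run :pre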
 
NOTE: Currently, the command-line parser looks for arguments that
start with "-" to indicate new switches.  Thus you cannot specify
multiple variable values if any of them start with a "-", e.g. a
negative numeric value.  It is OK if the first value1 starts with a
"-", since it is automatically skipped.
 
 :line
 
 2.8 LAMMPS screen output :h4,link(start_8)
 
 As LAMMPS reads an input script, it prints information to both the
 screen and a log file about significant actions it takes to setup a
 simulation.  When the simulation is ready to begin, LAMMPS performs
 various initializations and prints the amount of memory (in MBytes per
 processor) that the simulation requires.  It also prints details of
 the initial thermodynamic state of the system.  During the run itself,
 thermodynamic information is printed periodically, every few
 timesteps.  When the run concludes, LAMMPS prints the final
 thermodynamic state and a total run time for the simulation.  It then
 appends statistics about the CPU time and storage requirements for the
 simulation.  An example set of statistics is shown here:
 
 Loop time of 49.002 on 2 procs for 2004 atoms :pre
 
 Pair   time (%) = 35.0495 (71.5267)
 Bond   time (%) = 0.092046 (0.187841)
 Kspce  time (%) = 6.42073 (13.103)
 Neigh  time (%) = 2.73485 (5.5811)
 Comm   time (%) = 1.50291 (3.06703)
 Outpt  time (%) = 0.013799 (0.0281601)
 Other  time (%) = 2.13669 (4.36041) :pre
 
 Nlocal:    1002 ave, 1015 max, 989 min
 Histogram: 1 0 0 0 0 0 0 0 0 1 
 Nghost:    8720 ave, 8724 max, 8716 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1
 Neighs:    354141 ave, 361422 max, 346860 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1 :pre
 
 Total # of neighbors = 708282
 Ave neighs/atom = 353.434
 Ave special neighs/atom = 2.34032
 Number of reneighborings = 42
 Dangerous reneighborings = 2 :pre
 
 The first section gives the breakdown of the CPU run time (in seconds)
 into major categories.  The second section lists the number of owned
 atoms (Nlocal), ghost atoms (Nghost), and pair-wise neighbors stored
 per processor.  The max and min values give the spread of these values
 across processors with a 10-bin histogram showing the distribution.
 The total number of histogram counts is equal to the number of
 processors.
 
 The last section gives aggregate statistics for pair-wise neighbors
 and special neighbors that LAMMPS keeps track of (see the
 "special_bonds"_special_bonds.html command).  The number of times
 neighbor lists were rebuilt during the run is given as well as the
 number of potentially "dangerous" rebuilds.  If atom movement
 triggered neighbor list rebuilding (see the
 "neigh_modify"_neigh_modify.html command), then dangerous
 reneighborings are those that were triggered on the first timestep
atom movement was checked for.  If this count is non-zero you may
wish to reduce the delay factor to ensure that no force interactions
are missed by atoms moving beyond the neighbor skin distance before a
rebuild takes place.
 
 If an energy minimization was performed via the
 "minimize"_minimize.html command, additional information is printed,
 e.g.
 
 Minimization stats:
   E initial, next-to-last, final = -0.895962 -2.94193 -2.94342
   Gradient 2-norm init/final= 1920.78 20.9992
   Gradient inf-norm init/final= 304.283 9.61216
   Iterations = 36
   Force evaluations = 177 :pre
 
 The first line lists the initial and final energy, as well as the
 energy on the next-to-last iteration.  The next 2 lines give a measure
 of the gradient of the energy (force on all atoms).  The 2-norm is the
 "length" of this force vector; the inf-norm is the largest component.
 The last 2 lines are statistics on how many iterations and
 force-evaluations the minimizer required.  Multiple force evaluations
 are typically done at each iteration to perform a 1d line minimization
 in the search direction.
 
 If a "kspace_style"_kspace_style.html long-range Coulombics solve was
 performed during the run (PPPM, Ewald), then additional information is
 printed, e.g.
 
 FFT time (% of Kspce) = 0.200313 (8.34477)
 FFT Gflps 3d 1d-only = 2.31074 9.19989 :pre
 
 The first line gives the time spent doing 3d FFTs (4 per timestep) and
 the fraction it represents of the total KSpace time (listed above).
 Each 3d FFT requires computation (3 sets of 1d FFTs) and communication
(transposes).  The total flops performed is 5N log_2(N), where N is the
 number of points in the 3d grid.  The FFTs are timed with and without
 the communication and a Gflop rate is computed.  The 3d rate is with
 communication; the 1d rate is without (just the 1d FFTs).  Thus you
 can estimate what fraction of your FFT time was spent in
 communication, roughly 75% in the example above.
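
To spell out that estimate: for a fixed flop count, time is inversely
proportional to the Gflop rate, so the communication fraction follows
from the two rates above:

comm fraction = 1 - (3d rate / 1d rate) = 1 - 2.31074/9.19989 = 0.75 :pre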
 
 :line
 
 2.9 Tips for users of previous LAMMPS versions :h4,link(start_9)
 
The current C++ version began with a complete rewrite of LAMMPS 2001,
which was written in F90.  Features of earlier versions of LAMMPS are
listed in "Section_history"_Section_history.html.  The F90 and F77 versions
 (2001 and 99) are also freely distributed as open-source codes; check
 the "LAMMPS WWW Site"_lws for distribution information if you prefer
 those versions.  The 99 and 2001 versions are no longer under active
 development; they do not have all the features of C++ LAMMPS.
 
 If you are a previous user of LAMMPS 2001, these are the most
 significant changes you will notice in C++ LAMMPS:
 
 (1) The names and arguments of many input script commands have
 changed.  All commands are now a single word (e.g. read_data instead
 of read data).
 
 (2) All the functionality of LAMMPS 2001 is included in C++ LAMMPS,
 but you may need to specify the relevant commands in different ways.
 
 (3) The format of the data file can be streamlined for some problems.
 See the "read_data"_read_data.html command for details.  The data file
 section "Nonbond Coeff" has been renamed to "Pair Coeff" in C++ LAMMPS.
 
 (4) Binary restart files written by LAMMPS 2001 cannot be read by C++
 LAMMPS with a "read_restart"_read_restart.html command.  This is
 because they were output by F90 which writes in a different binary
 format than C or C++ writes or reads.  Use the {restart2data} tool
 provided with LAMMPS 2001 to convert the 2001 restart file to a text
 data file.  Then edit the data file as necessary before using the C++
 LAMMPS "read_data"_read_data.html command to read it in.
 
(5) There are numerous small numerical changes in C++ LAMMPS that mean
you will not get identical answers when comparing to a 2001 run.
However, your initial thermodynamic energy and MD trajectory should be
close if you have set up the problem the same way for both codes.