diff --git a/doc/Section_start.html b/doc/Section_start.html
index 17f58e376..03fb6baa2 100644
--- a/doc/Section_start.html
+++ b/doc/Section_start.html
@@ -1,1097 +1,1137 @@
 <HTML>
 <CENTER><A HREF = "Section_intro.html">Previous Section</A> - <A HREF = "http://lammps.sandia.gov">LAMMPS WWW Site</A> - <A HREF = "Manual.html">LAMMPS Documentation</A> - <A HREF = "Section_commands.html#comm">LAMMPS Commands</A> - <A HREF = "Section_commands.html">Next Section</A> 
 </CENTER>
 
 
 
 
 
 
 <HR>
 
 <H3>2. Getting Started 
 </H3>
 <P>This section describes how to build and run LAMMPS, for both new and
 experienced users.
 </P>
 2.1 <A HREF = "#2_1">What's in the LAMMPS distribution</A><BR>
 2.2 <A HREF = "#2_2">Making LAMMPS</A><BR>
 2.3 <A HREF = "#2_3">Making LAMMPS with optional packages</A><BR>
 2.4 <A HREF = "#2_4">Building LAMMPS as a library</A><BR>
 2.5 <A HREF = "#2_5">Running LAMMPS</A><BR>
 2.6 <A HREF = "#2_6">Command-line options</A><BR>
 2.7 <A HREF = "#2_7">Screen output</A><BR>
 2.8 <A HREF = "#2_8">Tips for users of previous versions</A> <BR>
 
 <HR>
 
 <H4><A NAME = "2_1"></A>2.1 What's in the LAMMPS distribution 
 </H4>
 <P>When you download LAMMPS you will need to unzip and untar the
 downloaded file with the following commands, after placing the file in
 an appropriate directory.
 </P>
 <PRE>gunzip lammps*.tar.gz 
 tar xvf lammps*.tar 
 </PRE>
 <P>This will create a LAMMPS directory containing two files and several
 sub-directories:
 </P>
-<DIV ALIGN=center><TABLE  BORDER=1 >
+<DIV ALIGN=center><TABLE  WIDTH="0%"  BORDER=1 >
 <TR><TD >README</TD><TD > text file</TD></TR>
 <TR><TD >LICENSE</TD><TD > the GNU General Public License (GPL)</TD></TR>
 <TR><TD >bench</TD><TD > benchmark problems</TD></TR>
 <TR><TD >couple</TD><TD > code coupling examples, using LAMMPS as a library</TD></TR>
 <TR><TD >doc</TD><TD > documentation</TD></TR>
 <TR><TD >examples</TD><TD > simple test problems</TD></TR>
 <TR><TD >potentials</TD><TD > embedded atom method (EAM) potential files</TD></TR>
 <TR><TD >src</TD><TD > source files</TD></TR>
 <TR><TD >tools</TD><TD > pre- and post-processing tools 
 </TD></TR></TABLE></DIV>
 
 <P>If you download one of the Windows executables from the download page,
 then you just get a single file:
 </P>
 <PRE>lmp_windows.exe 
 </PRE>
 <P>Skip to the <A HREF = "#2_5">Running LAMMPS</A> section for info on how to launch
 these executables on a Windows box.
 </P>
 <P>The Windows executables for serial or parallel only include certain
 packages and bug-fixes/upgrades listed on <A HREF = "http://lammps.sandia.gov/bug.html">this
 page</A> up to a certain date, as
 stated on the download page.  If you want something with more packages
 or that is more current, you'll have to download the source tarball
 and build it yourself from source code using Microsoft Visual Studio,
 as described in the next section.
 </P>
 <HR>
 
 <H4><A NAME = "2_2"></A>2.2 Making LAMMPS 
 </H4>
 <P>This section has the following sub-sections:
 </P>
 <UL><LI><A HREF = "#2_2_1">Read this first</A>
 <LI><A HREF = "#2_2_2">Building a LAMMPS executable</A>
 <LI><A HREF = "#2_2_3">Common errors that can occur when making LAMMPS</A>
 <LI><A HREF = "#2_2_4">Editing a new low-level Makefile</A>
 <LI><A HREF = "#2_2_5">Additional build tips</A>
 <LI><A HREF = "#2_2_6">Building for a Mac</A>
 <LI><A HREF = "#2_2_7">Building for Windows</A> 
 </UL>
 <HR>
 
 <A NAME = "2_2_1"></A><B><I>Read this first:</I></B> 
 
 <P>Building LAMMPS can be non-trivial.  You will likely need to edit a
 makefile, choose compiler options, and possibly link to additional
 libraries (MPI, FFT, JPEG).  Please read this section carefully.  If
 you are not comfortable with makefiles, or building codes on a Unix
 platform, or running an MPI job on your machine, please find a local
 expert to help you.  Many compiling, linking, and run problems that
 users encounter are not really LAMMPS issues - they are peculiar to
 the user's system, compilers, libraries, etc.  Such questions are
 better answered by a local expert.
 </P>
 <P>If you have a build problem that you are convinced is a LAMMPS issue
 (e.g. the compiler complains about a line of LAMMPS source code), then
 please send an email to the
 <A HREF = "http://lammps.sandia.gov/authors.html">developers</A>.
 </P>
 <P>If you succeed in building LAMMPS on a new kind of machine, for which
 there isn't a similar Makefile in the src/MAKE directory, send it
 to the developers and we'll include it in future LAMMPS releases.
 </P>
 <HR>
 
 <A NAME = "2_2_2"></A><B><I>Building a LAMMPS executable:</I></B> 
 
 <P>The src directory contains the C++ source and header files for LAMMPS.
 It also contains a top-level Makefile and a MAKE sub-directory with
 low-level Makefile.* files for several machines.  From within the src
 directory, type "make" or "gmake".  You should see a list of available
 choices.  If one of those is the machine and options you want, you can
 type a command like:
 </P>
 <PRE>make linux
 gmake mac 
 </PRE>
 <P>Note that on a multi-processor or multi-core platform you can launch a
 parallel make, by using the "-j" switch with the make command, which
 will build LAMMPS more quickly.
 </P>
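 <P>For example, on a machine with 4 cores you might type something like
 the following; the core count after "-j" is just an illustration and
 should match your machine:
 </P>
 <PRE>make -j 4 linux 
 </PRE>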
 <P>If you get no errors and an executable like lmp_linux or lmp_mac is
 produced, you're done; it's your lucky day.
 </P>
 <HR>
 
 <A NAME = "2_2_3"></A><B><I>Common errors that can occur when making LAMMPS:</I></B> 
 
 <P>(1) If the make command breaks immediately with errors that indicate
 it can't find files with a "*" in their names, this can be because
 your machine's make doesn't support wildcard expansion in a makefile.
 Try gmake instead of make.  If that doesn't work, try using a -f
 switch with your make command to use Makefile.list which explicitly
 lists all the needed files, e.g.
 </P>
 <PRE>make makelist
 make -f Makefile.list linux
 gmake -f Makefile.list mac 
 </PRE>
 <P>The first "make" command will create a current Makefile.list with all
 the file names in your src dir.  The 2nd "make" command (make or
 gmake) will use it to build LAMMPS.
 </P>
 <P>(2) Other errors typically occur because the low-level Makefile isn't
 setup correctly for your machine.  If your platform is named "foo",
 you will need to create a Makefile.foo in the MAKE sub-directory.  Use
 whatever existing file is closest to your platform as a starting
 point.  See the next section for more instructions.
 </P>
 <P>(3) If you get a link-time error about missing libraries or missing
 dependencies, then it can be because:
 </P>
 <UL><LI>you are including a package that needs an extra library, but have not pre-built the necessary <A HREF = "#2_3_3">package library</A>
 <LI>you are linking to a library that doesn't exist on your system
 <LI>you are not linking to the necessary system library 
 </UL>
 <P>The first issue is discussed below.  The other two issues mean you need
 to edit your low-level Makefile.foo, as discussed in the next
 sub-section.
 </P>
 <HR>
 
 <A NAME = "2_2_4"></A><B><I>Editing a new low-level Makefile.foo:</I></B> 
 
 <P>These are the issues you need to address when editing a low-level
 Makefile for your machine.  The portions of the file you typically
 need to edit are the first line, the "compiler/linker settings"
 section, and the "system-specific settings" section.
 </P>
 <P>(1) Change the first line of Makefile.foo to list the word "foo" after
 the "#", and whatever other options you set.  This is the line you
 will see if you just type "make".
 </P>
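 <P>For example, the first line of a new Makefile.foo might look like the
 following; the text after the "=" is free-form and just describes your
 machine and build options:
 </P>
 <PRE># foo = my Linux workstation, g++, MPICH2, FFTW 
 </PRE>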
 <P>(2) The "compiler/linker settings" section lists compiler and linker
 settings for your C++ compiler, including optimization flags.  You can
 use g++, the open-source GNU compiler, which is available on all Unix
 systems.  You can also use mpicc which will typically be available if
 MPI is installed on your system, though you should check which actual
 compiler it wraps.  Vendor compilers often produce faster code.  On
 boxes with Intel CPUs, we suggest using the free Intel icc compiler,
 which you can download from <A HREF = "http://www.intel.com/software/products/noncom">Intel's compiler site</A>.
 </P>
 
 
 <P>If building a C++ code on your machine requires additional libraries,
 then you should list them as part of the LIB variable.
 </P>
 <P>The DEPFLAGS setting is what triggers the C++ compiler to create a
 dependency list for a source file.  This speeds re-compilation when
 source (*.cpp) or header (*.h) files are edited.  Some compilers do
 not support dependency file creation, or may use a different switch
 than -M.  GNU g++ works with -M.  If your compiler can't create
 dependency files (a long list of errors involving *.d files), then
 you'll need to create a Makefile.foo patterned after Makefile.storm,
 which uses different rules that do not involve dependency files.
 </P>
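 <P>As a rough sketch, the compiler/linker settings in a g++-based
 Makefile.foo might look like the following; the exact variable names
 and flags should be taken from whichever file in src/MAKE you copied:
 </P>
 <PRE>CC =        g++
 CCFLAGS =   -O2
 DEPFLAGS =  -M
 LINK =      g++
 LINKFLAGS = -O2
 LIB = 
 </PRE>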
 <P>(3) The "system-specific settings" section has 5 parts.
 </P>
 <P>(3.a) The LMP_INC variable is used to include options that turn on
 system-dependent ifdefs within the LAMMPS code.  The settings
 that are currently recognized are:
 </P>
 <UL><LI>-DLAMMPS_GZIP
 <LI>-DPACK_ARRAY
 <LI>-DPACK_POINTER
 <LI>-DPACK_MEMCPY
 <LI>-DLAMMPS_XDR
 <LI>-DLAMMPS_JPEG 
 </UL>
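 <P>In Makefile.foo these switches are listed in the LMP_INC variable; for
 example, a build with gzipped file support and JPEG output might use:
 </P>
 <PRE>LMP_INC = -DLAMMPS_GZIP -DLAMMPS_JPEG 
 </PRE>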
 <P>The read_data and dump commands will read/write gzipped files if you
 compile with -DLAMMPS_GZIP.  It requires that your Unix system support
 the "popen" function.
 </P>
 <P>Using one of the -DPACK_ARRAY, -DPACK_POINTER, and -DPACK_MEMCPY
 options can make for faster parallel FFTs (in the PPPM solver) on some
 platforms.  The -DPACK_ARRAY setting is the default.  See the
 <A HREF = "kspace_style.html">kspace_style</A> command for info about PPPM.  See
 section (3.c) below for info about building LAMMPS with an FFT
 library.
 </P>
 <P>If you use -DLAMMPS_XDR, the build will include XDR compatibility
 files for doing particle dumps in XTC format.  This is only necessary
 if your platform does not have its own XDR files available.  See the
 Restrictions section of the <A HREF = "dump.html">dump</A> command for details.
 </P>
 <P>If you use -DLAMMPS_JPEG, the <A HREF = "dump.html">dump image</A> command will be
 able to write out JPEG image files.  If not, it will only be able to
 write out text-based PPM image files.  For JPEG files, you must also
 link LAMMPS with a JPEG library.  See section (3.d) below for more
 details on this.
 </P>
 <P>(3.b) The 3 MPI variables are used to specify an MPI library to build
 LAMMPS with.
 </P>
 <P>If you want LAMMPS to run in parallel, you must have an MPI library
 installed on your platform.  If you use an MPI-wrapped compiler, such
 as "mpicc" to build LAMMPS, you can probably leave these 3 variables
 blank.  If you do not use "mpicc" as your compiler/linker, then you
 need to specify where the mpi.h file (MPI_INC) and the MPI library
 (MPI_PATH) are found, and the name of the library (MPI_LIB).
 </P>
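 <P>For example, for an MPICH installation under /usr/local, the 3 MPI
 variables might be set as follows; the paths and library name are
 placeholders that depend on where and how MPI is installed on your
 machine:
 </P>
 <PRE>MPI_INC =  -I/usr/local/include
 MPI_PATH = -L/usr/local/lib
 MPI_LIB =  -lmpich 
 </PRE>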
 <P>If you are installing MPI yourself, we recommend Argonne's MPICH 1.2
 or 2.0 or OpenMPI.  MPICH can be downloaded from the <A HREF = "http://www-unix.mcs.anl.gov/mpi">Argonne MPI
 site</A>.  OpenMPI can be downloaded from the
 <A HREF = "http://www.open-mpi.org">OpenMPI site</A>.  LAM MPI should also work.  If
 you are running on a big parallel platform, your system people or the
 vendor should have already installed a version of MPI, which will be
 faster than MPICH or OpenMPI or LAM, so find out how to build and link
 with it.  If you use MPICH or OpenMPI or LAM, you will have to
 configure and build it for your platform.  The MPI configure script
 should have compiler options to enable you to use the same compiler
 you are using for the LAMMPS build, which can avoid problems that can
 arise when linking LAMMPS to the MPI library.
 </P>
 <P>If you just want LAMMPS to run on a single processor, you can use the
 STUBS library in place of MPI, since you don't need a true MPI library
 installed on your system.  See the Makefile.serial file for how to
 specify the 3 MPI variables.  You will also need to build the STUBS
 library for your platform before making LAMMPS itself.  From the STUBS
 dir, type "make" and it will hopefully create a libmpi.a suitable for
 linking to LAMMPS.  If this build fails, you will need to edit the
 STUBS/Makefile for your platform.
 </P>
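 <P>For example, a serial build using the STUBS library might look like
 the following, assuming the bundled Makefile.serial is appropriate for
 your machine:
 </P>
 <PRE>cd STUBS
 make
 cd ..
 make serial 
 </PRE>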
 <P>The file STUBS/mpi.cpp has a CPU timer function MPI_Wtime() that calls
 gettimeofday().  If your system doesn't support gettimeofday(),
 you'll need to insert code to call another timer.  Note that the
 ANSI-standard function clock() rolls over after an hour or so, and is
 therefore insufficient for timing long LAMMPS simulations.
 </P>
 <P>(3.c) The 3 FFT variables are used to specify an FFT library which
 LAMMPS uses when using the particle-particle particle-mesh (PPPM)
 option in LAMMPS for long-range Coulombics via the
 <A HREF = "kspace_style.html">kspace_style</A> command.
 </P>
 <P>To use this option, you must have a 1d FFT library installed on your
 platform.  This is specified by a switch of the form -DFFT_XXX where
 XXX = INTEL, DEC, SGI, SCSL, or FFTW.  All but the last one are native
 vendor-provided libraries.  FFTW is a fast, portable library that
 should work on any platform.  You can download it from
 <A HREF = "http://www.fftw.org">www.fftw.org</A>.  Use version 2.1.X, not the newer
 3.0.X.  Building FFTW for your box should be as simple as ./configure;
 make.  Whichever FFT library you have on your platform, you'll need to
 set the appropriate FFT_INC, FFT_PATH, and FFT_LIB variables in
 Makefile.foo, so the compiler and linker can find it.
 </P>
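 <P>For example, for FFTW 2.1.X installed under /usr/local, the 3 FFT
 variables might be set as follows; the paths are placeholders for
 wherever FFTW is installed on your machine:
 </P>
 <PRE>FFT_INC =  -DFFT_FFTW -I/usr/local/include
 FFT_PATH = -L/usr/local/lib
 FFT_LIB =  -lfftw 
 </PRE>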
 <P>If you examine src/fft3d.c and src/fft3d.h you'll see it's possible to
 add other vendor FFT libraries via #ifdef statements in the
 appropriate places.  If you successfully add a new FFT option, like
 -DFFT_IBM, please send the LAMMPS developers an email; we'd like to
 add it to LAMMPS.
 </P>
 <P>If you don't plan to use PPPM, you don't need an FFT library.  In this
 case you can set FFT_INC to -DFFT_NONE and leave the other 2 FFT
 variables blank.  Or you can exclude the KSPACE package when you build
 LAMMPS (see below).
 </P>
 <P>(3.d) The 3 JPG variables are used to specify a JPEG library which
 LAMMPS uses when writing a JPEG file via the <A HREF = "dump_image.html">dump
 image</A> command.  These can be left blank if you are
 not using the -DLAMMPS_JPEG switch discussed above in section (3.a).
 </P>
 <P>A standard JPEG library usually goes by the name libjpeg.a and has an
 associated header file jpeglib.h.  Whichever JPEG library you have on
 your platform, you'll need to set the appropriate JPG_INC, JPG_PATH,
 and JPG_LIB variables in Makefile.foo so that the compiler and linker
 can find it.
 </P>
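 <P>For example, for a libjpeg installation under /usr/local, the 3 JPG
 variables might be set as follows; again, the paths are placeholders:
 </P>
 <PRE>JPG_INC =  -I/usr/local/include
 JPG_PATH = -L/usr/local/lib
 JPG_LIB =  -ljpeg 
 </PRE>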
 <P>(3.e) The various SYSLIB and SYSPATH variables can be ignored unless
 you are building LAMMPS with one or more of the LAMMPS packages that
 require these extra system libraries.  The names of these packages are
 the prefixes on the SYSLIB and SYSPATH variables.  See the <A HREF = "#2_3_4">section
 below</A> for more details.  The SYSLIB variables list the system
 libraries.  The SYSPATH variables are where they are located on your
 machine, which typically only needs to be specified if they are in
 some non-standard place that is not in your library search path.
 </P>
 <P>That's it.  Once you have a correct Makefile.foo and you have
 pre-built any other libraries it will use (e.g. MPI, FFT, package
 libraries), all you need to do from the src directory is type one of
 these 2 commands:
 </P>
 <PRE>make foo
 gmake foo 
 </PRE>
 <P>You should get the executable lmp_foo when the build is complete.
 </P>
 <HR>
 
 <A NAME = "2_2_5"></A><B><I>Additional build tips:</I></B> 
 
 <P>(1) Building LAMMPS for multiple platforms.
 </P>
 <P>You can make LAMMPS for multiple platforms from the same src
 directory.  Each target creates its own object sub-directory called
 Obj_name where it stores the system-specific *.o files.
 </P>
 <P>(2) Cleaning up.
 </P>
 <P>Typing "make clean-all" or "make clean-foo" will delete *.o object
 files created when LAMMPS is built, for either all builds or for a
 particular machine.
 </P>
 <HR>
 
 <A NAME = "2_2_6"></A><B><I>Building for a Mac:</I></B> 
 
 <P>OS X is BSD Unix, so it should just work.  See the Makefile.mac file.
 </P>
 <HR>
 
 <A NAME = "2_2_7"></A><B><I>Building for Windows:</I></B> 
 
 <P>The LAMMPS download page has an option to download both a serial and
 parallel pre-built Windows executable.  See the <A HREF = "#2_5">Running LAMMPS</A>
 section for instructions for running these executables on a Windows
 box.
 </P>
 <P>If the pre-built executable doesn't have the options you want, then
 you can build LAMMPS from its source files on a Windows box.  One way
 to do this is to install and use Cygwin to build LAMMPS with a standard
 Linux make, just as you would on any Linux box; see
 src/MAKE/Makefile.cygwin.
 </P>
 <P>There is also a src/WINDOWS directory that contains project files
 for Microsoft Visual Studio 2005, which should also work with later
 versions of VS.  That directory contains a README.txt file which
 provides instructions for building LAMMPS from source code using
 Visual Studio that are hopefully easy to follow for Windows and VS
 users.
 </P>
 <P>Four VS project options are provided.  The first includes the default
 packages (MANYBODY, MOLECULE, and KSPACE).  The second includes all
 standard packages (except GPU, MEAM, and REAX which are not yet
 included because they require NVIDIA or Fortran compilation).  The
 third includes all standard packages (with the exceptions) and some
 user packages.  The included user packages are USER-EFF, USER-CG-CMM,
 and USER-REAXC.  The fourth project includes the USER-AWPMD package.
 </P>
 <P>Changing the size limits in src/lmptype.h
 </P>
 <P>If you are running a very large problem (billions of atoms or more)
 and get a run-time error about the system being too big, either on a
 per-processor basis or in total size, then you may need to change one
 or more settings in src/lmptype.h and re-compile LAMMPS.
 </P>
 <P>As the documentation in that file explains, you have basically
 two choices to make:
 </P>
 <UL><LI>set the data type size of integer atom IDs to 4 or 8 bytes
 <LI>set the data type size of integers that store the total system size to 4 or 8 bytes 
 </UL>
 <P>The default for atom IDs is 4-byte integers since there is a memory
 and communication cost for 8-byte integers.  Non-molecular problems do
 not need atom IDs so this does not restrict their size.  Molecular
 problems (which use IDs to define molecular topology), are limited to
 about 2 billion atoms (2^31) with 4-byte IDs.  With 8-byte IDs they
 are effectively unlimited in size (2^63).
 </P>
 <P>The default for total system size quantities (like the number of atoms
 or timesteps) is 8-byte integers, which is effectively
 unlimited in size (2^63).  If your system does not support 8-byte
 integers, an error will be generated, and you will need to set
 "bigint" to 4-byte integers.  This restricts your total system size to
 about 2 billion atoms or timesteps (2^31).
 </P>
 <P>Note that in src/lmptype.h there are also settings for the MPI data
 types associated with the integers that store atom IDs and total
 system sizes, which need to be set consistent with the associated C
 data types.
 </P>
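 <P>As a hypothetical illustration of the two choices, a src/lmptype.h
 configured for 8-byte atom IDs and 8-byte system sizes would contain
 settings similar to these; consult the comments in that file for the
 exact names and alternatives used by your version:
 </P>
 <PRE>typedef int64_t tagint;               // 8-byte atom IDs (int for 4-byte)
 #define MPI_LMP_TAGINT MPI_LONG_LONG  // matching MPI data type
 typedef int64_t bigint;               // 8-byte total system sizes
 #define MPI_LMP_BIGINT MPI_LONG_LONG  // matching MPI data type 
 </PRE>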
 <P>In all cases, the size of problem that can be run on a per-processor
 basis is limited by 4-byte integer storage to about 2 billion atoms
 per processor (2^31), which should not normally be a restriction since
 such a problem would have a huge per-processor memory footprint due to
 neighbor lists and would run very slowly in terms of CPU
 secs/timestep.
 </P>
 <HR>
 
 <H4><A NAME = "2_3"></A>2.3 Making LAMMPS with optional packages 
 </H4>
 <P>This section has the following sub-sections:
 </P>
 <UL><LI><A HREF = "#2_3_1">Package basics</A>
 <LI><A HREF = "#2_3_2">Including/excluding packages</A>
 <LI><A HREF = "#2_3_3">Packages that require extra LAMMPS libraries</A>
 <LI><A HREF = "#2_3_4">Additional Makefile settings for extra libraries</A> 
 </UL>
 <HR>
 
 <A NAME = "2_3_1"></A><B><I>Package basics:</I></B> 
 
 <P>The source code for LAMMPS is structured as a large set of core files
 which are always included, plus optional packages.  Packages are
 groups of files that enable a specific set of features.  For example,
 force fields for molecular systems or granular systems are in
 packages.  You can see the list of all packages by typing "make
 package".
 </P>
 <P>The current list of standard packages is as follows:
 </P>
-<DIV ALIGN=center><TABLE  BORDER=1 >
+<DIV ALIGN=center><TABLE  WIDTH="0%"  BORDER=1 >
 <TR><TD >asphere </TD><TD > aspherical particles and force fields</TD></TR>
 <TR><TD >class2 </TD><TD > class 2 force fields</TD></TR>
 <TR><TD >colloid </TD><TD > colloidal particle force fields</TD></TR>
 <TR><TD >dipole </TD><TD > point dipole particles and force fields</TD></TR>
 <TR><TD >dsmc </TD><TD > Direct Simulation Monte Carlo (DSMC) pair style</TD></TR>
 <TR><TD >gpu </TD><TD > GPU-enabled force field styles</TD></TR>
 <TR><TD >granular </TD><TD > force fields and boundary conditions for granular systems</TD></TR>
 <TR><TD >kspace </TD><TD > long-range Ewald and particle-mesh (PPPM) solvers</TD></TR>
 <TR><TD >manybody </TD><TD > metal, 3-body, bond-order potentials</TD></TR>
 <TR><TD >meam </TD><TD > modified embedded atom method (MEAM) potential</TD></TR>
 <TR><TD >molecule </TD><TD > force fields for molecular systems</TD></TR>
 <TR><TD >opt </TD><TD > optimized versions of a few pair potentials</TD></TR>
 <TR><TD >peri </TD><TD > Peridynamics model and potential</TD></TR>
 <TR><TD >poems </TD><TD > coupled rigid body motion</TD></TR>
 <TR><TD >reax </TD><TD > ReaxFF potential</TD></TR>
 <TR><TD >replica </TD><TD > multi-replica methods</TD></TR>
 <TR><TD >shock </TD><TD > methods for MD simulations of shock loading</TD></TR>
 <TR><TD >srd </TD><TD > stochastic rotation dynamics (SRD)</TD></TR>
 <TR><TD >xtc </TD><TD > dump atom snapshots in XTC format 
 </TD></TR></TABLE></DIV>
 
 <P>There are also user-contributed packages, which may be as simple as a
 single additional file or as complex as a group of many files, and
 which add a specific functionality to the code.
 </P>
 <P>The difference between a <I>standard</I> package and a <I>user</I> package is
 as follows.
 </P>
 <P>Standard packages are supported by the LAMMPS developers and are
 written in a syntax and style consistent with the rest of LAMMPS.
 This means we will answer questions about them, debug and fix them if
 necessary, and keep them compatible with future changes to LAMMPS.
 </P>
 <P>User packages don't necessarily meet these requirements.  If you have
 problems using a feature provided in a user package, you will likely
 need to contact the contributor directly to get help.  Information on
 how to submit additions you make to LAMMPS as a user-contributed
 package is given in <A HREF = "Section_modify.html#package">this section</A> of the
 documentation.
 </P>
 <HR>
 
 <A NAME = "2_3_2"></A><B><I>Including/excluding packages:</I></B> 
 
 <P>To use or not use a package you must include or exclude it before
 LAMMPS is built.
 </P>
 <P>Some packages have individual files that depend on other packages
 being included, but LAMMPS checks for this and does the right thing.
 I.e. individual files are only included if their dependencies are
 already included.  Likewise, if a package is excluded, other files
 dependent on that package are also excluded.
 </P>
 <P>A reason to exclude packages is that you will never run certain kinds
 of simulations.  This will keep you from having to build auxiliary
 libraries (see below) and will produce a smaller executable which may
 run a bit faster.
 </P>
 <P>By default, LAMMPS includes only the "kspace", "manybody", and
 "molecule" packages.
 </P>
 <P>Packages are included or excluded by typing "make yes-name" or "make
 no-name", where "name" is the name of the package.  You can also type
 "make yes-standard", "make no-standard", "make yes-user", "make
 no-user", "make yes-all" or "make no-all" to include/exclude various
-sets of packages.  Type "make package" to see the various options.
+sets of packages.  Type "make package" to see all of the
+package-related make options.
 </P>
 <P>IMPORTANT NOTE: These make commands work by simply moving files back
 and forth between the main src directory and sub-directories with the
 package name, so that the files are seen or not seen when LAMMPS is
 built.  After you have included or excluded a package, you must
 re-build LAMMPS.
 </P>
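 <P>For example, to add the granular package, remove the kspace package,
 and then re-build LAMMPS on a machine named "foo", you would type:
 </P>
 <PRE>make yes-granular
 make no-kspace
 make foo 
 </PRE>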
-<P>Additional make options exist to help manage LAMMPS files that exist
+<P>Additional package-related make options help manage LAMMPS
+files that appear
 in both the src directory and in package sub-directories.  You do not
 normally need to use these commands unless you are editing LAMMPS
 files or have downloaded a patch from the LAMMPS WWW site.
 </P>
 <P>Typing "make package-update" will overwrite src files with files from
-the package directories if the package has been included.  It should
+the package sub-directories if the package has been included.  It should
 be used after a patch is installed, since patches only update the
-master package version of a file.  Typing "make package-overwrite"
-will overwrite files in the package directories with src files.
-Typing "make package-check" will list differences between src and
-package versions of the same files.  Again, type "make package" to see
-the various options.
+files in the package sub-directory, but not the src files.  
+Typing "make package-overwrite"
+will overwrite files in the package sub-directories with src files.
+</P>
+<P>Typing "make package-status" will show which packages are currently
+included. Of those that are included, it will list any files that
+differ between the src directory and the package sub-directory.
+Typing "make package-diff" lists all differences between these files. 
+Again, type "make package" to see all of the
+package-related make options.
 </P>
 <HR>
 
 <A NAME = "2_3_3"></A><B><I>Packages that require extra LAMMPS libraries:</I></B> 
 
 <P>A few standard or user packages require that additional libraries be
 compiled first, which LAMMPS will link to when it builds.  The source
 code for these libraries is included in the LAMMPS distribution under
 the "lib" directory.  Look at the README files in the lib directories
 (e.g. lib/reax/README) for instructions on how to build each library.
 </P>
 <P>IMPORTANT NOTE: If you are including a package in your LAMMPS build
 that uses one of these libraries, then you must build the library
 BEFORE building LAMMPS itself, since the LAMMPS build will attempt to
 link with the library file.
 </P>
 <P>Here is a bit of information about each library:
 </P>
 <P>The "atc" library in lib/atc is used by the user-atc package.  It
 provides continuum field estimation and molecular dynamics-finite
 element coupling methods.  It was written by Reese Jones, Jeremy
 Templeton and Jonathan Zimmerman at Sandia.
 </P>
 <P>The "cuda" library in lib/cuda is used by the user-cuda package.  It
 was written by Christian Trott at U of Technology Ilmenau in Germany.
 It contains code to enable portions of LAMMPS to run on GPUs
 associated with your CPUs.  Currently, only NVIDIA GPUs are supported.
 Building this library requires NVIDIA Cuda tools to be installed on
 your system.  See <A HREF = "Section_accelerate.html#10_3">this section</A> of the
 manual for more information about using this package effectively and
 how it differs from the gpu package.
 </P>
 <P>The "gpu" library in lib/gpu is used by the gpu package.  It was
 written by Mike Brown at ORNL.  It contains code to enable portions of
 LAMMPS to run on GPUs associated with your CPUs.  Currently, only
 NVIDIA GPUs are supported, but eventually this may be extended to
 OpenCL.  Building this library requires NVIDIA Cuda tools to be
 installed on your system.  See <A HREF = "Section_accelerate.html#10_2">this
 section</A> of the manual for more
 information about using this package effectively and how it differs
 from the user-cuda package.
 </P>
 <P>The "meam" library in lib/meam is used by the meam package.  It was
 written by Greg Wagner at Sandia.  It computes the modified embedded
 atom method potential, which is a generalization of EAM potentials
 that can be used to model a wider variety of materials.  This MEAM
 implementation was written by Greg Wagner at Sandia.  It requires a
 F90 compiler to build.  The C++ to FORTRAN function calls in
 pair_meam.cpp assumes that FORTRAN object names are converted to C
 object names by appending an underscore character. This is generally
 the case, but on machines that do not conform to this convention, you
 will need to modify either the C++ code or your compiler settings.
 </P>
 <P>The "poems" library in lib/poems is used by the poems package.  It was
 written by Rudra Mukherjee at JPL.  It computes the constrained
 rigid-body motion of articulated (jointed) multibody systems.  POEMS
 is distributed by Prof Kurt Anderson's group at Rensselaer Polytechnic
 Institute (RPI).
 </P>
 <P>The "reax" library in lib/reax is used by the reax package.  It was
 written by Aidan Thompson at Sandia.  It computes the Reactive Force
 Field (ReaxFF) potential, developed by Adri van Duin in Bill Goddard's
 group at CalTech.  This implementation in LAMMPS uses many of Adri's
 files and was developed by Aidan Thompson at Sandia and Hansohl Cho at
 MIT.  It requires an F77 or F90 compiler to build.  The C++ to FORTRAN
 function calls in pair_reax.cpp assume that FORTRAN object names are
 converted to C object names by appending an underscore character. This
 is generally the case, but on machines that do not conform to this
 convention, you will need to modify either the C++ code or your
 compiler settings. The name conversion is handled by the preprocessor
 macro called FORTRAN in pair_reax_fortran.h.  Different definitions of
 this macro can be obtained by adding a machine-specific macro
 definition to the CCFLAGS variable in your Makefile e.g. -D_IBM. See
 pair_reax_fortran.h for more info.
 </P>
 <P>As described in the README file in each lib directory, each library is
 typically built by typing something like
 </P>
 <PRE>make -f Makefile.g++ 
 </PRE>
 <P>in the appropriate directory, e.g. in lib/reax.
 </P>
 <P>You must use a Makefile that is a match for your system.  If one of
 the provided Makefiles is not appropriate for your system, you will
 need to edit or add one.  For example, in the case of Fortran-based
 libraries, your system must have a Fortran compiler, the settings for
 which will be in the Makefile.
 </P>
 <P>Note that the cuda library, used by the user-cuda package, is an
 exception.  See its README file and <A HREF = "Section_accelerate.html#10_3">this
 section</A> of the manual for instructions
 on how to build it.
 </P>
 <HR>
 
 <A NAME = "2_3_4"></A><B><I>Additional Makefile settings for extra libraries:</I></B> 
 
 <P>After the desired library or libraries are built, and the package has
 been included, you can build LAMMPS itself.  For example, from the
 lammps/src directory you would type the following to build LAMMPS with
 ReaxFF.  Note that, as discussed in the preceding section, the package
 library itself, namely lib/reax/libreax.a, must already have been
 built for the LAMMPS build to be successful.
 </P>
 <PRE>make yes-reax
 make g++ 
 </PRE>
 <P>Also note that simply building the library is not sufficient to use it
 from LAMMPS.  As in this example, you must also include the package
 that uses and wraps the library before you build LAMMPS itself.
 </P>
 <P>As discussed in point (3.e) of <A HREF = "#2_2_4">this section</A> above, there are
 settings in the low-level Makefile that specify additional system
 libraries needed by some of the LAMMPS add-on libraries.  These are
 the settings you must specify correctly in your low-level Makefile in
 lammps/src/MAKE, such as Makefile.foo:
 </P>
 <P>To use the gpu package and library, the settings for gpu_SYSLIB and
 gpu_SYSPATH must be correct.  These are specific to the NVIDIA CUDA
 software which must be installed on your system.
 </P>
 <P>To use the meam or reax packages and their libraries which are Fortran
 based, the settings for meam_SYSLIB, reax_SYSLIB, meam_SYSPATH, and
 reax_SYSPATH must be correct.  This is so that the C++ compiler can
 perform a cross-language link using the appropriate system Fortran
 libraries.
 </P>
 <P>To use the user-atc package and atc library, the settings for
 user-atc_SYSLIB and user-atc_SYSPATH must be correct.  This is so that
 the appropriate BLAS and LAPACK libs, used by the user-atc library,
 can be found.
 </P>
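 <P>For example, in a Makefile.foo these settings might look like the
 following; the library names and paths are placeholders that depend on
 your CUDA, BLAS, and LAPACK installations:
 </P>
 <PRE>gpu_SYSLIB =       -lcudart
 gpu_SYSPATH =      -L/usr/local/cuda/lib64
 user-atc_SYSLIB =  -lblas -llapack
 user-atc_SYSPATH = 
 </PRE>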
 <HR>
 
 <H4><A NAME = "2_4"></A>2.4 Building LAMMPS as a library 
 </H4>
 <P>LAMMPS can be built as a library, which can then be called from
 another application or a scripting language.  See <A HREF = "Section_howto.html#4_10">this
 section</A> for more info on coupling LAMMPS to
 other codes.  Building LAMMPS as a library is done by typing
 </P>
 <PRE>make makelib
 make -f Makefile.lib foo 
 </PRE>
 <P>where foo is the machine name.  The first "make" command will create a
 current Makefile.lib with all the file names in your src dir.  The 2nd
 "make" command will use it to build LAMMPS as a library.  This
 requires that Makefile.foo have a library target (lib) and
 system-specific settings for ARCHIVE and ARFLAGS.  See Makefile.linux
 for an example.  The build will create the file liblmp_foo.a which
 another application can link to.
 </P>
 <P>When used from a C++ program, the library allows one or more LAMMPS
 objects to be instantiated.  All of LAMMPS is wrapped in a LAMMPS_NS
 namespace; you can safely use any of its classes and methods from
 within your application code, as needed. 
 </P>
 <P>When used from a C or Fortran program or a scripting language, the
 library has a simple function-style interface, provided in
 src/library.cpp and src/library.h.
 </P>
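 <P>As a minimal sketch of that function-style interface (modeled on the
 couple/simple examples; check src/library.h for the exact prototypes
 in your version), a C or C++ driver might look like this:
 </P>
 <PRE>#include "mpi.h"
 #include "library.h"                  /* LAMMPS C-style interface */ 
 
 int main(int argc, char **argv)
 {
   MPI_Init(&argc,&argv);
   void *lmp = NULL;
   lammps_open(0,NULL,MPI_COMM_WORLD,&lmp);  /* create a LAMMPS instance */
   lammps_file(lmp,(char *) "in.lj");        /* run an input script */
   lammps_command(lmp,(char *) "run 100");   /* issue one more command */
   lammps_close(lmp);                        /* destroy the instance */
   MPI_Finalize();
   return 0;
 } 
 </PRE>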
 <P>See the sample codes couple/simple/simple.cpp and simple.c as examples
 of C++ and C codes that invoke LAMMPS through its library interface.
 There are other examples as well in the couple directory which are
 discussed in <A HREF = "Section_howto.html#4_10">this section</A> of the manual.
 See <A HREF = "Section_python.html">this section</A> of the manual for a description
 of the Python wrapper provided with LAMMPS that operates through the
 LAMMPS library interface.
 </P>
 <P>The files src/library.cpp and library.h contain the C-style interface
 to LAMMPS.  See <A HREF = "Section_howto.html#4_19">this section</A> of the manual
 for a description of the interface and how to extend it for your
 needs.
 </P>
 <HR>
 
 <H4><A NAME = "2_5"></A>2.5 Running LAMMPS 
 </H4>
 <P>By default, LAMMPS runs by reading commands from stdin; e.g. lmp_linux
 < in.file.  This means you first create an input script (e.g. in.file)
 containing the desired commands.  <A HREF = "Section_commands.html">This section</A>
 describes how input scripts are structured and what commands they
 contain.
 </P>
 <P>You can test LAMMPS on any of the sample inputs provided in the
 examples or bench directory.  Input scripts are named in.* and sample
 outputs are named log.*.name.P where name is a machine and P is the
 number of processors it was run on.
 </P>
 <P>Here is how you might run a standard Lennard-Jones benchmark on a
 Linux box, using mpirun to launch a parallel job:
 </P>
 <PRE>cd src
 make linux
 cp lmp_linux ../bench
 cd ../bench
 mpirun -np 4 lmp_linux < in.lj 
 </PRE>
 <P>See <A HREF = "http://lammps.sandia.gov/bench.html">this page</A> for timings for this and the other benchmarks
 on various platforms.
 </P>
 
 
 <HR>
 
 <P>On a Windows box, you can skip making LAMMPS and simply download an
 executable, as described above, though the pre-packaged executables
 make only certain packages available.
 </P>
 <P>To run a LAMMPS executable on a Windows machine, first decide whether
 you want to download the non-MPI (serial) or the MPI (parallel)
 version of the executable. Download and save the version you have
 chosen.
 </P>
 <P>For the non-MPI version, follow these steps:
 </P>
 <UL><LI>Get a command prompt by going to Start->Run... , 
 then typing "cmd". 
 
 <LI>Move to the directory where you have saved lmp_win_no-mpi.exe
 (e.g. by typing: cd "Documents"). 
 
 <LI>At the command prompt, type "lmp_win_no-mpi -in in.lj", replacing in.lj
 with the name of your LAMMPS input script. 
 </UL>
 <P>For the MPI version, which allows you to run LAMMPS under Windows on 
 multiple processors, follow these steps:
 </P>
 <UL><LI>Download and install 
 <A HREF = "http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads">MPICH2</A>
 for Windows. 
 
 <LI>You'll need to use the mpiexec.exe and smpd.exe files from the MPICH2 package. Put them in the 
 same directory (or path) as the LAMMPS Windows executable. 
 
 <LI>Get a command prompt by going to Start->Run... , 
 then typing "cmd". 
 
 <LI>Move to the directory where you have saved lmp_win_mpi.exe
 (e.g. by typing: cd "Documents"). 
 
 <LI>Then type something like this: "mpiexec -np 4 -localonly lmp_win_mpi -in in.lj", 
 replacing in.lj with the name of your LAMMPS input script. 
 
 <LI>Note that you may need to provide smpd with a passphrase --- it doesn't matter what you 
 type. 
 
 <LI>In this mode, output may not immediately show up on the screen, so 
 if your input script takes a long time to execute, you may need to be 
 patient before the output shows up. 
 
 <LI>Alternatively, you can still use this executable to run on a single processor by
 typing something like: "lmp_win_mpi -in in.lj". 
 </UL>
 <HR>
 
 <P>The screen output from LAMMPS is described in the next section.  As it
 runs, LAMMPS also writes a log.lammps file with the same information.
 </P>
 <P>Note that this sequence of commands copies the LAMMPS executable
 (lmp_linux) to the directory with the input files.  This may not be
 necessary, but some versions of MPI reset the working directory to
 where the executable is, rather than leave it as the directory where
 you launch mpirun from (or where you launch lmp_linux from, if you run
 it on its own and not under mpirun).  If that happens, LAMMPS will look for additional input
 files and write its output files to the executable directory, rather
 than your working directory, which is probably not what you want.
 </P>
 <P>If LAMMPS encounters errors in the input script or while running a
 simulation it will print an ERROR message and stop or a WARNING
 message and continue.  See <A HREF = "Section_errors.html">this section</A> for a
 discussion of the various kinds of errors LAMMPS can or can't detect,
 a list of all ERROR and WARNING messages, and what to do about them.
 </P>
 <P>LAMMPS can run a problem on any number of processors, including a
 single processor.  In theory you should get identical answers on any
 number of processors and on any machine.  In practice, numerical
 round-off can cause slight differences and eventual divergence of
 molecular dynamics phase space trajectories.
 </P>
 <P>LAMMPS can run as large a problem as will fit in the physical memory
 of one or more processors.  If you run out of memory, you must run on
 more processors or set up a smaller problem.
 </P>
 <HR>
 
 <H4><A NAME = "2_6"></A>2.6 Command-line options 
 </H4>
 <P>At run time, LAMMPS recognizes several optional command-line switches
 which may be used in any order.  Either the full word or a one-letter
 abbreviation can be used:
 </P>
 <UL><LI>-c or -cuda
 <LI>-e or -echo
 <LI>-i or -in
 <LI>-l or -log
 <LI>-p or -partition
+<LI>-pl or -plog
+<LI>-ps or -pscreen
 <LI>-sc or -screen
 <LI>-sf or -suffix
 <LI>-v or -var 
 </UL>
 <P>For example, lmp_ibm might be launched as follows:
 </P>
 <PRE>mpirun -np 16 lmp_ibm -v f tmp.out -l my.log -sc none < in.alloy
 mpirun -np 16 lmp_ibm -var f tmp.out -log my.log -screen none < in.alloy 
 </PRE>
 <P>Here are the details on the options:
 </P>
 <PRE>-cuda on/off 
 </PRE>
 <P>Explicitly enable or disable CUDA support, as provided by the
 USER-CUDA package.  If LAMMPS is built with this package, as described
 above in <A HREF = "#2_3">Section 2.3</A>, then by default LAMMPS will run in CUDA
 mode.  If this switch is set to "off", then it will not, even if it
 was built with the USER-CUDA package, which means you can run standard
 LAMMPS or with the GPU package for testing or benchmarking purposes.
 The only reason to set the switch to "on" is to check if LAMMPS was
 built with the USER-CUDA package, since an error will be generated if
 it was not.
 </P>
 <PRE>-echo style 
 </PRE>
 <P>Set the style of command echoing.  The style can be <I>none</I> or <I>screen</I>
 or <I>log</I> or <I>both</I>.  Depending on the style, each command read from
 the input script will be echoed to the screen and/or logfile.  This
 can be useful to figure out which line of your script is causing an
 input error.  The default value is <I>log</I>.  The echo style can also be
 set by using the <A HREF = "echo.html">echo</A> command in the input script itself.
 </P>
 <PRE>-in file 
 </PRE>
 <P>Specify a file to use as an input script.  This is an optional switch
 when running LAMMPS in one-partition mode.  If it is not specified,
 LAMMPS reads its input script from stdin - e.g. lmp_linux < in.run.
 This is a required switch when running LAMMPS in multi-partition mode,
 since multiple processors cannot all read from stdin.
 </P>
 <PRE>-log file 
 </PRE>
 <P>Specify a log file for LAMMPS to write status information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 file log.lammps.  If this switch is used, LAMMPS writes to the
 specified file.  In multi-partition mode, if the switch is not used, a
 log.lammps file is created with high-level status information.  Each
 partition also writes to a log.lammps.N file where N is the partition
 ID.  If the switch is specified in multi-partition mode, the high-level
 logfile is named "file" and each partition also logs information to a
 file.N.  For both one-partition and multi-partition mode, if the
 specified file is "none", then no log files are created.  Using a
 <A HREF = "log.html">log</A> command in the input script will override this setting.
+The -plog option (described below) overrides the name of the partition log files file.N.
 </P>
 <PRE>-partition 8x2 4 5 ... 
 </PRE>
 <P>Invoke LAMMPS in multi-partition mode.  When LAMMPS is run on P
 processors and this switch is not used, LAMMPS runs in one partition,
 i.e. all P processors run a single simulation.  If this switch is
 used, the P processors are split into separate partitions and each
 partition runs its own simulation.  The arguments to the switch
 specify the number of processors in each partition.  Arguments of the
 form MxN mean M partitions, each with N processors.  Arguments of the
 form N mean a single partition with N processors.  The sum of
 processors in all partitions must equal P.  Thus the command
 "-partition 8x2 4 5" has 10 partitions and runs on a total of 25
 processors.
 </P>
 <P>Note that with MPI installed on a machine (e.g. your desktop), you can
 run on more (virtual) processors than you have physical processors.
 This can be useful for running <A HREF = "Section_howto.html#4_5">multi-replica
 simulations</A>, on one or a few processors.
 </P>
 <P>The input script specifies what simulation is run on which partition;
 see the <A HREF = "variable.html">variable</A> and <A HREF = "next.html">next</A> commands.  This
 <A HREF = "Section_howto.html#4_4">howto section</A> gives examples of how to use
 these commands in this way.  Simulations running on different
 partitions can also communicate with each other; see the
 <A HREF = "temper.html">temper</A> command.
 </P>
+<PRE>-plog file 
+</PRE>
+<P>Specify the base name for the partition log files, so that
+partition N writes log information to file.N.  If file is "none",
+then no partition log files are created.  This overrides the
+filename specified in the -log command-line option.  This option is
+useful when working with large numbers of partitions, allowing the
+partition log files to be suppressed (-plog none) or placed in a
+sub-directory (-plog replica_files/log.lammps).  If this option is
+not used, the log file for partition N is log.lammps.N or whatever is
+specified by the -log command-line option. 
+</P>
+<PRE>-pscreen file 
+</PRE>
+<P>Specify the base name for the partition screen files, so that
+partition N writes screen information to file.N.  If file is "none",
+then no partition screen files are created.  This overrides the
+filename specified in the -screen command-line option.  This option
+is useful when working with large numbers of partitions, allowing the
+partition screen files to be suppressed (-pscreen none) or placed in
+a sub-directory (-pscreen replica_files/screen).  If this option is
+not used, the screen file for partition N is screen.N or whatever is
+specified by the -screen command-line option. 
+</P>
 <PRE>-screen file 
 </PRE>
 <P>Specify a file for LAMMPS to write its screen information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 screen.  If this switch is used, LAMMPS writes to the specified file
 instead and you will see no screen output.  In multi-partition mode,
 if the switch is not used, high-level status information is written to
 the screen.  Each partition also writes to a screen.N file where N is
 the partition ID.  If the switch is specified in multi-partition mode,
 the high-level screen dump is named "file" and each partition also
 writes screen information to a file.N.  For both one-partition and
 multi-partition mode, if the specified file is "none", then no screen
-output is performed.
+output is performed.  The -pscreen option (described above) overrides
+the name of the partition screen files file.N.
 </P>
 <PRE>-suffix style 
 </PRE>
 <P>Use variants of various styles if they exist.  The specified style can
 be <I>opt</I> or <I>gpu</I> or <I>cuda</I>.  These refer to optional packages that
 LAMMPS can be built with, as described above in <A HREF = "#2_3">Section 2.3</A>.
 The "opt" style corrsponds to the OPT package, the "gpu" style to the
 GPU package, and the "cuda" style to the USER-CUDA package.
 </P>
 <P>As an example, all of the packages provide a <A HREF = "pair_lj.html">pair_style
 lj/cut</A> variant, with style names lj/cut/opt or
 lj/cut/gpu or lj/cut/cuda.  A variant style can be specified
 explicitly in your input script, e.g. pair_style lj/cut/gpu.  If the
 -suffix switch is used, you do not need to modify your input script.
 The specified suffix (opt,gpu,cuda) is automatically appended whenever
 your input script command creates a new <A HREF = "atom_style.html">atom</A>,
 <A HREF = "pair_style.html">pair</A>, <A HREF = "fix.html">fix</A>, <A HREF = "compute.html">compute</A>, or
 <A HREF = "run_style.html">run</A> style.  atom, pair, fix, compute, or integrate
 style.  If the variant version does not exist, the standard version is
 created.
 </P>
 <P>The <A HREF = "suffix.html">suffix</A> command can also set a suffix and it can also
 turn off/on any suffix setting made via the command line.
 </P>
 <PRE>-var name value1 value2 ... 
 </PRE>
 <P>Specify a variable that will be defined for substitution purposes when
 the input script is read.  "Name" is the variable name which can be a
 single character (referenced as $x in the input script) or a full
 string (referenced as ${abc}).  An <A HREF = "variable.html">index-style
 variable</A> will be created and populated with the
 subsequent values, e.g. a set of filenames.  Using this command-line
 option is equivalent to putting the line "variable name index value1
 value2 ..."  at the beginning of the input script.  Defining an index
 variable as a command-line argument overrides any setting for the same
 index variable in the input script, since index variables cannot be
 re-defined.  See the <A HREF = "variable.html">variable</A> command for more info on
 defining index and other kinds of variables and <A HREF = "Section_commands.html#3_2">this
 section</A> for more info on using variables in
 input scripts.
 </P>
 <HR>
 
 <H4><A NAME = "2_7"></A>2.7 LAMMPS screen output 
 </H4>
 <P>As LAMMPS reads an input script, it prints information to both the
 screen and a log file about significant actions it takes to setup a
 simulation.  When the simulation is ready to begin, LAMMPS performs
 various initializations and prints the amount of memory (in MBytes per
 processor) that the simulation requires.  It also prints details of
 the initial thermodynamic state of the system.  During the run itself,
 thermodynamic information is printed periodically, every few
 timesteps.  When the run concludes, LAMMPS prints the final
 thermodynamic state and a total run time for the simulation.  It then
 appends statistics about the CPU time and storage requirements for the
 simulation.  An example set of statistics is shown here:
 </P>
 <PRE>Loop time of 49.002 on 2 procs for 2004 atoms 
 </PRE>
 <PRE>Pair   time (%) = 35.0495 (71.5267)
 Bond   time (%) = 0.092046 (0.187841)
 Kspce  time (%) = 6.42073 (13.103)
 Neigh  time (%) = 2.73485 (5.5811)
 Comm   time (%) = 1.50291 (3.06703)
 Outpt  time (%) = 0.013799 (0.0281601)
 Other  time (%) = 2.13669 (4.36041) 
 </PRE>
 <PRE>Nlocal:    1002 ave, 1015 max, 989 min
 Histogram: 1 0 0 0 0 0 0 0 0 1 
 Nghost:    8720 ave, 8724 max, 8716 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1
 Neighs:    354141 ave, 361422 max, 346860 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1 
 </PRE>
 <PRE>Total # of neighbors = 708282
 Ave neighs/atom = 353.434
 Ave special neighs/atom = 2.34032
 Number of reneighborings = 42
 Dangerous reneighborings = 2 
 </PRE>
 <P>The first section gives the breakdown of the CPU run time (in seconds)
 into major categories.  The second section lists the number of owned
 atoms (Nlocal), ghost atoms (Nghost), and pair-wise neighbors stored
 per processor.  The max and min values give the spread of these values
 across processors with a 10-bin histogram showing the distribution.
 The total number of histogram counts is equal to the number of
 processors.
 </P>
 <P>The last section gives aggregate statistics for pair-wise neighbors
 and special neighbors that LAMMPS keeps track of (see the
 <A HREF = "special_bonds.html">special_bonds</A> command).  The number of times
 neighbor lists were rebuilt during the run is given as well as the
 number of potentially "dangerous" rebuilds.  If atom movement
 triggered neighbor list rebuilding (see the
 <A HREF = "neigh_modify.html">neigh_modify</A> command), then dangerous
 reneighborings are those that were triggered on the first timestep
 atom movement was checked for.  If this count is non-zero you may wish
 to reduce the delay factor to ensure no force interactions are missed
 by atoms moving beyond the neighbor skin distance before a rebuild
 takes place.
 </P>
 <P>If an energy minimization was performed via the
 <A HREF = "minimize.html">minimize</A> command, additional information is printed,
 e.g.
 </P>
 <PRE>Minimization stats:
   E initial, next-to-last, final = -0.895962 -2.94193 -2.94342
   Gradient 2-norm init/final= 1920.78 20.9992
   Gradient inf-norm init/final= 304.283 9.61216
   Iterations = 36
   Force evaluations = 177 
 </PRE>
 <P>The first line lists the initial and final energy, as well as the
 energy on the next-to-last iteration.  The next 2 lines give a measure
 of the gradient of the energy (force on all atoms).  The 2-norm is the
 "length" of this force vector; the inf-norm is the largest component.
 The last 2 lines are statistics on how many iterations and
 force-evaluations the minimizer required.  Multiple force evaluations
 are typically done at each iteration to perform a 1d line minimization
 in the search direction.
 </P>
 <P>If a <A HREF = "kspace_style.html">kspace_style</A> long-range Coulombics solve was
 performed during the run (PPPM, Ewald), then additional information is
 printed, e.g.
 </P>
 <PRE>FFT time (% of Kspce) = 0.200313 (8.34477)
 FFT Gflps 3d 1d-only = 2.31074 9.19989 
 </PRE>
 <P>The first line gives the time spent doing 3d FFTs (4 per timestep) and
 the fraction it represents of the total KSpace time (listed above).
 Each 3d FFT requires computation (3 sets of 1d FFTs) and communication
 (transposes).  The total flops performed is 5Nlog_2(N), where N is the
 number of points in the 3d grid.  The FFTs are timed with and without
 the communication and a Gflop rate is computed.  The 3d rate is with
 communication; the 1d rate is without (just the 1d FFTs).  Thus you
 can estimate what fraction of your FFT time was spent in
 communication, roughly 75% in the example above.
 </P>
 <HR>
 
 <H4><A NAME = "2_8"></A>2.8 Tips for users of previous LAMMPS versions 
 </H4>
 <P>The current C++ version began with a complete rewrite of LAMMPS 2001, which
 was written in F90.  Features of earlier versions of LAMMPS are listed
 in <A HREF = "Section_history.html">this section</A>.  The F90 and F77 versions
 (2001 and 99) are also freely distributed as open-source codes; check
 the <A HREF = "http://lammps.sandia.gov">LAMMPS WWW Site</A> for distribution information if you prefer
 those versions.  The 99 and 2001 versions are no longer under active
 development; they do not have all the features of C++ LAMMPS.
 </P>
 <P>If you are a previous user of LAMMPS 2001, these are the most
 significant changes you will notice in C++ LAMMPS:
 </P>
 <P>(1) The names and arguments of many input script commands have
 changed.  All commands are now a single word (e.g. read_data instead
 of read data).
 </P>
 <P>(2) All the functionality of LAMMPS 2001 is included in C++ LAMMPS,
 but you may need to specify the relevant commands in different ways.
 </P>
 <P>(3) The format of the data file can be streamlined for some problems.
 See the <A HREF = "read_data.html">read_data</A> command for details.  The data file
 section "Nonbond Coeff" has been renamed to "Pair Coeff" in C++ LAMMPS.
 </P>
 <P>(4) Binary restart files written by LAMMPS 2001 cannot be read by C++
 LAMMPS with a <A HREF = "read_restart.html">read_restart</A> command.  This is
 because they were output by F90 which writes in a different binary
 format than C or C++ writes or reads.  Use the <I>restart2data</I> tool
 provided with LAMMPS 2001 to convert the 2001 restart file to a text
 data file.  Then edit the data file as necessary before using the C++
 LAMMPS <A HREF = "read_data.html">read_data</A> command to read it in.
 </P>
 <P>(5) There are numerous small numerical changes in C++ LAMMPS that mean
 you will not get identical answers when comparing to a 2001 run.
 However, your initial thermodynamic energy and MD trajectory should be
 close if you have setup the problem for both codes the same.
 </P>
 </HTML>
diff --git a/doc/Section_start.txt b/doc/Section_start.txt
index df64f6a84..4946914de 100644
--- a/doc/Section_start.txt
+++ b/doc/Section_start.txt
@@ -1,1085 +1,1125 @@
 "Previous Section"_Section_intro.html - "LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc - "Next Section"_Section_commands.html :c
 
 :link(lws,http://lammps.sandia.gov)
 :link(ld,Manual.html)
 :link(lc,Section_commands.html#comm)
 
 :line
 
 2. Getting Started :h3
 
 This section describes how to build and run LAMMPS, for both new and
 experienced users.
 
 2.1 "What's in the LAMMPS distribution"_#2_1
 2.2 "Making LAMMPS"_#2_2
 2.3 "Making LAMMPS with optional packages"_#2_3
 2.4 "Building LAMMPS as a library"_#2_4
 2.5 "Running LAMMPS"_#2_5
 2.6 "Command-line options"_#2_6
 2.7 "Screen output"_#2_7
 2.8 "Tips for users of previous versions"_#2_8 :all(b)
 
 :line
 
 2.1 What's in the LAMMPS distribution :h4,link(2_1)
 
 When you download LAMMPS you will need to unzip and untar the
 downloaded file with the following commands, after placing the file in
 an appropriate directory.
 
 gunzip lammps*.tar.gz 
 tar xvf lammps*.tar :pre
 
 This will create a LAMMPS directory containing two files and several
 sub-directories:
     
 README: text file
 LICENSE: the GNU General Public License (GPL)
 bench: benchmark problems
 couple: code coupling examples, using LAMMPS as a library
 doc: documentation
 examples: simple test problems
 potentials: embedded atom method (EAM) potential files
 src: source files
 tools: pre- and post-processing tools :tb(s=:)
 
 If you download one of the Windows executables from the download page,
 then you just get a single file:
 
 lmp_windows.exe :pre
 
 Skip to the "Running LAMMPS"_#2_5 sections for info on how to launch
 these executables on a Windows box.
 
 The Windows executables for serial or parallel only include certain
 packages and bug-fixes/upgrades listed on "this
 page"_http://lammps.sandia.gov/bug.html up to a certain date, as
 stated on the download page.  If you want something with more packages
 or that is more current, you'll have to download the source tarball
 and build it yourself from source code using Microsoft Visual Studio,
 as described in the next section.
 
 :line
 
 2.2 Making LAMMPS :h4,link(2_2)
 
 This section has the following sub-sections:
 
 "Read this first"_#2_2_1
 "Building a LAMMPS executable"_#2_2_2
 "Common errors that can occur when making LAMMPS"_#2_2_3
 "Editing a new low-level Makefile"_#2_2_4
 "Additional build tips"_#2_2_5
 "Building for a Mac"_#2_2_6
 "Building for Windows"_#2_2_7 :ul
 
 :line
 
 [{Read this first:}] :link(2_2_1)
 
 Building LAMMPS can be non-trivial.  You will likely need to edit a
 makefile, there are compiler options, additional libraries can be used
 (MPI, FFT, JPEG), etc.  Please read this section carefully.  If you
 are not comfortable with makefiles, or building codes on a Unix
 platform, or running an MPI job on your machine, please find a local
 expert to help you.  Many compiling, linking, and run problems that
 users have are not really LAMMPS issues - they are peculiar to the
 user's system, compilers, libraries, etc.  Such questions are better
 answered by a local expert.
 
 If you have a build problem that you are convinced is a LAMMPS issue
 (e.g. the compiler complains about a line of LAMMPS source code), then
 please send an email to the
 "developers"_http://lammps.sandia.gov/authors.html.
 
 If you succeed in building LAMMPS on a new kind of machine, for which
 there isn't a similar Makefile in the src/MAKE directory, send it to
 the developers and we'll include it in future LAMMPS releases.
 
 :line
 
 [{Building a LAMMPS executable:}] :link(2_2_2)
 
 The src directory contains the C++ source and header files for LAMMPS.
 It also contains a top-level Makefile and a MAKE sub-directory with
 low-level Makefile.* files for several machines.  From within the src
 directory, type "make" or "gmake".  You should see a list of available
 choices.  If one of those is the machine and options you want, you can
 type a command like:
 
 make linux
 gmake mac :pre
 
 Note that on a multi-processor or multi-core platform you can launch a
 parallel make by using the "-j" switch with the make command, which
 will build LAMMPS more quickly.
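 
 For example, on a machine with 4 cores you might type:
 
 make -j 4 linux :pre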
 
 If you get no errors and an executable like lmp_linux or lmp_mac is
 produced, you're done; it's your lucky day.
 
 :line
 
 [{Common errors that can occur when making LAMMPS:}] :link(2_2_3)
 
 (1) If the make command breaks immediately with errors that indicate
 it can't find files with a "*" in their names, this can be because
 your machine's make doesn't support wildcard expansion in a makefile.
 Try gmake instead of make.  If that doesn't work, try using a -f
 switch with your make command to use Makefile.list which explicitly
 lists all the needed files, e.g.
 
 make makelist
 make -f Makefile.list linux
 gmake -f Makefile.list mac :pre
 
 The first "make" command will create a current Makefile.list with all
 the file names in your src dir.  The 2nd "make" command (make or
 gmake) will use it to build LAMMPS.
 
 (2) Other errors typically occur because the low-level Makefile isn't
 setup correctly for your machine.  If your platform is named "foo",
 you will need to create a Makefile.foo in the MAKE sub-directory.  Use
 whatever existing file is closest to your platform as a starting
 point.  See the next section for more instructions.
 
 (3) If you get a link-time error about missing libraries or missing
 dependencies, then it can be because:
 
 you are including a package that needs an extra library, but have not pre-built the necessary "package library"_#2_3_3
 you are linking to a library that doesn't exist on your system
 you are not linking to the necessary system library :ul
 
 The first issue is discussed below.  The other two issues mean you need
 to edit your low-level Makefile.foo, as discussed in the next
 sub-section.
 
 :line
 
 [{Editing a new low-level Makefile.foo:}] :link(2_2_4)
 
 These are the issues you need to address when editing a low-level
 Makefile for your machine.  The portions of the file you typically
 need to edit are the first line, the "compiler/linker settings"
 section, and the "system-specific settings" section.
 
 (1) Change the first line of Makefile.foo to list the word "foo" after
 the "#", and whatever other options you set.  This is the line you
 will see if you just type "make".
 
 (3) The "compiler/linker settings" section lists compiler and linker
 settings for your C++ compiler, including optimization flags.  You can
 use g++, the open-source GNU compiler, which is available on all Unix
 systems.  You can also use mpicc which will typically be available if
 MPI is installed on your system, though you should check which actual
 compiler it wraps.  Vendor compilers often produce faster code.  On
 boxes with Intel CPUs, we suggest using the free Intel icc compiler,
 which you can download from "Intel's compiler site"_intel.
 
 :link(intel,http://www.intel.com/software/products/noncom)
 
 If building a C++ code on your machine requires additional libraries,
 then you should list them as part of the LIB variable.
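 
 For example, a hypothetical LIB setting in Makefile.foo might look like
 the following; the specific libraries (if any) depend on your system:
 
 LIB = -lstdc++ -lm :pre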
 
 The DEPFLAGS setting is what triggers the C++ compiler to create a
 dependency list for a source file.  This speeds re-compilation when
 source (*.cpp) or header (*.h) files are edited.  Some compilers do
 not support dependency file creation, or may use a different switch
 than -D.  GNU g++ works with -D.  If your compiler can't create
 dependency files (a long list of errors involving *.d files), then
 you'll need to create a Makefile.foo patterned after Makefile.storm,
 which uses different rules that do not involve dependency files.
 
 (3) The "system-specific settings" section has 4 parts.
 
 (3.a) The LMP_INC variable is used to include options that turn on
 system-dependent ifdefs within the LAMMPS code.  The settings
 that are currently recognized are:
 
 -DLAMMPS_GZIP
 -DPACK_ARRAY
 -DPACK_POINTER
 -DPACK_MEMCPY
 -DLAMMPS_XDR
 -DLAMMPS_JPEG :ul
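 
 For example, a Makefile.foo that enables reading gzipped files and
 writing JPEG images might contain the line:
 
 LMP_INC = -DLAMMPS_GZIP -DLAMMPS_JPEG :pre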
 
 The read_data and dump commands will read/write gzipped files if you
 compile with -DLAMMPS_GZIP.  It requires that your Unix support the
 "popen" command.
 
 Using one of the -DPACK_ARRAY, -DPACK_POINTER, and -DPACK_MEMCPY
 options can make for faster parallel FFTs (in the PPPM solver) on some
 platforms.  The -DPACK_ARRAY setting is the default.  See the
 "kspace_style"_kspace_style.html command for info about PPPM.  See
 section (3.c) below for info about building LAMMPS with an FFT
 library.
 
 If you use -DLAMMPS_XDR, the build will include XDR compatibility
 files for doing particle dumps in XTC format.  This is only necessary
 if your platform does not have its own XDR files available.  See the
 Restrictions section of the "dump"_dump.html command for details.
 
 If you use -DLAMMPS_JPEG, the "dump image"_dump.html command will be
 able to write out JPEG image files.  If not, it will only be able to
 write out text-based PPM image files.  For JPEG files, you must also
 link LAMMPS with a JPEG library.  See section (3.d) below for more
 details on this.
 
 (3.b) The 3 MPI variables are used to specify an MPI library to build
 LAMMPS with.
 
 If you want LAMMPS to run in parallel, you must have an MPI library
 installed on your platform.  If you use an MPI-wrapped compiler, such
 as "mpicc" to build LAMMPS, you can probably leave these 3 variables
 blank.  If you do not use "mpicc" as your compiler/linker, then you
 need to specify where the mpi.h file (MPI_INC) and the MPI library
 (MPI_PATH) is found and its name (MPI_LIB).
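 
 For example, if MPICH were installed under /usr/local, the settings
 might look something like this; the exact paths and library name
 depend on your MPI installation:
 
 MPI_INC = -I/usr/local/include
 MPI_PATH = -L/usr/local/lib
 MPI_LIB = -lmpich :pre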
 
 If you are installing MPI yourself, we recommend Argonne's MPICH 1.2
 or 2.0 or OpenMPI.  MPICH can be downloaded from the "Argonne MPI
 site"_http://www-unix.mcs.anl.gov/mpi.  OpenMPI can be downloaded the
 "OpenMPI site"_http://www.open-mpi.org.  LAM MPI should also work.  If
 you are running on a big parallel platform, your system people or the
 vendor should have already installed a version of MPI, which will be
 faster than MPICH or OpenMPI or LAM, so find out how to build and link
 with it.  If you use MPICH or OpenMPI or LAM, you will have to
 configure and build it for your platform.  The MPI configure script
 should have compiler options to enable you to use the same compiler
 you are using for the LAMMPS build, which can avoid problems that can
 arise when linking LAMMPS to the MPI library.
 
 If you just want LAMMPS to run on a single processor, you can use the
 STUBS library in place of MPI, since you don't need a true MPI library
 installed on your system.  See the Makefile.serial file for how to
 specify the 3 MPI variables.  You will also need to build the STUBS
 library for your platform before making LAMMPS itself.  From the STUBS
 dir, type "make" and it will hopefully create a libmpi.a suitable for
 linking to LAMMPS.  If this build fails, you will need to edit the
 STUBS/Makefile for your platform.
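 
 For example, a serial build using the STUBS library might look
 something like this, assuming you start from the src directory and
 that Makefile.serial points at the STUBS settings:
 
 cd STUBS
 make
 cd ..
 make serial :pre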
 
 The file STUBS/mpi.cpp has a CPU timer function MPI_Wtime() that calls
 gettimeofday().  If your system doesn't support gettimeofday(),
 you'll need to insert code to call another timer.  Note that the
 ANSI-standard function clock() rolls over after an hour or so, and is
 therefore insufficient for timing long LAMMPS simulations.
 
 (3.c) The 3 FFT variables are used to specify an FFT library which
 LAMMPS uses when using the particle-particle particle-mesh (PPPM)
 option in LAMMPS for long-range Coulombics via the
 "kspace_style"_kspace_style.html command.
 
 To use this option, you must have a 1d FFT library installed on your
 platform.  This is specified by a switch of the form -DFFT_XXX where
 XXX = INTEL, DEC, SGI, SCSL, or FFTW.  All but the last one are native
 vendor-provided libraries.  FFTW is a fast, portable library that
 should work on any platform.  You can download it from
 "www.fftw.org"_http://www.fftw.org.  Use version 2.1.X, not the newer
 3.0.X.  Building FFTW for your box should be as simple as ./configure;
 make.  Whichever FFT library you have on your platform, you'll need to
 set the appropriate FFT_INC, FFT_PATH, and FFT_LIB variables in
 Makefile.foo, so the compiler and linker can find it.
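 
 For example, to use FFTW 2.1.X installed under /usr/local, the
 settings might look something like this; the exact switch and library
 names may differ on your system:
 
 FFT_INC = -DFFT_FFTW -I/usr/local/include
 FFT_PATH = -L/usr/local/lib
 FFT_LIB = -lfftw :pre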
 
 If you examine src/fft3d.c and src/fft3d.h you'll see it's possible to
 add other vendor FFT libraries via #ifdef statements in the
 appropriate places.  If you successfully add a new FFT option, like
 -DFFT_IBM, please send the LAMMPS developers an email; we'd like to
 add it to LAMMPS.
 
 If you don't plan to use PPPM, you don't need an FFT library.  In this
 case you can set FFT_INC to -DFFT_NONE and leave the other 2 FFT
 variables blank.  Or you can exclude the KSPACE package when you build
 LAMMPS (see below).
 
 (3.d) The 3 JPG variables are used to specify a JPEG library which
 LAMMPS uses when writing a JPEG file via the "dump
 image"_dump_image.html command.  These can be left blank if you are
 not using the -DLAMMPS_JPEG switch discussed above in section (3.a).
 
 A standard JPEG library usually goes by the name libjpeg.a and has an
 associated header file jpeglib.h.  Whichever JPEG library you have on
 your platform, you'll need to set the appropriate JPG_INC, JPG_PATH,
 and JPG_LIB variables in Makefile.foo so that the compiler and linker
 can find it.
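 
 For example, if libjpeg is installed in a standard system location,
 the settings might be as simple as:
 
 JPG_INC =
 JPG_PATH =
 JPG_LIB = -ljpeg :pre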
 
 (3.e) The several SYSLIB and SYSPATH variables can be ignored unless
 you are building LAMMPS with one or more of the LAMMPS packages that
 require these extra system libraries.  The names of these packages are
 the prefixes on the SYSLIB and SYSPATH variables.  See the "section
 below"_#2_3_4 for more details.  The SYSLIB variables list the system
 libraries.  The SYSPATH variables are where they are located on your
 machine, which is typically only needed if they are in some
 non-standard place that is not in your library search path.
 
 That's it.  Once you have a correct Makefile.foo and you have
 pre-built any other libraries it will use (e.g. MPI, FFT, package
 libraries), all you need to do from the src directory is type one of
 these 2 commands:
 
 make foo
 gmake foo :pre
 
 You should get the executable lmp_foo when the build is complete.
 
 :line
 
 [{Additional build tips:}] :link(2_2_5)
 
 (1) Building LAMMPS for multiple platforms.
 
 You can make LAMMPS for multiple platforms from the same src
 directory.  Each target creates its own object sub-directory called
 Obj_name where it stores the system-specific *.o files.
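 
 For example, assuming Makefile.linux and Makefile.mac both exist in
 src/MAKE, you might type:
 
 make linux
 make mac :pre
 
 This produces the executables lmp_linux and lmp_mac, with their object
 files kept separately in Obj_linux and Obj_mac.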
 
 (2) Cleaning up.
 
 Typing "make clean-all" or "make clean-foo" will delete *.o object
 files created when LAMMPS is built, for either all builds or for a
 particular machine.
 
 :line
 
 [{Building for a Mac:}] :link(2_2_6)
 
 OS X is BSD Unix, so it should just work.  See the Makefile.mac file.
 
 :line
 
 [{Building for Windows:}] :link(2_2_7)
 
 The LAMMPS download page has an option to download both a serial and
 parallel pre-built Windows executable.  See the "Running LAMMPS"_#2_5
 section for instructions for running these executables on a Windows
 box.
 
 If the pre-built executable doesn't have the options you want, then
 you can build LAMMPS from its source files on a Windows box.  One way
 to do this is to install and use cygwin to build LAMMPS with a
 standard Linux make, just as you would on any Linux box; see
 src/MAKE/Makefile.cygwin.
 
 There is also a src/WINDOWS directory that contains project files
 for Microsoft Visual Studio 2005, which should also work with later
 versions of VS.  That directory contains a README.txt file which
 provides instructions for building LAMMPS from source code using
 Visual Studio that are hopefully easy to follow for Windows and VS
 users.
 
 Four VS project options are provided.  The first includes the default
 packages (MANYBODY, MOLECULE, and KSPACE).  The second includes all
 standard packages (except GPU, MEAM, and REAX which are not yet
 included because they require NVIDIA or Fortran compilation).  The
 third includes all standard packages (with the exceptions) and some
 user packages.  The included user packages are USER-EFF, USER-CG-CMM,
 and USER-REAXC.  The fourth project includes the USER-AWPMD package.
 
 Changing the size limits in src/lmptype.h.
 
 If you are running a very large problem (billions of atoms or more)
 and get a run-time error about the system being too big, either on a
 per-processor basis or in total size, then you may need to change one
 or more settings in src/lmptype.h and re-compile LAMMPS.
 
 As the documentation in that file explains, you have basically
 two choices to make:
 
 set the data type size of integer atom IDs to 4 or 8 bytes
 set the data type size of integers that store the total system size to 4 or 8 bytes :ul
 
 The default for atom IDs is 4-byte integers since there is a memory
 and communication cost for 8-byte integers.  Non-molecular problems do
 not need atom IDs so this does not restrict their size.  Molecular
 problems (which use IDs to define molecular topology) are limited to
 about 2 billion atoms (2^31) with 4-byte IDs.  With 8-byte IDs they
 are effectively unlimited in size (2^63).
 
 Total system size quantities (like the number of atoms or timesteps)
 are stored in 8-byte integers by default, which is effectively
 unlimited in size (2^63).  If your system does not support 8-byte
 integers, an error will be generated, and you will need to set
 "bigint" to 4-byte integers.  This restricts your total system size to
 about 2 billion atoms or timesteps (2^31).
 
 Note that in src/lmptype.h there are also settings for the MPI data
 types associated with the integers that store atom IDs and total
 system sizes, which need to be set consistent with the associated C
 data types.
 
 In all cases, the size of problem that can be run on a per-processor
 basis is limited by 4-byte integer storage to about 2 billion atoms
 per processor (2^31), which should not normally be a restriction since
 such a problem would have a huge per-processor memory footprint due to
 neighbor lists and would run very slowly in terms of CPU
 secs/timestep.
 
 :line
 
 2.3 Making LAMMPS with optional packages :h4,link(2_3)
 
 This section has the following sub-sections:
 
 "Package basics"_#2_3_1
 "Including/excluding packages"_#2_3_2
 "Packages that require extra LAMMPS libraries"_#2_3_3
 "Additional Makefile settings for extra libraries"_#2_3_4 :ul
 
 :line
 
 [{Package basics:}] :link(2_3_1)
 
 The source code for LAMMPS is structured as a large set of core files
 which are always included, plus optional packages.  Packages are
 groups of files that enable a specific set of features.  For example,
 force fields for molecular systems or granular systems are in
 packages.  You can see the list of all packages by typing "make
 package".
 
 The current list of standard packages is as follows:
 
 asphere : aspherical particles and force fields
 class2 : class 2 force fields
 colloid : colloidal particle force fields
 dipole : point dipole particles and force fields
 dsmc : Direct Simulation Monte Carlo (DSMC) pair style
 gpu : GPU-enabled force field styles
 granular : force fields and boundary conditions for granular systems
 kspace : long-range Ewald and particle-mesh (PPPM) solvers
 manybody : metal, 3-body, bond-order potentials
 meam : modified embedded atom method (MEAM) potential
 molecule : force fields for molecular systems
 opt : optimized versions of a few pair potentials
 peri : Peridynamics model and potential
 poems : coupled rigid body motion
 reax : ReaxFF potential
 replica : multi-replica methods
 shock : methods for MD simulations of shock loading
 srd : stochastic rotation dynamics (SRD)
 xtc : dump atom snapshots in XTC format :tb(s=:)
 
 There are also user-contributed packages, which may be as simple as a
 single additional file or as large as many files grouped together, and
 which add a specific functionality to the code.
 
 The difference between a {standard} package and a {user} package is
 as follows.
 
 Standard packages are supported by the LAMMPS developers and are
 written in a syntax and style consistent with the rest of LAMMPS.
 This means we will answer questions about them, debug and fix them if
 necessary, and keep them compatible with future changes to LAMMPS.
 
 User packages don't necessarily meet these requirements.  If you have
 problems using a feature provided in a user package, you will likely
 need to contact the contributor directly to get help.  Information on
 how to submit additions you make to LAMMPS as a user-contributed
 package is given in "this section"_Section_modify.html#package of the
 documentation.
 
 :line
 
 [{Including/excluding packages:}] :link(2_3_2)
 
 To use or not use a package you must include or exclude it before
 LAMMPS is built.
 
 Some packages have individual files that depend on other packages
 being included, but LAMMPS checks for this and does the right thing.
 I.e. individual files are only included if their dependencies are
 already included.  Likewise, if a package is excluded, other files
 dependent on that package are also excluded.
 
 The reason to exclude packages is if you will never run certain kinds
 of simulations.  This will keep you from having to build auxiliary
 libraries (see below) and will produce a smaller executable which may
 run a bit faster.
 
 By default, LAMMPS includes only the "kspace", "manybody", and
 "molecule" packages.
 
 Packages are included or excluded by typing "make yes-name" or "make
 no-name", where "name" is the name of the package.  You can also type
 "make yes-standard", "make no-standard", "make yes-user", "make
 no-user", "make yes-all" or "make no-all" to include/exclude various
-sets of packages.  Type "make package" to see the various options.
+sets of packages.  Type "make package" to see all of the
+package-related make options.
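+
+For example, to include the granular package, exclude the kspace
+package, and then re-build LAMMPS, you might type:
+
+make yes-granular
+make no-kspace
+make linux :pre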
 
 IMPORTANT NOTE: These make commands work by simply moving files back
 and forth between the main src directory and sub-directories with the
 package name, so that the files are seen or not seen when LAMMPS is
 built.  After you have included or excluded a package, you must
 re-build LAMMPS.
 
-Additional make options exist to help manage LAMMPS files that exist
+Additional package-related make options exist to help manage LAMMPS 
+files that exist
 in both the src directory and in package sub-directories.  You do not
 normally need to use these commands unless you are editing LAMMPS
 files or have downloaded a patch from the LAMMPS WWW site.
 
 Typing "make package-update" will overwrite src files with files from
-the package directories if the package has been included.  It should
+the package sub-directories if the package has been included.  It should
 be used after a patch is installed, since patches only update the
-master package version of a file.  Typing "make package-overwrite"
-will overwrite files in the package directories with src files.
-Typing "make package-check" will list differences between src and
-package versions of the same files.  Again, type "make package" to see
-the various options.
+files in the package sub-directory, but not the src files.  
+Typing "make package-overwrite"
+will overwrite files in the package sub-directories with src files.
+
+Typing "make package-status" will show which packages are currently
+included. Of those that are included, it will list files that
+are different in the src directory and package sub-directory.
+Typing "make package-diff" lists all differences between these files. 
+Again, type "make package" to see all of the
+package-related make options.
 
 :line
 
 [{Packages that require extra LAMMPS libraries:}] :link(2_3_3)
 
 A few standard or user packages require that additional libraries be
 compiled first, which LAMMPS will link to when it builds.  The source
 code for these libraries is included in the LAMMPS distribution under
 the "lib" directory.  Look at the README files in the lib directories
 (e.g. lib/reax/README) for instructions on how to build each library.
 
 IMPORTANT NOTE: If you are including a package in your LAMMPS build
 that uses one of these libraries, then you must build the library
 BEFORE building LAMMPS itself, since the LAMMPS build will attempt to
 link with the library file.
 
 Here is a bit of information about each library:
 
 The "atc" library in lib/atc is used by the user-atc package.  It
 provides continuum field estimation and molecular dynamics-finite
 element coupling methods.  It was written by Reese Jones, Jeremy
 Templeton and Jonathan Zimmerman at Sandia.
 
 The "cuda" library in lib/cuda is used by the user-cuda package.  It
 was written by Christian Trott at U of Technology Ilmenau in Germany.
 It contains code to enable portions of LAMMPS to run on NVIDIA GPUs
 associated with your CPUs.  Currently, only NVIDIA GPUs are supported.
 Building this library requires NVIDIA Cuda tools to be installed on
 your system.  See "this section"_Section_accelerate.html#10_3 of the
 manual for more information about using this package effectively and
 how it differs from the gpu package.
 
 The "gpu" library in lib/gpu is used by the gpu package.  It was
 written by Mike Brown at ORNL.  It contains code to enable portions of
 LAMMPS to run on GPUs associated with your CPUs.  Currently, only
 NVIDIA GPUs are supported, but eventually this may be extended to
 OpenCL.  Building this library requires NVIDIA Cuda tools to be
 installed on your system.  See "this
 section"_Section_accelerate.html#10_2 of the manual for more
 information about using this package effectively and how it differs
 from the user-cuda package.
 
 The "meam" library in lib/meam is used by the meam package.  It was
 written by Greg Wagner at Sandia.  It computes the modified embedded
 atom method potential, which is a generalization of EAM potentials
 that can be used to model a wider variety of materials.  This MEAM
 implementation was written by Greg Wagner at Sandia.  It requires a
 F90 compiler to build.  The C++ to FORTRAN function calls in
 pair_meam.cpp assumes that FORTRAN object names are converted to C
 object names by appending an underscore character. This is generally
 the case, but on machines that do not conform to this convention, you
 will need to modify either the C++ code or your compiler settings.
 
 The "poems" library in lib/poems is used by the poems package.  It was
 written by Rudra Mukherjee at JPL.  It computes the constrained
 rigid-body motion of articulated (jointed) multibody systems.  POEMS
 is distributed by Prof Kurt Anderson's group at Rensselaer Polytechnic
 Institute (RPI).
 
 The "reax" library in lib/reax is used by the reax package.  It was
 written by Aidan Thompson at Sandia.  It computes the Reactive Force
 Field (ReaxFF) potential, developed by Adri van Duin in Bill Goddard's
 group at CalTech.  This implementation in LAMMPS uses many of Adri's
 files and was developed by Aidan Thompson at Sandia and Hansohl Cho at
 MIT.  It requires an F77 or F90 compiler to build.  The C++ to FORTRAN
 function calls in pair_reax.cpp assume that FORTRAN object names are
 converted to C object names by appending an underscore character. This
 is generally the case, but on machines that do not conform to this
 convention, you will need to modify either the C++ code or your
 compiler settings. The name conversion is handled by the preprocessor
 macro called FORTRAN in pair_reax_fortran.h.  Different definitions of
 this macro can be obtained by adding a machine-specific macro
 definition to the CCFLAGS variable in your Makefile e.g. -D_IBM. See
 pair_reax_fortran.h for more info.
 
 As described in the README file in each lib directory, each library is
 typically built by typing something like
 
 make -f Makefile.g++ :pre
 
 in the appropriate directory, e.g. in lib/reax.
 
 You must use a Makefile that is a match for your system.  If one of
 the provided Makefiles is not appropriate for your system you will
 need to edit or add one.  For example, in the case of Fortran-based
 libraries, your system must have a Fortran compiler, the settings for
 which will be in the Makefile.
 
 Note that the cuda library, used by the user-cuda package, is an
 exception.  See its README file and "this
 section"_Section_accelerate.html#10_3 of the manual for instructions
 on how to build it.
 
 :line
 
 [{Additional Makefile settings for extra libraries:}] :link(2_3_4)
 
 After the desired library or libraries are built, and the package has
 been included, you can build LAMMPS itself.  For example, from the
 lammps/src directory you would type the following to build LAMMPS with
 ReaxFF.  Note that, as discussed in the preceding section, the package
 library itself, namely lib/reax/libreax.a, must already have been
 built for the LAMMPS build to succeed.
 
 make yes-reax
 make g++ :pre
 
 Also note that simply building the library is not sufficient to use it
 from LAMMPS.  As in this example, you must also include the package
 that uses and wraps the library before you build LAMMPS itself.
 
 As discussed in point (3.e) of "this section"_#2_2_4 above, there are
 settings in the low-level Makefile that specify additional system
 libraries needed by some of the LAMMPS add-on libraries.  These are
 the settings you must specify correctly in your low-level Makefile in
 lammps/src/MAKE, such as Makefile.foo:
 
 To use the gpu package and library, the settings for gpu_SYSLIB and
 gpu_SYSPATH must be correct.  These are specific to the NVIDIA CUDA
 software which must be installed on your system.
 
 To use the meam or reax packages and their libraries which are Fortran
 based, the settings for meam_SYSLIB, reax_SYSLIB, meam_SYSPATH, and
 reax_SYSPATH must be correct.  This is so that the C++ compiler can
 perform a cross-language link using the appropriate system Fortran
 libraries.
 
 To use the user-atc package and atc library, the settings for
 user-atc_SYSLIB and user-atc_SYSPATH must be correct.  This is so that
 the appropriate BLAS and LAPACK libs, used by the user-atc library,
 can be found.
 
 :line
 
 2.4 Building LAMMPS as a library :h4,link(2_4)
 
 LAMMPS can be built as a library, which can then be called from
 another application or a scripting language.  See "this
 section"_Section_howto.html#4_10 for more info on coupling LAMMPS to
 other codes.  Building LAMMPS as a library is done by typing
 
 make makelib
 make -f Makefile.lib foo :pre
 
 where foo is the machine name.  The first "make" command will create a
 current Makefile.lib with all the file names in your src dir.  The 2nd
 "make" command will use it to build LAMMPS as a library.  This
 requires that Makefile.foo have a library target (lib) and
 system-specific settings for ARCHIVE and ARFLAGS.  See Makefile.linux
 for an example.  The build will create the file liblmp_foo.a which
 another application can link to.
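 
 For example, a hypothetical link line for an application that uses the
 library built for machine "foo" might look like the following; the
 application name, the paths, and the extra MPI/FFT libraries are
 placeholders and must match whatever your LAMMPS build itself links
 to:
 
 g++ my_app.o -L/home/me/lammps/src -llmp_foo -lfftw -lmpich -o my_app :pre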
 
 When used from a C++ program, the library allows one or more LAMMPS
 objects to be instantiated.  All of LAMMPS is wrapped in a LAMMPS_NS
 namespace; you can safely use any of its classes and methods from
 within your application code, as needed. 
 
 When used from a C or Fortran program or a scripting language, the
 library has a simple function-style interface, provided in
 src/library.cpp and src/library.h.
 
 See the sample codes couple/simple/simple.cpp and simple.c as examples
 of C++ and C codes that invoke LAMMPS through its library interface.
 There are other examples as well in the couple directory which are
 discussed in "this section"_Section_howto.html#4_10 of the manual.
 See "this section"_Section_python.html of the manual for a description
 of the Python wrapper provided with LAMMPS that operates through the
 LAMMPS library interface.
 
 The files src/library.cpp and library.h contain the C-style interface
 to LAMMPS.  See "this section"_Section_howto.html#4_19 of the manual
 for a description of the interface and how to extend it for your
 needs.
 
 :line
 
 2.5 Running LAMMPS :h4,link(2_5)
 
 By default, LAMMPS runs by reading commands from stdin; e.g. lmp_linux
 < in.file.  This means you first create an input script (e.g. in.file)
 containing the desired commands.  "This section"_Section_commands.html
 describes how input scripts are structured and what commands they
 contain.
 
 You can test LAMMPS on any of the sample inputs provided in the
 examples or bench directory.  Input scripts are named in.* and sample
 outputs are named log.*.name.P where name is a machine and P is the
 number of processors it was run on.
 
 Here is how you might run a standard Lennard-Jones benchmark on a
 Linux box, using mpirun to launch a parallel job:
 
 cd src
 make linux
 cp lmp_linux ../bench
 cd ../bench
 mpirun -np 4 lmp_linux < in.lj :pre
 
 See "this page"_bench for timings for this and the other benchmarks
 on various platforms.
 
 :link(bench,http://lammps.sandia.gov/bench.html)
 
 :line
 
 On a Windows box, you can skip making LAMMPS and simply download an
 executable, as described above, though the pre-packaged executables
 make only certain packages available.
 
 To run a LAMMPS executable on a Windows machine, first decide whether
 you want to download the non-MPI (serial) or the MPI (parallel)
 version of the executable. Download and save the version you have
 chosen.
 
 For the non-MPI version, follow these steps:
 
 Get a command prompt by going to Start->Run... , 
 then typing "cmd". :ulb,l
 
 Move to the directory where you have saved lmp_win_no-mpi.exe
 (e.g. by typing: cd "Documents"). :l
 
 At the command prompt, type "lmp_win_no-mpi -in in.lj", replacing in.lj
 with the name of your LAMMPS input script. :l,ule
 
 For the MPI version, which allows you to run LAMMPS under Windows on 
 multiple processors, follow these steps:
 
 Download and install 
 "MPICH2"_http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads
 for Windows. :ulb,l
 
 You'll need to use the mpiexec.exe and smpd.exe files from the MPICH2
 package.  Put them in the same directory (or path) as the LAMMPS
 Windows executable. :l
 
 Get a command prompt by going to Start->Run... , 
 then typing "cmd". :l
 
 Move to the directory where you have saved lmp_win_mpi.exe
 (e.g. by typing: cd "Documents"). :l
 
 Then type something like this: "mpiexec -np 4 -localonly lmp_win_mpi -in in.lj", 
 replacing in.lj with the name of your LAMMPS input script. :l
 Note that you may need to provide smpd with a passphrase --- it doesn't matter what you 
 type. :l
 In this mode, output may not immediately show up on the screen, so 
 if your input script takes a long time to execute, you may need to be 
 patient before the output shows up. :l
 Alternatively, you can still use this executable to run on a single processor by
 typing something like: "lmp_win_mpi -in in.lj". :l,ule
 
 :line
 
 The screen output from LAMMPS is described in the next section.  As it
 runs, LAMMPS also writes a log.lammps file with the same information.
 
 Note that the sequence of commands in the Linux example above copies
 the LAMMPS executable (lmp_linux) to the directory with the input
 files.  This may not be necessary, but some versions of MPI reset the
 working directory to where the executable is, rather than leaving it
 as the directory where you launch mpirun from (if you launch lmp_linux
 on its own and not under mpirun).  If that happens, LAMMPS will look
 for additional input files and write its output files to the
 executable directory, rather than your working directory, which is
 probably not what you want.
 
 If LAMMPS encounters errors in the input script or while running a
 simulation it will print an ERROR message and stop or a WARNING
 message and continue.  See "this section"_Section_errors.html for a
 discussion of the various kinds of errors LAMMPS can or can't detect,
 a list of all ERROR and WARNING messages, and what to do about them.
 
 LAMMPS can run a problem on any number of processors, including a
 single processor.  In theory you should get identical answers on any
 number of processors and on any machine.  In practice, numerical
 round-off can cause slight differences and eventual divergence of
 molecular dynamics phase space trajectories.
 
 LAMMPS can run as large a problem as will fit in the physical memory
 of one or more processors.  If you run out of memory, you must run on
 more processors or set up a smaller problem.
 
 :line
 
 2.6 Command-line options :h4,link(2_6)
 
 At run time, LAMMPS recognizes several optional command-line switches
 which may be used in any order.  Either the full word or a one-letter
 abbreviation can be used:
 
 -c or -cuda
 -e or -echo
 -i or -in
 -l or -log
 -p or -partition
+-pl or -plog
+-ps or -pscreen
 -sc or -screen
 -sf or -suffix
 -v or -var :ul
 
 For example, lmp_ibm might be launched as follows:
 
 mpirun -np 16 lmp_ibm -v f tmp.out -l my.log -sc none < in.alloy
 mpirun -np 16 lmp_ibm -var f tmp.out -log my.log -screen none < in.alloy :pre
 
 Here are the details on the options:
 
 -cuda on/off :pre
 
 Explicitly enable or disable CUDA support, as provided by the
 USER-CUDA package.  If LAMMPS is built with this package, as described
 above in "Section 2.3"_#2_3, then by default LAMMPS will run in CUDA
 mode.  If this switch is set to "off", then it will not, even if it
 was built with the USER-CUDA package, which means you can run standard
 LAMMPS or LAMMPS with the GPU package for testing or benchmarking
 purposes.  The only reason to set the switch to "on" is to check
 whether LAMMPS was built with the USER-CUDA package, since an error
 will be generated if it was not.
 
 -echo style :pre
 
 Set the style of command echoing.  The style can be {none} or {screen}
 or {log} or {both}.  Depending on the style, each command read from
 the input script will be echoed to the screen and/or logfile.  This
 can be useful to figure out which line of your script is causing an
 input error.  The default value is {log}.  The echo style can also be
 set by using the "echo"_echo.html command in the input script itself.
 
 -in file :pre
 
 Specify a file to use as an input script.  This is an optional switch
 when running LAMMPS in one-partition mode.  If it is not specified,
 LAMMPS reads its input script from stdin - e.g. lmp_linux < in.run.
 This is a required switch when running LAMMPS in multi-partition mode,
 since multiple processors cannot all read from stdin.
 
 -log file :pre
 
 Specify a log file for LAMMPS to write status information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 file log.lammps.  If this switch is used, LAMMPS writes to the
 specified file.  In multi-partition mode, if the switch is not used, a
 log.lammps file is created with hi-level status information.  Each
 partition also writes to a log.lammps.N file where N is the partition
 ID.  If the switch is specified in multi-partition mode, the hi-level
 logfile is named "file" and each partition also logs information to a
 file.N.  For both one-partition and multi-partition mode, if the
 specified file is "none", then no log files are created.  Using a
 "log"_log.html command in the input script will override this setting.
+The -plog option (described below) overrides the name of the
+partition log files file.N.
 
 -partition 8x2 4 5 ... :pre
 
 Invoke LAMMPS in multi-partition mode.  When LAMMPS is run on P
 processors and this switch is not used, LAMMPS runs in one partition,
 i.e. all P processors run a single simulation.  If this switch is
 used, the P processors are split into separate partitions and each
 partition runs its own simulation.  The arguments to the switch
 specify the number of processors in each partition.  Arguments of the
 form MxN mean M partitions, each with N processors.  Arguments of the
 form N mean a single partition with N processors.  The sum of
 processors in all partitions must equal P.  Thus the command
 "-partition 8x2 4 5" has 10 partitions and runs on a total of 25
 processors.
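 
 For example, the 25-processor run just described might be launched as
 follows; in.script is a placeholder for your input script name:
 
 mpirun -np 25 lmp_linux -partition 8x2 4 5 -in in.script :pre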
 
 Note that with MPI installed on a machine (e.g. your desktop), you can
 run on more (virtual) processors than you have physical processors.
 This can be useful for running "multi-replica
 simulations"_Section_howto.html#4_5 on one or a few processors.
 
 The input script specifies what simulation is run on which partition;
 see the "variable"_variable.html and "next"_next.html commands.  This
 "howto section"_Section_howto.html#4_4 gives examples of how to use
 these commands in this way.  Simulations running on different
 partitions can also communicate with each other; see the
 "temper"_temper.html command.
 
+-plog file :pre
+
+Specify the base name for the partition log files, so that partition N
+writes log information to file.N.  If file is "none", then no
+partition log files are created.  This overrides the filename
+specified in the -log command-line option.  This option is useful when
+working with large numbers of partitions, allowing the partition log
+files to be suppressed (-plog none) or placed in a sub-directory
+(-plog replica_files/log.lammps).  If this option is not used, the log
+file for partition N is log.lammps.N or whatever is specified by the
+-log command-line option.
+
+-pscreen file :pre
+
+Specify the base name for the partition screen files, so that
+partition N writes screen information to file.N.  If file is "none",
+then no partition screen files are created.  This overrides the
+filename specified in the -screen command-line option.  This option is
+useful when working with large numbers of partitions, allowing the
+partition screen files to be suppressed (-pscreen none) or placed in a
+sub-directory (-pscreen replica_files/screen).  If this option is not
+used, the screen file for partition N is screen.N or whatever is
+specified by the -screen command-line option.
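+
+For example, a 16-processor multi-partition run that suppresses all of
+the per-partition log and screen files might be launched as follows;
+in.script is a placeholder for your input script name:
+
+mpirun -np 16 lmp_linux -partition 4x4 -plog none -pscreen none -in in.script :pre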
+
 -screen file :pre
 
 Specify a file for LAMMPS to write its screen information to.  In
 one-partition mode, if the switch is not used, LAMMPS writes to the
 screen.  If this switch is used, LAMMPS writes to the specified file
 instead and you will see no screen output.  In multi-partition mode,
 if the switch is not used, hi-level status information is written to
 the screen.  Each partition also writes to a screen.N file where N is
 the partition ID.  If the switch is specified in multi-partition mode,
 the hi-level screen dump is named "file" and each partition also
 writes screen information to a file.N.  For both one-partition and
 multi-partition mode, if the specified file is "none", then no screen
-output is performed.
+output is performed.  The -pscreen option (described above) overrides
+the name of the partition screen files file.N.
 
 -suffix style :pre
 
 Use variants of various styles if they exist.  The specified style can
 be {opt} or {gpu} or {cuda}.  These refer to optional packages that
 LAMMPS can be built with, as described above in "Section 2.3"_#2_3.
 The "opt" style corrsponds to the OPT package, the "gpu" style to the
 GPU package, and the "cuda" style to the USER-CUDA package.
 
 As an example, all of the packages provide a "pair_style
 lj/cut"_pair_lj.html variant, with style names lj/cut/opt or
 lj/cut/gpu or lj/cut/cuda.  A variant style can be specified
 explicitly in your input script, e.g. pair_style lj/cut/gpu.  If the
 -suffix switch is used, you do not need to modify your input script.
 The specified suffix (opt,gpu,cuda) is automatically appended whenever
 your input script command creates a new "atom"_atom_style.html,
 "pair"_pair_style.html, "fix"_fix.html, "compute"_compute.html, or
 "run"_run_style.html style.  atom, pair, fix, compute, or integrate
 style.  If the variant version does not exist, the standard version is
 created.
 
 The "suffix"_suffix.html command can also set a suffix and it can also
 turn off/on any suffix setting made via the command line.
 
 -var name value1 value2 ... :pre
 
 Specify a variable that will be defined for substitution purposes when
 the input script is read.  "Name" is the variable name which can be a
 single character (referenced as $x in the input script) or a full
 string (referenced as $\{abc\}).  An "index-style
 variable"_variable.html will be created and populated with the
 subsequent values, e.g. a set of filenames.  Using this command-line
 option is equivalent to putting the line "variable name index value1
 value2 ..."  at the beginning of the input script.  Defining an index
 variable as a command-line argument overrides any setting for the same
 index variable in the input script, since index variables cannot be
 re-defined.  See the "variable"_variable.html command for more info on
 defining index and other kinds of variables and "this
 section"_Section_commands.html#3_2 for more info on using variables in
 input scripts.
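 
 For example, launching LAMMPS as follows (file.1 and file.2 are
 placeholder filenames):
 
 lmp_linux -var abc file.1 file.2 -in in.run :pre
 
 This is equivalent to putting the line "variable abc index file.1
 file.2" at the beginning of in.run; the values can then be referenced
 as $\{abc\} in the input script.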
 
 :line
 
 2.7 LAMMPS screen output :h4,link(2_7)
 
 As LAMMPS reads an input script, it prints information to both the
 screen and a log file about significant actions it takes to setup a
 simulation.  When the simulation is ready to begin, LAMMPS performs
 various initializations and prints the amount of memory (in MBytes per
 processor) that the simulation requires.  It also prints details of
 the initial thermodynamic state of the system.  During the run itself,
 thermodynamic information is printed periodically, every few
 timesteps.  When the run concludes, LAMMPS prints the final
 thermodynamic state and a total run time for the simulation.  It then
 appends statistics about the CPU time and storage requirements for the
 simulation.  An example set of statistics is shown here:
 
 Loop time of 49.002 on 2 procs for 2004 atoms :pre
 
 Pair   time (%) = 35.0495 (71.5267)
 Bond   time (%) = 0.092046 (0.187841)
 Kspce  time (%) = 6.42073 (13.103)
 Neigh  time (%) = 2.73485 (5.5811)
 Comm   time (%) = 1.50291 (3.06703)
 Outpt  time (%) = 0.013799 (0.0281601)
 Other  time (%) = 2.13669 (4.36041) :pre
 
 Nlocal:    1002 ave, 1015 max, 989 min
 Histogram: 1 0 0 0 0 0 0 0 0 1 
 Nghost:    8720 ave, 8724 max, 8716 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1
 Neighs:    354141 ave, 361422 max, 346860 min 
 Histogram: 1 0 0 0 0 0 0 0 0 1 :pre
 
 Total # of neighbors = 708282
 Ave neighs/atom = 353.434
 Ave special neighs/atom = 2.34032
 Number of reneighborings = 42
 Dangerous reneighborings = 2 :pre
 
 The first section gives the breakdown of the CPU run time (in seconds)
 into major categories.  The second section lists the number of owned
 atoms (Nlocal), ghost atoms (Nghost), and pair-wise neighbors stored
 per processor.  The max and min values give the spread of these values
 across processors with a 10-bin histogram showing the distribution.
 The total number of histogram counts is equal to the number of
 processors.
 
 The last section gives aggregate statistics for pair-wise neighbors
 and special neighbors that LAMMPS keeps track of (see the
 "special_bonds"_special_bonds.html command).  The number of times
 neighbor lists were rebuilt during the run is given as well as the
 number of potentially "dangerous" rebuilds.  If atom movement
 triggered neighbor list rebuilding (see the
 "neigh_modify"_neigh_modify.html command), then dangerous
 reneighborings are those that were triggered on the first timestep
 atom movement was checked for.  If this count is non-zero you may wish
 to reduce the delay factor to ensure no force interactions are missed
 by atoms moving beyond the neighbor skin distance before a rebuild
 takes place.
 
 If an energy minimization was performed via the
 "minimize"_minimize.html command, additional information is printed,
 e.g.
 
 Minimization stats:
   E initial, next-to-last, final = -0.895962 -2.94193 -2.94342
   Gradient 2-norm init/final= 1920.78 20.9992
   Gradient inf-norm init/final= 304.283 9.61216
   Iterations = 36
   Force evaluations = 177 :pre
 
 The first line lists the initial and final energy, as well as the
 energy on the next-to-last iteration.  The next 2 lines give a measure
 of the gradient of the energy (force on all atoms).  The 2-norm is the
 "length" of this force vector; the inf-norm is the largest component.
 The last 2 lines are statistics on how many iterations and
 force-evaluations the minimizer required.  Multiple force evaluations
 are typically done at each iteration to perform a 1d line minimization
 in the search direction.
 
 If a "kspace_style"_kspace_style.html long-range Coulombics solve was
 performed during the run (PPPM, Ewald), then additional information is
 printed, e.g.
 
 FFT time (% of Kspce) = 0.200313 (8.34477)
 FFT Gflps 3d 1d-only = 2.31074 9.19989 :pre
 
 The first line gives the time spent doing 3d FFTs (4 per timestep) and
 the fraction it represents of the total KSpace time (listed above).
 Each 3d FFT requires computation (3 sets of 1d FFTs) and communication
 (transposes).  The total flops performed is 5Nlog_2(N), where N is the
 number of points in the 3d grid.  The FFTs are timed with and without
 the communication and a Gflop rate is computed.  The 3d rate is with
 communication; the 1d rate is without (just the 1d FFTs).  Thus you
 can estimate what fraction of your FFT time was spent in
 communication, roughly 75% in the example above.
 
 :line
 
 2.8 Tips for users of previous LAMMPS versions :h4,link(2_8)
 
 The current C++ version of LAMMPS began with a complete rewrite of
 LAMMPS 2001, which was written in F90.  Features of earlier versions
 of LAMMPS are listed
 in "this section"_Section_history.html.  The F90 and F77 versions
 (2001 and 99) are also freely distributed as open-source codes; check
 the "LAMMPS WWW Site"_lws for distribution information if you prefer
 those versions.  The 99 and 2001 versions are no longer under active
 development; they do not have all the features of C++ LAMMPS.
 
 If you are a previous user of LAMMPS 2001, these are the most
 significant changes you will notice in C++ LAMMPS:
 
 (1) The names and arguments of many input script commands have
 changed.  All commands are now a single word (e.g. read_data instead
 of read data).
 
 (2) All the functionality of LAMMPS 2001 is included in C++ LAMMPS,
 but you may need to specify the relevant commands in different ways.
 
 (3) The format of the data file can be streamlined for some problems.
 See the "read_data"_read_data.html command for details.  The data file
 section "Nonbond Coeff" has been renamed to "Pair Coeff" in C++ LAMMPS.
 
 (4) Binary restart files written by LAMMPS 2001 cannot be read by C++
 LAMMPS with a "read_restart"_read_restart.html command.  This is
 because they were written by F90, which uses a different binary format
 than C or C++.  Use the {restart2data} tool
 provided with LAMMPS 2001 to convert the 2001 restart file to a text
 data file.  Then edit the data file as necessary before using the C++
 LAMMPS "read_data"_read_data.html command to read it in.
 
 (5) There are numerous small numerical changes in C++ LAMMPS that mean
 you will not get identical answers when comparing to a 2001 run.
 However, your initial thermodynamic energy and MD trajectory should be
 close if you have set up the problem the same way for both codes.