diff --git a/doc/sphinx/source/model.rst b/doc/sphinx/source/model.rst
index 36201f9..cda746b 100644
--- a/doc/sphinx/source/model.rst
+++ b/doc/sphinx/source/model.rst
@@ -1,213 +1,214 @@

Model and integral operators
============================

The class :cpp:class:`tamaas::Model` (and its Python counterpart
:py:class:`Model`) is both a central class in Tamaas and one of the simplest.
It mostly serves as a holder for material properties, fields and integral
operators, and, apart from a linear elastic behavior, it does not perform any
computation on its own.

Units in Tamaas
---------------

All quantities in Tamaas are unitless. To put real units onto them, one can use
the following equation, which must be written in a consistent set of units:

.. math:: \frac{u}{L} = \frac{\pi}{k} \frac{p}{\frac{E}{1-\nu^2}}.

It relates the surface displacement :math:`u` to the pressure :math:`p`, with
the Young's modulus :math:`E`, the Poisson coefficient :math:`\nu` and the
*horizontal* physical size of the system :math:`L` (i.e. the horizontal period
of the system). The wavenumber :math:`k` is dimensionless. This means that
:math:`L` is the natural unit for lengths (which can be specified when creating
the ``Model`` object), and :math:`E^* := E / (1 - \nu^2)` is the natural unit
for pressures (which can be accessed with ``model.E_star``). All other units
(areas, forces, energies, etc.) are derived from these two.

In general, it is a good idea to work with a unit set that gives numerical
values of the same order of magnitude, to minimize floating point errors in
algebraic operations. While this is not always possible, because in elastic
contact we have :math:`u \sim h_\mathrm{rms}` and
:math:`p \sim h'_\mathrm{rms} E^*`, which are dependent, one should avoid using
meters if the expected lengths are nanometers, for example.

Model types
-----------

:cpp:class:`tamaas::Model` has a concrete subclass
:cpp:class:`tamaas::ModelTemplate` which implements the model function for a
given :cpp:type:`tamaas::model_type`:

:cpp:enumerator:`tamaas::basic_2d`
   Model type used in normal frictionless contact: traction and displacement
   are 2D fields with only one component.

:cpp:enumerator:`tamaas::surface_2d`
   Model type used in frictional contact: traction and displacement are 2D
   fields with three components.

:cpp:enumerator:`tamaas::volume_2d`
   Model type used in elastoplastic contact: tractions are the same as with
   :cpp:enumerator:`tamaas::surface_2d`, but the displacement is a 3D field.

The enumeration values suffixed with ``_1d`` are the one-dimensional (line
contact) counterparts of the above model types. The domain physical dimension
and number of components are encoded in the class
:cpp:class:`tamaas::model_type_traits`.

Model creation and basic functionality
--------------------------------------

The instantiation of a :py:class:`Model` requires the model type, the physical
size of the system, and the number of points in each direction::

    physical_size = [1., 1.]
    discretization = [512, 512]
    model = tm.Model(tm.model_type.basic_2d, physical_size, discretization)
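The one-dimensional (line contact) model types follow the same constructor
pattern. As a minimal sketch, assuming the constructor accepts single-element
size and discretization lists for the ``*_1d`` types::

    import tamaas as tm

    # Hypothetical line-contact model: a single periodic direction
    line_model = tm.Model(tm.model_type.basic_1d, [1.], [1024])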
.. warning:: For models of type ``volume_*d``, the first component of the
   ``physical_size`` and ``discretization`` arrays corresponds to the depth
   dimension (:math:`z` in most cases). For example::

      tm.Model(tm.model_type.volume_2d, [0.3, 1, 1], [64, 81, 81])

   creates a model of depth 0.3 and surface size 1\ :superscript:`2`, while the
   number of points is 64 in depth and 81\ :sup:`2` on the surface. This is
   done for data contiguity reasons, as we do discrete Fourier transforms in
   the horizontal plane.

.. note:: If run in an MPI context, the constructor of :py:class:`Model`
   expects the *global* system size and discretization of the model.

.. tip:: Deep copies of model objects can be done with the
   :py:func:`copy.deepcopy` function::

      import copy
      model_copy = copy.deepcopy(model)

   Note that it only copies fields, not operators or dumpers.

Model properties
~~~~~~~~~~~~~~~~

The properties ``E`` and ``nu`` can be used to set the Young's modulus and
Poisson's ratio respectively::

    model.E = 1
    model.nu = 0.3

Fields
~~~~~~

Fields can be easily accessed with the ``[]`` operator, similar to Python's
dictionaries::

    surface_traction = model['traction']  # surface_traction is a numpy array

To know which fields are available, you can call the :py:class:`list` function
on a model (``list(model)``). You can add new fields to a model object with the
``[]`` operator: ``model['new_field'] = new_field``, which is convenient for
dumping fields that are computed outside of Tamaas.

.. note:: The fields ``traction`` and ``displacement`` are always registered in
   models, and are accessible via :py:attr:`model.traction` and
   :py:attr:`model.displacement`.

A model can also be used to compute stresses from a strain field::

    import numpy

    strain = numpy.zeros(model.shape + [6])  # Mandel--Voigt notation
    stress = numpy.zeros_like(strain)
    model.applyElasticity(stress, strain)

.. tip:: ``print(model)`` gives a lot of information about the model: the model
   type, shape, registered fields, and more!

Integral operators
------------------

Integral operators are a central part of Tamaas: they are carefully designed
for performance in periodic systems. When a :py:class:`Model` object is used
with contact solvers or with a residual object (for plasticity), these objects
register the integral operators they need with the model, so the user typically
does not have to worry about creating integral operators. Integral operators
are accessed through the :py:attr:`operators` property of a model object. The
``[]`` operator gives access to individual operators, and
``list(model.operators)`` gives the list of registered operators::

    # Accessing operator
    elasticity = model.operators['hooke']

    # Applying operator
    elasticity(strain, stress)

    # Print all registered operators
    print(list(model.operators))

.. note:: At model creation, the following operators are automatically
   registered:

   - ``hooke``: Hooke's elasticity law
   - ``von_mises``: computes Von Mises stresses
   - ``deviatoric``: computes the deviatoric part of a stress tensor
   - ``eigenvalues``: computes the eigenvalues of a symmetric tensor field

   :cpp:class:`Westergaard` operators are automatically registered when
   :py:meth:`solveNeumann` or :py:meth:`solveDirichlet` are called.
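As an illustration of the ``operator(input, output)`` call convention above,
here is a minimal sketch applying the automatically registered ``von_mises``
operator. The input layout (Mandel--Voigt stress) and the single-component
output shape are assumptions, not a documented signature::

    import numpy

    stress = numpy.zeros(model.shape + [6])     # assumed Mandel--Voigt stress
    von_mises = numpy.zeros(model.shape + [1])  # assumed one component per point

    # Applied like the 'hooke' operator above: operator(input, output)
    model.operators['von_mises'](stress, von_mises)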
Model dumpers
-------------

The submodule ``tamaas.dumpers`` contains a number of classes to save model
data into different formats:

:py:class:`UVWDumper`
   Dumps a model to VTK format. Requires the UVW python package, which you can
   install with pip::

      pip install uvw

   This dumper is made for visualization with VTK-based software like Paraview.

:py:class:`NumpyDumper`
   Dumps a model to a compressed Numpy file.

:py:class:`H5Dumper`
   Dumps a model to a compressed HDF5 file. Requires the h5py package. Saves
   separate files for each dump of a model.

:py:class:`NetCDFDumper`
   Dumps a model to a compressed NetCDF file. Requires the netCDF4 package.
   Saves sequential dumps of a model into a single file, with the ``frame``
   dimension indexing the model dumps.

The dumpers are initialized with a basename and the fields that you wish to
write to file (optionally, you can set ``all_fields`` to ``True`` to dump all
fields in the model). By default, each write operation creates a new file in a
separate directory (e.g. :py:class:`UVWDumper` creates a ``paraview``
directory). To write to a specific file you can use the ``dump_to_file``
method. Here is a usage example::

    from tamaas.dumpers import UVWDumper, H5Dumper

    # Create dumper
    uvw_dumper = UVWDumper('rough_contact_example', 'stress', 'plastic_strain')

    # Dump model
    uvw_dumper << model

    # Or alternatively
    model.addDumper(H5Dumper('rough_contact_archive', all_fields=True))
    model.addDumper(uvw_dumper)
    model.dump()

The last ``model.dump()`` call triggers all dumpers. The resulting files will
have the following hierarchy::

    ./paraview/rough_contact_example_0000.vtr
    ./paraview/rough_contact_example_0001.vtr
    ./hdf5/rough_contact_archive_0000.h5

.. important:: Currently, only :py:class:`H5Dumper` and
   :py:class:`NetCDFDumper` support parallel output with MPI.

diff --git a/doc/sphinx/source/mpi.rst b/doc/sphinx/source/mpi.rst
index 83c4b03..1e7b60b 100644
--- a/doc/sphinx/source/mpi.rst
+++ b/doc/sphinx/source/mpi.rst
@@ -1,190 +1,190 @@

Working with MPI
================

Distributed memory parallelism in Tamaas is implemented with MPI. Due to the
bottleneck role of the fast Fourier transform in Tamaas' core routines, the
data layout of Tamaas is that of FFTW. Tamaas is somewhat affected by the
limitations of FFTW, and MPI only works on systems with a 2D boundary, i.e. the
``basic_2d``, ``surface_2d`` and ``volume_2d`` model types (which are the most
important anyway, since rough contact mechanics can yield different scaling
laws in 1D).

MPI support in Tamaas is still experimental, but the following parts are
tested:

- Rough surface generation
- Surface statistics computation
- Elastic normal contact
- Elastic-plastic contact (with :py:class:`DFSANECXXSolver`)
- Dumping models with :py:class:`H5Dumper` and :py:class:`NetCDFDumper`.

.. tip:: One can look at ``examples/plasticity.py`` for a full example of an
   elastic-plastic contact simulation that can run in MPI.

Transparent MPI context
-----------------------

Some parts of Tamaas work transparently with MPI and no additional work or
logic is needed.

.. warning:: ``MPI_Init()`` is automatically called when importing the
   ``tamaas`` module in Python. While this works transparently most of the
   time, in some situations, e.g. in Singularity containers, the program can
   hang if ``tamaas`` is imported first. It is therefore advised to run
   ``from mpi4py import MPI`` *before* ``import tamaas`` to avoid issues.
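For instance, a script meant to run under ``mpirun`` can simply follow the
import order recommended above; the only assumption here is that mpi4py is
installed::

    # Import mpi4py first so that it initializes MPI, then import Tamaas
    from mpi4py import MPI
    import tamaas as tm

    # Both should report the same rank and number of processes
    print(MPI.COMM_WORLD.rank, tm.mpi.rank(), tm.mpi.size())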
Creating a model
~~~~~~~~~~~~~~~~

The following snippet creates a model whose global shape is
``[16, 2048, 2048]``::

    import tamaas as tm

    model = tm.Model(tm.model_type.volume_2d,
                     [0.1, 1, 1], [16, 2048, 2048])

    print(model.shape, model.global_shape)

Running this code with ``mpirun -np 3`` will print the following (not
necessarily in this order)::

    [16, 683, 2048] [16, 2048, 2048]
    [16, 682, 2048] [16, 2048, 2048]
    [16, 683, 2048] [16, 2048, 2048]

Note that the partitioning occurs on the :math:`x` dimension of the model (see
below for more information on the data layout imposed by FFTW).

Creating a rough surface
~~~~~~~~~~~~~~~~~~~~~~~~

Similarly, rough surface generators expect a global shape and return the
partitioned data::

    iso = tm.Isopowerlaw2D()
    iso.q0, iso.q1, iso.q2, iso.hurst = 4, 4, 32, .5

    gen = tm.SurfaceGeneratorRandomPhase2D([2048, 2048])
    gen.spectrum = iso

    surface = gen.buildSurface()
    print(surface.shape, tm.mpi.global_shape(surface.shape))

With ``mpirun -np 3`` this should print::

    (682, 2048) [2048, 2048]
    (683, 2048) [2048, 2048]
    (683, 2048) [2048, 2048]

Handling partitioning edge cases
................................

Under certain conditions, FFTW may assign a size of zero to the :math:`x`
dimension of the model on one or more processes. If that happens, the surface
generator raises a runtime error, which causes a deadlock because the processes
with zero data do not exit. The correct way to handle this edge case is::

    from mpi4py import MPI

    try:
        gen = tm.SurfaceGeneratorRandomPhase2D([128, 128])
    except RuntimeError as e:
        print(e)
        MPI.COMM_WORLD.Abort(1)

This will correctly kill all processes. Alternatively, ``os._exit()`` can be
used, but one should avoid ``sys.exit()``, as it kills the process by raising
an exception, which still results in a deadlock.

Computing statistics
~~~~~~~~~~~~~~~~~~~~

With a model's data distributed among independent processes, computing global
properties, like the true contact area, must be done in a collective fashion.
This is transparently handled by the :py:class:`Statistics` class, e.g. with::

    contact = tm.Statistics2D.contact(model.traction)

This gives the correct contact fraction, whereas something like
``np.mean(model.traction > 0)`` will give a different result on each processor.

Nonlinear solvers
~~~~~~~~~~~~~~~~~

The only nonlinear solver (for plastic contact) that works with MPI is
:py:class:`DFSANECXXSolver`, a C++ implementation of :py:class:`DFSANESolver`
that works in an MPI context.

.. note:: Scipy and Numpy use optimized BLAS routines for array operations,
   while Tamaas does not, so the *serial* performance of the C++ implementation
   of the DF-SANE algorithm is lower than that of the Scipy version.

Dumping models
~~~~~~~~~~~~~~

The only dumpers that work properly in MPI are :py:class:`H5Dumper` and
:py:class:`NetCDFDumper`. Output is then as simple as::

    from tamaas.dumpers import H5Dumper

    H5Dumper('output', all_fields=True) << model

This is useful for doing post-processing separately from the main simulation:
the post-processing can then be done in serial.
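As a minimal sketch of such serial post-processing (the file name pattern and
the dataset names inside the HDF5 file are assumptions here, not a documented
interface), the dump can be inspected with h5py::

    import h5py

    # Assumed output path, following the './hdf5/<basename>_<frame>.h5' pattern
    # shown in the dumper section above
    with h5py.File('hdf5/output_0000.h5', 'r') as fh:
        print(list(fh))                 # list what was actually written
        traction = fh['traction'][...]  # hypothetical dataset name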
MPI convenience methods
-----------------------

Not every use case can be handled transparently. Adapting existing scripts to
work in an MPI context can require some work, especially if those scripts rely
on numpy and scipy for pre- and post-processing (e.g. constructing a parabolic
surface for Hertzian contact, or computing the total contact area), so the
module :py:mod:`mpi` provides some convenience functions to make that task
easier.

The functions :py:func:`mpi.scatter` and :py:func:`mpi.gather` can be used to
scatter/gather 2D data using the partitioning scheme expected by FFTW (see the
figure below). The functions :py:func:`mpi.rank` and :py:func:`mpi.size` give
the local process rank and the total number of processes respectively. If finer
control is needed, the function :py:func:`mpi.local_shape` gives the 2D shape
of the local data when given the global 2D shape (its counterpart
:py:func:`mpi.global_shape` does the exact opposite), while
:py:func:`mpi.local_offset` gives the offset of the local data in the global
:math:`x` dimension. These two functions mirror FFTW's own data distribution
functions.

.. figure:: figures/mpi_data_distribution.svg
   :align: center
   :width: 75%

   2D data distribution scheme from FFTW. ``N0`` and ``N1`` are the number of
   points in the :math:`x` and :math:`y` directions respectively. The array
   ``local_N0``, indexed by the process rank, gives the local size of the
   :math:`x` dimension. The :py:func:`local_offset` function gives the offset
   in :math:`x` for each process rank.

The :py:mod:`mpi` module also contains a function :py:func:`sequential` whose
return value is meant to be used as a context manager. Within the sequential
context, the default communicator is ``MPI_COMM_SELF`` instead of
``MPI_COMM_WORLD``.

For other MPI functionality not covered by Tamaas, one can use mpi4py, which in
conjunction with the methods in :py:mod:`mpi` should handle just about any use
case.
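For example, the parabolic (Hertzian) surface mentioned above can be built
directly as a local slab using the distribution information. This is a minimal
sketch: the exact signatures of ``mpi.local_shape`` and ``mpi.local_offset``
(both assumed to take the global 2D shape) are inferred from the description
above, not copied from the API reference::

    import numpy as np
    import tamaas as tm

    global_shape = [128, 128]
    # Assumed signatures: both functions take the global 2D shape
    n0, n1 = tm.mpi.local_shape(global_shape)
    offset = tm.mpi.local_offset(global_shape)

    # Global coordinates, restricted to the local slab in x
    x = np.linspace(-0.5, 0.5, global_shape[0], endpoint=False)
    y = np.linspace(-0.5, 0.5, global_shape[1], endpoint=False)
    x_local = x[offset:offset + n0]

    # Local part of a parabolic (Hertzian) surface
    surface = -(x_local[:, None]**2 + y[None, :]**2)
    print(tm.mpi.rank(), surface.shape)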