diff --git a/CHANGELOG.md b/CHANGELOG.md index db4c213..9790c74 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,368 +1,369 @@ # Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html) for final versions and [PEP440](https://www.python.org/dev/peps/pep-0440/) in case intermediate versions need to be released (e.g. development version `2.2.3.dev1` or release candidates `2.2.3rc1`), or individual commits are packaged. ## Unreleased ### Added - Added `matvec` interface for the following operators: - `Hooke` - `Kelvin` - `Westergaard` - Documented error handling with MPI - Added `span` class - `FieldContainer` class that defines an interface to access fields registered with an object - Operators now expose their fields, which can be modified from Python - Added `HookeField` operator for heterogeneous elastic properties - Documented proper unit choice - Added `Statistics::computeFDRMSSlope()`, a finite-difference approximation of the RMS of surface slopes - KatoSaturated solver uses an L2 norm on residual displacement instead of L∞ - Request system to avoid erasing model data on restart - Boundary field names are dumped ### Changed - Resizing a grid no longer sets values to zero - Wheels built for `manylinux2014` instead of `manylinux2010` - Cleaned up `Array` interface - Shared-memory compile options disabled by default - Unknown build variables raise a warning - Removed conditional branches from PKR - NetCDF dumps in `NETCDF4` format instead of `NETCDF4_CLASSIC` +- VTK no longer writes FieldData, which was causing issues when reading converted files ### Fixed - Fixed flood fill algorithm to properly account for PBCs ## v2.5.0 -- 2022-07-05 ### Added - Added `tamaas.utils.publications` to print a list of publications relevant to the parts of Tamaas used in a script. Call at the end of a script for an exhaustive list. - Added `tamaas.mpi.gather` and `tamaas.mpi.scatter` to help with handling of 2D data in MPI contexts. - Added `getBoundaryFields()/boundary_fields` function/property to `Model`. - Added Python constructor to `Model`, which is an alias to `ModelFactory.createModel`. - Added convenience `tamaas.dumpers._helper.netCDFtoParaview` function to convert a model sequence to a Paraview data file. - Added dump of surface fields as `FieldData` in VTK files - Added `scipy.sparse.linalg.LinearOperator` interface to `Westergaard`. A `LinearOperator` can be created by calling `scipy.sparse.linalg.aslinearoperator` on the operator object (see the sketch after this list). - Allowed sub-classing of `ContactSolver` in Python. - Exposed elastic contact functionals in Python to help with custom solver implementation. - Added an example of custom solver with penalty method using Scipy. - Added `graphArea` function in statistics to compute the area of a function graph - Added `tamaas.utils.radial_average` to radially average a 2D field - Added option to not make a dump directory (e.g. `netcdf`, `hdf5` directories for dump files). This allows specifying an absolute path for dump files, and is useful if the simulation workflow is managed with file targets, as systems like GNU Make and Snakemake do.
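The `LinearOperator` entry above lends itself to a brief illustration. The following is a hedged sketch, not an official example: it assumes the Westergaard operator is registered in the model's operators under the name `westergaard` and exposes the `shape`/`matvec` attributes that `scipy.sparse.linalg.aslinearoperator` expects; the exact registration name and model setup may differ.

```python
import numpy as np
import scipy.sparse.linalg as spl
import tamaas as tm

# Hypothetical setup; the operator name 'westergaard' is an assumption
model = tm.ModelFactory.createModel(tm.model_type.basic_2d,
                                    [1., 1.], [64, 64])
op = model.operators['westergaard']        # registered operator (name assumed)
A = spl.aslinearoperator(op)               # LinearOperator interface (v2.5.0)

pressure = np.random.rand(*model.boundary_shape)
displacement = A.matvec(pressure.ravel())  # apply the operator through SciPy
```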
### Changed - Filter for boundary fields in dumpers is now smarter. - `read` functions in dumpers are now `@classmethod` - Changed sign of plastic correction in KatoSaturated solver to match the EPICSolver sign. - Plastic correction of KatoSaturated is registered in the model as `KatoSaturated::residual_displacement`. - Changed to modern Python packaging (PEP517) with `setup.cfg` and `pyproject.toml`. A `setup.py` script is still provided for editable installations, see [gh#7953](https://github.com/pypa/pip/issues/7953). Once setuptools has stable support for [PEP621](https://peps.python.org/pep-0621/), `setup.cfg` will be merged into `pyproject.toml`. Note that source distributions that can be built with [`build`](https://pypi.org/project/build/) are not true source distributions, since they contain the compiled Python module of Tamaas. - Updated copyright to reflect Tamaas' development history since leaving EPFL ### Fixed - Fixed `tamaas.dumpers.NetCDFDumper` to dump in MPI context. ### Deprecated - SCons versions < 3 and Python < 3.6 are explicitly no longer supported. ## v2.4.0 -- 2022-03-22 ### Added - Added a `tamaas.utils` module. - Added `tamaas.utils.load_path` generator, which yields model objects for a sequence of applied loads. - Added `tamaas.utils.seeded_surfaces` generator, which yields surfaces for a sequence of random seeds. - Added `tamaas.utils.hertz_surface` which generates a parabolic (Hertzian) surface. - Added `tamaas.utils.log_context` context manager which locally sets the log level for Tamaas' logger. - Added `tamaas.compute.from_voigt` to convert Voigt/Mendel fields to dense tensor fields. - Added a deep-copy function to `ModelFactory` to copy `Model` objects. Use `model_copy = copy.deepcopy(model)` in Python to make a copy. Currently only copies registered fields, same as dumpers/readers. - Added a `read_sequence` method for dumpers to read model frames. - Automatic draft release on Zenodo. ### Changed - `*_ROOT` variables for vendored dependencies GoogleTest and pybind11 now default to empty strings, so that the vendored trees in `third-party/` are not selected by default. This gives priority to system packages. Vendored dependency submodules will eventually be deprecated. ### Fixed - Fixed an issue with `scons -c` when GTest was missing. ## v2.3.1 -- 2021-11-08 ### Added - Now using `clang-format`, `clang-tidy` and `flake8` for linting - Pre-built Tamaas images can be pulled with `docker pull registry.gitlab.com/tamaas/tamaas` - Added a `--version` flag to the `tamaas` command ### Changed - The root `Dockerfile` now compiles Tamaas, so using Tamaas in Docker is easier. - The model attributes dumped to Numpy files are now written in a JSON-formatted string to avoid unsafe loading/unpickling of objects. - Removed the `build_doc` build option: now the doc targets are automatically added if the dependencies are met, and built if `scons doc` is called. - Removed the `use_googletest` build option: if tests are built and gtest is present, the corresponding tests will be built ### Fixed - The command `tamaas plot` gracefully fails if Matplotlib is missing - Better heuristic to guess which fields are defined on the boundary in dumpers (still not perfect and may give false negatives) ## v2.3.0 -- 2021-06-15 ### Added - Added `read()` method to dumpers to create a model from a dump file (see the sketch after this list) - `getClusters()` can be called in MPI context with partial contact maps - Added a JSON encoder class for models and a JSON dumper - CUDA compatibility is re-established, but has not been tested - Docstrings in the Python bindings for many classes/methods
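The dumper `read()` and JSON features listed above can be illustrated with a minimal, hedged sketch; the file names below are placeholders and the model setup is illustrative only.

```python
import tamaas as tm
from tamaas.dumpers import NumpyDumper, JSONDumper

model = tm.ModelFactory.createModel(tm.model_type.basic_2d,
                                    [1., 1.], [64, 64])

# Dump all registered fields, then rebuild a model from the dump file
NumpyDumper('example', all_fields=True) << model        # e.g. numpys/example_0000.npz
restored = NumpyDumper.read('numpys/example_0000.npz')  # read() classmethod

# The JSON dumper relies on the JSON encoder class for models
JSONDumper('example.json') << restored
```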
### Changed - Tamaas version numbers are now managed by [versioneer](https://github.com/python-versioneer/python-versioneer). This means that Git tags prefixed with `v` (e.g. `v2.2.3`) carry meaning and determine the version. When no tag is set, versioneer uses the last tag, specifies the commit short hash and the distance to the last tag (e.g. `2.2.2+33.ge314b0e`). This version string is used in the compiled library, the `setup.py` script and the `__version__` variable in the Python module. - Tamaas migrated to [GitLab](https://gitlab.com/tamaas/tamaas) - Continuous delivery has been implemented: - the `master` branch will now automatically build and publish Python wheels to `https://gitlab.com/api/v4/projects/19913787/packages/pypi/simple`. These "nightly" builds can be installed with: pip install \ --extra-index-url https://gitlab.com/api/v4/projects/19913787/packages/pypi/simple \ tamaas - version tags pushed to `master` will automatically publish the wheels to [PyPI](https://pypi.org/project/tamaas/) ### Deprecated - The `finalize()` function is now deprecated, since it is automatically called when the process terminates - Python versions 3.5 and below are no longer supported ### Fixed - Fixed a host of dump read/write issues when model type was not `volume_*d`. Dumper tests are now streamlined and systematic. - Fixed a bug where `Model::solveDirichlet` would not compute correctly - Fixed a bug where `Statistics::contact` would not normalize by the global number of surface points ## v2.2.2 -- 2021-04-02 ### Added - Entry-point `tamaas` defines a grouped CLI for `examples/pipe_tools`. Try executing `tamaas surface -h` from the command-line! ### Changed - `CXXFLAGS` are now passed to the linker - Added this changelog - Using absolute paths for environment variables when running `scons test` - Reorganized documentation layout - Gave the build system a facelift (docs are now generated directly with SCons instead of a Makefile) ### Deprecated - Python 2 support is discontinued. Version `v2.2.1` is the last PyPI build with a Python 2 wheel. - The scripts in `examples/pipe_tools` have been replaced by the `tamaas` command ### Fixed - `UVWDumper` no longer imports `mpi4py` in sequential runs - Compiling with different Thrust/FFTW backends ## v2.2.1 -- 2021-03-02 ### Added - Output registered fields and dumpers in `print(model)` - Added `operator[]` to the C++ model class (for fields) - Added `traction` and `displacement` properties to Python model bindings - Added `operators` property to Python model bindings, which provides a dict-like access to registered operators - Added `shape` and `spectrum` properties to Python surface generator bindings - Surface generator constructor accepts surface global shape as argument - Choice of FFTW thread model ### Changed - Tests use `/tmp` for temporary files - Updated dependency versions (Thrust, Pybind11) ### Deprecated - Most `get___()` and `set___()` in Python bindings have been deprecated. They will generate a `DeprecationWarning` (see the sketch below).
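The v2.2.1 bindings above replace most of the deprecated getters/setters; here is a minimal, hedged sketch of the property and dict-like access (model type and shapes are illustrative, not prescribed by the changelog).

```python
import tamaas as tm

model = tm.ModelFactory.createModel(tm.model_type.basic_2d,
                                    [1., 1.], [64, 64])

print(model)                      # lists registered fields and dumpers
model.traction[...] = 0.          # property access instead of getTraction()
disp = model['displacement']      # operator[] access to a registered field
print(list(model.operators))      # dict-like access to registered operators
```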
### Removed - All legacy code ## v2.2.0 -- 2020-12-31 ### Added - More accurate function for computation of contact area - Function to compute deviatoric of tensor fields - MPI implementation - Convenience `hdf5toVTK` function - Readonly properties `shape`, `global_shape`, `boundary_shape` on model to give shape information ### Changed - Preprocessor defined macros are prefixed with `TAMAAS_` - Moved `tamaas.to_voigt` to `tamaas.compute.to_voigt` ### Fixed - Warning about deprecated constructors with recent GCC versions - Wrong computation of grid strides - Wrong computation of grid sizes in views ## v2.1.4 -- 2020-08-07 ### Added - Possibility to generate a static `libTamaas` - C++ implementation of DFSANE solver - Allowing compilation without OpenMP ### Changed - NetCDF dumper writes frames to a single file ### Fixed - Compatibility with SCons+Python 3 ## v2.1.3 -- 2020-07-27 ### Added - Version number to `TamaasInfo` ### Changed - Prepending root directory when generating archive ## v2.1.2 -- 2020-07-24 This release changes some core internals related to discrete Fourier transforms for future MPI support. ### Added - Caching `CXXFLAGS` in SCons build - SCons shortcut to create code archive - Test of the elastic-plastic contact solver - Paraview data dumper (`.pvd` files) - Compression for UVW dumper - `__contains__` and `__iter__` Python bindings of model - Warning message of possible overflow in Kelvin ### Changed - Simplified `tamaas_info.cpp`, particularly the diff part - Using a new class `FFTEngine` to manage discrete Fourier transforms. Plans are re-used as much as possible with different data with the same shape. This is in view of future MPI developments - Redirecting I/O streams in solve functions so they can be used from Python (e.g. in Jupyter notebooks) - Calling `initialize()` and `finalize()` is no longer necessary ### Fixed - Convergence issue with non-linear solvers - Memory error in volume potentials ## v2.1.1 -- 2020-04-22 ### Added - SCons shortcut to run tests ### Fixed - Correct `RPATH` for shared libraries - Issues with SCons commands introduced in v2.1.0 - Tests with Python 2.7 ## v2.1.0 -- 2020-04-17 ### Added - SCons shortcuts to build/install Tamaas and its components - Selection of integration method for Kelvin operator - Compilation option to remove the legacy part of Tamaas - NetCDF dumper ### Fixed - Link bug with clang - NaNs in Kato saturated solver ## v2.0.0 -- 2019-11-11 First public release. Contains relatively mature elastic-plastic contact code. diff --git a/python/tamaas/dumpers/__init__.py b/python/tamaas/dumpers/__init__.py index b3b229f..6e01423 100644 --- a/python/tamaas/dumpers/__init__.py +++ b/python/tamaas/dumpers/__init__.py @@ -1,587 +1,581 @@ # -*- mode:python; coding: utf-8 -*- # # Copyright (©) 2016-2022 EPFL (École Polytechnique Fédérale de Lausanne), # Laboratory (LSMS - Laboratoire de Simulation en Mécanique des Solides) # Copyright (©) 2020-2022 Lucas Frérot # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Affero General Public License as published # by the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Affero General Public License for more details. 
# # You should have received a copy of the GNU Affero General Public License # along with this program. If not, see <https://www.gnu.org/licenses/>. """Dumpers for the class :py:class:`Model`.""" from pathlib import Path from os import PathLike import glob import json import io import typing as ts from collections.abc import Collection import numpy as np from .. import ( ModelDumper, model_type, mpi, type_traits, ModelFactory, Model, __version__, ) from ._helper import ( step_dump, directory_dump, local_slice, _is_surface_field, _basic_types, file_handler, ) __all__ = [ "JSONDumper", "FieldDumper", "NumpyDumper", ] FileType = ts.Union[str, PathLike, io.TextIOBase] NameType = ts.Union[str, PathLike] _reverse_trait_map = { 'model_type.' + t.__name__: mtype for mtype, t in type_traits.items() } def _get_attributes(model: Model): """Get model attributes.""" return { 'model_type': str(model.type), 'system_size': model.system_size, 'discretization': model.global_shape, 'boundary_fields': model.boundary_fields, 'program': f"Tamaas {__version__}, DOI:10.21105/joss.02121", } def _create_model(attrs: ts.MutableMapping): """Create a model from attribute dictionary.""" mtype = _reverse_trait_map[attrs['model_type']] # netcdf4 converts 1-lists attributes to numbers for attr in ['system_size', 'discretization']: if not isinstance(attrs[attr], Collection): attrs[attr] = [attrs[attr]] return ModelFactory.createModel(mtype, attrs['system_size'], attrs['discretization']) class MPIIncompatibilityError(RuntimeError): """Raised when code is not meant to be executed in an MPI environment.""" class ModelError(ValueError): """Raised when an unexpected model is passed to a dumper with state.""" class ComponentsError(ValueError): """Raised when an unexpected number of components is encountered.""" class _ModelJSONEncoder(json.JSONEncoder): """Encode a model to JSON.""" def default(self, obj): """Encode model.""" if isinstance(obj, Model): model = obj attrs = _get_attributes(model) model_dict = { 'attrs': attrs, 'fields': {}, 'operators': [], } for field in model: model_dict['fields'][field] = model[field].tolist() for op in model.operators: model_dict['operators'].append(op) return model_dict return json.JSONEncoder.default(self, obj) class JSONDumper(ModelDumper): """Dumper to JSON.""" def __init__(self, file_descriptor: FileType): """Construct with file handle.""" super(JSONDumper, self).__init__() self.fd = file_descriptor @file_handler('w') def _dump_to_file(self, fd: FileType, model: Model): json.dump(model, fd, cls=_ModelJSONEncoder) def dump(self, model: Model): """Dump model.""" self._dump_to_file(self.fd, model) @classmethod @file_handler('r') def read(cls, fd: FileType): """Read model from file.""" properties = json.load(fd) model = _create_model(properties['attrs']) for name, field in properties['fields'].items(): v = np.asarray(field) if model.type in _basic_types: v = v.reshape(list(v.shape) + [1]) model[name] = v return model class FieldDumper(ModelDumper): """Abstract dumper for Python classes using fields.""" postfix = "" extension = "" name_format = "{basename}{postfix}.{extension}" def __init__(self, basename: NameType, *fields, **kwargs): """Construct with desired fields.""" super(FieldDumper, self).__init__() self.basename = basename self.fields: ts.List[str] = list(fields) self.all_fields: bool = kwargs.get('all_fields', False) def add_field(self, field: str): """Add another field to the dump.""" if field not in self.fields: self.fields.append(field) def _dump_to_file(self, file_descriptor: FileType, model: Model): """Dump to a
file (path-like or file handle).""" raise NotImplementedError() def get_fields(self, model: Model): """Get the desired fields.""" if not self.all_fields: requested_fields = self.fields else: requested_fields = list(model) return {field: model[field] for field in requested_fields} def dump(self, model: Model): """Dump model.""" self._dump_to_file(self.file_path, model) @classmethod def read(cls, file_descriptor: FileType): """Read model from file.""" raise NotImplementedError( f'read() method not implemented in {cls.__name__}') @classmethod def read_sequence(cls, glob_pattern): """Read models from a file sequence.""" return map(cls.read, glob.iglob(glob_pattern)) @property def file_path(self): """Get the default filename.""" return self.name_format.format(basename=self.basename, postfix=self.postfix, extension=self.extension) @directory_dump('numpys') @step_dump class NumpyDumper(FieldDumper): """Dumper to compressed numpy files.""" extension = 'npz' def _dump_to_file(self, file_descriptor: FileType, model: Model): """Save to compressed multi-field Numpy format.""" if mpi.size() > 1: raise MPIIncompatibilityError("NumpyDumper does not function " "at all in parallel") np.savez_compressed(file_descriptor, attrs=json.dumps(_get_attributes(model)), **self.get_fields(model)) @classmethod def read(cls, file_descriptor: FileType): """Create model from Numpy file.""" data = np.load(file_descriptor, mmap_mode='r') model = _create_model(json.loads(str(data['attrs']))) for k, v in filter(lambda k: k[0] != 'attrs', data.items()): if model.type in _basic_types: v = v.reshape(list(v.shape) + [1]) model[k] = v return model try: import h5py __all__.append("H5Dumper") @directory_dump('hdf5') @step_dump class H5Dumper(FieldDumper): """Dumper to HDF5 file format.""" extension = 'h5' @staticmethod def _hdf5_args(): if mpi.size() > 1: from mpi4py import MPI # noqa mpi_args = dict(driver='mpio', comm=MPI.COMM_WORLD) comp_args = {} # compression does not work in parallel else: mpi_args = {} comp_args = dict(compression='gzip', compression_opts=7) return mpi_args, comp_args def _dump_to_file(self, file_descriptor: FileType, model: Model): """Save to HDF5 with metadata about the model.""" # Setup for MPI if not h5py.get_config().mpi and mpi.size() > 1: raise MPIIncompatibilityError("HDF5 does not have MPI support") mpi_args, comp_args = self._hdf5_args() with h5py.File(file_descriptor, 'w', **mpi_args) as handle: # Writing data for name, field in self.get_fields(model).items(): shape = list(field.shape) if mpi.size() > 1: xdim = 0 if _is_surface_field(field, model) else 1 shape[xdim] = mpi_args['comm'].allreduce(shape[xdim]) dset = handle.create_dataset(name, shape, field.dtype, **comp_args) s = local_slice(field, model, name in model.boundary_fields) dset[s] = field # Writing metadata for name, attr in _get_attributes(model).items(): handle.attrs[name] = attr @classmethod def read(cls, file_descriptor: FileType): """Create model from HDF5 file.""" mpi_args, _ = cls._hdf5_args() with h5py.File(file_descriptor, 'r', **mpi_args) as handle: model = _create_model(handle.attrs) for k, v in handle.items(): v = np.asanyarray(v) if model.type in _basic_types: v = v.reshape(list(v.shape) + [1]) surface_field = \ k in handle.attrs.get('boundary_fields', {}) \ or _is_surface_field(v, model) s = local_slice(v, model, surface_field) if (surface_field and v.ndim == len(model.boundary_shape)) \ or (not surface_field and v.ndim == len(model.shape)): s = s + (np.newaxis, ) model[k] = v[s].copy() return model except ImportError: 
pass try: import uvw # noqa __all__ += [ "UVWDumper", "UVWGroupDumper", ] @directory_dump('paraview') @step_dump class UVWDumper(FieldDumper): """Dumper to VTK files for elasto-plastic calculations.""" extension = 'vtr' def _dump_to_file(self, file_descriptor: FileType, model: Model): """Dump displacements, plastic deformations and stresses.""" if mpi.size() > 1: raise MPIIncompatibilityError("UVWDumper does not function " "properly in parallel") bdim = len(model.boundary_shape) # Local MPI size lsize = model.shape gsize = mpi.global_shape(model.boundary_shape) gshape = gsize if len(lsize) > bdim: gshape = [model.shape[0]] + gshape # Space coordinates coordinates = [ np.linspace(0, L, N, endpoint=False) for L, N in zip(model.system_size, gshape) ] # If model has subsurface domain, z-coordinate is always first dimension_indices = np.arange(bdim) if len(lsize) > bdim: dimension_indices += 1 dimension_indices = np.concatenate((dimension_indices, [0])) coordinates[0] = \ np.linspace(0, model.system_size[0], gshape[0]) offset = np.zeros_like(dimension_indices) offset[0] = mpi.local_offset(gsize) rectgrid = uvw.RectilinearGrid if mpi.size() == 1 \ else uvw.parallel.PRectilinearGrid # Creating rectilinear grid with correct order for components coordlist = [ coordinates[i][o:o + lsize[i]] for i, o in zip(dimension_indices, offset) ] grid = rectgrid( file_descriptor, coordlist, compression=True, offsets=offset, ) fields = self.get_fields(model).items() # Iterator over fields we want to dump on system geometry - fields_it = filter(lambda t: not _is_surface_field(t[1], model), - fields) + if model.type in {model_type.volume_1d, model_type.volume_2d}: + fields_it = filter(lambda t: not t[0] in model.boundary_fields, + fields) + else: + fields_it = iter(fields) - # We make fields periodic for visualization for name, field in fields_it: array = uvw.DataArray(field, dimension_indices, name) grid.addPointData(array) - # Dump surface fields separately - fields_it = filter(lambda t: _is_surface_field(t[1], model), - fields) - for name, field in fields_it: - array = uvw.DataArray(field, range(len(model.boundary_shape)), - name) - grid.addFieldData(array) - grid.write() @directory_dump('paraview') class UVWGroupDumper(FieldDumper): """Dumper to ParaViewData files.""" extension = 'pvd' def __init__(self, basename: NameType, *fields, **kwargs): """Construct with desired fields.""" super(UVWGroupDumper, self).__init__(basename, *fields, **kwargs) subdir = Path('paraview') / f'{basename}-VTR' subdir.mkdir(parents=True, exist_ok=True) self.uvw_dumper = UVWDumper( Path(f'{basename}-VTR') / basename, *fields, **kwargs) self.group = uvw.ParaViewData(self.file_path, compression=True) def _dump_to_file(self, file_descriptor, model): self.group.addFile( self.uvw_dumper.file_path.replace('paraview/', ''), timestep=self.uvw_dumper.count, ) self.group.write() self.uvw_dumper.dump(model) except ImportError: pass try: from netCDF4 import Dataset __all__.append("NetCDFDumper") @directory_dump('netcdf') class NetCDFDumper(FieldDumper): """Dumper to netCDF4 files.""" extension = "nc" time_dim = 'frame' format = 'NETCDF4' def _file_setup(self, grp, model: Model): grp.createDimension(self.time_dim, None) # Attributes for k, v in _get_attributes(model).items(): grp.setncattr(k, v) # Local dimensions voigt_dim = type_traits[model.type].voigt components = type_traits[model.type].components self._vec = grp.createDimension('spatial', components) self._tens = grp.createDimension('Voigt', voigt_dim) self.model_info = model.global_shape, model.type
global_boundary_shape = mpi.global_shape(model.boundary_shape) # Create boundary dimensions for label, size, length in zip("xy", global_boundary_shape, model.boundary_system_size): grp.createDimension(label, size) coord = grp.createVariable(label, 'f8', (label, )) coord[:] = np.linspace(0, length, size, endpoint=False) self._create_variables(grp, model, lambda f: _is_surface_field(f[1], model), global_boundary_shape, "xy") # Create volume dimension if model.type in {model_type.volume_1d, model_type.volume_2d}: size = model.shape[0] grp.createDimension("z", size) coord = grp.createVariable("z", 'f8', ("z", )) coord[:] = np.linspace(0, model.system_size[0], size) self._create_variables( grp, model, lambda f: not _is_surface_field(f[1], model), model.global_shape, "zxy") self.has_setup = True def _set_collective(self, rootgrp): if mpi.size() == 1: return for v in rootgrp.variables.values(): if self.time_dim in v.dimensions: v.set_collective(True) def _dump_to_file(self, file_descriptor: NameType, model: Model): mode = 'a' if Path(file_descriptor).is_file() \ and getattr(self, 'has_setup', False) else 'w' try: with Dataset(file_descriptor, mode, format=self.format, parallel=mpi.size() > 1) as rootgrp: if rootgrp.dimensions == {}: self._file_setup(rootgrp, model) self._set_collective(rootgrp) if self.model_info != (model.global_shape, model.type): raise ModelError(f"Unexpected model {mode}") self._dump_generic(rootgrp, model) except ValueError: raise MPIIncompatibilityError("NetCDF4 has no MPI support") def _create_variables(self, grp, model, predicate, shape, dimensions): field_dim = len(shape) fields = list(filter(predicate, self.get_fields(model).items())) dim_labels = list(dimensions[:field_dim]) for label, data in fields: local_dim = [] # If we have an extra component if data.ndim > field_dim: if data.shape[-1] == self._tens.size: local_dim = [self._tens.name] elif data.shape[-1] == self._vec.size: local_dim = [self._vec.name] else: raise ComponentsError( f"{label} has unexpected number of components " f"({data.shape[-1]})") # Downcasting in case of 128 bit float dtype = data.dtype if data.dtype.str[1:] != 'f16' else 'f8' grp.createVariable(label, dtype, [self.time_dim] + dim_labels + local_dim, zlib=mpi.size() == 0) def _dump_generic(self, grp, model): fields = self.get_fields(model).items() new_frame = len(grp.dimensions[self.time_dim]) for label, data in fields: var = grp[label] slice_in_global = (new_frame, ) + local_slice(data, model) var[slice_in_global] = np.array(data, dtype=var.dtype) @classmethod def _open_read(cls, fd): return Dataset(fd, 'r', format=cls.format, parallel=mpi.size() > 1) @staticmethod def _create_model(rootgrp): attrs = {k: rootgrp.getncattr(k) for k in rootgrp.ncattrs()} return _create_model(attrs) @staticmethod def _set_model_fields(rootgrp, model, frame): dims = rootgrp.dimensions.keys() for k, v in filter(lambda k: k[0] not in dims, rootgrp.variables.items()): v = v[frame, :] if model.type in _basic_types: v = np.asarray(v).reshape(list(v.shape) + [1]) model[k] = v[local_slice(v, model)].copy() @classmethod def read(cls, file_descriptor: NameType): """Create model with last frame.""" with cls._open_read(file_descriptor) as rootgrp: model = cls._create_model(rootgrp) cls._set_model_fields(rootgrp, model, -1) return model @classmethod def read_sequence(cls, file_descriptor: NameType): with cls._open_read(file_descriptor) as rootgrp: model = cls._create_model(rootgrp) for frame in range(len(rootgrp.dimensions[cls.time_dim])): cls._set_model_fields(rootgrp, 
model, frame) yield model except ImportError: pass diff --git a/python/tamaas/dumpers/_helper.py b/python/tamaas/dumpers/_helper.py index fbeecb3..692c2c7 100644 --- a/python/tamaas/dumpers/_helper.py +++ b/python/tamaas/dumpers/_helper.py @@ -1,174 +1,174 @@ # -*- mode:python; coding: utf-8 -*- # # Copyright (©) 2016-2022 EPFL (École Polytechnique Fédérale de Lausanne), # Laboratory (LSMS - Laboratoire de Simulation en Mécanique des Solides) # Copyright (©) 2020-2022 Lucas Frérot # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Affero General Public License as published # by the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Affero General Public License for more details. # # You should have received a copy of the GNU Affero General Public License # along with this program. If not, see . """ Helper functions for dumpers """ from os import PathLike from functools import wraps from pathlib import Path import io import numpy as np from .. import model_type, type_traits, mpi __all__ = ["step_dump", "directory_dump", "local_slice"] _basic_types = [t for t, trait in type_traits.items() if trait.components == 1] def _is_surface_field(field, model): def _to_global(shape): if len(shape) == len(model.boundary_shape) + 1: return mpi.global_shape(model.boundary_shape) + [shape[-1]] return mpi.global_shape(shape) def _compare_shape(a, b): return a == b b_shapes = [list(model[name].shape) for name in model.boundary_fields] shape = list(field.shape) return any(_compare_shape(shape, s) for s in b_shapes) \ or any(_compare_shape(shape, _to_global(s)) for s in b_shapes) def local_slice(field, model, surface_field=None): n = model.shape bn = model.boundary_shape gshape = mpi.global_shape(bn) offsets = np.zeros_like(gshape) offsets[0] = mpi.local_offset(gshape) if not surface_field: surface_field = _is_surface_field(field, model) if not surface_field and len(n) > len(bn): gshape = [n[0]] + gshape offsets = np.concatenate(([0], offsets)) shape = bn if surface_field else n if len(field.shape) > len(shape): shape += field.shape[len(shape):] def sgen(offset, size): return slice(offset, offset + size, None) def sgen_basic(offset, size): return slice(offset, offset + size) slice_gen = sgen_basic if model_type in _basic_types else sgen return tuple(map(slice_gen, offsets, shape)) def step_dump(cls): """ Decorator for dumper with counter for steps """ orig_init = cls.__init__ orig_dump = cls.dump @wraps(cls.__init__) def __init__(obj, *args, **kwargs): orig_init(obj, *args, **kwargs) obj.count = 0 def postfix(obj): return "_{:04d}".format(obj.count) @wraps(cls.dump) def dump(obj, *args, **kwargs): orig_dump(obj, *args, **kwargs) obj.count += 1 cls.__init__ = __init__ cls.dump = dump cls.postfix = property(postfix) return cls def directory_dump(directory=""): "Decorator for dumper in a directory" directory = Path(directory) def actual_decorator(cls): orig_dump = cls.dump orig_filepath = cls.file_path.fget orig_init = cls.__init__ @wraps(cls.__init__) def init(obj, *args, **kwargs): orig_init(obj, *args, **kwargs) obj.mkdir = kwargs.get('mkdir', True) @wraps(cls.dump) def dump(obj, *args, **kwargs): if mpi.rank() == 0 and getattr(obj, 'mkdir'): directory.mkdir(parents=True, exist_ok=True) orig_dump(obj, *args, 
**kwargs) @wraps(cls.file_path.fget) def file_path(obj): if getattr(obj, 'mkdir'): return str(directory / orig_filepath(obj)) return orig_filepath(obj) cls.__init__ = init cls.dump = dump cls.file_path = property(file_path) return cls return actual_decorator -def hdf5toVTK(inpath, outname): +def hdf5toVTK(inpath: PathLike, outname: str): """Convert HDF5 dump of a model to VTK.""" from . import UVWDumper, H5Dumper # noqa UVWDumper(outname, all_fields=True) << H5Dumper.read(inpath) -def netCDFtoParaview(inpath, outname): +def netCDFtoParaview(inpath: PathLike, outname: str): """Convert NetCDF dump of model sequence to Paraview.""" from . import UVWGroupDumper, NetCDFDumper # noqa dumper = UVWGroupDumper(outname, all_fields=True) for model in NetCDFDumper.read_sequence(inpath): dumper << model def file_handler(mode): """Decorate a function to accept path-like or file handles.""" def _handler(func): @wraps(func) def _wrapped(self, fd, *args, **kwargs): if isinstance(fd, (str, PathLike)): with open(fd, mode) as fd: return _wrapped(self, fd, *args, **kwargs) elif isinstance(fd, io.TextIOBase): return func(self, fd, *args, **kwargs) raise TypeError( f"Expected a path-like or file handle, got {type(fd)}") return _wrapped return _handler
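For reference, a short usage sketch of the two conversion helpers defined above. The dump paths are placeholders, and both functions require the optional `h5py`/`netCDF4` and `uvw` dependencies used by the corresponding dumpers.

```python
from tamaas.dumpers._helper import hdf5toVTK, netCDFtoParaview

# Convert a single HDF5 frame written by H5Dumper to a VTK (.vtr) file
hdf5toVTK('hdf5/model_0000.h5', 'converted')      # placeholder path

# Convert a NetCDF frame sequence written by NetCDFDumper to a .pvd group
netCDFtoParaview('netcdf/model.nc', 'sequence')   # placeholder path
```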