diff --git a/CHANGELOG.md b/CHANGELOG.md
index ee42893..cb51df5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,298 +1,299 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html) for final versions and [PEP440](https://www.python.org/dev/peps/pep-0440/) in case intermediate versions need to be released (e.g. development version `2.2.3.dev1` or release candidates `2.2.3rc1`), or individual commits are packaged.
## Unreleased
### Added
- Added `tamaas.utils.publications` to print a list of publications relevant to
the parts of Tamaas used in a script. Call at the end of a script for an
exhaustive list.
- Added `tamaas.mpi.gather` and `tamaas.mpi.scatter` to help handle 2D data
in MPI contexts.
- Added `getBoundaryFields()/boundary_fields` function/property to `Model`.
- Added constructor to `Model`, which is an alias for `ModelFactory.createModel` (see the sketch below).
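A minimal sketch of these additions (model type and sizes are illustrative, and `publications()` is assumed to take no required arguments):

```python
import tamaas as tm
from tamaas.utils import publications

# Model() is now an alias for ModelFactory.createModel()
model = tm.Model(tm.model_type.basic_2d, [1., 1.], [64, 64])

# boundary_fields lists the fields defined on the boundary
print(model.boundary_fields)

# Print publications relevant to the Tamaas features used so far;
# best called at the end of a script
publications()
```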
### Changed
- Smarter filtering of boundary fields in dumpers.
+- `read` functions in dumpers are now `@classmethod` (usage sketch below)
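Since `read` no longer needs a dumper instance, dumps can now be loaded directly from the class. A short sketch (file names are illustrative):

```python
from tamaas.dumpers import NumpyDumper

# read() is a @classmethod: no dumper instance required
model = NumpyDumper.read('numpys/dump_0000.npz')

# read_sequence() maps read() over a glob pattern
for frame in NumpyDumper.read_sequence('numpys/dump_*.npz'):
    print(frame.type)
```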
### Fixed
- Fixed `tamaas.dumpers.NetCDFDumper` to dump in MPI context.
## v2.4.0 -- 2022-03-22
### Added
- Added a `tamaas.utils` module.
- Added `tamaas.utils.load_path` generator, which yields model objects for a
sequence of applied loads (see the sketch after this list).
- Added `tamaas.utils.seeded_surfaces` generator, which yields surfaces for a
sequence of random seeds.
- Added `tamaas.utils.hertz_surface` which generates a parabolic (Hertzian)
surface.
- Added `tamaas.utils.log_context` context manager which locally sets the log
level for Tamaas' logger.
- Added `tamaas.compute.from_voigt` to convert Voigt/Mendel fields to dense
tensor fields.
- Added a deep-copy function to `ModelFactory` to copy `Model` objects. Use
`model_copy = copy.deepcopy(model)` in Python to make a copy. Currently only
copies registered fields, same as dumpers/readers.
- Added a `read_sequence` method for dumpers to read model frames.
- Automatic draft release on Zenodo.
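A hedged sketch combining some of these helpers; the `hertz_surface(system_size, shape, radius)` and `load_path(solver, loads)` signatures, as well as the solver construction, are assumptions based on the descriptions above:

```python
import numpy as np
import tamaas as tm
from tamaas.utils import load_path, hertz_surface

model = tm.ModelFactory.createModel(tm.model_type.basic_2d,
                                    [1., 1.], [64, 64])

# Parabolic (Hertzian) surface; signature assumed
surface = hertz_surface(model.system_size, model.shape, 1.)

# Solver construction is illustrative; any contact solver should work
solver = tm.PolonskyKeerRey(model, surface, 1e-12)

# load_path() yields the model once per applied load
for m in load_path(solver, np.linspace(0.01, 0.1, 5)):
    print(np.mean(m.traction))
```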
### Changed
- `*_ROOT` variables for vendored dependencies GoogleTest and pybind11 now
default to empty strings, so that the vendored trees in `third-party/` are not
selected by default. This is so system packages are given priority. Vendored
dependency submodules will eventually be deprecated.
### Fixed
- Fixed an issue with `scons -c` when GTest was missing.
## v2.3.1 -- 2021-11-08
### Added
- Now using `clang-format`, `clang-tidy` and `flake8` for linting
- Pre-built Tamaas images can be pulled with `docker pull
registry.gitlab.com/tamaas/tamaas`
- Added a `--version` flag to the `tamaas` command
### Changed
- The root `Dockerfile` now compiles Tamaas, so using Tamaas in Docker is easier.
- The model attributes dumped to Numpy files are now written in a JSON-formatted
string to avoid unsafe loading/unpickling of objects.
- Removed the `build_doc` build option: now the doc targets are automatically
added if the dependencies are met, and built if `scons doc` is called.
- Removed the `use_googletest` build option: if tests are built and gtest is
present, the corresponding tests will be built
### Fixed
- The command `tamaas plot` gracefully fails if Matplotlib is missing
- Better heuristic to guess which fields are defined on boundary in dumpers
(still not perfect and may give false negatives)
## v2.3.0 -- 2021-06-15
### Added
- Added `read()` method to dumpers to create a model from a dump file
- `getClusters()` can be called in MPI context with partial contact maps
- Added a JSON encoder class for models and a JSON dumper
- CUDA compatibility is re-established, but has not been tested
- Docstrings in the Python bindings for many classes/methods
### Changed
- Tamaas version numbers are now managed by
[versioneer](https://github.com/python-versioneer/python-versioneer). This
means that Git tags prefixed with `v` (e.g. `v2.2.3`) carry meaning and
determine the version. When no tag is set, versioneer uses the last tag,
specifies the commit short hash and the distance to the last tag (e.g.
`2.2.2+33.ge314b0e`). This version string is used in the compiled library, the
`setup.py` script and the `__version__` variable in the python module.
- Tamaas migrated to [GitLab](https://gitlab.com/tamaas/tamaas)
- Continuous delivery has been implemented:
- the `master` branch will now automatically build and publish Python wheels
to `https://gitlab.com/api/v4/projects/19913787/packages/pypi/simple`. These
"nightly" builds can be installed with:
```bash
pip install \
    --extra-index-url https://gitlab.com/api/v4/projects/19913787/packages/pypi/simple \
    tamaas
```
- version tags pushed to `master` will automatically publish the wheels to
[PyPI](https://pypi.org/project/tamaas/)
### Deprecated
- The `finalize()` function is now deprecated, since it is automatically called
when the process terminates
- Python versions 3.5 and below are not supported anymore
### Fixed
- Fixed a host of dump read/write issues when model type was not `volume_*d`.
Dumper tests are now streamlined and systematic.
- Fixed a bug where `Model::solveDirichlet` computed incorrect results
- Fixed a bug where `Statistics::contact` would not normalize by the global
number of surface points
## v2.2.2 -- 2021-04-02
### Added
- Entry-point `tamaas` defines a grouped CLI for `examples/pipe_tools`. Try
executing `tamaas surface -h` from the command-line!
### Changed
- `CXXFLAGS` are now passed to the linker
- Added this changelog
- Using absolute paths for environment variables when running `scons test`
- Reorganized documentation layout
- Gave the build system a facelift (docs are now generated directly with SCons
instead of a Makefile)
### Deprecated
- Python 2 support is discontinued. Version `v2.2.1` is the last PyPI build with
a Python 2 wheel.
- The scripts in `examples/pipe_tools` have been replaced by the `tamaas` command
### Fixed
- `UVWDumper` no longer imports `mpi4py` in sequential runs
- Compiling with different Thrust/FFTW backends
## v2.2.1 -- 2021-03-02
### Added
- Output registered fields and dumpers in `print(model)`
- Added `operator[]` to the C++ model class (for fields)
- Added `traction` and `displacement` properties to Python model bindings
- Added `operators` property to Python model bindings, which provides a
dict-like access to registered operators
- Added `shape` and `spectrum` properties to Python surface generator
bindings
- Surface generator constructor accepts surface global shape as argument
- Choice of FFTW thread model
### Changed
- Tests use `/tmp` for temporary files
- Updated dependency versions (Thrust, Pybind11)
### Deprecated
- Most `get___()` and `set___()` methods in the Python bindings have been
deprecated and will generate a `DeprecationWarning` (see the snippet below).
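For example (a hypothetical snippet, with `getTraction` standing in for any deprecated accessor):

```python
t = model.getTraction()  # deprecated: emits a DeprecationWarning
t = model.traction       # preferred property access
```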
### Removed
- All legacy code
## v2.2.0 -- 2020-12-31
### Added
- More accurate function for computation of contact area
- Function to compute deviatoric of tensor fields
- MPI implementation
- Convenience `hdf5toVTK` function
- Read-only properties `shape`, `global_shape`, `boundary_shape` on model to
give shape information
### Changed
- Preprocessor defined macros are prefixed with `TAMAAS_`
- Moved `tamaas.to_voigt` to `tamaas.compute.to_voigt`
### Fixed
- Warning about deprecated constructors with recent GCC versions
- Wrong computation of grid strides
- Wrong computation of grid sizes in views
## v2.1.4 -- 2020-08-07
### Added
- Possibility to generate a static `libTamaas`
- C++ implementation of DFSANE solver
- Allowing compilation without OpenMP
### Changed
- NetCDF dumper writes frames to a single file
### Fixed
- Compatibility with SCons+Python 3
## v2.1.3 -- 2020-07-27
### Added
- Version number to `TamaasInfo`
### Changed
- Prepending root directory when generating archive
## v2.1.2 -- 2020-07-24
This release changes some core internals related to discrete Fourier transforms
for future MPI support.
### Added
- Caching `CXXFLAGS` in SCons build
- SCons shortcut to create code archive
- Test of the elastic-plastic contact solver
- Paraview data dumper (`.pvd` files)
- Compression for UVW dumper
- `__contains__` and `__iter__` Python bindings of model
- Warning message of possible overflow in Kelvin
### Changed
- Simplified `tamaas_info.cpp`, particularly the diff part
- Using a new class `FFTEngine` to manage discrete Fourier transforms. Plans are
re-used as much as possible with different data with the same shape. This is
in view of future MPI developments
- Redirecting I/O streams in solve functions so they can be used from Python
(e.g. in Jupyter notebooks)
- Calling `initialize()` and `finalize()` is no longer necessary
### Fixed
- Convergence issue with non-linear solvers
- Memory error in volume potentials
## v2.1.1 -- 2020-04-22
### Added
- SCons shortcut to run tests
### Fixed
- Correct `RPATH` for shared libraries
- Issues with SCons commands introduced in v2.1.0
- Tests with Python 2.7
## v2.1.0 -- 2020-04-17
### Added
- SCons shortcuts to build/install Tamaas and its components
- Selection of integration method for Kelvin operator
- Compilation option to remove the legacy part of Tamaas
- NetCDF dumper
### Fixed
- Link bug with clang
- NaNs in Kato saturated solver
## v2.0.0 -- 2019-11-11
First public release. Contains relatively mature elastic-plastic contact code.
diff --git a/python/tamaas/dumpers/__init__.py b/python/tamaas/dumpers/__init__.py
index dbdba15..2aae9e1 100644
--- a/python/tamaas/dumpers/__init__.py
+++ b/python/tamaas/dumpers/__init__.py
@@ -1,553 +1,559 @@
# -*- mode:python; coding: utf-8 -*-
#
# Copyright (©) 2016-2022 EPFL (École Polytechnique Fédérale de Lausanne),
# Laboratory (LSMS - Laboratoire de Simulation en Mécanique des Solides)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
"""Dumpers for the class :py:class:`Model `."""
from pathlib import Path
from os import PathLike
import glob
import json
import io
import typing as ts
from collections.abc import Collection
import numpy as np
from .. import (
ModelDumper,
model_type,
mpi,
type_traits,
ModelFactory,
Model,
__version__,
)
from ._helper import (
step_dump,
directory_dump,
local_slice,
_is_surface_field,
_basic_types,
file_handler,
)
__all__ = [
"JSONDumper",
"FieldDumper",
"NumpyDumper",
]
FileType = ts.Union[str, PathLike, io.TextIOBase]
NameType = ts.Union[str, PathLike]
_reverse_trait_map = {
'model_type.' + t.__name__: mtype
for mtype, t in type_traits.items()
}
def _get_attributes(model: Model):
"""Get model attributes."""
return {
'model_type': str(model.type),
'system_size': model.system_size,
'discretization': model.global_shape,
'program': f"Tamaas {__version__}, DOI:10.21105/joss.02121",
}
def _create_model(attrs: ts.MutableMapping):
"""Create a model from attribute dictionary."""
mtype = _reverse_trait_map[attrs['model_type']]
# netCDF4 converts length-1 list attributes to plain numbers
for attr in ['system_size', 'discretization']:
if not isinstance(attrs[attr], Collection):
attrs[attr] = [attrs[attr]]
return ModelFactory.createModel(mtype, attrs['system_size'],
attrs['discretization'])
class MPIIncompatibilityError(RuntimeError):
"""Raised when code is not meant to be executed in MPI environment."""
class ModelError(ValueError):
"""Raised when unexpected model is passed to a dumper with a state."""
class ComponentsError(ValueError):
"""Raised when an unexpected number of components is encountred."""
class _ModelJSONEncoder(json.JSONEncoder):
"""Encode a model to JSON."""
def default(self, obj):
"""Encode model."""
if isinstance(obj, Model):
model = obj
attrs = _get_attributes(model)
model_dict = {
'attrs': attrs,
'fields': {},
'operators': [],
}
for field in model:
model_dict['fields'][field] = model[field].tolist()
for op in model.operators:
model_dict['operators'].append(op)
return model_dict
return json.JSONEncoder.default(self, obj)
class JSONDumper(ModelDumper):
"""Dumper to JSON."""
def __init__(self, file_descriptor: FileType):
"""Construct with file handle."""
super(JSONDumper, self).__init__()
self.fd = file_descriptor
@file_handler('w')
def _dump_to_file(self, fd: FileType, model: Model):
json.dump(model, fd, cls=_ModelJSONEncoder)
def dump(self, model: Model):
"""Dump model."""
self._dump_to_file(self.fd, model)
+ @classmethod
@file_handler('r')
- def read(self, fd: FileType):
+ def read(cls, fd: FileType):
"""Read model from file."""
properties = json.load(fd)
model = _create_model(properties['attrs'])
for name, field in properties['fields'].items():
v = np.asarray(field)
if model.type in _basic_types:
v = v.reshape(list(v.shape) + [1])
model[name] = v
return model
class FieldDumper(ModelDumper):
"""Abstract dumper for python classes using fields."""
postfix = ""
extension = ""
name_format = "{basename}{postfix}.{extension}"
def __init__(self, basename: NameType, *fields, **kwargs):
"""Construct with desired fields."""
super(FieldDumper, self).__init__()
self.basename = basename
self.fields: ts.List[str] = list(fields)
self.all_fields: bool = kwargs.get('all_fields', False)
def add_field(self, field: str):
"""Add another field to the dump."""
if field not in self.fields:
self.fields.append(field)
def _dump_to_file(self, file_descriptor: FileType, model: Model):
"""Dump to a file (path-like or file handle)."""
raise NotImplementedError()
def get_fields(self, model: Model):
"""Get the desired fields."""
if not self.all_fields:
requested_fields = self.fields
else:
requested_fields = list(model)
return {field: model[field] for field in requested_fields}
def dump(self, model: Model):
"""Dump model."""
self._dump_to_file(self.file_path, model)
- def read(self, file_descriptor: FileType):
+ @classmethod
+ def read(cls, file_descriptor: FileType):
"""Read model from file."""
raise NotImplementedError(
- f'read() method not implemented in {type(self).__name__}')
+ f'read() method not implemented in {cls.__name__}')
- def read_sequence(self, glob_pattern):
+ @classmethod
+ def read_sequence(cls, glob_pattern):
"""Read models from a file sequence."""
- return map(self.read, glob.iglob(glob_pattern))
+ return map(cls.read, glob.iglob(glob_pattern))
@property
def file_path(self):
"""Get the default filename."""
return self.name_format.format(basename=self.basename,
postfix=self.postfix,
extension=self.extension)
@directory_dump('numpys')
@step_dump
class NumpyDumper(FieldDumper):
"""Dumper to compressed numpy files."""
extension = 'npz'
def _dump_to_file(self, file_descriptor: FileType, model: Model):
"""Save to compressed multi-field Numpy format."""
if mpi.size() > 1:
raise MPIIncompatibilityError("NumpyDumper does not function "
"at all in parallel")
np.savez_compressed(file_descriptor,
attrs=json.dumps(_get_attributes(model)),
**self.get_fields(model))
- def read(self, file_descriptor: FileType):
+ @classmethod
+ def read(cls, file_descriptor: FileType):
"""Create model from Numpy file."""
data = np.load(file_descriptor, mmap_mode='r')
model = _create_model(json.loads(str(data['attrs'])))
for k, v in filter(lambda k: k[0] != 'attrs', data.items()):
if model.type in _basic_types:
v = v.reshape(list(v.shape) + [1])
model[k] = v
return model
try:
import h5py
__all__.append("H5Dumper")
@directory_dump('hdf5')
@step_dump
class H5Dumper(FieldDumper):
"""Dumper to HDF5 file format."""
extension = 'h5'
- def _hdf5_args(self):
+ @staticmethod
+ def _hdf5_args():
if mpi.size() > 1:
from mpi4py import MPI # noqa
mpi_args = dict(driver='mpio', comm=MPI.COMM_WORLD)
comp_args = {} # compression does not work in parallel
else:
mpi_args = {}
comp_args = dict(compression='gzip', compression_opts=7)
return mpi_args, comp_args
def _dump_to_file(self, file_descriptor: FileType, model: Model):
"""Save to HDF5 with metadata about the model."""
# Setup for MPI
if not h5py.get_config().mpi and mpi.size() > 1:
raise MPIIncompatibilityError("HDF5 does not have MPI support")
mpi_args, comp_args = self._hdf5_args()
with h5py.File(file_descriptor, 'w', **mpi_args) as handle:
# Writing data
for name, field in self.get_fields(model).items():
shape = list(field.shape)
if mpi.size() > 1:
xdim = 0 if _is_surface_field(field, model) else 1
shape[xdim] = mpi_args['comm'].allreduce(shape[xdim])
dset = handle.create_dataset(name, shape, field.dtype,
**comp_args)
dset[local_slice(field, model)] = field
# Writing metadata
for name, attr in _get_attributes(model).items():
handle.attrs[name] = attr
- def read(self, file_descriptor: FileType):
+ @classmethod
+ def read(cls, file_descriptor: FileType):
"""Create model from HDF5 file."""
- mpi_args, _ = self._hdf5_args()
+ mpi_args, _ = cls._hdf5_args()
with h5py.File(file_descriptor, 'r', **mpi_args) as handle:
model = _create_model(handle.attrs)
for k, v in handle.items():
if model.type in _basic_types:
v = np.asarray(v).reshape(list(v.shape) + [1])
model[k] = v[local_slice(v, model)].copy()
return model
except ImportError:
pass
try:
import uvw # noqa
__all__ += [
"UVWDumper",
"UVWGroupDumper",
]
@directory_dump('paraview')
@step_dump
class UVWDumper(FieldDumper):
"""Dumper to VTK files for elasto-plastic calculations."""
extension = 'vtr'
forbidden_fields = ['traction', 'gap']
def _dump_to_file(self, file_descriptor: FileType, model: Model):
"""Dump displacements, plastic deformations and stresses."""
if mpi.size() > 1:
raise MPIIncompatibilityError("UVWDumper does not function "
"properly in parallel")
bdim = len(model.boundary_shape)
# Local MPI size
lsize = model.shape
gsize = mpi.global_shape(model.boundary_shape)
gshape = gsize
if len(lsize) > bdim:
gshape = [model.shape[0]] + gshape
# Space coordinates
coordinates = [
np.linspace(0, L, N, endpoint=False)
for L, N in zip(model.system_size, gshape)
]
# If model has subsurface domain, z-coordinate is always first
dimension_indices = np.arange(bdim)
if len(lsize) > bdim:
dimension_indices += 1
dimension_indices = np.concatenate((dimension_indices, [0]))
coordinates[0] = \
np.linspace(0, model.system_size[0], gshape[0])
offset = np.zeros_like(dimension_indices)
offset[0] = mpi.local_offset(gsize)
rectgrid = uvw.RectilinearGrid if mpi.size() == 1 \
else uvw.parallel.PRectilinearGrid
# Creating rectilinear grid with correct order for components
coordlist = [
coordinates[i][o:o + lsize[i]]
for i, o in zip(dimension_indices, offset)
]
grid = rectgrid(
file_descriptor,
coordlist,
compression=True,
offsets=offset,
)
# Iterator over fields we want to dump
fields_it = filter(lambda t: t[0] not in self.forbidden_fields,
self.get_fields(model).items())
# We make fields periodic for visualization
for name, field in fields_it:
array = uvw.DataArray(field, dimension_indices, name)
grid.addPointData(array)
grid.write()
@directory_dump('paraview')
class UVWGroupDumper(FieldDumper):
"""Dumper to ParaViewData files."""
extension = 'pvd'
def __init__(self, basename: NameType, *fields, **kwargs):
"""Construct with desired fields."""
super(UVWGroupDumper, self).__init__(basename, *fields, **kwargs)
subdir = Path('paraview') / f'{basename}-VTR'
subdir.mkdir(parents=True, exist_ok=True)
self.uvw_dumper = UVWDumper(
Path(f'{basename}-VTR') / basename, *fields, **kwargs)
self.group = uvw.ParaViewData(self.file_path, compression=True)
def _dump_to_file(self, file_descriptor, model):
- self.group.addFile(self.uvw_dumper.file_path.replace(
- 'paraview/', ''),
- timestep=self.uvw_dumper.count)
+ self.group.addFile(
+ self.uvw_dumper.file_path.replace('paraview/', ''),
+ timestep=self.uvw_dumper.count,
+ )
self.group.write()
self.uvw_dumper.dump(model)
except ImportError:
pass
try:
from netCDF4 import Dataset
__all__.append("NetCDFDumper")
@directory_dump('netcdf')
class NetCDFDumper(FieldDumper):
"""Dumper to netCDF4 files."""
extension = "nc"
time_dim = 'frame'
+ format = 'NETCDF4_CLASSIC'
def _file_setup(self, grp, model: Model):
grp.createDimension(self.time_dim, None)
# Attributes
for k, v in _get_attributes(model).items():
grp.setncattr(k, v)
# Local dimensions
voigt_dim = type_traits[model.type].voigt
components = type_traits[model.type].components
self._vec = grp.createDimension('spatial', components)
self._tens = grp.createDimension('Voigt', voigt_dim)
self.model_info = model.global_shape, model.type
global_boundary_shape = mpi.global_shape(model.boundary_shape)
# Create boundary dimensions
for label, size, length in zip("xy", global_boundary_shape,
model.boundary_system_size):
grp.createDimension(label, size)
coord = grp.createVariable(label, 'f8', (label, ))
coord[:] = np.linspace(0, length, size, endpoint=False)
self._create_variables(grp, model,
lambda f: _is_surface_field(f[1], model),
global_boundary_shape, "xy")
# Create volume dimension
if model.type in {model_type.volume_1d, model_type.volume_2d}:
size = model.shape[0]
grp.createDimension("z", size)
coord = grp.createVariable("z", 'f8', ("z", ))
coord[:] = np.linspace(0, model.system_size[0], size)
self._create_variables(
grp, model, lambda f: not _is_surface_field(f[1], model),
model.global_shape, "zxy")
self.has_setup = True
def _set_collective(self, rootgrp):
if mpi.size() == 1:
return
for v in rootgrp.variables.values():
if self.time_dim in v.dimensions:
v.set_collective(True)
def _dump_to_file(self, file_descriptor: NameType, model: Model):
- format = 'NETCDF4_CLASSIC'
mode = 'a' if Path(file_descriptor).is_file() \
and getattr(self, 'has_setup', False) else 'w'
try:
with Dataset(file_descriptor,
mode,
- format=format,
+ format=self.format,
parallel=mpi.size() > 1) as rootgrp:
if rootgrp.dimensions == {}:
self._file_setup(rootgrp, model)
self._set_collective(rootgrp)
if self.model_info != (model.global_shape, model.type):
raise ModelError(f"Unexpected model {mode}")
self._dump_generic(rootgrp, model)
except ValueError:
raise MPIIncompatibilityError("NetCDF4 has no MPI support")
def _create_variables(self, grp, model, predicate, shape, dimensions):
field_dim = len(shape)
fields = list(filter(predicate, self.get_fields(model).items()))
dim_labels = list(dimensions[:field_dim])
for label, data in fields:
local_dim = []
# If we have an extra component
if data.ndim > field_dim:
if data.shape[-1] == self._tens.size:
local_dim = [self._tens.name]
elif data.shape[-1] == self._vec.size:
local_dim = [self._vec.name]
else:
raise ComponentsError(
f"{label} has unexpected number of components "
f"({data.shape[-1]})")
grp.createVariable(label,
'f8',
[self.time_dim] + dim_labels + local_dim,
zlib=mpi.size() == 1)  # compression does not work in parallel
def _dump_generic(self, grp, model):
fields = self.get_fields(model).items()
new_frame = len(grp.dimensions[self.time_dim])
for label, data in fields:
var = grp[label]
slice_in_global = (new_frame, ) + local_slice(data, model)
var[slice_in_global] = np.array(data, dtype=np.double)
- @staticmethod
- def _open_read(fd):
- return Dataset(fd,
- 'r',
- format='NETCDF4_CLASSIC',
- parallel=mpi.size() > 1)
+ @classmethod
+ def _open_read(cls, fd):
+ return Dataset(fd, 'r', format=cls.format, parallel=mpi.size() > 1)
@staticmethod
def _create_model(rootgrp):
attrs = {k: rootgrp.getncattr(k) for k in rootgrp.ncattrs()}
return _create_model(attrs)
@staticmethod
def _set_model_fields(rootgrp, model, frame):
dims = rootgrp.dimensions.keys()
for k, v in filter(lambda k: k[0] not in dims,
rootgrp.variables.items()):
v = v[frame, :]
if model.type in _basic_types:
v = np.asarray(v).reshape(list(v.shape) + [1])
model[k] = v[local_slice(v, model)].copy()
- def read(self, file_descriptor: NameType):
+ @classmethod
+ def read(cls, file_descriptor: NameType):
"""Create model with last frame."""
- with self._open_read(file_descriptor) as rootgrp:
- model = self._create_model(rootgrp)
- self._set_model_fields(rootgrp, model, -1)
+ with cls._open_read(file_descriptor) as rootgrp:
+ model = cls._create_model(rootgrp)
+ cls._set_model_fields(rootgrp, model, -1)
return model
- def read_sequence(self, file_descriptor: NameType):
- with self._open_read(file_descriptor) as rootgrp:
- model = self._create_model(rootgrp)
- for frame in range(len(rootgrp.dimensions[self.time_dim])):
- self._set_model_fields(rootgrp, model, frame)
+ @classmethod
+ def read_sequence(cls, file_descriptor: NameType):
+ with cls._open_read(file_descriptor) as rootgrp:
+ model = cls._create_model(rootgrp)
+ for frame in range(len(rootgrp.dimensions[cls.time_dim])):
+ cls._set_model_fields(rootgrp, model, frame)
yield model
except ImportError:
pass