
Automated tests are failing
Closed, Resolved · Public

Description

I installed the dependencies and ran the tests. test_dumper.py and test_patch_plasticity.py seem to fail; I have attached the log here. Could you please help with what is wrong? This is part of the review of the Tamaas JOSS publication.

_________________________________ test_dumpers _________________________________

tamaas_fixture = None

    def test_dumpers(tamaas_fixture):
        model = tm.ModelFactory.createModel(tm.model_type.volume_2d, [1., 1., 1.],
                                            [16, 4, 8])
        dumper = Dumper()
        np_dumper = NumpyDumper('test_dump', 'traction', 'displacement')
        model.addDumper(np_dumper)
        model.dump()
        model.dump()
    
        dumper << model
    
        ref_t = model.getTraction()
        ref_d = model.getDisplacement()
    
        tractions = np.loadtxt('tractions.txt')
        displacements = np.loadtxt('displacement.txt')
    
        assert np.all(tractions.reshape(ref_t.shape) == ref_t)
        assert np.all(displacements.reshape(ref_d.shape) == ref_d)
    
        with np.load('numpys/test_dump_0000.npz') as npfile:
            tractions = npfile['traction']
            displacements = npfile['displacement']
>           attributes = npfile['attrs'].item()

build-release/tests/test_dumper.py:80: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../miniconda3/envs/tamaas/lib/python3.7/site-packages/numpy/lib/npyio.py:255: in __getitem__
    pickle_kwargs=self.pickle_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

fp = <zipfile.ZipExtFile name='attrs.npy' mode='r' compress_type=deflate>
allow_pickle = False, pickle_kwargs = {'encoding': 'ASCII', 'fix_imports': True}

    def read_array(fp, allow_pickle=False, pickle_kwargs=None):
        """
        Read an array from an NPY file.
    
        Parameters
        ----------
        fp : file_like object
            If this is not a real file object, then this may take extra memory
            and time.
        allow_pickle : bool, optional
            Whether to allow writing pickled data. Default: False
    
            .. versionchanged:: 1.16.3
                Made default False in response to CVE-2019-6446.
    
        pickle_kwargs : dict
            Additional keyword arguments to pass to pickle.load. These are only
            useful when loading object arrays saved on Python 2 when using
            Python 3.
    
        Returns
        -------
        array : ndarray
            The array from the data on disk.
    
        Raises
        ------
        ValueError
            If the data is invalid, or allow_pickle=False and the file contains
            an object array.
    
        """
        version = read_magic(fp)
        _check_version(version)
        shape, fortran_order, dtype = _read_array_header(fp, version)
        if len(shape) == 0:
            count = 1
        else:
            count = numpy.multiply.reduce(shape, dtype=numpy.int64)
    
        # Now read the actual data.
        if dtype.hasobject:
            # The array contained Python objects. We need to unpickle the data.
            if not allow_pickle:
>               raise ValueError("Object arrays cannot be loaded when "
                                 "allow_pickle=False")
E               ValueError: Object arrays cannot be loaded when allow_pickle=False

../../miniconda3/envs/tamaas/lib/python3.7/site-packages/numpy/lib/format.py:727: ValueError
______________________________ test_netcdfdumper _______________________________

self = <tamaas.dumpers.NetCDFDumper object at 0x7f78f16490b0>
fd = 'netcdf/test_netcdf_0000.nc'
model = Model<volume_2d> (E = 1, nu = 0)
  - domain = [1, 1, 1]
  - discretization = [16, 4, 8]

    def dump_to_file(self, fd, model):
        with Dataset(fd, 'w', format='NETCDF4_CLASSIC') as rootgrp:
            model_dim = len(model.getDiscretization())
            self._vec = rootgrp.createDimension('spatial', model_dim)
            self._tens = rootgrp.createDimension('Voigt', 2*model_dim)
    
>           self._dump_boundary(rootgrp, model)

build-release/python/tamaas/dumpers/__init__.py:215: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tamaas.dumpers.NetCDFDumper object at 0x7f78f16490b0>
grp = <class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4_CLASSIC data model, file format HDF5):
    dimensions(sizes): s... Voigt(6), x(4), y(8)
    variables(dimensions): float64 x(x), float64 y(y), float64 traction(x,y,spatial)
    groups: 
model = Model<volume_2d> (E = 1, nu = 0)
  - domain = [1, 1, 1]
  - discretization = [16, 4, 8]

    def _dump_boundary(self, grp, model):
        # Create boundary dimensions
        it = zip("xy", model.getBoundaryDiscretization(),
                 model.getSystemSize())
    
        for label, size, length in it:
            grp.createDimension(label, size)
            coord = grp.createVariable(label, 'f8', (label,))
            coord[:] = np.linspace(0, length, size, endpoint=False)
    
        self._dump_generic(grp, model,
                           lambda f: f[0] in self.boundary_fields,
                           'boundary',
                           model.getBoundaryDiscretization(),
>                          "xy")

build-release/python/tamaas/dumpers/__init__.py:234: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tamaas.dumpers.NetCDFDumper object at 0x7f78f16490b0>
grp = <class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4_CLASSIC data model, file format HDF5):
    dimensions(sizes): s... Voigt(6), x(4), y(8)
    variables(dimensions): float64 x(x), float64 y(y), float64 traction(x,y,spatial)
    groups: 
model = Model<volume_2d> (E = 1, nu = 0)
  - domain = [1, 1, 1]
  - discretization = [16, 4, 8]
predicate = <function NetCDFDumper._dump_boundary.<locals>.<lambda> at 0x7f78f1634050>
group_name = 'boundary', shape = [4, 8], dimensions = 'xy'

    def _dump_generic(self, grp, model, predicate,
                      group_name, shape, dimensions):
        field_dim = len(shape)
        fields = filter(predicate, self.get_fields(model).items())
        dim_labels = list(dimensions[:field_dim])
    
        for label, data in fields:
            dims = dim_labels
    
            # If we have an extra component
            if data.ndim > field_dim:
                if data.shape[-1] == self._tens.size:
                    dims.append(self._tens.name)
                elif data.shape[-1] == self._vec.size:
                    dims.append(self._vec.name)
    
            var = grp.createVariable(label, 'f8', dims)
>           var[:] = np.array(data, dtype=np.double).flatten()

build-release/python/tamaas/dumpers/__init__.py:264: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

>   ???

netCDF4/_netCDF4.pyx:4853: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

elem = [slice(None, None, None), slice(None, None, None), slice(None, None, None)]
shape = (4, 8, 3), dimensions = ('x', 'y', 'spatial')
grp = <class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4_CLASSIC data model, file format HDF5):
    dimensions(sizes): s... Voigt(6), x(4), y(8)
    variables(dimensions): float64 x(x), float64 y(y), float64 traction(x,y,spatial)
    groups: 
datashape = (96,), put = True, use_get_vars = False

    def _StartCountStride(elem, shape, dimensions=None, grp=None, datashape=None,\
            put=False, use_get_vars = False):
        """Return start, count, stride and indices needed to store/extract data
        into/from a netCDF variable.
    
        This function is used to convert a slicing expression into a form that is
        compatible with the nc_get_vars function. Specifically, it needs
        to interpret integers, slices, Ellipses, and 1-d sequences of integers
        and booleans.
    
        Numpy uses "broadcasting indexing" to handle array-valued indices.
        "Broadcasting indexing" (a.k.a "fancy indexing") treats all multi-valued
        indices together to allow arbitrary points to be extracted. The index
        arrays can be multidimensional, and more than one can be specified in a
        slice, as long as they can be "broadcast" against each other.
        This style of indexing can be very powerful, but it is very hard
        to understand, explain, and implement (and can lead to hard to find bugs).
        Most other python packages and array processing
        languages (such as netcdf4-python, xray, biggus, matlab and fortran)
        use "orthogonal indexing" which only allows for 1-d index arrays and
        treats these arrays of indices independently along each dimension.
    
        The implementation of "orthogonal indexing" used here requires that
        index arrays be 1-d boolean or integer. If integer arrays are used,
        the index values must be sorted and contain no duplicates.
    
        In summary, slicing netcdf4-python variable objects with 1-d integer or
        boolean arrays is allowed, but may give a different result than slicing a
        numpy array.
    
        Numpy also supports slicing an array with a boolean array of the same
        shape. For example x[x>0] returns a 1-d array with all the positive values of x.
        This is also not supported in netcdf4-python, if x.ndim > 1.
    
        Orthogonal indexing can be used in to select netcdf variable slices
        using the dimension variables. For example, you can use v[lat>60,lon<180]
        to fetch the elements of v obeying conditions on latitude and longitude.
        Allow for this sort of simple variable subsetting is the reason we decided to
        deviate from numpy's slicing rules.
    
        This function is used both by the __setitem__ and __getitem__ method of
        the Variable class.
    
        Parameters
        ----------
        elem : tuple of integer, slice, ellipsis or 1-d boolean or integer
        sequences used to slice the netCDF Variable (Variable[elem]).
        shape : tuple containing the current shape of the netCDF variable.
        dimensions : sequence
          The name of the dimensions.
          __setitem__.
        grp  : netCDF Group
          The netCDF group to which the variable being set belongs to.
        datashape : sequence
          The shape of the data that is being stored. Only needed by __setitem__
        put : True|False (default False).  If called from __setitem__, put is True.
    
        Returns
        -------
        start : ndarray (..., n)
          A starting indices array of dimension n+1. The first n
          dimensions identify different independent data chunks. The last dimension
          can be read as the starting indices.
        count : ndarray (..., n)
          An array of dimension (n+1) storing the number of elements to get.
        stride : ndarray (..., n)
          An array of dimension (n+1) storing the steps between each datum.
        indices : ndarray (..., n)
          An array storing the indices describing the location of the
          data chunk in the target/source array (__getitem__/__setitem__).
    
        Notes:
    
        netCDF data is accessed via the function:
           nc_get_vars(grpid, varid, start, count, stride, data)
    
        Assume that the variable has dimension n, then
    
        start is a n-tuple that contains the indices at the beginning of data chunk.
        count is a n-tuple that contains the number of elements to be accessed.
        stride is a n-tuple that contains the step length between each element.
    
        """
        # Adapted from pycdf (http://pysclint.sourceforge.net/pycdf)
        # by Andre Gosselin..
        # Modified by David Huard to handle efficiently fancy indexing with
        # sequences of integers or booleans.
    
        nDims = len(shape)
        if nDims == 0:
            nDims = 1
            shape = (1,)
    
        # is there an unlimited dimension? (only defined for __setitem__)
        if put:
            hasunlim = False
            unlimd={}
            if dimensions:
                for i in range(nDims):
                    dimname = dimensions[i]
                    # is this dimension unlimited?
                    # look in current group, and parents for dim.
                    dim = _find_dim(grp, dimname)
                    unlimd[dimname]=dim.isunlimited()
                    if unlimd[dimname]:
                        hasunlim = True
        else:
            hasunlim = False
    
        # When a single array or (non-tuple) sequence of integers is given
        # as a slice, assume it applies to the first dimension,
        # and use ellipsis for remaining dimensions.
        if np.iterable(elem):
            if type(elem) == np.ndarray or (type(elem) != tuple and \
                np.array([_is_int(e) for e in elem]).all()):
                elem = [elem]
                for n in range(len(elem)+1,nDims+1):
                    elem.append(slice(None,None,None))
        else:   # Convert single index to sequence
            elem = [elem]
    
        # ensure there is at most 1 ellipse
        #  we cannot use elem.count(Ellipsis), as with fancy indexing would occur
        #  np.array() == Ellipsis which gives ValueError: The truth value of an
        #  array with more than one element is ambiguous. Use a.any() or a.all()
        if sum(1 for e in elem if e is Ellipsis) > 1:
            raise IndexError("At most one ellipsis allowed in a slicing expression")
    
        # replace boolean arrays with sequences of integers.
        newElem = []
        IndexErrorMsg=\
        "only integers, slices (`:`), ellipsis (`...`), and 1-d integer or boolean arrays are valid indices"
        i=0
        for e in elem:
            # string-like object try to cast to int
            # needs to be done first, since strings are iterable and
            # hard to distinguish from something castable to an iterable numpy array.
            if type(e) in [str,bytes,unicode]:
                try:
                    e = int(e)
                except:
                    raise IndexError(IndexErrorMsg)
            ea = np.asarray(e)
            # Raise error if multidimensional indexing is used.
            if ea.ndim > 1:
                raise IndexError("Index cannot be multidimensional")
            # set unlim to True if dimension is unlimited and put==True
            # (called from __setitem__)
            if hasunlim and put and dimensions:
                try:
                    dimname = dimensions[i]
                    unlim = unlimd[dimname]
                except IndexError: # more slices than dimensions (issue 371)
                    unlim = False
            else:
                unlim = False
            # convert boolean index to integer array.
            if np.iterable(ea) and ea.dtype.kind =='b':
                # check that boolen array not too long
                if not unlim and shape[i] != len(ea):
                    msg="""
    Boolean array must have the same shape as the data along this dimension."""
                    raise IndexError(msg)
                ea = np.flatnonzero(ea)
            # an iterable (non-scalar) integer array.
            if np.iterable(ea) and ea.dtype.kind == 'i':
                # convert negative indices in 1d array to positive ones.
                ea = np.where(ea < 0, ea + shape[i], ea)
                if np.any(ea < 0):
                    raise IndexError("integer index out of range")
                # if unlim, let integer index be longer than current dimension
                # length.
                if ea.shape != (0,):
                    elen = shape[i]
                    if unlim:
                        elen = max(ea.max()+1,elen)
                    if ea.max()+1 > elen:
                        msg="integer index exceeds dimension size"
                        raise IndexError(msg)
                newElem.append(ea)
            # integer scalar
            elif ea.dtype.kind == 'i':
                newElem.append(e)
            # slice or ellipsis object
            elif type(e) == slice or type(e) == type(Ellipsis):
                if not use_get_vars and type(e) == slice and e.step not in [None,-1,1] and\
                   dimensions is not None and grp is not None:
                    # convert strided slice to integer sequence if possible
                    # (this will avoid nc_get_vars, which is slow - issue #680).
                    start = e.start if e.start is not None else 0
                    step = e.step
                    if e.stop is None and dimensions is not None and grp is not None:
                        stop = len(_find_dim(grp, dimensions[i]))
                    else:
                        stop = e.stop
                        if stop < 0:
                            stop = len(_find_dim(grp, dimensions[i])) + stop
                    try:
                        ee = np.arange(start,stop,e.step)
                        if len(ee) > 0:
                            e = ee
                    except:
                        pass
                newElem.append(e)
            else:  # castable to a scalar int, otherwise invalid
                try:
                    e = int(e)
                    newElem.append(e)
                except:
                    raise IndexError(IndexErrorMsg)
            if type(e)==type(Ellipsis):
                i+=1+nDims-len(elem)
            else:
                i+=1
        elem = newElem
    
        # replace Ellipsis and integer arrays with slice objects, if possible.
        newElem = []
        for e in elem:
            ea = np.asarray(e)
            # Replace ellipsis with slices.
            if type(e) == type(Ellipsis):
                # The ellipsis stands for the missing dimensions.
                newElem.extend((slice(None, None, None),) * (nDims - len(elem) + 1))
            # Replace sequence of indices with slice object if possible.
            elif np.iterable(e) and len(e) > 1:
                start = e[0]
                stop = e[-1]+1
                step = e[1]-e[0]
                try:
                    ee = range(start,stop,step)
                except ValueError: # start, stop or step is not valid for a range
                    ee = False
                if ee and len(e) == len(ee) and (e == np.arange(start,stop,step)).all():
                    # don't convert to slice unless abs(stride) == 1
                    # (nc_get_vars is very slow, issue #680)
                    if not use_get_vars and step not in [1,-1]:
                        newElem.append(e)
                    else:
                        newElem.append(slice(start,stop,step))
                else:
                    newElem.append(e)
            elif np.iterable(e) and len(e) == 1:
                newElem.append(slice(e[0], e[0] + 1, 1))
            else:
                newElem.append(e)
        elem = newElem
    
        # If slice doesn't cover all dims, assume ellipsis for rest of dims.
        if len(elem) < nDims:
            for n in range(len(elem)+1,nDims+1):
                elem.append(slice(None,None,None))
    
        # make sure there are not too many dimensions in slice.
        if len(elem) > nDims:
            raise ValueError("slicing expression exceeds the number of dimensions of the variable")
    
        # Compute the dimensions of the start, count, stride and indices arrays.
        # The number of elements in the first n dimensions corresponds to the
        # number of times the _get method will be called.
        sdim = []
        for i, e in enumerate(elem):
            # at this stage e is a slice, a scalar integer, or a 1d integer array.
            # integer array:  _get call for each True value
            if np.iterable(e):
                sdim.append(np.alen(e))
            # Scalar int or slice, just a single _get call
            else:
                sdim.append(1)
    
        # broadcast data shape when assigned to full variable (issue #919)
        try:
            fullslice = elem.count(slice(None,None,None)) == len(elem)
        except: # fails if elem contains a numpy array.
            fullslice = False
        if fullslice and datashape and put and not hasunlim:
>           datashape = broadcasted_shape(shape, datashape)

../../.local/lib/python3.7/site-packages/netCDF4/utils.py:365: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

shp1 = (4, 8, 3), shp2 = (96,)

    def broadcasted_shape(shp1, shp2):
        # determine shape of array of shp1 and shp2 broadcast against one another.
        x = np.array([1])
        # trick to define array with certain shape that doesn't allocate all the
        # memory.
        a = as_strided(x, shape=shp1, strides=[0] * len(shp1))
        b = as_strided(x, shape=shp2, strides=[0] * len(shp2))
>       return np.broadcast(a, b).shape
E       ValueError: shape mismatch: objects cannot be broadcast to a single shape

../../.local/lib/python3.7/site-packages/netCDF4/utils.py:973: ValueError

During handling of the above exception, another exception occurred:

tamaas_fixture = None

    def test_netcdfdumper(tamaas_fixture):
        model = tm.ModelFactory.createModel(tm.model_type.volume_2d,
                                            [1., 1., 1.],
                                            [16, 4, 8])
        model.getDisplacement()[...] = 3.1415
        dumper = NetCDFDumper('test_netcdf', 'traction', 'displacement')
>       dumper << model

build-release/tests/test_dumper.py:126: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
build-release/python/tamaas/dumpers/_helper.py:88: in dump
    orig_dump(obj, *args, **kwargs)
build-release/python/tamaas/dumpers/_helper.py:64: in dump
    orig_dump(obj, *args, **kwargs)
build-release/python/tamaas/dumpers/__init__.py:75: in dump
    self.dump_to_file(self.file_path, model)
build-release/python/tamaas/dumpers/__init__.py:218: in dump_to_file
    self._dump_volume(rootgrp, model)
netCDF4/_netCDF4.pyx:2358: in netCDF4._netCDF4.Dataset.__exit__
    ???
netCDF4/_netCDF4.pyx:2485: in netCDF4._netCDF4.Dataset.close
    ???
netCDF4/_netCDF4.pyx:2449: in netCDF4._netCDF4.Dataset._close
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

>   ???
E   RuntimeError: NetCDF: HDF error

netCDF4/_netCDF4.pyx:1887: RuntimeError
______________ test_patch_plasticity[patch_isotropic_plasticity0] ______________

patch_isotropic_plasticity = <conftest.UniformPlasticity object at 0x7f78f162e1d0>

    def test_patch_plasticity(patch_isotropic_plasticity):
        model = patch_isotropic_plasticity.model
        residual = patch_isotropic_plasticity.residual
    
        applied_pressure = 0.1
    
        solver = DFSANESolver(residual, model)
        solver.tolerance = 1e-15
        pressure = model['traction'][..., 2]
        pressure[:] = applied_pressure
    
        solver.solve()
        solver.updateState()
    
        solution, normal = patch_isotropic_plasticity.solution(applied_pressure)
    
        for key in solution:
            error = norm(model[key] - solution[key]) / normal[key]
>           assert error < 1e-15
E           assert 1.2947314098277875e-15 < 1e-15

build-release/tests/test_patch_plasticity.py:43: AssertionError
----------------------------- Captured stderr call -----------------------------
DF-SANE: successful convergence (3 iterations, {'ftol': 0, 'fatol': 1e-15})
=========================== short test summary info ============================
FAILED build-release/tests/test_dumper.py::test_dumpers - ValueError: Object ...
FAILED build-release/tests/test_dumper.py::test_netcdfdumper - RuntimeError: ...
FAILED build-release/tests/test_patch_plasticity.py::test_patch_plasticity[patch_isotropic_plasticity0]
=================== 3 failed, 27 passed in 143.60s (0:02:23) ===================

Event Timeline

srmnitc created this task. Jul 22 2020, 10:50
srmnitc created this object in space S1 c4science.
srmnitc created this object with visibility "Public (No Login Required)".
frerot added a comment. Jul 22 2020, 14:28

Hi,

Thanks for the report. I've pushed a patch that should correct these errors. They were mostly due to newer package versions than the ones running in my test instance. Let me know if it works now.
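[Editor's note: for readers hitting the same failures, here is a minimal sketch of the kind of changes the traceback points to. This is an illustration based on the log above, not necessarily the actual patch; `var` and `data` refer to the names in `_dump_generic` shown in the traceback.]

    import numpy as np

    # Failure 1 (test_dumpers): NumPy >= 1.16.3 changed np.load's default
    # to allow_pickle=False (CVE-2019-6446), so loading the object-typed
    # 'attrs' entry from the .npz archive now needs an explicit opt-in:
    with np.load('numpys/test_dump_0000.npz', allow_pickle=True) as npfile:
        attributes = npfile['attrs'].item()

    # Failure 2 (test_netcdfdumper): newer netCDF4 broadcasts the assigned
    # data against the variable's full shape (4, 8, 3), so the flattened
    # (96,) array from _dump_generic no longer fits. Assigning the data in
    # the variable's own shape would avoid the mismatch, e.g.:
    #     var[:] = np.array(data, dtype=np.double).reshape(var.shape)

    # Failure 3 (test_patch_plasticity): the error 1.29e-15 only marginally
    # exceeds the 1e-15 threshold; a slightly looser tolerance (e.g. 2e-15)
    # would absorb cross-platform floating-point differences.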

frerot triaged this task as Normal priority. Jul 22 2020, 14:28
frerot closed this task as Resolved. Jul 27 2020, 17:13