Universal VTK Writer for Numpy arrays. This repository is a mirror of the main repository on Github.
Recent Commits
Commit | Author | Details | Committed
---|---|---|---
e12fd2f4762e | frerot | proper testing of components ordering | May 19 2020
9ee720a180c5 | frerot | fixed install phase of travis | May 16 2020
c7a9034fe70c | frerot | adding coveralls | May 16 2020
d88d2863858e | frerot | added Travis CI | May 16 2020
63b625e84661 | frerot | reaching near 100% coverage | May 15 2020
2871dc53c304 | frerot | added support for appended data section | May 15 2020
c0b792f20bf7 | frerot | bump version number | May 14 2020
ed78908b45bf | frerot | ignoring build from setup.py | May 14 2020
00df50bb8c72 | frerot | avoiding uncessary data copy before b64encode | May 14 2020
eaf40d59089d | frerot | bump version number (because of compression feature added) | May 14 2020
45a509ae0e0f | frerot | updated README | May 14 2020
9a87c50d6f95 | frerot | adding compression fixture to parallel test | May 14 2020
cd7ad60f89a8 | frerot | fix of parallel vtk read | May 14 2020
892ff3aa76e3 | frerot | adding a fixture for testing data compression | May 14 2020
4e3262784414 | frerot | refactored parallel test to use vtk readers | May 14 2020
README.md
# UVW - Universal VTK Writer
![Build Status](https://travis-ci.com/prs513rosewood/uvw) ![Coverage Status](https://coveralls.io/github/prs513rosewood/uvw?branch=master)
UVW is a small utility library to write VTK files from data contained in Numpy arrays. It handles fully-fledged ndarrays defined over {1, 2, 3}-d domains, with an arbitrary number of components. There is no constraint on the particular ordering of components; however, a data copy can be avoided if the array is Fortran-contiguous, since VTK files are written in Fortran order. UVW supports multi-process writing of VTK files, so it can be used in an MPI environment.
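To illustrate the contiguity point above, here is a minimal NumPy-only sketch (no uvw calls) of how to check an array's memory layout and convert it once up front, so the writer does not need to copy:

```python
import numpy as np

# NumPy arrays are C-contiguous (row-major) by default
c_data = np.arange(24, dtype=float).reshape(2, 3, 4)
print(c_data.flags['F_CONTIGUOUS'])  # False

# Convert to Fortran (column-major) layout once, up front
f_data = np.asfortranarray(c_data)
print(f_data.flags['F_CONTIGUOUS'])  # True
```

The conversion itself copies the data, but doing it once and reusing the Fortran-ordered array across writes avoids repeated copies.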
## Getting Started
Here is how to install and use uvw.
### Prerequisites

- Python 3. It may work with Python 2, but it has not been tested.
- Numpy. This code has been tested with Numpy version 1.14.3.
- mpi4py, only if you wish to use the parallel classes of UVW (i.e. the submodule uvw.parallel)
### Installing

This library can be installed with pip:

```
pip install --user uvw
```

If you want to activate parallel capabilities, run:

```
pip install --user uvw[mpi]
```

which will automatically pull mpi4py as a dependency.
## Writing Numpy arrays

As a first example, let us write a multi-component numpy array into a rectilinear grid:

```python
import numpy as np
from uvw import RectilinearGrid, DataArray

# Creating coordinates
x = np.linspace(-0.5, 0.5, 10)
y = np.linspace(-0.5, 0.5, 20)
z = np.linspace(-0.9, 0.9, 30)

# Creating the file (with possible data compression)
grid = RectilinearGrid('grid.vtr', (x, y, z), compression=True)

# A centered ball
x, y, z = np.meshgrid(x, y, z, indexing='ij')
r = np.sqrt(x**2 + y**2 + z**2)
ball = r < 0.3

# Some multi-component multi-dimensional data
data = np.zeros([10, 20, 30, 3, 3])
data[ball, ...] = np.array([[0, 1, 0],
                            [1, 0, 0],
                            [0, 1, 1]])

# Some cell data
cell_data = np.zeros([9, 19, 29])
cell_data[0::2, 0::2, 0::2] = 1

# Adding the point data (see help(DataArray) for more info)
grid.addPointData(DataArray(data, range(3), 'ball'))
# Adding the cell data
grid.addCellData(DataArray(cell_data, range(3), 'checkers'))

grid.write()
```
UVW also supports writing data on 2D and 1D physical domains, for example:

```python
import sys
import numpy as np
from uvw import RectilinearGrid, DataArray

# Creating coordinates
x = np.linspace(-0.5, 0.5, 10)
y = np.linspace(-0.5, 0.5, 20)

# A centered disk
xx, yy = np.meshgrid(x, y, indexing='ij')
r = np.sqrt(xx**2 + yy**2)
R = 0.3
disk = r < R

data = np.zeros([10, 20])
data[disk] = np.sqrt(1 - (r[disk] / R)**2)

# File objects can be used as context managers,
# and you can write to stdout!
with RectilinearGrid(sys.stdout, (x, y)) as grid:
    grid.addPointData(DataArray(data, range(2), 'data'))
```
## Writing in parallel with mpi4py

The classes contained in the uvw.parallel submodule support multi-process writing using mpi4py. Here is a code example:

```python
import numpy as np
from mpi4py import MPI

from uvw.parallel import PRectilinearGrid
from uvw import DataArray

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

N = 20

# Domain bounds per rank
bounds = [
    {'x': (-2, 0), 'y': (-2, 0)},
    {'x': (-2, 0), 'y': (0, 2)},
    {'x': (0, 2), 'y': (-2, 2)},
]

# Domain sizes per rank
sizes = [
    {'x': N, 'y': N},
    {'x': N, 'y': N},
    {'x': N, 'y': 2*N-1},  # account for overlap
]

# Size offsets per rank
offsets = [
    [0, 0],
    [0, N],
    [N, 0],
]

x = np.linspace(*bounds[rank]['x'], sizes[rank]['x'])
y = np.linspace(*bounds[rank]['y'], sizes[rank]['y'])

xx, yy = np.meshgrid(x, y, indexing='ij', sparse=True)
r = np.sqrt(xx**2 + yy**2)
data = np.exp(-r**2)

# Indicating rank info with a cell array
proc = np.ones((x.size-1, y.size-1)) * rank

with PRectilinearGrid('pgrid.pvtr', (x, y), offsets[rank]) as rect:
    rect.addPointData(DataArray(data, range(2), 'gaussian'))
    rect.addCellData(DataArray(proc, range(2), 'proc'))
```
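Because the example above hard-codes bounds, sizes, and offsets for exactly three ranks, it is meant to run under three MPI processes. Assuming the script is saved as `write_pgrid.py` (a hypothetical filename), it could be launched as:

```shell
# three ranks, matching the three entries in bounds/sizes/offsets
mpiexec -n 3 python write_pgrid.py
```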
As you can see, using PRectilinearGrid feels just like using RectilinearGrid, except that you need to supply the position of the local grid in the global grid numbering (offsets[rank] in the example above). Note that RectilinearGrid VTK files need an overlap in point data, which is why the global grid size ends up being (2*N-1, 2*N-1). If you forget this overlap, ParaView (or other VTK-based software) may complain that some parts of the global grid (called "extents" in VTK) are missing data.
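The overlap arithmetic can be checked with a small NumPy-only sketch (no uvw calls): two adjacent subdomains must both contain the shared boundary points, so the union of their point coordinates has 2*N - 1 distinct entries rather than 2*N:

```python
import numpy as np

N = 20
y_lower = np.linspace(-2, 0, N)  # points of the lower subdomain
y_upper = np.linspace(0, 2, N)   # points of the upper subdomain

# Both subdomains contain the shared boundary point 0.0,
# so the global point set has 2*N - 1 distinct coordinates
y_global = np.union1d(y_lower, y_upper)
print(y_global.size)  # 39, i.e. 2*N - 1
```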
## List of features

Here is a list of what is available in UVW:

### VTK file formats
- Image data (.vti)
- Rectilinear grid (.vtr)
- Structured grid (.vts)
- Parallel Rectilinear grid (.pvtr)
### Data representation
- ASCII
- Base64 (raw and compressed: the `compression` argument of file constructors can be `True`, `False`, or an integer in [-1, 9] for compression levels)
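The integer levels follow zlib's convention. A standard-library sketch (using zlib directly, not uvw's API) shows what the levels mean in terms of output size for a typical array payload:

```python
import zlib
import numpy as np

payload = np.zeros(1000, dtype=np.float64).tobytes()

# -1 is zlib's default level, 0 disables compression, 1-9 trade speed for size
sizes = {level: len(zlib.compress(payload, level)) for level in (-1, 0, 1, 9)}
print(sizes[0] > sizes[1])  # True: level 0 stores the data uncompressed
```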
### Planned developments

Here is a list of future developments:
- Image data
- Unstructured grid
- Structured grid
- Parallel writing (mpi4py-enabled PRectilinearGrid *is now available!*)
- Benchmarking + performance comparison with pyevtk
## Developing
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
### Git repository

First clone the git repository:

```
git clone https://github.com/prs513rosewood/uvw.git
```
Then you can use pip in development mode (possibly in a virtualenv):

```
pip install --user -e .[mpi,tests]
```
### Running the tests

The tests can be run using pytest:

```
pytest
# or, for tests with mpi:
mpiexec -n 2 pytest --with-mpi
```
## License
This project is licensed under the MIT License - see the LICENSE.md file for details.