FFTW3 with OpenMP support (fftw3_omp), Python (2 or 3), NumPy, Thrust, Boost (preprocessor), g++/clang++ with C++11 and OpenMP support.
For GPU support: CUDA 8.0 or higher, with compute capability 3.5.
You should first clone the git submodules that Tamaas depends on (pybind11, and optionally googletest):
git submodule update --init --recursive
The build system uses SCons. To build the library, run:

scons
To speed up the process, you can run parallel build jobs:
scons -j 6
To clean the build:

scons -c
To compile in debug mode, set the corresponding build option; the default is a release build. Another build option makes the compilation output more verbose.
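The exact option names below are assumptions (check the output of scons -h for the actual ones); a debug or verbose build might look like:

```shell
# Debug build (the build_type variable name and its release default are assumptions)
scons build_type=debug

# More verbose compiler command lines (option name assumed)
scons verbose=true
```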
Tamaas relies on Thrust for parallelism. You can change the Thrust backend using the appropriate build option.
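For instance, a hypothetical selection of the OpenMP backend (the backend variable name and its values are assumptions; check scons -h):

```shell
# Select the Thrust host backend (option and value names assumed)
scons backend=omp
```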
For a list of *all* compilation options:

scons -h
You can edit the file build-setup.conf to change the available compilation options.
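The file presumably follows the SCons variable-cache format, i.e. one Python-style assignment per line; a hypothetical snippet, with all variable names and values assumed:

```python
# build-setup.conf -- hypothetical contents
build_type = 'debug'
CXX = 'clang++'
```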
You can customize a few compilation variables:
- CXX selects the compiler; e.g. scons CXX=icpc compiles with the Intel C++ compiler
- CXXFLAGS adds flags to the compilation process; e.g. CXXFLAGS=-mavx2 scons adds the -mavx2 flag (which enables AVX2 vector instructions)
If the CXX variable is defined in your environment, SCons will use it as the default compiler.
Make sure you have Mercurial and Doxygen installed (and Graphviz for the nice graphs).
Tamaas is mainly used through its Python interface. An example can be found in examples/new_contact.py. The example examples/new_adhesion.py shows how one can derive a Tamaas (C++) class in Python.
TODO: update the example
Tamaas features shared-memory parallelism with OpenMP. The number of threads can be controlled via the OMP_NUM_THREADS environment variable or the omp_set_num_threads() function from the OpenMP API.
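For instance, to pin the thread count for a run (the script path is the example mentioned above; OMP_NUM_THREADS is standard OpenMP):

```shell
# The OpenMP runtime reads OMP_NUM_THREADS at startup
export OMP_NUM_THREADS=4
# e.g. python examples/new_contact.py
# child processes inherit the setting:
python -c 'import os; print(os.environ["OMP_NUM_THREADS"])'   # prints 4
```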
To compile Tamaas with CUDA support, use the appropriate build option.
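A plausible invocation, reusing the (assumed) Thrust backend option from above with a cuda value:

```shell
# Build with the CUDA backend (option and value names assumed)
scons backend=cuda
```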