
MPI parallelization
Closed, Resolved · Public

Description

Introduce an MPI communicator and make the relevant parts MPI-aware. The suggestion is to rely on FFTW's MPI parallelization initially.

We need to discuss which classes need to know about MPI. A first discussion with Till resulted in the conclusion that the field collection should broker the MPI communicator and the FFT class should decide the decomposition. The CG solver also needs to know about MPI, but probably does not need to know the decomposition.
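To make the division of labor concrete: FFTW's MPI interface uses a 1D slab decomposition along the first grid axis, so the FFT class would report to each rank which slab it owns. Below is a minimal sketch of that block-distribution arithmetic (the helper name and exact distribution are assumptions for illustration; real code would query fftw_mpi_local_size_2d instead):

```cpp
#include <algorithm>
#include <utility>

// Hypothetical helper sketching FFTW-style 1D slab decomposition:
// given N0 grid points along the first axis and `size` ranks, return
// {local_n0, local_0_start} for the given rank. Ranks past the end of
// the grid receive zero points.
std::pair<int, int> slab_decomposition(int N0, int size, int rank) {
  int block = (N0 + size - 1) / size;      // ceil(N0 / size) points per rank
  int start = std::min(rank * block, N0);  // clamp ranks beyond the grid
  int local = std::min(block, N0 - start); // remaining points, never negative
  return {local, start};
}
```

For example, a 10-point axis on 4 ranks yields slabs of 3, 3, 3 and 1 points; every grid point is owned by exactly one rank.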

Event Timeline

pastewka created this task. Feb 21 2018, 21:20
pastewka created this object in space S1 c4science.
pastewka created this object with visibility "All Users".
pastewka created this object with edit policy "All Users".
pastewka changed the visibility from "All Users" to "Public (No Login Required)".
pastewka added subscribers: junge, RLeute.
pastewka claimed this task. Mar 2 2018, 21:31
pastewka raised the priority of this task from Low to Normal. Mar 6 2018, 12:56
pastewka renamed this task from MPI parallelization to MPI parallelization (FFTW).
pastewka renamed this task from MPI parallelization (FFTW) to MPI parallelization. Mar 6 2018, 13:12
pastewka created subtask T1912: PFFT Engine.
pastewka moved this task from Backlog to Doing on the µSpectre board. Mar 7 2018, 13:05
  • Think about subclassing Communicator class
  • get_resolutions -> get_subdomain_resolutions etc.
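The two notes above can be sketched together: a Communicator base class whose serial default keeps non-MPI builds working (an MPI subclass would override it), and the proposed renaming that distinguishes a rank's local subdomain from the global grid. All names here are hypothetical illustrations, not µSpectre's actual API:

```cpp
#include <array>

// Sketch of subclassing a Communicator: the base class is a serial
// fallback, so code paths that are not MPI-aware keep working; an MPI
// subclass would override size() and rank() using the MPI communicator.
class Communicator {
public:
  virtual ~Communicator() = default;
  virtual int size() const { return 1; }  // serial run: one process
  virtual int rank() const { return 0; }  // serial run: always rank 0
};

// Sketch of the renaming: get_subdomain_resolutions() returns the grid
// portion owned by the local rank, distinct from the global resolutions.
struct FieldCollection {
  std::array<int, 2> global_res;
  std::array<int, 2> subdomain_res;
  std::array<int, 2> get_resolutions() const { return global_res; }
  std::array<int, 2> get_subdomain_resolutions() const { return subdomain_res; }
};
```

With this split, the CG solver can hold a Communicator (for reductions such as dot products) without ever inspecting the decomposition, matching the division of responsibilities discussed in the description.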
pastewka closed this task as Resolved. Apr 11 2018, 17:37
pastewka moved this task from Doing to Done on the µSpectre board.