BSpline Finite Element Exterior Calculus
m_domain_decomp::domaindecomp Interface Reference

The B-spline tensor product domain type.

Inheritance diagram for m_domain_decomp::domaindecomp:
m_domain_decomp::tensorproddomain

Public Member Functions

procedure is_sequential_memory (this, dir)
 Determine whether the memory layout is sequential in this direction.
 
procedure is_shared_memory (this, dir)
 Determine whether the memory layout is shared in this direction.
 
procedure is_distributed_memory (this, dir)
 Determine whether the memory layout is distributed in this direction.
 
procedure destroy (this)
 Destroy a DomainDecomp by freeing the communicators.
 
type(domaindecomp) function init_domain_decomp (memory_layout_x, memory_layout_y, memory_layout_z, nx, ny, nz, comm, flat_mpi, dimensionality, my_node, my_shmem_rank, nr_nodes, nr_shmem_ranks)
 Initialize a DomainDecomp.
 

Public Attributes

integer my_rank
 The rank of this MPI process in the communicator.
 
integer nr_ranks
 The number of ranks in the communicator.
 
integer my_node = -1
 The node/processor ID (color used for splitting)
 
integer nr_nodes = 0
 The total number of distributed memory nodes in the communicator.
 
integer my_shmem_rank = -1
 The rank in the shared memory communicator.
 
integer nr_shmem_ranks = 0
 The number of ranks on the same node.
 
integer, dimension(3) memory_layout
 The memory layout in each direction (MEMORY_LAYOUT_SEQUENTIAL, MEMORY_LAYOUT_SHARED, MEMORY_LAYOUT_DISTRIBUTED)
 
integer, dimension(3) my_subinterval_ijk
 The subinterval index in the x,y,z-direction.
 
integer, dimension(3) nr_subintervals
 The number of subintervals in each direction (based on node distribution)
 
integer comm = MPI_COMM_NULL
 The communicator for all ranks.
 
integer comm_shmem = MPI_COMM_NULL
 The shared memory communicator (for ranks on the same node)
 
integer comm_node = MPI_COMM_NULL
 The communicator for the leading ranks on each node.
 
integer, dimension(3) nr_eff_neighbours
 The number of neighbouring domains for any memory layout.
 
integer, dimension(3) nr_comm_neighbours
 The number of neighbouring domains for distributed memory layout.
 
logical flat_mpi
 Forced flat MPI mode (i.e., no shared memory)
 

Detailed Description

The B-spline tensor product domain type.

This type contains information about the domain decomposition, without being specific to a particular B-spline space.

Member Function/Subroutine Documentation

◆ destroy()

procedure m_domain_decomp::domaindecomp::destroy ( class(domaindecomp), intent(inout) this)

Destroy a DomainDecomp by freeing the communicators.

Parameters
[in,out] this  The DomainDecomp to destroy

◆ init_domain_decomp()

type(domaindecomp) function m_domain_decomp::domaindecomp::init_domain_decomp ( character(*), intent(in), optional memory_layout_x,
character(*), intent(in), optional memory_layout_y,
character(*), intent(in), optional memory_layout_z,
integer, intent(in), optional nx,
integer, intent(in), optional ny,
integer, intent(in), optional nz,
integer, intent(in), optional comm,
logical, intent(in), optional flat_mpi,
integer, intent(in), optional dimensionality,
integer, intent(in), optional my_node,
integer, intent(in), optional my_shmem_rank,
integer, intent(in), optional nr_nodes,
integer, intent(in), optional nr_shmem_ranks )

Initialize a DomainDecomp.

We consider two different memory models: distributed memory and shared memory. Both are handled using MPI communicators. Distributed memory is needed if multiple nodes are used (a node is a physical computer whose processes share memory), whereas shared memory is used for multiple ranks on the same node (i.e., multiple MPI processes on the same physical computer).

By default, the outer dimension (by default 'z', or 'y' if a 2D problem is considered) is distributed in memory if multiple nodes are used, whereas the second dimension (by default 'y', or 'x' if a 2D problem is considered) is shared in memory if multiple ranks are used on the same node. For example, in 3D, if 4 nodes are used with 2 ranks on each node (8 ranks in total), the default behaviour is 4 subintervals in the z-direction (distributed memory) and 2 subintervals in the y-direction (shared memory).

The user can override this behaviour by specifying the number of subintervals in each direction as well as the desired memory model in each direction. The product of the numbers of subintervals in each direction must, however, match the number of ranks in the communicator. For example, in 3D, if 4 nodes are used with 4 ranks on each node (16 ranks in total), the user could force distributed memory in the z-direction (2 subintervals) and in the y-direction (2 subintervals), and shared memory in the x-direction (4 subintervals): DomainDecomp(nx=4, ny=2, nz=2, memory_layout_x='shared', memory_layout_y='distributed', memory_layout_z='distributed').

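The override described above can be sketched as follows. This is a minimal, hypothetical usage example, not from the source: it assumes the generic constructor is invoked as DomainDecomp(...) with the documented keyword arguments, and that a valid MPI communicator with 16 ranks is already set up.

```fortran
! Hypothetical sketch: 16 ranks total (4 nodes x 4 ranks per node),
! distributed memory in z and y, shared memory in x.
use m_domain_decomp, only: domaindecomp  ! module name per this page

type(domaindecomp) :: decomp

! Override the default layout; the product 4*2*2 must equal the
! number of ranks in the communicator (here 16).
decomp = DomainDecomp(nx=4, ny=2, nz=2,                 &
                      memory_layout_x='shared',          &
                      memory_layout_y='distributed',     &
                      memory_layout_z='distributed')

! ... use decomp ...

! Free the communicators when done.
call decomp%destroy()
```

With no overrides, DomainDecomp() would instead apply the defaults described above (distribute z across nodes, share y within a node).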
Parameters
[in] (optional) nx  The number of subintervals in the x-direction
[in] (optional) ny  The number of subintervals in the y-direction
[in] (optional) nz  The number of subintervals in the z-direction
[in] (optional) memory_layout_x  Sequential, shared, or distributed (default: sequential)
[in] (optional) memory_layout_y  Sequential, shared, or distributed (default: sequential unless 2D, or 3D with more than one node with shared-memory ranks)
[in] (optional) memory_layout_z  Sequential, shared, or distributed (default: sequential unless 3D and more than one rank)
[in] (optional) comm  The communicator to use (default: PETSC_COMM_WORLD)
[in] (optional) flat_mpi  If true, shared memory is ignored (default: .false.)
[in] (optional) dimensionality  If 2, the domain decomposition is performed in the x,y-directions only (default: 3; alternatively, nz == 1 can be used)
[in] (optional) my_node  The node of this MPI process in the communicator (set only for debugging purposes)
[in] (optional) my_shmem_rank  The shared-memory rank of this MPI process in the communicator (set only for debugging purposes)
[in] (optional) nr_nodes  The number of nodes in the communicator (set only for debugging purposes)
[in] (optional) nr_shmem_ranks  The number of shared-memory ranks on this node (set only for debugging purposes)
Returns
ans The initialized DomainDecomp

◆ is_distributed_memory()

procedure m_domain_decomp::domaindecomp::is_distributed_memory ( class(domaindecomp), intent(in) this,
integer, intent(in) dir )

Determine whether the memory layout is distributed in this direction.

Parameters
[in] dir  The direction to check (1=x, 2=y, 3=z)
Returns
ans True if the memory layout is distributed in this direction, false otherwise

◆ is_sequential_memory()

procedure m_domain_decomp::domaindecomp::is_sequential_memory ( class(domaindecomp), intent(in) this,
integer, intent(in) dir )

Determine whether the memory layout is sequential in this direction.

Parameters
[in] dir  The direction to check (1=x, 2=y, 3=z)
Returns
ans True if the memory layout is sequential in this direction, false otherwise

◆ is_shared_memory()

procedure m_domain_decomp::domaindecomp::is_shared_memory ( class(domaindecomp), intent(in) this,
integer, intent(in) dir )

Determine whether the memory layout is shared in this direction.

Parameters
[in] dir  The direction to check (1=x, 2=y, 3=z)
Returns
ans True if the memory layout is shared in this direction, false otherwise
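The three query procedures above are typically used together to dispatch per-direction communication. The following is a hedged sketch, not from the source: it assumes an already-initialized decomp of type(domaindecomp), and the comments describe one plausible use of each branch rather than the library's actual communication code.

```fortran
! Hypothetical sketch: choose a communication strategy per direction,
! using the documented type-bound queries (1=x, 2=y, 3=z).
integer :: dir

do dir = 1, 3
  if (decomp%is_distributed_memory(dir)) then
    ! Distributed in this direction: neighbouring subintervals live on
    ! other nodes, so data would be exchanged via MPI messages.
  else if (decomp%is_shared_memory(dir)) then
    ! Shared in this direction: neighbouring subintervals live on the
    ! same node, so ranks can read each other's data directly.
  else if (decomp%is_sequential_memory(dir)) then
    ! Sequential in this direction: a single subinterval, so no
    ! communication is needed.
  end if
end do
```

Exactly one of the three queries returns true for each direction, matching the memory_layout attribute documented above.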

The documentation for this interface was generated from the following file: