BSpline Finite Element Exterior Calculus

The B-spline tensor product domain type.

Public Member Functions
| procedure | is_sequential_memory (this, dir) |
| Determine whether the memory layout is sequential in this direction. | |
| procedure | is_shared_memory (this, dir) |
| Determine whether the memory layout is shared in this direction. | |
| procedure | is_distributed_memory (this, dir) |
| Determine whether the memory layout is distributed in this direction. | |
| procedure | destroy (this) |
| Destroy a DomainDecomp by freeing the communicators. | |
| type(domaindecomp) function | init_domain_decomp (memory_layout_x, memory_layout_y, memory_layout_z, nx, ny, nz, comm, flat_mpi, dimensionality, my_node, my_shmem_rank, nr_nodes, nr_shmem_ranks) |
| Initialize a DomainDecomp. | |
The B-spline tensor product domain type.

This type contains information about the domain decomposition, without being specific to a particular B-spline space.
| procedure m_domain_decomp::domaindecomp::destroy | ( | class(domaindecomp), intent(inout) | this | ) |
Destroy a DomainDecomp by freeing the communicators.
| [in,out] | this | The DomainDecomp to destroy |
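A hypothetical teardown call (assuming a decomp object was previously created via the constructor documented below):

```fortran
! Free the communicators held by the decomposition before program exit.
call decomp%destroy()
```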
| type(domaindecomp) function m_domain_decomp::domaindecomp::init_domain_decomp | ( | character(*), intent(in), optional | memory_layout_x, |
| character(*), intent(in), optional | memory_layout_y, | ||
| character(*), intent(in), optional | memory_layout_z, | ||
| integer, intent(in), optional | nx, | ||
| integer, intent(in), optional | ny, | ||
| integer, intent(in), optional | nz, | ||
| integer, intent(in), optional | comm, | ||
| logical, intent(in), optional | flat_mpi, | ||
| integer, intent(in), optional | dimensionality, | ||
| integer, intent(in), optional | my_node, | ||
| integer, intent(in), optional | my_shmem_rank, | ||
| integer, intent(in), optional | nr_nodes, | ||
| integer, intent(in), optional | nr_shmem_ranks ) |
Initialize a DomainDecomp.
We consider two different memory models: distributed memory and shared memory. Both are handled using MPI communicators. Distributed memory is needed if multiple nodes are used (a node is a physical machine whose processes share physical memory), whereas shared memory is used for multiple ranks on the same node (i.e., multiple MPI processes on the same physical machine).
By default, the outer dimension (by default 'z', or 'y' if a 2D problem is considered) is distributed in memory if multiple nodes are used, whereas the second dimension (by default 'y', or 'x' if a 2D problem is considered) is shared in memory if multiple ranks are used on the same node. For example, in 3D, if 4 nodes are used with 2 ranks on each node (8 ranks in total), then the default behaviour is to have 4 subintervals in the z-direction (distributed memory) and 2 subintervals in the y-direction (shared memory).
The user can override this behaviour by specifying the number of subintervals in each direction as well as the desired memory model in each direction. It must be ensured, however, that the product of the numbers of subintervals in each direction matches the number of ranks in the communicator. For example, in 3D, if 4 nodes are used with 4 ranks on each node (16 ranks in total), then the user could force distributed memory in the z-direction (2 subintervals) as well as in the y-direction (2 subintervals), and shared memory in the x-direction (4 subintervals): DomainDecomp(nx=4, ny=2, nz=2, memory_layout_x='shared', memory_layout_y='distributed', memory_layout_z='distributed').
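Written out as code, this override might look as follows (a minimal sketch; the generic constructor name DomainDecomp and the keyword arguments follow the signature documented here, while the surrounding MPI/PETSc setup on 16 ranks is assumed):

```fortran
! Sketch: 4 nodes x 4 ranks/node = 16 MPI ranks in the default communicator.
! Force distributed memory in z and y, shared memory in x.
! The product 2 * 2 * 4 must equal the communicator size (16).
type(DomainDecomp) :: decomp

decomp = DomainDecomp(nx=4, ny=2, nz=2,             &
                      memory_layout_x='shared',      &
                      memory_layout_y='distributed', &
                      memory_layout_z='distributed')
```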
| [in] | nx | _(optional)_ The number of subintervals in the x-direction |
| [in] | ny | _(optional)_ The number of subintervals in the y-direction |
| [in] | nz | _(optional)_ The number of subintervals in the z-direction |
| [in] | memory_layout_x | _(optional)_ 'sequential', 'shared', or 'distributed' (default: 'sequential') |
| [in] | memory_layout_y | _(optional)_ 'sequential', 'shared', or 'distributed' (default: 'sequential' unless 2D, or 3D with more than one node and shared-memory ranks) |
| [in] | memory_layout_z | _(optional)_ 'sequential', 'shared', or 'distributed' (default: 'sequential' unless 3D with more than one rank) |
| [in] | comm | _(optional)_ The communicator to use (default: PETSC_COMM_WORLD) |
| [in] | flat_mpi | _(optional)_ If .true., shared memory is ignored (default: .false.) |
| [in] | dimensionality | _(optional)_ If 2, the domain decomposition is performed in the x,y-directions only (default: 3; alternatively, nz == 1 can be used) |
| [in] | my_node | _(optional)_ The node of this MPI process in the communicator (set only for debugging purposes) |
| [in] | my_shmem_rank | _(optional)_ The shared-memory rank of this MPI process in the communicator (set only for debugging purposes) |
| [in] | nr_nodes | _(optional)_ The number of nodes in the communicator (set only for debugging purposes) |
| [in] | nr_shmem_ranks | _(optional)_ The number of shared-memory ranks on this node (set only for debugging purposes) |
| procedure m_domain_decomp::domaindecomp::is_distributed_memory | ( | class(domaindecomp), intent(in) | this, |
| integer, intent(in) | dir ) |
Determine whether the memory layout is distributed in this direction.
| [in] | dir | The direction to check (1=x, 2=y, 3=z) |
| procedure m_domain_decomp::domaindecomp::is_sequential_memory | ( | class(domaindecomp), intent(in) | this, |
| integer, intent(in) | dir ) |
Determine whether the memory layout is sequential in this direction.
| [in] | dir | The direction to check (1=x, 2=y, 3=z) |
| procedure m_domain_decomp::domaindecomp::is_shared_memory | ( | class(domaindecomp), intent(in) | this, |
| integer, intent(in) | dir ) |
Determine whether the memory layout is shared in this direction.
| [in] | dir | The direction to check (1=x, 2=y, 3=z) |
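As an illustrative sketch of these query procedures (the decomp object and the branch bodies are hypothetical; the direction codes follow the documentation above):

```fortran
! Direction codes: 1 = x, 2 = y, 3 = z.
if (decomp%is_distributed_memory(3)) then
   ! z is split across nodes: inter-node MPI communication is required.
else if (decomp%is_shared_memory(3)) then
   ! z is split among ranks on one node: shared-memory access suffices.
else
   ! z is sequential: no decomposition in this direction.
end if
```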