.\" Man page generated from reStructuredText. . . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .TH "MPI_TYPE_CREATE_DARRAY" "3" "Apr 11, 2024" "" "Open MPI" .sp \fI\%MPI_Type_create_darray\fP — Creates a distributed array datatype; .SH SYNTAX .SS C Syntax .INDENT 0.0 .INDENT 3.5 .sp .EX #include int MPI_Type_create_darray(int size, int rank, int ndims, const int array_of_gsizes[], const int array_of_distribs[], const int array_of_dargs[], const int array_of_psizes[], int order, MPI_Datatype oldtype, MPI_Datatype *newtype) .EE .UNINDENT .UNINDENT .SS Fortran Syntax .INDENT 0.0 .INDENT 3.5 .sp .EX USE MPI ! or the older form: INCLUDE \(aqmpif.h\(aq MPI_TYPE_CREATE_DARRAY(SIZE, RANK, NDIMS, ARRAY_OF_GSIZES, ARRAY_OF_DISTRIBS, ARRAY_OF_DARGS, ARRAY_OF_PSIZES, ORDER, OLDTYPE, NEWTYPE, IERROR) INTEGER SIZE, RANK, NDIMS, ARRAY_OF_GSIZES(*), ARRAY_OF_DISTRIBS(*), ARRAY_OF_DARGS(*), ARRAY_OF_PSIZES(*), ORDER, OLDTYPE, NEWTYPE, IERROR .EE .UNINDENT .UNINDENT .SS Fortran 2008 Syntax .INDENT 0.0 .INDENT 3.5 .sp .EX USE mpi_f08 MPI_Type_create_darray(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype, ierror) INTEGER, INTENT(IN) :: size, rank, ndims, array_of_gsizes(ndims), array_of_distribs(ndims), array_of_dargs(ndims), array_of_psizes(ndims), order TYPE(MPI_Datatype), INTENT(IN) :: oldtype TYPE(MPI_Datatype), INTENT(OUT) :: newtype INTEGER, OPTIONAL, INTENT(OUT) :: ierror .EE .UNINDENT .UNINDENT .SH INPUT PARAMETERS .INDENT 0.0 .IP \(bu 2 \fBsize\fP: Size of process group (positive integer). .IP \(bu 2 \fBrank\fP: Rank in process group (nonnegative integer). .IP \(bu 2 \fBndims\fP: Number of array dimensions as well as process grid dimensions (positive integer). .IP \(bu 2 \fBarray_of_gsizes\fP: Number of elements of type \fIoldtype\fP in each dimension of global array (array of positive integers). .IP \(bu 2 \fBarray_of_distribs\fP: Distribution of array in each dimension (array of state). .IP \(bu 2 \fBarray_of_dargs\fP: Distribution argument in each dimension (array of positive integers). .IP \(bu 2 \fBarray_of_psizes\fP: Size of process grid in each dimension (array of positive integers). .IP \(bu 2 \fBorder\fP: Array storage order flag (state). .IP \(bu 2 \fBoldtype\fP: Old data type (handle). .UNINDENT .SH OUTPUT PARAMETERS .INDENT 0.0 .IP \(bu 2 \fBnewtype\fP: New data type (handle). .IP \(bu 2 \fBierror\fP: Fortran only: Error status (integer). .UNINDENT .SH DESCRIPTION .sp \fI\%MPI_Type_create_darray\fP can be used to generate the data types corresponding to the distribution of an ndims\-dimensional array of \fIoldtype\fP elements onto an \fIndims\fP\-dimensional grid of logical processes. Unused dimensions of \fIarray_of_psizes\fP should be set to 1. For a call to \fI\%MPI_Type_create_darray\fP to be correct, the equation .INDENT 0.0 .INDENT 3.5 .sp .EX ndims\-1 pi array_of_psizes[i] = size i=0 .EE .UNINDENT .UNINDENT .sp must be satisfied. 
.sp
This routine creates a new MPI data type with a typemap defined in terms of a function called “cyclic()” (see below).
.sp
Without loss of generality, it suffices to define the typemap for the MPI_DISTRIBUTE_CYCLIC case where \fBMPI_DISTRIBUTE_DFLT_DARG\fP is not used.
.sp
MPI_DISTRIBUTE_BLOCK and MPI_DISTRIBUTE_NONE can be reduced to the MPI_DISTRIBUTE_CYCLIC case for dimension \fIi\fP as follows.
.sp
MPI_DISTRIBUTE_BLOCK with \fIarray_of_dargs[i]\fP equal to MPI_DISTRIBUTE_DFLT_DARG is equivalent to MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP set to
.INDENT 0.0
.INDENT 3.5
.sp
.EX
(array_of_gsizes[i] + array_of_psizes[i] \- 1)/array_of_psizes[i]
.EE
.UNINDENT
.UNINDENT
.sp
If \fIarray_of_dargs[i]\fP is not MPI_DISTRIBUTE_DFLT_DARG, then MPI_DISTRIBUTE_BLOCK and MPI_DISTRIBUTE_CYCLIC are equivalent.
.sp
MPI_DISTRIBUTE_NONE is equivalent to MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP set to \fIarray_of_gsizes[i]\fP\&.
.sp
Finally, MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP equal to MPI_DISTRIBUTE_DFLT_DARG is equivalent to MPI_DISTRIBUTE_CYCLIC with \fIarray_of_dargs[i]\fP set to 1.
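.sp
These reduction rules can be summarized by the following illustrative C helper. The function \fBeffective_cyclic_darg\fP is hypothetical (it is not part of MPI); it merely restates, for a single dimension \fIi\fP, the distribution argument of the equivalent MPI_DISTRIBUTE_CYCLIC description.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
#include <mpi.h>

/* Hypothetical helper (not part of MPI): return the array_of_dargs[i]
   value of the equivalent MPI_DISTRIBUTE_CYCLIC description for one
   dimension, following the reduction rules stated above. */
static int effective_cyclic_darg(int distrib, int darg,
                                 int gsize, int psize)
{
    if (distrib == MPI_DISTRIBUTE_NONE) {
        return gsize;                  /* the whole dimension in one cycle */
    }
    if (distrib == MPI_DISTRIBUTE_BLOCK) {
        if (darg == MPI_DISTRIBUTE_DFLT_DARG) {
            /* ceiling(gsize / psize) */
            return (gsize + psize \- 1) / psize;
        }
        return darg;                   /* BLOCK and CYCLIC coincide here */
    }
    /* MPI_DISTRIBUTE_CYCLIC */
    return (darg == MPI_DISTRIBUTE_DFLT_DARG) ? 1 : darg;
}
.EE
.UNINDENT
.UNINDENT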
.SH NOTES
.sp
For both Fortran and C arrays, the ordering of processes in the process grid is assumed to be row\-major. This is consistent with the ordering used in virtual Cartesian process topologies in MPI. To create such virtual process topologies, or to find the coordinates of a process in the process grid, users may use the corresponding MPI functions, such as \fI\%MPI_Cart_create\fP and \fI\%MPI_Cart_coords\fP\&.
.SH ERRORS
.sp
Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler associated with the communication object (e.g., communicator, window, file) is called. If no communication object is associated with the MPI call, then the call is considered attached to MPI_COMM_SELF and will call the associated MPI error handler. When MPI_COMM_SELF is not initialized (i.e., before \fI\%MPI_Init\fP/\fI\%MPI_Init_thread\fP, after \fI\%MPI_Finalize\fP, or when using the Sessions Model exclusively), the error raises the initial error handler. The initial error handler can be changed by calling \fI\%MPI_Comm_set_errhandler\fP on MPI_COMM_SELF when using the World model, or by using the mpi_initial_errhandler CLI argument to mpiexec or the corresponding info key to \fI\%MPI_Comm_spawn\fP/\fI\%MPI_Comm_spawn_multiple\fP\&. If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all other MPI functions.
.sp
Open MPI includes three predefined error handlers that can be used:
.INDENT 0.0
.IP \(bu 2
\fBMPI_ERRORS_ARE_FATAL\fP
Causes the program to abort all connected MPI processes.
.IP \(bu 2
\fBMPI_ERRORS_ABORT\fP
An error handler that can be invoked on a communicator, window, file, or session. When called on a communicator, it acts as if \fI\%MPI_Abort\fP was called on that communicator. If called on a window or file, acts as if \fI\%MPI_Abort\fP was called on a communicator containing the group of processes in the corresponding window or file. If called on a session, aborts only the local process.
.IP \(bu 2
\fBMPI_ERRORS_RETURN\fP
Returns an error code to the application.
.UNINDENT
.sp
MPI applications can also implement their own error handlers by calling:
.INDENT 0.0
.IP \(bu 2
\fI\%MPI_Comm_create_errhandler\fP then \fI\%MPI_Comm_set_errhandler\fP
.IP \(bu 2
\fI\%MPI_File_create_errhandler\fP then \fI\%MPI_File_set_errhandler\fP
.IP \(bu 2
\fI\%MPI_Session_create_errhandler\fP then \fI\%MPI_Session_set_errhandler\fP or at \fI\%MPI_Session_init\fP
.IP \(bu 2
\fI\%MPI_Win_create_errhandler\fP then \fI\%MPI_Win_set_errhandler\fP
.UNINDENT
.sp
Note that MPI does not guarantee that an MPI program can continue past an error.
.sp
See the \fI\%MPI man page\fP for a full list of \fI\%MPI error codes\fP\&.
.sp
See the Error Handling section of the MPI\-3.1 standard for more information.
.SH COPYRIGHT
2003-2024, The Open MPI Community
.\" Generated by docutils manpage writer.
.