Collective Communication and Computation Operations
MPI provides an extensive set of functions for performing commonly used collective communication operations.
All of the collective communication functions provided by MPI take as an argument a communicator that defines the group of processes that participate in the collective operation.
All the processes that belong to this communicator participate in the operation,
and all of them must call the collective communication function.
Even though collective communication operations do not act like barriers (a process may return from the call before the other processes have reached theirs),
they act like a virtual synchronization step.
The parallel program should be written such that it behaves correctly even if a global synchronization is performed before and after the collective call.
Barrier. The barrier synchronization operation is performed in MPI using the MPI_Barrier function.
int MPI_Barrier(MPI_Comm comm)
The only argument of MPI_Barrier is the communicator that defines the group of processes that are synchronized.
The call to MPI_Barrier returns only after all the processes in the group have called this function.