- Determining the amount of MPI buffering; http://siber.cankaya.edu.tr/ParallelComputing/cfiles/buflimit.cprogram
- Write a program to determine the amount of buffering that MPI_Send provides. That is, write a program that determines how large a message can be sent with MPI_Send without a matching receive at the destination.
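A minimal sketch of one common approach (the linked buflimit.c may differ in detail): both ranks call MPI_Send before either posts a receive, with a doubling message size. The exchange completes only while the implementation buffers the outgoing message; the program hangs at the first size that exceeds the buffering limit.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, other;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                  /* run with exactly 2 processes */

    for (int n = 1; n <= 1 << 20; n *= 2) {
        char *buf = malloc(n);
        if (rank == 0) {
            printf("trying %d bytes...\n", n);
            fflush(stdout);
        }
        /* Both ranks send before either receives: this succeeds only
         * while MPI_Send buffers the message instead of blocking. */
        MPI_Send(buf, n, MPI_CHAR, other, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, n, MPI_CHAR, other, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```

If the program deadlocks, the last size printed before the hang is larger than the amount of buffering MPI_Send provides; the limit lies between that size and the previous one.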
- Benchmarking collective barrier; http://siber.cankaya.edu.tr/ParallelComputing/cfiles/barrier.cprogram
- Write a program to measure the time it takes to perform an MPI_Barrier on MPI_COMM_WORLD. Use the same techniques as in the memcpy exercise to average out variations and the overhead of MPI_Wtime.
- Print the size of MPI_COMM_WORLD and time for each test.
- Make sure that both sender and receiver are ready when you begin the test.
- How does the performance of MPI_Barrier vary with the size of MPI_COMM_WORLD?
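The timing loop above can be sketched as follows (a sketch, not the linked barrier.c itself; REPS is an assumed repetition count). An initial barrier ensures all processes are ready before timing starts, and the cost of one barrier is obtained by dividing the total elapsed time by the number of repetitions.

```c
#include <mpi.h>
#include <stdio.h>

#define REPS 1000   /* assumed repetition count to average out noise */

int main(int argc, char *argv[])
{
    int rank, size;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);       /* everyone ready before timing */
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++)
        MPI_Barrier(MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("size of MPI_COMM_WORLD: %d, time per barrier: %g s\n",
               size, (t1 - t0) / REPS);

    MPI_Finalize();
    return 0;
}
```

Running this with increasing process counts shows how barrier time grows with the size of MPI_COMM_WORLD.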
- Broadcast and non-blocking receive; http://siber.cankaya.edu.tr/ParallelComputing/cfiles/send-recv7.cprogram
- A simple SPMD program which uses broadcast and non-blocking receive. The sender process broadcasts a message to all other processes.
- They receive the message and send an answer back, containing the hostname of the machine on which the process is running.
- The receiving process waits for the first reply with MPI_Waitany, and accepts messages in the order they are received.
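The pattern above can be sketched as follows (a sketch under stated assumptions, not the linked send-recv7.c: the message text, tag, and the 64-process cap are made up here). The root broadcasts, each worker replies with its host name, and the root posts one MPI_Irecv per worker and collects the replies with MPI_Waitany in completion order.

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MAXPROCS 64   /* assumed upper bound on process count */

int main(int argc, char *argv[])
{
    int rank, size, len;
    char msg[16];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Root broadcasts; every rank participates in the same bcast call. */
    if (rank == 0)
        strcpy(msg, "hello");
    MPI_Bcast(msg, sizeof msg, MPI_CHAR, 0, MPI_COMM_WORLD);

    if (rank != 0) {
        /* Answer with the host name this process is running on. */
        char name[MPI_MAX_PROCESSOR_NAME];
        MPI_Get_processor_name(name, &len);
        MPI_Send(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, 0, 1,
                 MPI_COMM_WORLD);
    } else if (size <= MAXPROCS) {
        /* Post one non-blocking receive per worker, then accept the
         * replies in whatever order they arrive. */
        MPI_Request req[MAXPROCS];
        char names[MAXPROCS][MPI_MAX_PROCESSOR_NAME];
        for (int i = 1; i < size; i++)
            MPI_Irecv(names[i], MPI_MAX_PROCESSOR_NAME, MPI_CHAR, i, 1,
                      MPI_COMM_WORLD, &req[i - 1]);
        for (int i = 1; i < size; i++) {
            int idx;
            MPI_Waitany(size - 1, req, &idx, MPI_STATUS_IGNORE);
            printf("reply from rank %d: %s\n", idx + 1, names[idx + 1]);
        }
    }
    MPI_Finalize();
    return 0;
}
```

Because MPI_Waitany returns whichever request finishes first, the root's output order reflects arrival order, not rank order.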
- MPI_Scatter; http://siber.cankaya.edu.tr/ParallelComputing/cfiles/scatter.cprogram
- A simple SPMD program which uses MPI_Scatter to distribute an array of integer values evenly between a number of processes.
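A minimal sketch of this pattern (not the linked scatter.c; the sample values are made up): the root fills an array with one integer per process, and MPI_Scatter hands each process its own element.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, mine;
    int *data = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                 /* root fills one value per process */
        data = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            data[i] = 100 + i;       /* arbitrary sample values */
    }
    /* Each process receives element `rank` of the root's array. */
    MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d got %d\n", rank, mine);

    free(data);                      /* data is NULL on non-root ranks */
    MPI_Finalize();
    return 0;
}
```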
- MPI_Gather; http://siber.cankaya.edu.tr/ParallelComputing/cfiles/gather.cprogram
- A simple SPMD program which uses MPI_Gather to collect an array of integer values from a number of processes.
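A minimal sketch of the reverse operation (not the linked gather.c; the contributed values are made up): each process supplies one integer, and MPI_Gather assembles them into an array on the root in rank order.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, mine;
    int *all = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    mine = rank * rank;              /* arbitrary per-process value */
    if (rank == 0)
        all = malloc(size * sizeof(int));
    /* Root collects one integer from every process, ordered by rank. */
    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("rank %d sent %d\n", i, all[i]);
        free(all);
    }
    MPI_Finalize();
    return 0;
}
```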
Cem Ozdogan
2006-12-13