Blocking Message Passing Operations
- A simple solution to the dilemma presented in the code fragment above (a sketch of such a fragment appears after this list) is for the send operation to return only when it is semantically safe to do so.
- Note that this is not the same as saying that the send operation returns only after the receiver has received the data.
- It simply means that the sending operation blocks until it can guarantee that the semantics will not be violated on return irrespective of what happens in the program subsequently.
- There are two mechanisms by which this can be achieved.
- Blocking Non-Buffered Send/Receive
- Blocking Buffered Send/Receive
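For reference, the kind of fragment the dilemma refers to might look like the sketch below, written in the generic send(buf, count, dest) / receive(buf, count, source) pseudocode used throughout this section; the exact original fragment is not reproduced here. P0 overwrites a immediately after sending it, so the value P1 sees depends on when the transfer actually takes place:

```c
/* P0 */                    /* P1 */
a = 100;                    receive(&a, 1, 0);  /* receive 1 item from P0 */
send(&a, 1, 1);             printf("%d\n", a);  /* 100 or 0? */
a = 0;
```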
1. Blocking Non-Buffered Send/Receive
- The send operation does not return until the matching receive has been encountered at the receiving process.
- When this happens, the message is sent and the send operation returns upon completion of the communication operation.
- Typically, this process involves a handshake between the sending and receiving processes (see Fig. 4.1).
Figure 4.1: Handshake for a blocking non-buffered send/receive operation.
- The sending process sends a request to communicate to the receiving process.
- When the receiving process encounters the target receive, it responds to the request.
- The sending process upon receiving this response initiates a transfer operation.
- Since no buffers are used at either the sending or the receiving end, this is also referred to as a non-buffered blocking operation.
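As a concrete illustration, MPI's synchronous-mode send has essentially these semantics: MPI_Ssend returns only once the matching receive has been posted. A minimal sketch, assuming two ranks and an arbitrary tag of 0:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, a = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        a = 100;
        /* Synchronous-mode send: blocks until the matching receive
           has started, so the later assignment to 'a' cannot change
           the value that rank 1 observes. */
        MPI_Ssend(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        a = 0;  /* safe: semantics can no longer be violated */
    } else if (rank == 1) {
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("%d\n", a);  /* always prints 100 */
    }

    MPI_Finalize();
    return 0;
}
```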
- Idling Overheads in Blocking Non-Buffered Operations: It is clear from the figure that a blocking non-buffered protocol is suitable when the send and receive are posted at roughly the same time (the middle case in the figure); if one operation is posted much earlier than the other, the earlier process sits idle until the handshake completes.
- However, in an asynchronous environment, this may be impossible to predict.
- This idling overhead is one of the major drawbacks of this protocol.
- Deadlocks in Blocking Non-Buffered Operations: Consider the following simple exchange of messages that can lead to a deadlock:
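The fragment itself is not reproduced here; a minimal sketch in the same generic pseudocode, with the two processes exchanging their copies of a, would be:

```c
/* P0 */                    /* P1 */
send(&a, 1, 1);             send(&a, 1, 0);
receive(&b, 1, 1);          receive(&b, 1, 0);
```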
- The code fragment makes the value of a at each process available to the other, i.e., to both P0 and P1.
- However, if the send and receive operations are implemented using a blocking non-buffered protocol, the send at P0 waits for the matching receive at P1, whereas the send at process P1 waits for the corresponding receive at P0, resulting in an infinite wait.
- Deadlocks are very easy to create with blocking protocols, and care must be taken to break cyclic waits.
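One standard way to break the cyclic wait (a sketch, continuing the pseudocode above) is to reorder the operations in one process so that every send finds a matching receive:

```c
/* P0 */                    /* P1 */
send(&a, 1, 1);             receive(&b, 1, 0);  /* matches P0's send    */
receive(&b, 1, 1);          send(&a, 1, 0);     /* matches P0's receive */
```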
2. Blocking Buffered Send/Receive
- A simple solution to the idling and deadlocking problems outlined above is to rely on buffers at the sending and receiving ends.
Figure 4.2: Blocking buffered transfer protocols. Left: in the presence of communication hardware, with buffers at the send and receive ends. Right: in the absence of communication hardware, the sender interrupts the receiver and deposits the data in a buffer at the receiver end.
Figure 4.2 (Left):
- On a send operation, the sender simply copies the data into the designated buffer and returns after the copy operation has been completed.
- The sender process can now continue with the program knowing that any changes to the data will not impact program semantics.
- If the hardware supports asynchronous communication (independent of the CPU), then a network transfer can be initiated after the message has been copied into the buffer.
- Note that at the receiving end, the data cannot be stored directly at the target location, since the matching receive may not have been posted yet and doing so would violate program semantics.
- Instead, the data is copied into a buffer at the receiver as well.
- When the receiving process encounters a receive operation, it checks to see if the message is available in its receive buffer. If so, the data is copied into the target location.
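MPI's buffered-mode send illustrates the sender-side copy: MPI_Bsend copies the message into a user-attached buffer and returns without waiting for the receive to be posted. A minimal sketch, with an arbitrarily sized buffer and tag:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, a = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Attach a buffer for buffered-mode sends. */
        int size = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(size);
        MPI_Buffer_attach(buf, size);

        a = 100;
        /* Copies 'a' into the attached buffer and returns at once;
           it is now safe to modify 'a'. */
        MPI_Bsend(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        a = 0;

        /* Detach blocks until all buffered messages have been sent. */
        MPI_Buffer_detach(&buf, &size);
        free(buf);
    } else if (rank == 1) {
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("%d\n", a);  /* prints 100 */
    }

    MPI_Finalize();
    return 0;
}
```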
Figure 4.2 (Right):
- In Fig. 4.2 (Left), buffers are used at both sender and receiver and communication is handled by dedicated hardware.
- Sometimes machines do not have such communication hardware.
- In this case, some of the overhead can be saved by buffering only on one side.
- For example, on encountering a send operation, the sender interrupts the receiver, both processes participate in a communication operation and the message is deposited in a buffer at the receiver end.
- When the receiver eventually encounters a receive operation, the message is copied from the buffer into the target location.
- In general, if the parallel program is highly synchronous, non-buffered sends may perform better than buffered sends.
- In practice, however, this is rarely the case, and buffered sends are desirable unless buffer capacity becomes an issue.
- Impact of Finite Buffers in Message Passing: Consider the following code fragment:
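The fragment is not reproduced here; the discussion below fits a producer-consumer sketch of the following kind (produce_data and consume_data are hypothetical helpers):

```c
/* P0 */                           /* P1 */
for (i = 0; i < 1000; i++) {       for (i = 0; i < 1000; i++) {
    produce_data(&a);                  receive(&a, 1, 0);
    send(&a, 1, 1);                    consume_data(&a);
}                                  }
```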
- In this code fragment, process P0 produces 1000 data items and process P1 consumes them.
- However, if process P1 was slow getting to this loop, process P0 might have sent all of its data before a single receive was posted.
- If there is enough buffer space, then both processes can proceed;
- however, if the buffer is not sufficient (i.e., buffer overflow), the sender would have to be blocked until some of the corresponding receive operations had been posted, thus freeing up buffer space.
- This can often lead to unforeseen overheads and performance degradation.
- In general, it is a good idea to write programs that have bounded buffer requirements.
- Deadlocks in Buffered Send and Receive Operations:
- While buffering relieves many of the deadlock situations, it is still possible to write code that deadlocks.
- This is because, as in the non-buffered case, receive calls are always blocking (to ensure semantic consistency).
- Thus, a simple code fragment such as the following deadlocks since both processes wait to receive data but nobody sends it.
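A sketch of such a fragment, again in the generic pseudocode: each process posts its blocking receive before its send, so neither send is ever reached:

```c
/* P0 */                    /* P1 */
receive(&a, 1, 1);          receive(&a, 1, 0);
send(&b, 1, 1);             send(&b, 1, 0);
```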
- Once again, such circular waits have to be broken.
- However, deadlocks are caused only by waits on receive operations in this case.
Cem Ozdogan
2010-12-27