Mutual Exclusion for Shared Variables
- While communication is implicit in shared-address-space programming,
- much of the effort associated with writing correct threaded programs is spent on synchronizing concurrent threads with respect to their data accesses or scheduling.
- Using pthread_create and pthread_join calls, we can create concurrent tasks.
- These tasks work together to manipulate data and accomplish a given task.
- When multiple threads attempt to manipulate the same data item,
- the results can often be incoherent if proper care is not taken to synchronize them.
- Consider the following code fragment being executed by multiple threads.
- The variable my_cost is thread-local and best_cost is a global variable shared by all threads.
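- A minimal sketch of the fragment in question (the code itself is not reproduced in these notes; this assumes the usual unprotected test-and-update of best_cost):

      /* executed concurrently by every thread, with no synchronization yet */
      if (my_cost < best_cost)
          best_cost = my_cost;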
- This kind of unsynchronized access is an undesirable situation, sometimes also referred to as a race condition, so called because the result of the computation depends on the race between the competing threads.
- To understand the problem with shared data access, let us examine one execution instance of the above code fragment.
- Assume that there are two threads, t1 and t2.
- The initial value of best_cost is 100.
- The values of my_cost are 50 and 75 at threads t1 and t2, respectively.
- If both threads evaluate the condition in the if statement concurrently, each sees that its my_cost is smaller than the current best_cost (100), so both threads enter the then part of the statement.
- Depending on which thread executes first, the value of best_cost at the end could be either 50 or 75.
- There are two problems here:
- non-deterministic nature of the result;
- more importantly, the value 75 of best_cost is inconsistent in the sense that no serialization of the two threads can possibly yield this result.
- The race condition occurred because the test-and-update operation was not performed atomically;
- to be correct, it must be an atomic operation, i.e., one that cannot be broken into sub-operations.
- Furthermore, the code corresponds to a critical segment;
- i.e., a segment that must be executed by only one thread at any time.
- Many statements that seem atomic in higher-level languages such as C may in fact be non-atomic;
- e.g., a statement of the form best_cost = my_cost (as in the fragment above)
may comprise several assembler instructions and therefore must be handled carefully.
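- To see why such an assignment is not atomic, the following sketch (illustrative only, not actual compiler output) breaks the test-and-update into the separate load, compare, and store steps a compiler may generate:

      int best_cost = 100;              /* shared between threads            */

      void try_update(int my_cost)      /* illustrative decomposition        */
      {
          int tmp = best_cost;          /* 1. load the shared value          */
          if (my_cost < tmp)            /* 2. compare using the loaded copy; */
                                        /*    another thread may run here    */
              best_cost = my_cost;      /* 3. store the new value            */
      }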
- Threaded APIs provide support for implementing critical sections and atomic operations using mutex-locks (mutual exclusion locks).
- Mutex-locks have two states: locked and unlocked.
- At any point in time, only one thread can hold the lock on a given mutex-lock.
- Acquiring the lock is an atomic operation.
- To access the shared data, a thread must first try to acquire a mutex-lock.
- If the mutex-lock is already locked, the thread trying to acquire the lock is blocked.
- This is because a locked mutex-lock implies that there is another thread currently in the critical section and that no other thread must be allowed in.
- When a thread leaves a critical section, it must unlock the mutex-lock so that other threads can enter the critical section.
- All mutex-locks must be initialized to the unlocked state at the beginning of the program.
- The function pthread_mutex_lock (its prototype is sketched below):
- A call to this function attempts a lock on the mutex-lock mutex_lock.
- The data type of a mutex_lock is predefined to be pthread_mutex_t.
- If the mutex-lock is already locked, the calling thread blocks; otherwise the mutex-lock is locked and the calling thread returns.
- On success, the function returns the value 0; other return values indicate error conditions such as deadlocks.
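- The standard POSIX prototype, declared in pthread.h (the parameter name mutex_lock is chosen to match the notation used here):

      int pthread_mutex_lock(pthread_mutex_t *mutex_lock);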
- The function pthread_mutex_unlock (its prototype is sketched below):
- On leaving a critical section, a thread must unlock the mutex-lock associated with the section.
- If it does not do so, no other thread will be able to enter this section, typically resulting in a deadlock.
- When pthread_mutex_unlock is called, the lock is relinquished and one of the blocked threads (if any) is scheduled to enter the critical section.
- The specific thread is determined by the scheduling policy.
- If thread priority scheduling is not in use, the choice is left to the native system scheduler and may appear to be more or less random.
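- Its standard POSIX prototype has the same shape (again declared in pthread.h):

      int pthread_mutex_unlock(pthread_mutex_t *mutex_lock);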
- Mutex variables must be declared with type pthread_mutex_t, and must be initialized before they can be used.
- There are two ways to initialize a mutex variable:
- Statically, when it is declared. For example: pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;
- Dynamically, with the pthread_mutex_init() routine. This method permits setting mutex object attributes, attr.
- If a programmer attempts a pthread_mutex_unlock on a previously unlocked mutex or one that is locked by another thread, the effect is undefined.
- The function pthread_mutex_init (its prototype is sketched below):
- We need one more function before we can start using mutex-locks, namely, a function to initialize a mutex-lock to its unlocked state.
- The mutex is initially unlocked.
- The attributes of the mutex-lock are specified by lock_attr.
- If this argument is set to NULL, the default mutex-lock attributes are used (normal mutex-lock).
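- The standard POSIX prototype (parameter names chosen to match the discussion above):

      int pthread_mutex_init(pthread_mutex_t *mutex_lock,
                             const pthread_mutexattr_t *lock_attr);

- The following sketch puts the three calls together to protect the best_cost update from the earlier fragment; the names minimum_value_lock and find_min, and the thread-creation details, are illustrative assumptions rather than code from these notes:

      #include <pthread.h>
      #include <stdio.h>

      int best_cost = 100;                      /* shared variable        */
      pthread_mutex_t minimum_value_lock;       /* protects best_cost     */

      void *find_min(void *arg)
      {
          int my_cost = *(int *)arg;            /* thread-local value     */

          pthread_mutex_lock(&minimum_value_lock);   /* enter critical section */
          if (my_cost < best_cost)
              best_cost = my_cost;
          pthread_mutex_unlock(&minimum_value_lock); /* leave critical section */
          return NULL;
      }

      int main(void)
      {
          pthread_t t1, t2;
          int cost1 = 50, cost2 = 75;

          /* initialize to the unlocked state with default attributes */
          pthread_mutex_init(&minimum_value_lock, NULL);

          pthread_create(&t1, NULL, find_min, &cost1);
          pthread_create(&t2, NULL, find_min, &cost2);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);

          printf("best_cost = %d\n", best_cost); /* always 50 */
          return 0;
      }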
- Locks represent serialization points since critical sections must be executed by threads one after the other.
- Encapsulating large segments of the program within locks can, therefore, lead to significant performance degradation.
- It is therefore important to minimize the size of critical sections and to handle critical sections and shared data structures with extreme care.
- It is often possible to reduce the idling overhead associated with locks using an alternate function, pthread_mutex_trylock.
- pthread_mutex_trylock does not have to deal with the queues associated with locks on which multiple threads may be waiting.
- The function pthread_mutex_trylock (its prototype is sketched below):
- This function attempts a lock on mutex_lock.
- If the lock is successful, the function returns a zero.
- If the mutex is already locked by another thread, the function returns the value EBUSY instead of blocking the calling thread.
- This allows the thread to do other work and to poll the mutex for a lock.
- Furthermore, pthread_mutex_trylock is typically much faster than pthread_mutex_lock on most systems.
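- The standard POSIX prototype (declared in pthread.h) is:

      int pthread_mutex_trylock(pthread_mutex_t *mutex_lock);

- A small sketch of the polling pattern described above; do_other_work is a hypothetical placeholder and the shared variables are those of the earlier sketch:

      #include <errno.h>                          /* EBUSY                    */
      #include <pthread.h>

      extern int best_cost;                       /* shared, as above         */
      extern pthread_mutex_t minimum_value_lock;  /* protects best_cost       */

      void do_other_work(void);                   /* hypothetical placeholder */

      void find_min_polling(int my_cost)
      {
          /* keep doing useful work until the lock is acquired */
          while (pthread_mutex_trylock(&minimum_value_lock) == EBUSY)
              do_other_work();

          if (my_cost < best_cost)                /* critical section         */
              best_cost = my_cost;
          pthread_mutex_unlock(&minimum_value_lock);
      }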
Cem Ozdogan
2010-12-27