Multithreading and Deadlock

Multithreading is a core concept in modern programming that allows a program to perform multiple tasks concurrently by creating multiple threads of execution. Synchronization ensures that these threads interact safely and correctly, especially when they share resources like memory or files.


1. What is Multithreading?

Definition:

Multithreading is the ability of a CPU or a single process to execute multiple threads concurrently. A thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler.

Key Features:

  • Threads share the same memory space.

  • Threads within a process can communicate more easily than processes.

  • Faster context switching compared to processes.


2. Why Use Multithreading?

  1. Improved Performance:

    • Utilizes multiple CPU cores efficiently.

    • Handles multiple tasks simultaneously (e.g., GUI responsiveness while processing data).

  2. Better Resource Utilization:

    • Keeps the CPU busy by running other threads while one thread waits for I/O.

  3. Concurrent Operations:

    • Suitable for tasks like web servers, where multiple requests are handled simultaneously.

3. Challenges in Multithreading

  • Race Conditions: Occur when threads access shared resources without proper synchronization, leading to unpredictable results.

  • Deadlocks: When two or more threads are waiting for each other to release resources, causing a standstill.

  • Starvation: A thread may be perpetually denied access to necessary resources due to other higher-priority threads.


4. What is Synchronization?

Definition:

Synchronization ensures that threads coordinate their actions to prevent conflicts, especially when accessing shared resources.

Purpose:

  • Protect shared data.

  • Prevent race conditions.

  • Maintain consistency and correctness.


5. Key Multithreading and Synchronization Concepts

a. Critical Section:

  • A portion of code where a thread accesses shared resources.

  • Only one thread should execute a critical section at a time.

b. Locks:

Mechanisms to enforce mutual exclusion.

  1. Mutex (Mutual Exclusion):

    • A lock that allows only one thread to access a resource at a time.

    • Thread must release the mutex after using it.

  2. Spinlocks:

    • A thread waits in a loop ("spins") until the lock becomes available.

    • Used in situations where the wait is expected to be short.

c. Semaphores:

  • Generalization of a lock, allowing a specific number of threads to access a resource.

  • Two types:

    1. Binary Semaphore: Takes only the values 0 and 1; behaves like a mutex, but without ownership (any thread may release it).

    2. Counting Semaphore: Allows multiple threads up to a limit.

d. Monitors:

  • High-level synchronization construct.

  • Combines mutex and condition variables for thread coordination.

e. Condition Variables:

  • Used to make threads wait for certain conditions to become true.

  • Works with mutexes to block and unblock threads.


6. Thread Communication Methods

  1. Shared Memory:

    • Threads share data in the same address space.

    • Requires synchronization to avoid conflicts.

  2. Message Passing:

    • Threads exchange information via messages.

    • Common in distributed systems.

Deadlocks occur when two or more tasks/processes wait indefinitely for resources held by each other, forming a circular wait. The remainder of this document covers strategies and techniques to prevent them.


7. Example: Multithreading with Synchronization in C++

Problem:

A counter shared between multiple threads must be incremented safely.

#include <iostream>
#include <thread>
#include <mutex>

int counter = 0;
std::mutex mtx;

void incrementCounter(int id) {
    for (int i = 0; i < 5; i++) {
        std::lock_guard<std::mutex> lock(mtx); // Automatically locks and unlocks
        counter++;
        std::cout << "Thread " << id << " incremented counter to " << counter << std::endl;
    }
}

int main() {
    std::thread t1(incrementCounter, 1);
    std::thread t2(incrementCounter, 2);

    t1.join();
    t2.join();

    std::cout << "Final counter value: " << counter << std::endl;
    return 0;
}

Necessary Conditions for Deadlock

A deadlock occurs if all of the following four conditions are true:

  1. Mutual Exclusion: A resource can only be held by one task at a time.

  2. Hold and Wait: A task holding a resource is waiting to acquire additional resources.

  3. No Preemption: Resources cannot be forcibly taken away from a task.

  4. Circular Wait: A circular chain of tasks exists, where each task holds a resource and waits for another.

Preventing deadlock involves breaking at least one of these conditions.


Techniques to Prevent Deadlock

1. Resource Allocation Ordering

  • Assign a strict order to resources, and require tasks to request resources in this order.

  • This avoids circular wait because tasks will always acquire resources in a specific sequence.

Example:

  • Suppose tasks need resources A and B, with the order fixed as A before B.

  • Every task that needs both must acquire A first and then B. No task may hold B while requesting A, so no circular chain can form.

2. Avoid Hold and Wait

  • Allocate all required resources at the start of a task’s execution.

  • Tasks will not hold some resources while waiting for others.

Pros: Prevents deadlock completely.
Cons: Can result in resource underutilization, since tasks may hold resources longer than they actually need them.

Example in C++ pseudocode:

lock(mutexA);
lock(mutexB); // Acquire all required locks at once
// Perform operations
unlock(mutexB);
unlock(mutexA);

3. Resource Timeout

  • Use timeouts when acquiring resources. If a task cannot acquire the resource within a certain time, it will release any held resources and retry later.

Example:

if (lock.try_lock_for(std::chrono::seconds(2))) { // lock is a std::timed_mutex
    // Lock acquired
} else {
    // Handle timeout (e.g., retry, release other locks)
}

This breaks the hold and wait condition: a task that times out releases what it holds and retries, instead of waiting indefinitely.


4. No Circular Wait – Resource Hierarchies

  • Impose a strict ordering (hierarchy) on resources to prevent circular waiting.

  • A task may only request a resource ranked higher in the hierarchy than any resource it already holds.

Example:

  • Suppose resources A, B, and C are ordered A < B < C.

  • A task holding A may request B or C, but a task holding C may not then request A or B, so no circular chain can form.


5. Release Resources When Blocking

  • Ensure that a task releases held resources if it cannot proceed.

  • This technique can break the hold and wait condition.

Example:

  • If Task A is waiting for resource B, it will release resource A and try again later.

6. Use Deadlock Detection and Recovery

Instead of preventing deadlocks outright, you can detect them and recover:

  1. Deadlock Detection Algorithm: Periodically check the system for circular waits.

  2. Recovery Mechanisms:

    • Abort tasks: Kill one or more tasks to release resources.

    • Preempt resources: Forcefully take resources from lower-priority tasks.

Drawback: Detection and recovery can be resource-intensive and disruptive.


7. Lock Ordering

  • Always acquire locks in the same order across all tasks; this prevents circular waiting.

  • A related real-time concern is priority inversion: if a low-priority task holds a lock a high-priority task needs, priority inheritance lets the holder temporarily inherit the higher priority so it finishes quickly and releases the lock.

Example:

lock(mutex1);
lock(mutex2); // Always acquire mutex1 first, then mutex2
// Perform operations
unlock(mutex2);
unlock(mutex1);

8. Deadlock-Free Locking Libraries

Modern programming frameworks provide libraries or mechanisms that avoid deadlocks:

  • C++ std::lock: Acquires multiple locks using a deadlock-avoidance algorithm, so a set of locks can be taken together without ordering concerns. (Since C++17, std::scoped_lock provides the same guarantee in RAII form.)

Example:

std::lock(mutex1, mutex2); // Locks mutex1 and mutex2 without risk of deadlock

Summary of Deadlock Prevention Strategies

Technique                | Which Condition It Breaks | Notes
Resource Ordering        | Circular Wait             | Enforces a strict resource acquisition order.
Avoid Hold and Wait      | Hold and Wait             | Allocate all resources upfront.
Timeout on Locking       | Hold and Wait             | Avoid indefinite waiting with timeouts.
Resource Hierarchy       | Circular Wait             | Request resources in a fixed order.
Release Resources Early  | Hold and Wait             | Release locks/resources when blocked.
Deadlock Detection       | None (detects instead)    | Use algorithms to identify and recover.

Practical Tips for Real-Time Systems

  • Always prioritize timing and responsiveness in real-time operating systems (RTOS).

  • Use priority inversion handling (e.g., priority inheritance) to prevent tasks from getting indefinitely blocked.

  • Test concurrency extensively with tools like thread sanitizers or race condition detectors.