UNIT – 2
QUESTION BANKS
----------2 MARKS ------------
1. Define thread?
A thread in Python is an entity within a process that can be scheduled for execution. In simpler terms, a thread is a separate flow of execution within a program.
2. Define join()
In the threading module, join() is a Thread method that blocks the calling thread until the thread on which join() is called has finished. It is typically used to make the main program wait for worker threads to complete before continuing.
3. What is a daemon thread?
A daemon thread is a background thread, useful for executing tasks that are not critical. The program can exit without waiting for daemon threads to complete.
4. Define race condition?
A race condition occurs when two or more threads try to access a shared variable simultaneously, leading to unpredictable outcomes.
5. Define deadlock?
A deadlock is a situation where two or more threads or processes are unable to proceed because each is waiting for the other to release a resource (such as a lock).
6. Define semaphore?
A semaphore in Python is a synchronization primitive that manages access to a shared resource with a limited capacity.
7. Define timer?
A timer in Python is a time-tracking program. Python developers can create timers with the help of Python's time module.
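A minimal sketch of such a timer, using time.perf_counter() to measure elapsed time (the work being timed is only illustrative):
```
import time

start = time.perf_counter()         # record the start time
sum(range(1_000_000))               # the work being timed
elapsed = time.perf_counter() - start
print(f"Elapsed: {elapsed:.4f} seconds")
```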
----------- 5 Marks -----------
1. Explain ThreadPoolExecutor?
Threading allows parts of a program to run concurrently. Python has two ways to achieve this: the multiprocessing module and the threading module. Multithreading is well suited to speeding up I/O-bound tasks such as making web requests, performing database operations, or reading/writing files. In contrast, CPU-intensive tasks such as heavy mathematical computation benefit most from multiprocessing. This is due to the GIL (Global Interpreter Lock), which prevents threads from running Python bytecode in parallel.
ThreadPoolExecutor methods:
submit(fn, *args, **kwargs): runs a callable or a method and returns a Future object representing the execution state of the call.
shutdown(wait=True, *, cancel_futures=False): signals the executor to free up all resources once the pending futures are done executing.
Key Features:
1. Thread Pool Management: Automatically manages a pool of threads
for efficient resource utilization.
2. Concurrency: Enables running multiple tasks concurrently without
manually creating and managing threads.
3. Future Objects: Returns Future objects to easily monitor task status
and retrieve results.
Example:

from concurrent.futures import ThreadPoolExecutor

values = [3, 4, 5, 6]

def cube(x):
    print(f"Cube of {x}:{x*x*x}")

if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=5) as exe:
        exe.submit(cube, 2)
        # Map the function 'cube' over the list of values.
        result = exe.map(cube, values)
Output:
Cube of 2:8
Cube of 3:27
Cube of 4:64
Cube of 5:125
Cube of 6:216
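As noted under Key Features, submit() returns a Future. A minimal sketch (not part of the example above, and assuming a cube function that returns its value instead of printing it) showing how a Future's status and result can be inspected:
```
from concurrent.futures import ThreadPoolExecutor

def cube(x):
    return x * x * x

if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=2) as exe:
        future = exe.submit(cube, 3)
        print(future.done())     # may be False while the task is still running
        print(future.result())   # blocks until the task finishes, then prints 27
```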
2. Explain how to work with multiple threads?
Working with multiple threads in Python allows you to execute tasks concurrently, improving efficiency in certain applications, especially those involving I/O-bound operations. Python's threading module is typically used to create and manage threads.
1. Creating a Thread:
To create a thread, you can instantiate a Thread object from the threading
module. You provide the target function (the function the thread will run) and
optionally arguments.
import threading

def print_numbers():
    for i in range(5):
        print(i)

# Create a thread that runs the print_numbers function
thread = threading.Thread(target=print_numbers)

# Start the thread
thread.start()

# Wait for the thread to finish
thread.join()
2.Working with Multiple Threads:
You can create and run multiple threads by following the same process. If
you need to run the same function with different arguments, you can pass
those arguments to the thread.
Example with multiple threads:
import threading

def print_numbers(thread_name):
    for i in range(5):
        print(f"{thread_name}: {i}")

# Create multiple threads
thread1 = threading.Thread(target=print_numbers, args=("Thread 1",))
thread2 = threading.Thread(target=print_numbers, args=("Thread 2",))

# Start the threads
thread1.start()
thread2.start()

# Wait for both threads to finish
thread1.join()
thread2.join()
In this example:
args=("Thread 1",): the argument is passed to the target function (print_numbers) as a tuple.
3.Thread Synchronization:
When multiple threads share resources (like variables or data structures),
you must ensure synchronization to prevent conflicts or data corruption.
Python provides tools like Lock, RLock, Semaphore, and Condition for thread
synchronization.
Example using a Lock:
import threading

# Shared resource
counter = 0
lock = threading.Lock()

def increment():
    global counter
    with lock:  # Ensures only one thread accesses this block at a time
        temp = counter
        temp += 1
        counter = temp

# Create multiple threads
threads = [threading.Thread(target=increment) for _ in range(1000)]

# Start all threads
for thread in threads:
    thread.start()

# Wait for all threads to finish
for thread in threads:
    thread.join()

print(f"Final counter value: {counter}")
In this example:
A Lock is used to ensure that only one thread can modify the shared counter
variable at a time, preventing race conditions.
4.Thread Safety:
While Python’s Global Interpreter Lock (GIL) limits true parallelism in CPU-
bound operations, threads are still useful for I/O-bound tasks (such as
network or file operations) where the GIL is released during blocking I/O calls.
5.Daemon Threads:
Daemon threads are background threads that automatically terminate when
the main program exits. They are useful when you don’t need the threads to
block the program from exiting.
import threading
import time

def background_task():
    while True:
        print("Background task running…")
        time.sleep(1)

# Create a daemon thread
daemon_thread = threading.Thread(target=background_task, daemon=True)

# Start the daemon thread
daemon_thread.start()

# Main program exits after 5 seconds, even though the daemon thread is still running
time.sleep(5)
print("Main program ends.")

daemon=True: sets the thread as a daemon, which allows the program to exit even if the thread is still running.
3. Explain the concept of deadlock?
A deadlock is a situation in computing where two or more processes are
unable to proceed because each is waiting for the other to release a
resource. Deadlocks commonly occur in systems where multiple processes
share resources, such as memory, files, or devices.
Key Concepts of Deadlock
1. Mutual Exclusion
At least one resource must be held in a non-shareable mode. If a process
holds a resource, others cannot use it until it is released.
2. Hold and Wait
A process holding at least one resource is waiting to acquire additional
resources held by other processes.
3. No Preemption
A resource allocated to a process can only be released voluntarily by the
process holding it; it cannot be forcibly taken.
4. Circular Wait
A circular chain of processes exists in which each process is waiting for a resource held by the next process in the chain.
Deadlock Example
Process A holds Resource 1 and waits for Resource 2.
Process B holds Resource 2 and waits for Resource 1.
Both processes are stuck because they cannot proceed until the other
releases its resource.
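A minimal Python sketch of this situation, using two threading.Lock objects as the resources (the names and sleep times are only illustrative); running it will typically hang, because each thread waits for the lock the other holds:
```
import threading
import time

lock_1 = threading.Lock()   # Resource 1
lock_2 = threading.Lock()   # Resource 2

def process_a():
    with lock_1:             # Process A holds Resource 1
        time.sleep(0.1)
        with lock_2:         # ...and waits for Resource 2
            print("Process A finished")

def process_b():
    with lock_2:             # Process B holds Resource 2
        time.sleep(0.1)
        with lock_1:         # ...and waits for Resource 1
            print("Process B finished")

a = threading.Thread(target=process_a)
b = threading.Thread(target=process_b)
a.start(); b.start()
a.join(); b.join()           # once deadlocked, these joins never return
```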
Methods for Handling Deadlocks
1. Deadlock Prevention
Ensure that at least one of the four conditions for deadlock cannot occur. For
instance:
Avoid mutual exclusion where possible.
Use resource allocation strategies that prevent circular wait, for example acquiring locks in a fixed global order (see the sketch at the end of this answer).
2. Deadlock Avoidance
Use algorithms like the Banker's Algorithm to ensure that resource allocation
does not lead to a deadlock.
3. Deadlock Detection and Recovery
Allow the system to enter a deadlock state and then detect it using a
resource allocation graph or similar methods. Once detected:
Abort one or more processes to break the deadlock.
Preempt resources and reallocate them.
4. Ignore the Problem
In some systems, such as personal computers, deadlocks are rare and may
be ignored, relying on a system reboot to resolve the issue.
Understanding these concepts helps in designing systems that can either
prevent or effectively handle deadlocks.
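The lock-ordering prevention strategy mentioned above can be sketched as follows (reusing the two-lock example from earlier; the key point is that both threads acquire the locks in the same order, so a circular wait cannot form):
```
import threading
import time

lock_1 = threading.Lock()
lock_2 = threading.Lock()

def process_a():
    with lock_1:            # both threads acquire lock_1 first...
        time.sleep(0.1)
        with lock_2:        # ...then lock_2, so no circular wait can form
            print("Process A finished")

def process_b():
    with lock_1:            # same acquisition order as process_a
        time.sleep(0.1)
        with lock_2:
            print("Process B finished")

a = threading.Thread(target=process_a)
b = threading.Thread(target=process_b)
a.start(); b.start()
a.join(); b.join()          # both joins return; no deadlock occurs
```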
---------- 10 Marks ----------
1. Explain the concept of producer-consumer threads?
Producer-Consumer using Threads
The Producer-Consumer problem is a classic synchronization problem
in computer science that deals with the coordination of two or more
threads that share a common resource, such as a queue. In this
scenario:
- Producer threads generate data and put it in a queue
- Consumer threads take data from the queue and process it
The goal is to ensure that:
- The producer threads can produce data at their own pace without
worrying about the consumer threads
- The consumer threads can consume data at their own pace without
worrying about the producer threads
- The queue is used efficiently and doesn’t overflow or underflow
Here’s a detailed explanation of the Producer-Consumer problem using
a queue:
Components:
- Queue: a data structure that stores data produced by the producer
threads and consumed by the consumer threads
- Producer threads: generate data and put it in the queue
- Consumer threads: take data from the queue and process it
Problem Statement:
- The producer threads produce data at a rate that may be faster or
slower than the consumer threads can consume it
- The consumer threads consume data at a rate that may be faster or
slower than the producer threads can produce it
- The queue has a limited capacity and may overflow or underflow if
not managed properly
Solution:
- Use a queue to decouple the producer and consumer threads
- Use synchronization techniques (e.g., locks, semaphores, monitors) to
coordinate access to the queue
- Implement a producer-consumer protocol that ensures the queue is
used efficiently and doesn’t overflow or underflow
Protocol:
1. Producer threads produce data and put it in the queue
2. Consumer threads take data from the queue and process it
3. If the queue is full, producer threads wait until there is space
available
4. If the queue is empty, consumer threads wait until data is available
Synchronization Techniques:
- Locks (mutexes): ensure exclusive access to the queue
- Semaphores: control the access to the queue based on a count
- Monitors: provide a high-level synchronization mechanism
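As a concrete illustration of the protocol above, here is a minimal sketch of a bounded buffer protected by a lock and two counting semaphores (the buffer size of 5 and the item count of 10 are arbitrary choices for the example):
```
import threading
import collections

BUFFER_SIZE = 5
buffer = collections.deque()

mutex = threading.Lock()                         # exclusive access to the buffer
empty_slots = threading.Semaphore(BUFFER_SIZE)   # counts free slots in the buffer
filled_slots = threading.Semaphore(0)            # counts items waiting to be consumed

def producer():
    for i in range(10):
        empty_slots.acquire()        # wait until there is space (protocol step 3)
        with mutex:
            buffer.append(i)
            print("Produced:", i)
        filled_slots.release()       # signal that an item is available

def consumer():
    for _ in range(10):
        filled_slots.acquire()       # wait until data is available (protocol step 4)
        with mutex:
            item = buffer.popleft()
            print("Consumed:", item)
        empty_slots.release()        # signal that a slot has been freed

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```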
Benefits:
- Efficient use of resources
- Improved responsiveness
- Enhanced system performance
Example:
- Producer threads: web servers generating HTML pages
- Consumer threads: web browsers rendering HTML pages
- Queue: a buffer that stores HTML pages
By using a queue and synchronization techniques, the Producer-
Consumer problem can be solved efficiently and effectively, ensuring
that data is produced and consumed at the right pace and the system
runs smoothly.
2. Explain the concept of producer-consumer using a queue?
Producer-Consumer using a Queue
The producer-consumer pattern requires a producer to make data available, a communication channel through which to share that data, and a consumer to receive and use that data.
The producer-consumer problem is a classic synchronization problem in
computer science. It involves two types of threads:
1. Producer threads: These threads produce data and put it in a shared
buffer (queue).
2. Consumer threads: These threads take data from the shared buffer
(queue) and process it.
The problem is to synchronize access to the shared buffer so that:
- The producer threads can produce data at their own pace without
worrying about the consumer threads.
- The consumer threads can consume data at their own pace without
worrying about the producer threads.
- The shared buffer does not overflow or underflow.
To solve this problem, we use a queue data structure to share data
between producer and consumer threads. Here’s a detailed
explanation of the producer-consumer problem using a queue:
1. Shared Buffer (Queue): We use a queue data structure to share data
between producer and consumer threads. The queue has a limited
size, and we use the following operations:
a. `put(item)`: Producer threads put data into the queue.
b. `get()`: Consumer threads take data from the queue.
c. `full()`: Check if the queue is full.
d. `empty()`: Check if the queue is empty.
2.Producer Thread:
a. Produce data.
b. Put data into the queue using `put(item)`.
c. If the queue is full, wait until the consumer thread consumes some
data.
3.Consumer Thread:
a. Take data from the queue using `get()`.
b. Process the data.
c. If the queue is empty, wait until the producer thread produces some
data.
4. Synchronization:
a. If the queue is full, the producer thread waits until the consumer
thread consumes some data.
b. If the queue is empty, the consumer thread waits until the producer
thread produces some data.
Using a queue ensures that the producer and consumer threads are
decoupled, and the shared buffer does not overflow or underflow.
Here’s a code snippet in Python to demonstrate the producer-consumer
problem using a queue:
Example:
```
import queue
import threading
import time

# Shared buffer (queue)
q = queue.Queue(maxsize=5)

# Producer thread function
def producer():
    for i in range(10):
        q.put(i)             # blocks if the queue is full
        print("Produced:", i)
        time.sleep(0.1)

# Consumer thread function
def consumer():
    for i in range(10):
        item = q.get()       # blocks if the queue is empty
        print("Consumed:", item)
        time.sleep(0.1)

# Create and start both threads, then wait for them to finish
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()
```
3. Explain the concept of a semaphore?
Concepts of Semaphore in Python
Semaphores are a synchronization mechanism used to control access to
shared resources in concurrent programming. In Python, semaphores are
implemented through the `threading` module.
Thread semaphores are another synchronization mechanism used in
multithreaded applications. Semaphores are similar to thread locks in that
they prevent race conditions by allowing only a limited number of threads to
access a shared resource at a time.
How it works :
A semaphore maintains a count of the number of available resources. When
a thread wants to use a resource, it asks the semaphore for permission. If the
count is greater than zero, the thread is allowed access, and the count
decreases. When the thread is done, it notifies the semaphore, and the count
increases.
How it’s used :
Semaphores are typically used to limit the number of threads or processes
that can access a shared resource at the same time. For example, you can
use a semaphore to control access to a limited number of espresso machines
in a busy coffee shop.
How it’s implemented :
A semaphore manages an internal counter that is:
Decremented by each acquire() call
Incremented by each release() call
Never allowed to go below zero
A semaphore is essentially a counter that regulates the access to a resource.
It has the following properties:
- Initial value (typically set to the number of resources available)
- Increment (release) operation
- Decrement (acquire) operation
Python’s `threading` module provides two types of semaphores:
1. `Semaphore`: A classic semaphore that allows a specified number of
threads to access a resource.
2. `BoundedSemaphore`: A semaphore that prevents the release
operation from exceeding the initial value.
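A minimal sketch of the difference: releasing a BoundedSemaphore beyond its initial value raises a ValueError, whereas a plain Semaphore would simply keep incrementing its counter:
```
import threading

sem = threading.BoundedSemaphore(2)

sem.acquire()   # counter: 2 -> 1
sem.release()   # counter: 1 -> 2 (back to the initial value)
sem.release()   # raises ValueError: counter would exceed the initial value
```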
Semaphore Methods:
- `acquire()`: Decrement the semaphore counter. If the counter is zero, block until another thread releases the semaphore.
- `release()`: Increment the semaphore counter. If another thread is waiting, wake it up.
(threading.Semaphore does not provide a public method for reading the current counter value.)
Example:
```
import threading
import time

sem = threading.Semaphore(3)  # Allow 3 threads to access the resource

def worker():
    sem.acquire()
    try:
        print("Working…")
        time.sleep(2)
    finally:
        sem.release()

threads = []
for i in range(6):
    t = threading.Thread(target=worker)
    threads.append(t)
    t.start()

for t in threads:
    t.join()
```
In this example, we create a semaphore with an initial value of 3, allowing 3
threads to access the resource concurrently. The `worker()` function acquires
the semaphore, performs some work, and then releases the semaphore. If all
3 slots are taken, additional threads will block until one of the slots is
released.
4. Explain producer-consumer using a lock?
Producer-Consumer using a Lock
The producer-consumer problem is a classic synchronization problem in
computer science. It involves two types of threads:
1. Producer threads: These threads produce data and put it in a
shared buffer (queue).
2. Consumer threads: These threads take data from the shared
buffer (queue) and process it.
The problem is to synchronize access to the shared buffer so that:
- The producer threads can produce data at their own pace without
worrying about the consumer threads.
- The consumer threads can consume data at their own pace without
worrying about the producer threads.
- The shared buffer does not overflow or underflow.
To solve this problem, we use a lock (mutex) to synchronize access to
the shared buffer. Here’s a detailed explanation of the producer-
consumer problem using a lock:
1. Shared Buffer (Queue): We use a queue data structure to share
data between producer and consumer threads.
2. Lock (Mutex): We use a lock (mutex) to synchronize access to the
shared buffer. Only one thread can acquire the lock at a time.
3. Producer Thread:
a. Acquire the lock.
b. Produce data and put it in the shared buffer (queue).
c. Release the lock.
4. Consumer Thread:
a. Acquire the lock.
b. Take data from the shared buffer (queue).
c. Process the data.
d. Release the lock.
5. Synchronization:
a. If the shared buffer is full, the producer thread waits until the
consumer thread consumes some data.
b. If the shared buffer is empty, the consumer thread waits until the
producer thread produces some data.
Using a lock ensures that only one thread can access the shared buffer
at a time, preventing data corruption and ensuring synchronization.
Here's a code snippet in Python to demonstrate the producer-consumer problem using a lock. Because a plain Lock has no wait() or notify() methods, the snippet uses threading.Condition, which combines a lock with the ability to wait for and signal changes to the shared buffer:
```
import threading
import queue
import time

# Shared buffer (queue)
q = queue.Queue(maxsize=5)

# Condition variable (wraps a lock) used for mutual exclusion and signalling
cond = threading.Condition()

# Producer thread function
def producer():
    for i in range(10):
        with cond:
            while q.full():
                cond.wait()          # wait until the consumer removes an item
            q.put(i)
            print("Produced:", i)
            cond.notify()            # wake up a waiting consumer
        time.sleep(0.1)

# Consumer thread function
def consumer():
    for i in range(10):
        with cond:
            while q.empty():
                cond.wait()          # wait until the producer adds an item
            item = q.get()
            print("Consumed:", item)
            cond.notify()            # wake up a waiting producer
        time.sleep(0.1)

# Create and start both threads, then wait for them to finish
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()
```