Operating System Project

The report presents a simulation program that analyzes various CPU scheduling algorithms, including FCFS, SJF, RR, and Priority Scheduling, by evaluating their performance through metrics like turnaround time, waiting time, and response time. Additionally, it details a multi-threaded implementation of the Producer-Consumer problem in C, utilizing synchronization mechanisms such as semaphores and mutex locks to ensure data integrity and prevent race conditions. The findings from both tasks provide insights into the effectiveness of CPU scheduling strategies and robust solutions for concurrent programming challenges.


Department of Computer Science and Engineering

(Autonomous)
Operating System Assignment (COM-302)
Analysis Report
Submitted By:-

Name - SUSHEAN SHARMA

Semester - 3rd

Roll No. - 2022A1R067

Section - A3

CONTENTS

1. Tasks
2. Summary
3. Introduction
4. Implementation Details
5. Code Snippet
6. Code Output
7. Algorithm Explanation
8. Conclusion
9. Reference & Repository Link

TASKS :

1. Write a program in a language of your choice to simulate various CPU
scheduling algorithms such as First-Come-First-Served (FCFS), Shortest Job
First (SJF), Round Robin (RR), and Priority Scheduling. Compare and
analyze the performance of these algorithms using different test cases and
metrics like turnaround time, waiting time, and response time.

2. Write a multi-threaded program in C or another suitable language to solve
the classic Producer-Consumer problem using semaphores or mutex locks.
Describe how you ensure synchronization and avoid race conditions in your
solution.

SUMMARY:

● The program's implementation faithfully represents each scheduling
algorithm, providing a dynamic environment for testing and analysis.
Multiple test cases have been designed to assess the algorithms' adaptability
to different workloads and operational conditions. Metrics such as
turnaround time, waiting time, and response time have been selected to
quantify and compare the effectiveness of each scheduling algorithm.
The experimental setup involves simulated hardware configurations, and the
results are presented through detailed analysis and visual representations.
The findings offer insights into the strengths and weaknesses of each
algorithm, aiding in the selection of an appropriate scheduling strategy based
on specific performance requirements and constraints.
This executive summary encapsulates the essence of the simulation,
highlighting the significance of the analysis and its potential impact on
informed decision-making in CPU scheduling strategy selection. The
subsequent sections of the report delve into the specifics of the
implementation, test cases, metrics, results, and a thorough discussion of the
observed outcomes.

● This multi-threaded program, developed in the C programming language,
addresses the classic Producer-Consumer problem using synchronization
mechanisms such as semaphores or mutex locks. The primary objective is to
ensure the secure and efficient exchange of data between threads, mitigating
potential race conditions and guaranteeing a synchronized execution flow.
The implementation employs a multi-threaded approach to simulate the
Producer-Consumer scenario, where one or more threads act as producers,
generating data, and others function as consumers, processing the data.
Synchronization mechanisms, specifically semaphores or mutex locks, are
strategically utilized to control access to shared resources, preventing
conflicts that could arise in concurrent execution.
In describing the solution's synchronization strategy, careful attention is paid
to the design choices made to circumvent race conditions. The report
elaborates on how semaphores or mutex locks are employed to coordinate
the interaction between producer and consumer threads, ensuring orderly
access to shared buffers and avoiding potential data corruption or loss.
By addressing the synchronization challenges inherent in the
Producer-Consumer problem, this program serves as a model for robust
multi-threaded solutions, emphasizing the importance of controlled resource
access in concurrent programming. The subsequent sections of the report
provide a detailed examination of the program's implementation,
synchronization mechanisms employed, and an in-depth discussion of the
strategies used to avoid race conditions and ensure thread safety.

INTRODUCTION :

● In the dynamic landscape of computer systems, the efficiency of CPU
scheduling algorithms plays a pivotal role in determining the overall
performance and responsiveness of a computing environment. This
simulation program, crafted in C#, endeavors to emulate and scrutinize
the functionalities of four fundamental CPU scheduling algorithms:
First-Come-First-Served (FCFS), Shortest Job First (SJF), Round Robin (RR),
and Priority Scheduling.
The primary motivation behind this endeavor is to conduct a thorough
examination of the performance characteristics exhibited by each scheduling
algorithm under diverse scenarios. Through the systematic execution of
various test cases, encompassing a range of workloads and operational
conditions, the program aims to assess and compare key metrics that
profoundly influence the efficiency of these algorithms. These metrics
include turnaround time, waiting time, and response time.
As we delve into the intricacies of each scheduling algorithm, we anticipate
uncovering nuanced insights into their respective strengths, limitations, and
adaptability to different computational contexts. By subjecting these
algorithms to a battery of carefully designed test cases, we seek to provide a
comprehensive understanding of their performance nuances and trade-offs.
This simulation program serves as a valuable tool for system architects,
developers, and decision-makers, aiding them in selecting an optimal CPU
scheduling strategy tailored to specific application requirements. The
subsequent sections of this report will delve into the implementation details,
the rationale behind the chosen metrics, the experimental methodology, and
a detailed analysis of the observed results, offering a holistic perspective on
the performance dynamics of these fundamental CPU scheduling algorithms.

● In the realm of concurrent programming, the Producer-Consumer problem
stands as a quintessential challenge, exemplifying the delicate balance
required when multiple threads contend for shared resources. This project
endeavors to tackle the classic Producer-Consumer problem by crafting a
multi-threaded program in C, or another suitable language, employing
synchronization mechanisms such as semaphores or mutex locks to ensure a
harmonious and secure interaction between threads.
The Producer-Consumer scenario encapsulates a fundamental pattern where
one set of threads, the producers, generate data, while another set, the
consumers, process that data. The critical aspect lies in orchestrating this
interaction to avoid potential race conditions, ensuring data integrity, and
maintaining the overall system's stability.
This program delves into the intricacies of synchronization and race
condition prevention through the strategic use of semaphores or mutex locks.
By carefully orchestrating access to shared resources, such as buffers or data
structures, the program aims to provide a robust solution that guarantees
orderly execution, minimizing the risk of conflicts and data corruption.
As we delve into the implementation details, this report will elucidate the
mechanisms employed to synchronize the activities of producers and
consumers, shedding light on how semaphores or mutex locks are
instrumental in facilitating a coordinated and secure execution flow. The
ensuing sections will provide a comprehensive overview of the program, its
synchronization strategies, and an in-depth analysis of the methodologies
employed to steer clear of race conditions, thereby ensuring a resilient and
thread-safe solution to the classic Producer-Consumer problem.

Implementation Details :

● TASK 1 :

Programming Language Choice :

The simulation program is implemented in C#, selected for its
versatility, efficiency, and suitability for system-level programming.

Design Overview :

The program is structured to simulate the execution of four distinct
CPU scheduling algorithms - FCFS, SJF, RR, and Priority Scheduling.
Each algorithm is encapsulated in modular components, promoting
code modularity and ease of maintenance.

Data Structures :

➢ Process Representation :

A data structure is employed to represent processes,
encompassing attributes such as process ID, burst time, priority,
etc.

➢ Scheduling Queue :

A queue or list structure holds the processes awaiting execution,
adhering to the principles of each scheduling algorithm.
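
For illustration, a C-style sketch of how such a process record and ready
queue might be declared is shown below; the field names, types, and the
fixed MAX_PROCESSES limit are assumptions made for this sketch, not the
report's actual definitions (the report's simulator itself is written in C#).

    #include <stddef.h>

    #define MAX_PROCESSES 64

    /* One record per process: identity, timing attributes, and priority. */
    typedef struct {
        int pid;              /* process ID                             */
        int arrival_time;     /* time the process enters the system     */
        int burst_time;       /* CPU time the process requires          */
        int priority;         /* lower value = higher priority          */
        int completion_time;
        int turnaround_time;
        int waiting_time;
        int response_time;
    } Process;

    /* A simple ready queue: an array of processes plus a count. Each
       scheduling algorithm decides the order in which these entries
       are dispatched. */
    typedef struct {
        Process items[MAX_PROCESSES];
        size_t count;
    } ReadyQueue;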

Algorithm Implementation :

➢ First-Come-First-Served (FCFS) :

Processes are executed in the order of their arrival.

➢ Shortest Job First (SJF):

The process with the shortest burst time is selected for
execution first.

➢ Round Robin (RR):

Processes are executed in a cyclic manner with a fixed time
quantum.

➢ Priority Scheduling :

Processes with the highest priority are given precedence.

Metrics Calculation :

➢ Turnaround Time :

Calculated as the total time taken for a process to complete,
including waiting time and execution time.

➢ Waiting Time :

Represents the total time a process spends waiting in the ready
queue.

➢ Response Time :

Denotes the time taken from submitting a process until the first
response is received.
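
For illustration with hypothetical numbers: a process that arrives at time 2,
first receives the CPU at time 6, and completes at time 13 after a total burst
of 5 time units (having been preempted for 2 units, e.g. under Round Robin)
has a turnaround time of 13 - 2 = 11, a waiting time of 11 - 5 = 6, and a
response time of 6 - 2 = 4.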

➢ Test Cases :

A diverse set of test cases is carefully designed to evaluate the
algorithms under various scenarios, including:

1. Equal burst times.
2. Varying process priorities.
3. Randomized execution order.
4. Unequal burst times.

Experimental Setup :

The program is tested on a simulated hardware environment with
configurable parameters, allowing for the emulation of diverse
computing scenarios.

Code Modularity :

The codebase is organized into modular functions or classes,
facilitating easy modification or extension of the simulation program.

Output and Visualization :

The program provides detailed output, including the execution
sequence, turnaround time, waiting time, and response time for each
scheduling algorithm. Visualization tools may be incorporated for a
clearer representation of results.

This implementation prioritizes clarity, modularity, and
configurability, laying the foundation for a comprehensive analysis of
CPU scheduling algorithms under diverse conditions. The subsequent
sections will delve into the results and analyses derived from the
execution of the simulation program.

● TASK 2 :

1. Language Choice :
The solution is implemented in C, a language well-suited for systems
programming and thread management. The POSIX threads (pthreads)
library is used for multi-threading support.

2. Synchronization Mechanisms :

○ Mutex Locks (pthread_mutex_t):

● A mutex (pthread_mutex_t) is employed to protect
critical sections where shared data (the buffer) is
accessed and modified.

● The pthread_mutex_lock and pthread_mutex_unlock
functions ensure that only one thread can access the
critical section at any given time, preventing race
conditions.

○ Semaphores (sem_t) :

● Two semaphores (sem_t) are used: full and empty.
● sem_full is used to track the number of filled slots in the
buffer.
● sem_empty is used to track the number of empty slots in
the buffer.
● sem_wait and sem_post operations are employed to
control access to the buffer and signal changes in its
state.
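
A minimal sketch of how these primitives might be declared and initialized in
C is shown below; the buffer size of 5 is an assumed value, but the variable
names (buffer, mutex, full, empty) follow the description above.

    #include <pthread.h>
    #include <semaphore.h>

    #define BUFFER_SIZE 5

    int buffer[BUFFER_SIZE];    /* shared bounded buffer               */
    pthread_mutex_t mutex;      /* protects buffer modifications       */
    sem_t full;                 /* counts filled slots                 */
    sem_t empty;                /* counts empty slots                  */

    void init_sync(void) {
        pthread_mutex_init(&mutex, NULL);
        sem_init(&full, 0, 0);              /* no items produced yet   */
        sem_init(&empty, 0, BUFFER_SIZE);   /* all slots start empty   */
    }
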
3. Buffer and Thread Management :

○ Buffer (int buffer[BUFFER_SIZE]):

● The buffer is a shared data structure that holds the
produced items.
● It is a simple array with a fixed size (BUFFER_SIZE),
and it is accessed by both the producer and the consumer.

○ Threads:

● Two threads are created - one for the producer and one
for the consumer.
● The pthread_create function is used to spawn threads,
and pthread_join is used to wait for their completion.

4. Producer and Consumer Functions :

● Producer (void *producer(void *arg)):

➢ Generates a random item.
➢ Waits for an empty slot in the buffer using
sem_wait(&empty).
➢ Acquires the mutex to access the critical section.
➢ Adds the item to the buffer and updates the buffer index.
➢ Releases the mutex.
➢ Signals that the buffer is no longer empty using
sem_post(&full).

● Consumer (void *consumer(void *arg)):

➢ Waits for a filled slot in the buffer using
sem_wait(&full).
➢ Acquires the mutex to access the critical section.
➢ Retrieves an item from the buffer and updates the buffer
index.
➢ Releases the mutex.
➢ Signals that the buffer is no longer full using
sem_post(&empty).

5. Main Function :

● Initializes semaphores (full and empty) and the mutex.
● Creates producer and consumer threads.
● Waits for threads to complete using pthread_join.
● Destroys semaphores and the mutex.

6. Conclusion :

This implementation of the Producer-Consumer problem in C demonstrates
effective use of mutex locks and semaphores to ensure synchronization and
prevent race conditions. The mutex protects critical sections, and
semaphores control access to the shared buffer, providing a robust and
thread-safe solution to the classic concurrency problem.

Code Snippet :

➢ TASK 1 :

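The original Task 1 listing is included as an image in the report, and the
full C# source is linked in the repository section. As a stand-in, the
following minimal C sketch shows the FCFS portion of such a simulator using
the formulas described earlier; the process data, variable names, and output
format are illustrative assumptions, not the report's actual code.

    #include <stdio.h>

    typedef struct {
        int pid, arrival, burst;
        int completion, turnaround, waiting;
    } Process;

    /* Insertion sort by arrival time gives the FCFS dispatch order. */
    static void sort_by_arrival(Process p[], int n) {
        for (int i = 1; i < n; i++) {
            Process key = p[i];
            int j = i - 1;
            while (j >= 0 && p[j].arrival > key.arrival) {
                p[j + 1] = p[j];
                j--;
            }
            p[j + 1] = key;
        }
    }

    int main(void) {
        Process p[] = { {1, 0, 5, 0, 0, 0},
                        {2, 1, 3, 0, 0, 0},
                        {3, 2, 8, 0, 0, 0} };
        int n = (int)(sizeof p / sizeof p[0]);
        int clock = 0;

        sort_by_arrival(p, n);
        for (int i = 0; i < n; i++) {
            if (clock < p[i].arrival)   /* CPU idles until the process arrives */
                clock = p[i].arrival;
            clock += p[i].burst;        /* non-preemptive: run to completion   */
            p[i].completion = clock;
            p[i].turnaround = p[i].completion - p[i].arrival;
            p[i].waiting    = p[i].turnaround - p[i].burst;
        }

        printf("PID  Arrival  Burst  Completion  Turnaround  Waiting\n");
        for (int i = 0; i < n; i++)
            printf("%3d  %7d  %5d  %10d  %10d  %7d\n",
                   p[i].pid, p[i].arrival, p[i].burst,
                   p[i].completion, p[i].turnaround, p[i].waiting);
        return 0;
    }
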
➢ TASK 2:

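The original Task 2 listing is likewise included as an image. The sketch
below reconstructs a minimal producer-consumer program consistent with the
design described in the implementation section (a fixed-size buffer, in/out
indices, one mutex, and full/empty counting semaphores); the buffer size,
iteration count, and item values are assumptions made for this sketch.

    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define BUFFER_SIZE 5
    #define NUM_ITEMS   10

    int buffer[BUFFER_SIZE];
    int in = 0, out = 0;          /* next slot to write / next slot to read */

    pthread_mutex_t mutex;        /* guards buffer, in, and out             */
    sem_t full, empty;            /* count filled slots / empty slots       */

    void *producer(void *arg) {
        (void)arg;
        for (int i = 0; i < NUM_ITEMS; i++) {
            int item = rand() % 100;        /* produce a random item        */
            sem_wait(&empty);               /* wait for an empty slot       */
            pthread_mutex_lock(&mutex);     /* enter critical section       */
            buffer[in] = item;
            in = (in + 1) % BUFFER_SIZE;
            printf("Produced %d\n", item);
            pthread_mutex_unlock(&mutex);   /* leave critical section       */
            sem_post(&full);                /* signal one more filled slot  */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < NUM_ITEMS; i++) {
            sem_wait(&full);                /* wait for a filled slot       */
            pthread_mutex_lock(&mutex);
            int item = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            printf("Consumed %d\n", item);
            pthread_mutex_unlock(&mutex);
            sem_post(&empty);               /* signal one more empty slot   */
        }
        return NULL;
    }

    int main(void) {
        pthread_t prod, cons;

        pthread_mutex_init(&mutex, NULL);
        sem_init(&full, 0, 0);
        sem_init(&empty, 0, BUFFER_SIZE);

        pthread_create(&prod, NULL, producer, NULL);
        pthread_create(&cons, NULL, consumer, NULL);
        pthread_join(prod, NULL);
        pthread_join(cons, NULL);

        sem_destroy(&full);
        sem_destroy(&empty);
        pthread_mutex_destroy(&mutex);
        return 0;
    }
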
CODE OUTPUT :

➢ TASK 1 :

(Output of the scheduling simulator is shown as a screenshot in the original report.)

➢ TASK 2 :

(Output of the producer-consumer program is shown as a screenshot in the original report.)

Algorithm Explanation :

● Task 1 :

1. First-Come-First-Served (FCFS):

● Algorithm:

➢ Arrange processes in the ready queue in the order they
arrive.
➢ Execute the processes in the order they are in the queue.
➢ Calculate turnaround time: Turnaround Time =
Completion Time - Arrival Time
➢ Calculate waiting time: Waiting Time = Turnaround Time
- Burst Time

● Implementation Steps:

➢ Sort the processes based on arrival time.
➢ Execute each process sequentially.
➢ Track completion, turnaround, and waiting times for
analysis.
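
As a worked illustration with hypothetical values (not one of the report's
test cases): given P1 (arrival 0, burst 5), P2 (arrival 1, burst 3), and
P3 (arrival 2, burst 8), FCFS runs P1 from 0-5, P2 from 5-8, and P3 from
8-16. The turnaround times are 5, 7, and 14 and the waiting times 0, 4,
and 6, for an average waiting time of about 3.33 time units.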

2. Shortest Job First (SJF):

● Algorithm:

➢ Select the process with the shortest burst time next.
➢ Execute the selected process.
➢ Calculate turnaround time and waiting time as in FCFS.

● Implementation Steps:

➢ Sort the processes based on burst time.
➢ Execute the process with the shortest burst time.
➢ Continue until all processes are executed.
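
For example (again with hypothetical values), if P1, P2, and P3 all arrive
at time 0 with burst times 7, 3, and 5, non-preemptive SJF runs them in the
order P2, P3, P1, giving waiting times of 0, 3, and 8 (average ≈ 3.67),
whereas FCFS in the order P1, P2, P3 would give 0, 7, and 10
(average ≈ 5.67), illustrating how SJF reduces average waiting time.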

3. Round Robin (RR):

● Algorithm:

➢ Assign each process a fixed time quantum (time slice).
➢ Execute each process for the allotted time quantum.
➢ Move to the next process in the ready queue.
➢ Continue until all processes are executed.

● Implementation Steps:

➢ Maintain a circular queue for ready processes.
➢ Execute each process for the time quantum.
➢ Rotate the queue and continue until all processes are
executed.
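
As a small hypothetical example with a time quantum of 2: for P1 (burst 5)
and P2 (burst 3), both arriving at time 0, the schedule is P1 (0-2),
P2 (2-4), P1 (4-6), P2 (6-7), P1 (7-8). P2 completes at time 7 and P1 at
time 8, giving turnaround times of 8 and 7 and waiting times of 3 and 4,
respectively.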

4. Priority Scheduling:

● Algorithm:

➢ Assign priority levels to processes (lower value means
higher priority).
➢ Execute the process with the highest priority.
➢ Calculate turnaround time and waiting time.

● Implementation Steps :

➢ Sort processes based on priority.
➢ Execute the process with the highest priority.
➢ Continue until all processes are executed.
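
For instance (hypothetical values, with a lower number meaning higher
priority): given P1 (burst 4, priority 2), P2 (burst 3, priority 1), and
P3 (burst 5, priority 3), all arriving at time 0, the execution order is
P2 (0-3), P1 (3-7), P3 (7-12), giving waiting times of 0, 3, and 7,
respectively.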

● Task 2 :

Algorithm Explanation:

● Buffer and Indices:

A shared buffer is used to store the data produced by
producers and consumed by consumers. In this example,
buffer is an array with a fixed size (BUFFER_SIZE).
Two indices, in and out, keep track of the next available
slot in the buffer for the producer to write (in) and the
next item to be consumed by the consumer (out).

Semaphores Initialization:

Three semaphores are used: mutex, full, and empty.

● mutex: Binary semaphore to control access to the critical sections (buffer
modification).
● full: Counting semaphore to track the number of items in the buffer
available for consumption.
● empty: Counting semaphore to track the number of empty slots in the
buffer available for producers.

Producer Thread:

In the producer function, a loop is run to simulate the production of data. For
each iteration, a random item is produced.
● sem_wait(&empty): Waits if the buffer is full, i.e., no empty
slots for the producer.
● sem_wait(&mutex): Locks the buffer to ensure exclusive
access.
● The produced item is added to the buffer, and the in index is
updated.
● sem_post(&mutex): Unlocks the buffer.
● sem_post(&full): Signals that there is a new item in the buffer
available for consumption.

Consumer Thread:

In the consumer function, a loop is run to simulate the consumption of
data. For each iteration, a consumer takes an item from the buffer.
● sem_wait(&full): Waits if the buffer is empty, i.e., no items
available for consumption.
● sem_wait(&mutex): Locks the buffer to ensure exclusive
access.
● The item is consumed from the buffer, and the out index is
updated.
● sem_post(&mutex): Unlocks the buffer.
● sem_post(&empty): Signals that there is an empty slot in the
buffer available for production.

Main Function:

● Creates threads for producers and consumers using pthread_create.
● Waits for all threads to finish using pthread_join.
● Destroys the semaphores to release system resources using
sem_destroy.

Synchronization and Race Condition Avoidance:

● Mutex (sem_mutex):

➢ Ensures that only one thread (either a producer or a
consumer) can access the critical sections where the
shared buffer is being modified.
➢ Prevents race conditions by ensuring exclusive access to
the buffer during production and consumption.

● Full Semaphore (sem_full):

➢ Consumers wait if the buffer is empty (sem_wait(&full)).
➢ Producers signal (sem_post(&full)) that a new item is
available after producing one.
➢ Ensures that consumers are synchronized with the number
of filled slots in the buffer.

● Empty Semaphore (sem_empty):

➢ Producers wait if the buffer is full (sem_wait(&empty)).
➢ Consumers signal (sem_post(&empty)) that an empty
slot is available after consuming an item.
➢ Ensures that producers are synchronized with the
availability of empty slots in the buffer.

This combination of mutex locks and semaphores ensures that the producer
and consumer threads cooperate properly, preventing data corruption or race
conditions during concurrent access to the shared buffer. The semaphores
regulate the flow of producers and consumers, ensuring that the buffer is
accessed in a controlled and synchronized manner.

CONCLUSION :
● TASK 1 :

In conclusion, the simulation and analysis of various CPU scheduling
algorithms, including First-Come-First-Served (FCFS), Shortest Job
First (SJF), Round Robin (RR), and Priority Scheduling, provide
valuable insights into their performance under different scenarios. The
evaluation was based on key metrics such as turnaround time, waiting
time, and response time.

Key Findings:

● FCFS:

Simple and easy to implement.
May suffer from the "convoy effect" where shorter processes get
delayed due to longer ones.

● SJF:

Efficient for minimizing waiting time by prioritizing shorter jobs.
Prone to the "starvation" problem for longer jobs if short jobs keep
arriving.

● Round Robin (RR):

Provides fair execution for all processes with a fixed time quantum.
May have higher waiting times for processes with longer burst times.

● Priority Scheduling:

Allows for prioritizing processes based on predefined priorities.
Prone to "priority inversion" issues and may lead to starvation of
lower-priority processes.

Analysis and Comparison:

The choice of the best scheduling algorithm depends on the specific
characteristics of the workload and the system requirements.
FCFS and RR are straightforward but may not perform optimally in all
scenarios.
SJF is effective in minimizing waiting times but may suffer from starvation.
Priority Scheduling allows for customizing priority levels but requires
careful handling to avoid issues like priority inversion.

Recommendations:

For Shorter Job Times:

SJF may be preferred to minimize waiting times.

For Fairness and Responsiveness:

Round Robin ensures fair execution, especially in scenarios where
process burst times vary significantly.

For Customized Priority Handling:

Priority Scheduling allows for customization but requires careful
consideration of priority inversion issues.

Consideration of Real-Time Systems:

In real-time systems, RR or Priority Scheduling may be more suitable,
depending on the specific requirements.

Future Considerations:

Further experiments with different time quantum values in RR to observe its
impact on performance.
Implementation of advanced scheduling algorithms like Multilevel Queue
Scheduling or Multilevel Feedback Queue Scheduling for more
sophisticated analysis.
In summary, the choice of a CPU scheduling algorithm depends on the
specific characteristics and requirements of the system. The analysis and
comparison of these algorithms provide a foundation for making informed
decisions based on the nature of the workload and desired system behavior.
Future considerations may involve exploring more advanced scheduling
strategies for enhanced performance in diverse scenarios.

● TASK 2 :

In conclusion, the implementation of a multi-threaded program to
solve the classic Producer-Consumer problem using C and
synchronization mechanisms such as semaphores and mutex locks has
proven effective in ensuring thread safety and avoiding race
conditions. The solution presented here provides a robust framework
for coordinating the interactions between producers and consumers,
preventing conflicts that could lead to data corruption or program
instability.

Key Aspects of the Implementation:

● Mutex Locks:

The use of mutex locks ensures that critical sections, such as
accessing and modifying the shared buffer, are protected from
concurrent access by multiple threads.
The pthread_mutex_lock and pthread_mutex_unlock functions
are employed to acquire and release the mutex, respectively.

● Semaphores:

Semaphores are employed to control the flow of execution and
prevent issues such as deadlock or excessive resource
consumption.
The sem_wait and sem_post operations on semaphores ensure
that producers and consumers wait for and signal the
availability of resources in the buffer.

● Buffer Management:

A shared buffer is used as the communication medium between
producers and consumers.
Proper indexing and boundary checking are implemented to
prevent buffer overflows or underflows.
● Thread Coordination:

The creation and joining of threads are handled using the
pthread_create and pthread_join functions, ensuring proper
synchronization during thread execution.

Ensuring Synchronization:

● Mutex for Critical Sections:

Mutex locks are employed to guard critical sections, such as
accessing and modifying the shared buffer. This ensures that
only one thread can access these sections at any given time,
preventing race conditions.

● Semaphore for Buffer Management:

Semaphores are used to manage the state of the buffer. The full
semaphore ensures that the producer waits when the buffer is
full, and the empty semaphore ensures that the consumer waits
when the buffer is empty.

● Proper Locking and Unlocking:

Mutex locks are acquired before entering critical sections and
released afterward to allow other threads to access the protected
regions.

Avoiding Race Conditions:

● Exclusive Access with Mutex:

The use of mutex locks ensures exclusive access to critical
sections, preventing race conditions that could arise from
concurrent modification of shared data.

● Semaphore Signaling:

Semaphores are used to signal changes in the state of the buffer.
Proper signaling ensures that producers and consumers are
synchronized, avoiding race conditions related to buffer
occupancy.

Future Considerations:

The presented solution serves as a solid foundation for understanding and
implementing thread synchronization in concurrent programming.
Future considerations may involve exploring more advanced
synchronization techniques or scalability improvements for scenarios with a
larger number of producers and consumers.
In summary, the multi-threaded Producer-Consumer solution presented in C,
employing mutex locks and semaphores, effectively ensures
synchronization, prevents race conditions, and provides a basis for
understanding concurrent programming challenges and solutions.

Reference & Repository Link :

○ https://chat.openai.com/c/d204f853-6b36-42f9-a7f8-8adbcde384b1
○ https://www.geeksforgeeks.org/program-for-shortest-job-first-or-sjf-cpu-scheduling-set-1-non-preemptive/
○ https://www.geeksforgeeks.org/producer-consumer-problem-using-semaphores-set-1/
○ https://medium.com/@sohamshah456/producer-consumer-programming-with-c-d0d47b8f103f

TASK 1 :

● https://github.com/sushean/OS-Assignment/blob/main/1.cs

TASK 2 :

● https://github.com/sushean/OS-Assignment/blob/main/Program.cs
