Operating System Overview

1. First Come First Serve (FCFS)

TAT and WT Calculation

Process  Completion Time  Arrival Time  Burst Time  Turnaround Time  Waiting Time
A        8                0             8           8 - 0 = 8        8 - 8 = 0
B        12               1             4           12 - 1 = 11      11 - 4 = 7
C        21               2             9           21 - 2 = 19      19 - 9 = 10
D        26               3             5           26 - 3 = 23      23 - 5 = 18

Average TAT (FCFS) = (8 + 11 + 19 + 23) / 4 = 15.25


Average WT (FCFS) = (0 + 7 + 10 + 18) / 4 = 8.75
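
As a quick check on the arithmetic, here is a minimal C sketch that replays the FCFS schedule for these four processes and prints each completion, turnaround, and waiting time (the process data comes from the example above; everything else is illustrative):

```c
/* FCFS check: run processes in arrival order, each to completion. */
#include <stdio.h>

int main(void) {
    const char *name[] = {"A", "B", "C", "D"};
    int arrival[] = {0, 1, 2, 3};           /* already sorted by arrival time */
    int burst[]   = {8, 4, 9, 5};
    int n = 4, t = 0;
    double tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < n; i++) {
        if (t < arrival[i]) t = arrival[i]; /* CPU idles until the job arrives */
        t += burst[i];                      /* non-preemptive: runs to the end */
        int tat = t - arrival[i];           /* turnaround = completion - arrival */
        int wt  = tat - burst[i];           /* waiting = turnaround - burst */
        printf("%s: completion=%d TAT=%d WT=%d\n", name[i], t, tat, wt);
        tat_sum += tat; wt_sum += wt;
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n", tat_sum / n, wt_sum / n);
    return 0;
}
```

Running it reproduces the averages above: 15.25 and 8.75.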

2. Shortest Job First (SJF)

Gantt Chart

Non-preemptive SJF runs the current job to completion, then picks the ready process with the shortest burst time:

| A | B | D | C |
0 8 12 17 26

Completion Times

A: 8
B: 12
D: 17
C: 26

TAT and WT Calculation

Process  Completion Time  Arrival Time  Burst Time  Turnaround Time  Waiting Time
A        8                0             8           8 - 0 = 8        8 - 8 = 0
B        12               1             4           12 - 1 = 11      11 - 4 = 7
D        17               3             5           17 - 3 = 14      14 - 5 = 9
C        26               2             9           26 - 2 = 24      24 - 9 = 15

Average TAT (SJF) = (8 + 11 + 14 + 24) / 4 = 14.25


Average WT (SJF) = (0 + 7 + 9 + 15) / 4 = 7.75
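
The same check for non-preemptive SJF only changes the selection rule. This sketch picks, at each completion, the arrived job with the smallest burst (data from the example above):

```c
/* Non-preemptive SJF sketch: choose the shortest arrived job, run it to the end. */
#include <stdio.h>

int main(void) {
    const char *name[] = {"A", "B", "C", "D"};
    int arrival[] = {0, 1, 2, 3};
    int burst[] = {8, 4, 9, 5};
    int done[4] = {0};
    int n = 4, t = 0;

    for (int completed = 0; completed < n; completed++) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* shortest burst among arrived jobs */
            if (!done[i] && arrival[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { t++; completed--; continue; }   /* CPU idle */
        t += burst[pick];                /* runs to completion */
        done[pick] = 1;
        printf("%s: completion=%d TAT=%d WT=%d\n", name[pick], t,
               t - arrival[pick], t - arrival[pick] - burst[pick]);
    }
    return 0;
}
```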

3. Shortest Remaining Time First (SRTF)

Gantt Chart

Processes are scheduled preemptively: whenever a new process arrives, it takes the CPU if its burst is shorter than the running process's remaining time.

| A | B | D | A | C |
0 1 5 10 17 26

(A runs until B arrives at t = 1 and preempts it; when B finishes, D has the least remaining time; A then resumes at t = 10, and C finishes last.)

Completion Times

A: 17 (preempted at t = 1, resumes at t = 10)
B: 5
D: 10
C: 26

TAT and WT Calculation

Process  Completion Time  Arrival Time  Burst Time  Turnaround Time  Waiting Time
A        17               0             8           17 - 0 = 17      17 - 8 = 9
B        5                1             4           5 - 1 = 4        4 - 4 = 0
D        10               3             5           10 - 3 = 7       7 - 5 = 2
C        26               2             9           26 - 2 = 24      24 - 9 = 15

Average TAT (SRTF) = (17 + 4 + 7 + 24) / 4 = 13


Average WT (SRTF) = (9 + 0 + 2 + 15) / 4 = 6.5
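
Preemptive traces are easy to get wrong by hand, so here is a minimal C sketch that reproduces the schedule above: it advances time 1 ms at a time and always runs the arrived process with the least remaining burst (ties broken by lower index; data from the example):

```c
/* SRTF sketch: 1 ms ticks; always run the shortest remaining arrived job. */
#include <stdio.h>

int main(void) {
    const char *name[] = {"A", "B", "C", "D"};
    int arrival[] = {0, 1, 2, 3};
    int remaining[] = {8, 4, 9, 5};
    int burst[] = {8, 4, 9, 5};
    int completion[4] = {0};
    int n = 4, done = 0, t = 0;

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* shortest remaining among arrived */
            if (arrival[i] <= t && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) { t++; continue; } /* CPU idle */
        remaining[pick]--;
        t++;
        if (remaining[pick] == 0) { completion[pick] = t; done++; }
    }
    for (int i = 0; i < n; i++)
        printf("%s: completion=%d TAT=%d WT=%d\n", name[i], completion[i],
               completion[i] - arrival[i],
               completion[i] - arrival[i] - burst[i]);
    return 0;
}
```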

4. Round Robin (RR) Scheduling (Quantum = 2 ms)

Gantt Chart

With a 2 ms quantum, and assuming the common textbook convention that a newly arriving process enters the ready queue ahead of the process just preempted, execution proceeds as follows:

| A | B | C | A | D | B | C | A | D | C | A | D | C |
0 2 4 6 8 10 12 14 16 18 20 22 23 26

Completion Times

A: 22
B: 12
C: 26
D: 23

TAT and WT Calculation

Process  Completion Time  Arrival Time  Burst Time  Turnaround Time  Waiting Time
A        22               0             8           22 - 0 = 22      22 - 8 = 14
B        12               1             4           12 - 1 = 11      11 - 4 = 7
C        26               2             9           26 - 2 = 24      24 - 9 = 15
D        23               3             5           23 - 3 = 20      20 - 5 = 15

Average TAT (RR) = (22 + 11 + 24 + 20) / 4 = 19.25


Average WT (RR) = (14 + 7 + 15 + 15) / 4 = 12.75
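
A minimal C sketch of this round-robin run, using an explicit FIFO ready queue and the queueing convention stated above (other conventions give slightly different charts):

```c
/* Round-robin sketch (quantum = 2 ms): arrivals enter the ready queue
   before a preempted process is put back. */
#include <stdio.h>

int main(void) {
    const char *name[] = {"A", "B", "C", "D"};
    int arrival[] = {0, 1, 2, 3};
    int remaining[] = {8, 4, 9, 5};
    int burst[] = {8, 4, 9, 5};
    int n = 4, q = 2, t = 0, done = 0, next = 0;
    int queue[64], head = 0, tail = 0;

    queue[tail++] = next++;                     /* A is ready at t = 0 */
    while (done < n) {
        if (head == tail) {                     /* idle: jump to next arrival */
            t = arrival[next];
            queue[tail++] = next++;
            continue;
        }
        int p = queue[head++];
        int slice = remaining[p] < q ? remaining[p] : q;
        t += slice;
        remaining[p] -= slice;
        while (next < n && arrival[next] <= t)  /* enqueue new arrivals first */
            queue[tail++] = next++;
        if (remaining[p] > 0) {
            queue[tail++] = p;                  /* then requeue the preempted job */
        } else {
            printf("%s: completion=%d TAT=%d WT=%d\n", name[p], t,
                   t - arrival[p], t - arrival[p] - burst[p]);
            done++;
        }
    }
    return 0;
}
```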

Summary of Results

Algorithm  Average Turnaround Time  Average Waiting Time
FCFS       15.25                    8.75
SJF        14.25                    7.75
SRTF       13                       6.5
RR         19.25                    12.75

This summary gives an overview of each algorithm's performance on the given processes: preemptive SRTF achieves the lowest average turnaround and waiting times, while Round Robin trades the highest averages here for fair, responsive time-slicing.

To build the Gantt chart for Shortest Remaining Time First (SRTF) scheduling with a 5 ms time quantum, we apply the SRTF rule at each scheduling decision point: whenever a quantum expires or a process completes, the scheduler picks the arrived process with the least remaining burst time. (Because of the quantum, a process that arrives mid-slice waits until the next decision point rather than preempting immediately.)

Given Data

Process  Arrival Time  CPU Time
P1       0             13
P2       2             6
P3       3             10
P4       5             8

Execution Order

Let's break down the execution of processes in a timeline format:

1. At time 0: only P1 has arrived.
   P1 runs from time 0 to 5 (5 ms). Remaining time for P1 = 8 ms.
2. At time 5: all four processes have arrived (P2 at 2 ms, P3 at 3 ms, P4 at 5 ms). Remaining times: P1 = 8 ms, P2 = 6 ms, P3 = 10 ms, P4 = 8 ms. P2 is shortest, so P2 gets the CPU.
   P2 runs from time 5 to 10 (5 ms). Remaining time for P2 = 1 ms.
3. At time 10: P2 has only 1 ms left, still the least remaining time.
   P2 runs from time 10 to 11 (1 ms). P2 finishes at time 11.
4. At time 11: P1 and P4 each have 8 ms remaining and P3 has 10 ms. Breaking the tie by arrival time, P1 gets the CPU.
   P1 runs from time 11 to 16 (5 ms). Remaining time for P1 = 3 ms.
5. At time 16: P1 has 3 ms remaining, P3 has 10 ms, and P4 has 8 ms. P1 still has the least remaining time.
   P1 runs from time 16 to 19 (3 ms). P1 finishes at time 19.
6. At time 19: P4 has 8 ms remaining and P3 has 10 ms. P4 has the least remaining time.
   P4 runs from time 19 to 24 (5 ms). Remaining time for P4 = 3 ms.
7. At time 24: P4 has 3 ms remaining versus P3's 10 ms.
   P4 runs from time 24 to 27 (3 ms). P4 finishes at time 27.
8. At time 27: only P3 remains.
   P3 runs from time 27 to 37 (two quanta back to back, 10 ms). P3 finishes at time 37.

Gantt Chart

| P1 | P2 | P1 | P4 | P3 |
0 5 11 19 27 37

Summary of Execution

P1: runs from 0 to 5 (5 ms), then 11 to 16 (5 ms), and finally 16 to 19 (3 ms).
P2: runs from 5 to 10 (5 ms) and 10 to 11 (1 ms), then it finishes.
P4: runs from 19 to 24 (5 ms) and then from 24 to 27 (3 ms) before it finishes.
P3: runs from 27 to 37 (10 ms).

Completion Times

P1: 19
P2: 11
P3: 37
P4: 27

Turnaround Time and Waiting Time Calculation

Turnaround Time (TAT) = Completion Time - Arrival Time


Waiting Time (WT) = TAT - Burst Time

Process  Completion Time  Arrival Time  CPU Time  Turnaround Time  Waiting Time
P1       19               0             13        19 - 0 = 19      19 - 13 = 6
P2       11               2             6         11 - 2 = 9       9 - 6 = 3
P3       37               3             10        37 - 3 = 34      34 - 10 = 24
P4       27               5             8         27 - 5 = 22      22 - 8 = 14

Final Results

Average Turnaround Time = (19 + 9 + 34 + 22) / 4 = 21


Average Waiting Time = (6 + 3 + 24 + 14) / 4 = 11.75

This analysis provides a clear view of how SRTF scheduling works with the given set of processes.
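
Because this variant only re-evaluates at quantum expiries and completions, it is worth checking by machine. Below is a minimal C sketch of that rule (data from the table above; ties are broken by arrival order, and arrivals inside a slice do not preempt it):

```c
/* Quantum-driven SRTF sketch: at each decision point, pick the arrived
   process with the least remaining time and run it for up to one quantum. */
#include <stdio.h>

int main(void) {
    const char *name[] = {"P1", "P2", "P3", "P4"};
    int arrival[] = {0, 2, 3, 5};
    int remaining[] = {13, 6, 10, 8};
    int burst[] = {13, 6, 10, 8};
    int n = 4, q = 5, t = 0, done = 0;

    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* least remaining among arrived */
            if (arrival[i] <= t && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) { t++; continue; } /* CPU idle */
        int slice = remaining[pick] < q ? remaining[pick] : q;
        printf("%s runs %d-%d\n", name[pick], t, t + slice);
        t += slice;
        remaining[pick] -= slice;
        if (remaining[pick] == 0) {
            printf("%s: completion=%d TAT=%d WT=%d\n", name[pick], t,
                   t - arrival[pick], t - arrival[pick] - burst[pick]);
            done++;
        }
    }
    return 0;
}
```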

Here’s an overview of the main features of multiprocessor scheduling, multiprocessor feedback queue
scheduling, and multilevel feedback scheduling:

Main Features of Multiprocessor Scheduling [3]

1. Load Balancing: One of the primary goals is to evenly distribute processes across the available processors to maximize CPU utilization and minimize idle time. Effective load balancing ensures that no single processor is overwhelmed while others are underutilized.
2. Scalability: Multiprocessor scheduling should efficiently scale with the addition of more
processors. As more CPUs are added, the scheduling mechanism should maintain or improve
performance without significant overhead.
3. Synchronization and Resource Sharing: Since multiple processors may access shared resources,
the scheduling mechanism must ensure that processes are synchronized properly to prevent
conflicts and ensure data consistency. This includes managing locks, semaphores, and other
synchronization mechanisms.

Multiprocessor Feedback Queue Scheduling [2]

Multiprocessor feedback queue scheduling is a variant of the feedback scheduling algorithm that
operates across multiple processors. In this approach:

Processes are placed in multiple queues based on their priority and CPU burst time.
A process can move between queues based on its behavior; for example, if a process uses too
much CPU time, it may be moved to a lower-priority queue.
The system dynamically adjusts the priority of processes to optimize CPU utilization and reduce
turnaround time.

This method aims to adaptively allocate CPU time based on the changing nature of processes, allowing
for more responsiveness to real-time requirements in a multiprocessor environment.

Multilevel Feedback Scheduling [5]

Multilevel feedback scheduling (MLFQ) is a sophisticated scheduling algorithm designed to efficiently manage processes with varying priorities and resource demands. Here are its key features:

1. Multiple Queues: MLFQ uses several queues, each with its priority level. Higher-priority queues
are served before lower-priority ones. Typically, processes start in the highest priority queue and
are moved down as they consume more CPU time.

2. Dynamic Priority Adjustment: A process's priority can change based on its CPU usage. If a
process uses less CPU than allocated (i.e., it doesn’t use its entire time quantum), it may be
promoted to a higher-priority queue. Conversely, if it uses its entire time quantum, it may be
demoted.
3. Time Quantum: Each queue has its time quantum, which decreases with priority. This means that
higher-priority queues are serviced more frequently, allowing for shorter waiting times for critical
tasks.

4. Starvation Prevention: To prevent starvation of lower-priority processes, MLFQ ensures that processes eventually get a chance to execute, even if they are demoted to lower-priority queues. This is often achieved through aging mechanisms.
5. Adaptability: MLFQ is adaptable to different workloads, allowing it to optimize the scheduling of
both CPU-bound and I/O-bound processes effectively. It efficiently balances the needs of short,
interactive jobs with long-running batch processes.
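
To make the queue mechanics concrete, here is a minimal MLFQ sketch in C. It is illustrative only: two made-up jobs, three levels with quanta 2/4/8 ms, demotion when a job uses its whole quantum, and no aging or I/O handling:

```c
/* Minimal MLFQ sketch: highest non-empty queue runs first; a job that
   uses its full quantum is demoted one level. */
#include <stdio.h>

#define LEVELS 3
#define MAXJOBS 16

int quantum[LEVELS] = {2, 4, 8};
int queue[LEVELS][MAXJOBS], qlen[LEVELS];

void enqueue(int level, int job) { queue[level][qlen[level]++] = job; }

int main(void) {
    int remaining[] = {7, 3};                  /* hypothetical CPU demands */
    enqueue(0, 0);                             /* all jobs start at top priority */
    enqueue(0, 1);
    int t = 0;
    for (;;) {
        int lvl = -1;
        for (int l = 0; l < LEVELS; l++)       /* highest non-empty queue */
            if (qlen[l] > 0) { lvl = l; break; }
        if (lvl < 0) break;                    /* nothing left to run */
        int job = queue[lvl][0];               /* FIFO within a level */
        for (int i = 1; i < qlen[lvl]; i++) queue[lvl][i - 1] = queue[lvl][i];
        qlen[lvl]--;
        int slice = remaining[job] < quantum[lvl] ? remaining[job] : quantum[lvl];
        t += slice;
        remaining[job] -= slice;
        printf("t=%2d: job %d ran %d ms at level %d\n", t, job, slice, lvl);
        if (remaining[job] > 0)                /* used full quantum: demote */
            enqueue(lvl < LEVELS - 1 ? lvl + 1 : lvl, job);
    }
    return 0;
}
```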

Overall, multilevel feedback scheduling provides a versatile and responsive approach to process
scheduling, making it suitable for a variety of operating system environments.

Here’s a comprehensive overview of Inter-Process Communication (IPC) and related concepts:

What is IPC?

Inter-Process Communication (IPC) refers to a set of techniques that allows processes to communicate
and synchronize their actions while executing concurrently. IPC is essential in multi-process
environments where processes need to share data and resources. Common IPC mechanisms include
pipes, message queues, shared memory, semaphores, and sockets.

Co-operating Process

A co-operating process is one that can be affected by the actions of other processes. These processes
can share resources and information, which enhances efficiency and resource utilization. Co-operating
processes require mechanisms for communication and synchronization, such as IPC, to avoid conflicts
and maintain data consistency.

Race Conditions

A race condition occurs when two or more processes access shared data concurrently, and the outcome
of the execution depends on the timing of their execution. Race conditions can lead to unpredictable
behavior and data corruption.
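
A classic way to see a race condition is two threads incrementing a shared counter without synchronization: `counter++` compiles to separate load, add, and store steps that can interleave. A minimal C sketch (compile with -pthread; the final value usually falls short of 2,000,000):

```c
/* Race condition demo: unsynchronized increments lose updates. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;               /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```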

Critical Section

The critical section is a segment of code where a process accesses shared resources (like variables,
files, or hardware) that could be concurrently modified by other processes. Only one process should
execute in its critical section at a time to avoid inconsistencies and data corruption.

Requirements for the Solution of the Critical Section Problem

The solution to the critical section problem must satisfy the following conditions:

1. Mutual Exclusion: Only one process can execute in its critical section at a time.
2. Progress: If no process is executing in its critical section and there are processes waiting to enter
their critical sections, the selection of the process that will enter the critical section cannot be
postponed indefinitely.
3. Bounded Waiting: There must be a limit on the number of times that other processes are allowed
to enter their critical sections after a process has requested to enter its critical section and before
that request is granted.

TestAndSet

TestAndSet is an atomic instruction used to implement mutual exclusion. It reads the value of a variable
and sets it to a new value in a single atomic operation. This is useful for managing access to shared
resources, ensuring that only one process can enter the critical section.
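
As a sketch, C11 exposes this primitive as atomic_flag_test_and_set, which is enough to build a simple busy-waiting spinlock (illustrative only):

```c
/* Spinlock built on an atomic test-and-set. */
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* Atomically read the old value and set the flag; spin while it was
       already set, i.e. while someone else holds the lock. */
    while (atomic_flag_test_and_set(&lock))
        ;                        /* busy-wait */
}

void release(void) {
    atomic_flag_clear(&lock);    /* reset so another process may enter */
}
```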

Swap

The Swap operation is another atomic operation that exchanges the values of two variables. It's often
used in implementing locks or semaphores for mutual exclusion, allowing processes to synchronize their
access to shared resources.
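
C11's atomic_exchange is an atomic swap, so the same lock idea can be sketched as follows: keep swapping 1 into the lock variable until the old value pulled out is 0 (free):

```c
/* Lock sketch using an atomic swap (exchange). */
#include <stdatomic.h>

atomic_int lock2 = 0;            /* 0 = free, 1 = held */

void acquire_swap(void) {
    int key = 1;
    while (key == 1)             /* swap until we pull out a 0 */
        key = atomic_exchange(&lock2, 1);
}

void release_swap(void) {
    atomic_store(&lock2, 0);
}
```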

Strict Alternation Problem

Strict alternation is a method of synchronizing processes by forcing them to alternate access to the
critical section. While it guarantees mutual exclusion, it can lead to inefficiencies, such as busy waiting
and potential starvation if one process has a significantly longer critical section than the other.
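
A sketch of strict alternation for two processes (the turn variable is declared atomic so each spin actually observes the other thread's store; note the busy-waiting and forced alternation described above):

```c
/* Strict alternation: process i may enter only when turn == i. */
#include <stdatomic.h>

atomic_int turn_var = 0;         /* 0 or 1 */

void enter_strict(int self) {
    while (atomic_load(&turn_var) != self)
        ;                        /* busy-wait even if the CS is free */
}

void leave_strict(int self) {
    atomic_store(&turn_var, 1 - self);   /* hand the turn to the other */
}
```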

Peterson’s Solution

Peterson’s solution is a classic algorithm for two processes to achieve mutual exclusion. It uses two
shared variables: a flag array (to indicate whether a process wants to enter the critical section) and a
turn variable (to indicate whose turn it is). The algorithm ensures that both mutual exclusion and
progress are satisfied.
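
A C sketch of Peterson's algorithm for threads 0 and 1 (the shared variables are declared atomic; with plain variables, modern compilers and CPUs may reorder the accesses and break the algorithm):

```c
/* Peterson's solution: flag[] says who wants in, turn says who yields. */
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];
atomic_int turn;

void enter_critical(int self) {
    int other = 1 - self;
    atomic_store(&flag[self], true);   /* I want to enter */
    atomic_store(&turn, other);        /* but let the other go first */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                              /* busy-wait */
}

void leave_critical(int self) {
    atomic_store(&flag[self], false);
}
```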

Semaphores

A semaphore is a synchronization primitive used to control access to a common resource in concurrent programming. Semaphores can be:

Counting Semaphores: Can take any non-negative integer value and are used to manage a
resource pool.
Binary Semaphores (Mutexes): Can only take values 0 or 1, providing mutual exclusion.

Semaphores support two operations:

1. wait (P): decrements the semaphore value; if the resulting value is negative, the calling process blocks.
2. signal (V): increments the semaphore value; if any process is blocked on the semaphore, one of them is woken.
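
With POSIX semaphores, these two operations are sem_wait and sem_post. A minimal sketch:

```c
/* POSIX semaphore sketch of wait (P) and signal (V). */
#include <semaphore.h>

sem_t s;

int main(void) {
    sem_init(&s, 0, 1);   /* binary semaphore: initial value 1 */
    sem_wait(&s);         /* P: value 1 -> 0; a second wait would block */
    /* ... critical section ... */
    sem_post(&s);         /* V: value 0 -> 1; wakes a blocked waiter if any */
    sem_destroy(&s);
    return 0;
}
```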

Producer-Consumer Problem (Bounded Buffer Problem)

The Producer-Consumer Problem is a classic synchronization problem where two processes (producer
and consumer) share a fixed-size buffer. The producer generates data and puts it in the buffer, while the
consumer takes data from the buffer. Solutions typically use semaphores or condition variables to
ensure that the producer doesn't add data when the buffer is full and the consumer doesn't remove
data when the buffer is empty.
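
The classic semaphore solution uses three semaphores: `empty` counts free slots, `full` counts filled slots, and a binary `mutex` guards the buffer indices. A runnable C sketch (buffer size and item counts are arbitrary choices for illustration):

```c
/* Bounded buffer with the three-semaphore producer-consumer solution. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                       /* buffer capacity */
int buffer[N], in = 0, out = 0;
sem_t empty, full, mutex;

void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);         /* block if the buffer is full */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);          /* one more item available */
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);          /* block if the buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);         /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```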

Dining Philosophers Problem

The Dining Philosophers Problem is a synchronization problem that illustrates the challenges of
resource sharing among multiple processes. It involves five philosophers sitting at a table, each needing
two forks to eat. The challenge is to design a protocol that ensures that no philosopher starves while
preventing deadlock. Solutions often involve using semaphores or resource hierarchy to manage fork
acquisition.
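
A sketch of the resource-hierarchy solution mentioned above: every philosopher picks up the lower-numbered fork first, which breaks the circular wait that causes deadlock:

```c
/* Dining philosophers with resource ordering (no deadlock possible). */
#include <pthread.h>
#include <stdio.h>

#define N 5
pthread_mutex_t fork_mx[N];

void *philosopher(void *arg) {
    int id = *(int *)arg;
    int left = id, right = (id + 1) % N;
    int first  = left < right ? left : right;   /* lower-numbered fork first */
    int second = left < right ? right : left;

    pthread_mutex_lock(&fork_mx[first]);
    pthread_mutex_lock(&fork_mx[second]);
    printf("philosopher %d eats\n", id);
    pthread_mutex_unlock(&fork_mx[second]);
    pthread_mutex_unlock(&fork_mx[first]);
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int ids[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_mx[i], NULL);
    for (int i = 0; i < N; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, philosopher, &ids[i]);
    }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```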

Readers-Writers Problem

The Readers-Writers Problem is another classic synchronization issue where a shared resource (like a
database) can be read by multiple readers or modified by a single writer. The challenge is to allow
multiple readers to access the resource simultaneously while ensuring that writers have exclusive
access. Solutions involve semaphores or mutexes to manage access and priority, ensuring fairness and
avoiding starvation.
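
A sketch of the reader-preference solution: a counter tracks active readers; the first reader locks writers out, and the last reader lets them back in. Note that `rw` is a semaphore rather than a mutex, because the reader releasing it may not be the thread that acquired it, and that this variant can starve writers:

```c
/* Readers-writers, reader-preference variant. */
#include <pthread.h>
#include <semaphore.h>

pthread_mutex_t count_mx = PTHREAD_MUTEX_INITIALIZER; /* guards read_count */
sem_t rw;                                             /* resource lock */
int read_count = 0;

void rw_setup(void) { sem_init(&rw, 0, 1); }          /* call once at startup */

void reader_enter(void) {
    pthread_mutex_lock(&count_mx);
    if (++read_count == 1)
        sem_wait(&rw);            /* first reader locks writers out */
    pthread_mutex_unlock(&count_mx);
}

void reader_exit(void) {
    pthread_mutex_lock(&count_mx);
    if (--read_count == 0)
        sem_post(&rw);            /* last reader readmits writers */
    pthread_mutex_unlock(&count_mx);
}

void writer_enter(void) { sem_wait(&rw); }            /* exclusive access */
void writer_exit(void)  { sem_post(&rw); }
```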

Monitor

A monitor is a high-level synchronization construct that provides a convenient way to manage shared
resources. It encapsulates shared data and operations on that data, ensuring that only one process can
execute within the monitor at a time. Monitors use condition variables for signaling and waiting, making
them suitable for implementing complex synchronization scenarios.
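
C has no monitor construct, but a mutex plus a condition variable reproduces its behavior: a thread "enters the monitor" by taking the mutex and waits on a condition while inside it. A sketch of a monitor-style resource counter:

```c
/* Monitor sketch: mutex for mutual exclusion, condition variable for waiting. */
#include <pthread.h>

pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
int resource_count = 0;

void release_resource(void) {
    pthread_mutex_lock(&mon);          /* only one thread inside at a time */
    resource_count++;
    pthread_cond_signal(&not_empty);   /* wake one waiting thread */
    pthread_mutex_unlock(&mon);
}

void acquire_resource(void) {
    pthread_mutex_lock(&mon);
    while (resource_count == 0)              /* re-check: wakeups may be spurious */
        pthread_cond_wait(&not_empty, &mon); /* releases mon while waiting */
    resource_count--;
    pthread_mutex_unlock(&mon);
}
```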

This overview encapsulates the key concepts related to IPC, co-operating processes, and various
synchronization problems, highlighting their significance in concurrent programming and operating
systems.

Here’s a concise overview of message passing, its purpose, and the concepts of direct and indirect
communication:

What is Message Passing and Why is it Used? [2]

Message Passing is a method of communication used in concurrent and distributed systems where
processes exchange messages to share information or synchronize their actions. In this model,
processes do not share memory; instead, they send and receive messages through a communication
channel.
Purpose of Message Passing:

1. Process Communication: It allows separate processes, which may be running on the same
machine or on different machines, to communicate and coordinate their actions without shared
memory.
2. Synchronization: It helps synchronize processes by allowing them to signal each other about state
changes or the availability of resources.

Direct and Indirect Communication [3]

Direct Communication:

In direct communication, the sender and receiver processes must explicitly name each other to
send or receive messages. This can be implemented through specific identifiers (e.g., process IDs).
Characteristics:
Simplicity: Easier to understand since communication is straightforward and involves known
sender and receiver processes.
Immediate Feedback: The sender knows who the receiver is and can receive immediate
confirmation of message delivery.

Indirect Communication:

In indirect communication, messages are sent to and received from shared mailboxes or queues
instead of directly between sender and receiver processes. The sender does not need to know the
identity of the receiver.
Characteristics:
Flexibility: It allows processes to communicate without needing to know the details of each
other, enabling more flexible system design.
Decoupling: Sender and receiver processes are decoupled, which can improve modularity
and scalability in distributed systems.
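
As a small illustration of indirect communication, a POSIX pipe acts as a shared channel: the writer never names the reader; it just deposits bytes into the channel. A runnable C sketch where a parent sends a message to its child:

```c
/* Indirect communication through a pipe (a shared channel, not a named peer). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) return 1;     /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                /* child acts as the receiver */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }
    close(fd[0]);                     /* parent acts as the sender */
    const char *msg = "hello through the mailbox";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```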

These communication methods are fundamental in designing concurrent and distributed systems,
ensuring efficient and organized interactions between processes.

Here’s a detailed overview of co-operating processes, race conditions, critical sections, and their related
concepts:

What are Co-operating Processes? [2]

Co-operating processes are processes that can affect and are affected by the execution of other
processes. They can share resources and information, allowing them to work together to achieve a
common goal. In a co-operating process environment, synchronization and communication mechanisms
(like message passing or shared memory) are crucial to ensure proper coordination and avoid conflicts.

Advantages of Co-operating Processes [3]

1. Resource Sharing: Co-operating processes can share data and resources, which improves
efficiency and resource utilization within the system.
2. Increased Throughput: By dividing tasks among multiple processes, co-operation can lead to
increased throughput, as processes can work in parallel rather than sequentially.
3. Improved Responsiveness: Co-operating processes can enhance the responsiveness of
applications, as they can handle different parts of a task simultaneously, making the overall system
more efficient.

What Do You Mean by Race Condition? [2]

A race condition occurs when two or more processes access shared data concurrently, and the outcome
of the execution depends on the timing of their execution. This can lead to unpredictable results and
data corruption because the processes may interfere with each other's operations.

Define Critical Section. [3]

The critical section is a segment of code in which a process accesses shared resources (like variables,
files, or hardware) that could be concurrently modified by other processes. To avoid data
inconsistencies, only one process should be allowed to execute in its critical section at any given time.

Mechanisms to Control Access to Critical Sections [4]

Several mechanisms can control access to critical sections to prevent race conditions and ensure mutual
exclusion:

1. Mutexes (Mutual Exclusion Locks): Simple locks that allow only one process to enter the critical
section.
2. Semaphores: Counting mechanisms that can be used to manage access to shared resources,
ensuring mutual exclusion.
3. Monitors: High-level synchronization constructs that encapsulate shared data and the procedures
that operate on that data, ensuring that only one process can execute within the monitor at a time.
4. Atomic Operations: Operations that complete without interruption, ensuring that critical section
access is controlled.

What is the Critical Section Problem? [1]

The critical section problem refers to the challenge of designing a protocol that ensures that multiple
processes can safely access shared resources without causing inconsistencies or data corruption. It
encompasses the requirements and conditions necessary for processes to enter and exit their critical
sections.

Requirements a Critical Section Problem Must Satisfy [3]

To solve the critical section problem effectively, a solution must satisfy the following requirements:

1. Mutual Exclusion: Only one process can be in its critical section at any given time.
2. Progress: If no process is executing in its critical section, and there are processes waiting to enter
their critical sections, the selection of the process that will enter the critical section cannot be
postponed indefinitely.
3. Bounded Waiting: There must be a limit on the number of times that other processes can enter
their critical sections after a process has requested to enter its critical section and before that
request is granted.

Conditions for a Solution to the Mutual Exclusion Problem [4]

The solution to the mutual exclusion problem must meet the following conditions:

1. Mutual Exclusion: Only one process can execute in its critical section at a time.
2. No Starvation: No process should be denied access to its critical section indefinitely.
3. No Deadlock: No two processes should wait for each other indefinitely.
4. Fairness: Each process should get a fair chance to enter its critical section without unfair delays.

