
RAJIV GANDHI COLLEGE OF ENGINEERING AND TECHNOLOGY, PUDUCHERRY

UNIT - II

Threads – Overview – Threading issues - CPU Scheduling – Basic Concepts – Scheduling Criteria – Scheduling
Algorithms – Multiple-Processor Scheduling – Real Time Scheduling - The Critical-Section Problem –
Synchronization Hardware – Semaphores – Classic problems of Synchronization – Critical regions – Monitors.

PART-A

1. What do you mean by critical region?(APRIL-2014)

➢ The critical region is a high-level language synchronization construct. It requires that a variable v of type T, which is to be shared among many processes, be declared as,

v: shared T;

➢ The variable v can be accessed only inside a region statement of the following form.

region v when B do S;

➢ This construct means that, while statement S is being executed, no other process can access the variable v.

2. List the different CPU scheduling algorithms?(APRIL-2014,NOV-2016)

• The six different types of CPU scheduling algorithms are:

1. First-Come-First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling

3.Specify any four issues considered with multithreaded programs?(APRIL-2015)

• Increased Complexity − Multithreaded processes are quite complicated. Coding for these can only be handled by expert programmers.

• Complications due to Concurrency − It is difficult to handle concurrency in multithreaded processes. This may lead to complications and future problems.

• Difficult to Identify Errors − Identification and correction of errors is much more difficult in multithreaded processes as compared to single-threaded processes.

• Testing Complications − Testing is a complicated process in multithreaded programs as compared to single-threaded programs. This is because defects can be timing related and not easy to identify.


4.Compare between user threads and kernel threads?(NOV-2014,NOV-2018)

User Thread:

➢ User threads are implemented by users.

➢ User-level threads are faster to create and manage.

➢ Examples: Java threads, POSIX threads.

Kernel Thread:

➢ Kernel threads are implemented by the OS.

➢ Kernel-level threads are slower to create and manage.

➢ Examples: Windows and Solaris kernel threads.

5.List the difference between preemptive and non-preemptive scheduling?(NOV-2014,NOV-2018)

Preemptive scheduling:

➢ The CPU is allocated to a process for a limited time.

➢ A process can be interrupted in between.

➢ If a high-priority process frequently arrives in the ready queue, a low-priority process may starve.

Non-preemptive scheduling:

➢ The CPU is allocated to a process until it terminates or switches to the waiting state.

➢ A process cannot be interrupted until it completes its burst time.

➢ If a process with a long burst time is running on the CPU, another process with a smaller CPU burst time may starve.

6.State any two classic problems of synchronization?(APRIL-2015)

The following problems of synchronization are considered as classical problems:


1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem
7.What is a Semaphore? (NOV-2015,APRIL-2016)

Semaphores are integer variables that are used to solve the critical-section problem. They are accessed through two atomic operations, wait and signal, which are used for process synchronization.

8.What are threads?(NOV-2015)

A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as lightweight process.


Types of Thread:

➢ User-level Thread

➢ Kernel-level Thread

9.What is Context Switch? (APRIL-2016)

(OR)

Explain Context switch and function of Context Switcher?(NOV-2017)

• When CPU switches to another process, the system must save the state of the old process
and load the saved state for the new process.
• Context-switch time is overhead; the system does no useful work while switching.
• Context-switch time is dependent on hardware support.
10.What is Kernel? (NOV-2016)

The kernel is the central component of an operating system that manages the operations of the computer and its hardware. It basically manages the operations of memory and CPU time and is the core component of an operating system. The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.

11.List out the three conditions for process synchronization?(APRIL-2017)

Process synchronization means sharing system resources by processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data.

A solution to the critical section problem must satisfy the following three conditions:

1. Mutual Exclusion: Out of a group of cooperating processes, only one process can be in its critical section at a given point of time.

2. Progress: If no process is in its critical section, and if one or more threads want to execute their critical section, then any one of these threads must be allowed to get into its critical section.

3. Bounded Waiting: After a process makes a request for getting into its critical section, there is a limit on how many other processes can get into their critical sections before this process's request is granted. After the limit is reached, the system must grant the process permission to get into its critical section.

12.Discriminate deadlock and starvation?(APRIL-2017)

Deadlock:

Deadlock occurs when each process holds a resource and waits for another resource held by some other process.

Starvation:

Starvation is the problem that occurs when high priority processes keep executing and
low priority processes get blocked for indefinite time.


13. Define busy waiting and Spinlock?(NOV-2017,MAY-2018)

When a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This is called busy waiting, and this type of semaphore is also called a spinlock, because the process "spins" while waiting for the lock.

14.Differentiate Single threaded and multi threaded processes?(MAY-2018)

• Single threaded: refers to executing an entire process from beginning to end without interruption by another thread.

• Multi threaded: refers to allowing multiple threads within a process such that they execute independently but share their resources.

15. What is meant by Job Scheduling?(NOV-2019)

Job scheduling is the process of allocating system resources to many different tasks by an operating system (OS). The system handles prioritized job queues that are awaiting CPU time, and it must determine which job is to be taken from which queue and the amount of time to be allocated to the job.

16.List out the States of Thread?(NOV-2019)

A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as lightweight process.

When a thread moves through the system, it is always in one of the five states:

(1) Ready
(2) Running
(3) Waiting
(4) Delayed
(5) Blocked
PART-B


1. Discuss in detail about the various CPU scheduling algorithms?(APRIL-2014,APRIL-


2015,NOV-2015)

(OR)

List the important factors in scheduling and explain, with an example and Gantt chart, FCFS and Round Robin scheduling?(NOV-2017)

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

• FCFS Scheduling
• Round Robin Scheduling.
• SJF Scheduling.
• SRT Scheduling.
• Priority Scheduling.
• Multilevel Queue Scheduling.
• Multilevel Feedback Queue Scheduling.

First-Come-First-Served (FCFS) Scheduling

Other names of this algorithm are:


• First-In-First-Out (FIFO)
• Run-to-Completion
• Run-Until-Done
• First-Come-First-Served is the simplest scheduling algorithm. Processes are dispatched according to their arrival time on the ready queue. Being a non-preemptive discipline, once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait.

The First-Come-First-Served algorithm is rarely used as a master scheme in modern operating systems, but it is often embedded within other schemes.

Example:

Process Burst Time

P1 24

P2 3

P3 3

Suppose that the processes arrive in the order: P1 , P2 , P3


The Gantt Chart for the schedule is:


P1 P2 P3

0 24 27 30
Waiting time for P1 = 0; P2 = 24; P3 = 27

Average waiting time: (0 + 24 + 27)/3 = 17

Suppose that the processes arrive in the order: P2 , P3 , P1

The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

Waiting time for P1 = 6; P2 = 0; P3 = 3

Average waiting time: (6 + 0 + 3)/3 = 3
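To make the arithmetic concrete, here is a minimal C sketch that computes the FCFS waiting times for the first arrival order, assuming all three processes arrive at time 0:

#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };   /* bursts of P1, P2, P3 */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total += wait;
        wait += burst[i];         /* every later process also waits for this burst */
    }
    printf("average waiting time = %.2f\n", (double)total / n);  /* prints 17.00 */
    return 0;
}

Changing the array to { 3, 3, 24 } reproduces the second schedule, whose average is 3.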

ROUND ROBIN

• In the round robin scheduling, processes are dispatched in a FIFO manner but are given a
limited amount of CPU time called a time-slice or a quantum. If a process does not
complete before its CPU-time expires, the CPU is preempted and given to the next process
waiting in a queue.
• The preempted process is then placed at the back of the ready list. Round Robin Scheduling
is preemptive (at the end of time-slice) therefore it is effective in time-sharing
environments in which the system needs to guarantee reasonable response times for
interactive users.

1. Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the ready
queue.
2. If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more
than (n-1)q time units.
3. Performance

-> q large: RR behaves like FIFO

-> q small: q must be large with respect to context-switch time; otherwise, overhead is too high.


Example (time quantum = 20 milliseconds):

Process Burst Time

P1 53

P2 17

P3 68

P4 24

The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

->Typically, higher average turnaround than SJF, but better response
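The Gantt chart above can be reproduced with a small round-robin simulator. The following C sketch assumes all four processes arrive at time 0 and a quantum of 20 ms:

#include <stdio.h>

int main(void)
{
    int remaining[] = { 53, 17, 68, 24 };   /* bursts of P1..P4 */
    int n = 4, quantum = 20, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {       /* cycle through the processes */
            if (remaining[i] == 0)
                continue;                   /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%3d: P%d runs for %d\n", time, i + 1, slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                done++;
        }
    }
    printf("all processes finish at t=%d\n", time);   /* prints 162 */
    return 0;
}

For this data, cycling the array happens to visit processes in the same order as a true FIFO ready queue, so the printed slices match the Gantt chart above.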

Shortest-Job-First (SJF) Scheduling

Other name of this algorithm is Shortest-Process-Next (SPN).

• Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst. SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal.

Like FCFS, SJF is non-preemptive; therefore, it is not useful in a timesharing environment in which reasonable response time must be guaranteed.

1. Associate with each process the length of its next CPU burst. Use these lengths to schedule
the process with the shortest time
2. Two schemes:

❖ nonpreemptive – once CPU given to the process it cannot be preempted until


completes its CPU burst
❖ preemptive – if a new process arrives with CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

3. SJF is optimal – gives minimum average waiting time for a given set of processes

Process Arrival Time Burst Time


P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

->SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

->Average waiting time = (9 + 1 + 0 + 2)/4 = 3
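This schedule can be checked with a tick-by-tick C sketch of SRTF; the arrival and burst times are taken from the table, and at every time unit the arrived process with the smallest remaining time runs:

#include <stdio.h>

int main(void)
{
    int arrival[] = { 0, 2, 4, 5 };
    int burst[]   = { 7, 4, 1, 4 };
    int remaining[4], finish[4], n = 4, done = 0;

    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];

    for (int t = 0; done < n; t++) {
        int next = -1;
        /* pick the arrived process with the smallest remaining time */
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && remaining[i] > 0 &&
                (next < 0 || remaining[i] < remaining[next]))
                next = i;
        if (next < 0)
            continue;                        /* CPU idle this tick */
        if (--remaining[next] == 0) {
            finish[next] = t + 1;
            done++;
        }
    }

    int total = 0;
    for (int i = 0; i < n; i++)
        total += finish[i] - arrival[i] - burst[i];   /* waiting time */
    printf("average waiting time = %.2f\n", (double)total / n);  /* prints 3.00 */
    return 0;
}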

Shortest-Remaining-Time (SRT) Scheduling

• The SRT is the preemptive counterpart of SJF and is useful in a time-sharing environment.
• In SRT scheduling, the process with the smallest estimated run-time to completion is run next, including new arrivals.
• In the SJF scheme, once a job begins executing, it runs to completion.
• In the SRT scheme, a running process may be preempted by a new arrival process with a shorter estimated run-time.
• The SRT algorithm has higher overhead than its counterpart SJF.
• The SRT must keep track of the elapsed time of the running process and must handle occasional preemptions.
• In this scheme, small arriving processes will run almost immediately. However, longer jobs have an even longer mean waiting time.

Priority Scheduling
1. A priority number (integer) is associated with each process.
2. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). This can be done either:

-> preemptively, or

-> non-preemptively

3. SJF is a priority scheduling algorithm where priority is the predicted next CPU burst time.
4. Problem: Starvation – low-priority processes may never execute.
5. Solution: Aging – as time progresses, increase the priority of the process.

The basic idea is straightforward: each process is assigned a priority, and the highest-priority process is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.


An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted)
next CPU burst. That is, the longer the CPU burst, the lower the priority and vice versa.

Priority can be defined either internally or externally. Internally defined priorities use some
measurable quantities or qualities to compute priority of a process.

Examples of internal priorities are:

• Time limits.
• Memory requirements.
• File requirements, for example, the number of open files.
• CPU versus I/O requirements.

Externally defined priorities are set by criteria that are external to operating system such as

• The importance of process.


• Type or amount of funds being paid for computer use.
• The department sponsoring the work.
• Politics.

Priority scheduling can be either preemptive or non-preemptive.

• A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
• A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.

A major problem with priority scheduling is indefinite blocking or starvation. A solution to the
problem of indefinite blockage of the low-priority process is aging. Aging is a technique of
gradually increasing the priority of processes that wait in the system for a long period of time.
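As a small illustration of aging, the following C sketch raises the priority of every waiting process on each scheduling decision; the demotion of the process that just ran is an extra assumption added only to keep the example self-contained (smaller number = higher priority, as above):

#include <stdio.h>

struct proc { int pid; int priority; };

/* pick the highest-priority process, age the others, demote the one that ran */
int schedule(struct proc p[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (p[i].priority < p[best].priority)
            best = i;
    for (int i = 0; i < n; i++)
        if (i != best && p[i].priority > 0)
            p[i].priority--;      /* aging: waiting raises effective priority */
    p[best].priority += 2;        /* the process that ran loses some priority */
    return p[best].pid;
}

int main(void)
{
    struct proc p[3] = { {1, 0}, {2, 5}, {3, 9} };
    for (int tick = 0; tick < 12; tick++)
        printf("tick %2d: run P%d\n", tick, schedule(p, 3));
    return 0;
}

Running it shows P3, the lowest-priority process, first getting the CPU at tick 6 instead of waiting indefinitely.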


Multilevel Queue Scheduling

Figure: Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, based on some property of the process, such as

• Memory size
• Process priority
• Process type

The algorithm chooses the process from the occupied queue that has the highest priority and runs that process either

• Preemptively, or
• Non-preemptively

Each queue has its own scheduling algorithm or policy.

Multilevel Feedback Queue Scheduling

• The multilevel feedback queue scheduling algorithm allows a process to move between queues. It uses many ready queues and associates a different priority with each queue. The algorithm chooses the process with the highest priority from the occupied queues and runs that process either preemptively or non-preemptively. If the process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Note that this form of aging prevents starvation.

• A process entering the ready queue is placed in queue 0.


• If it does not finish within 8 milliseconds of time, it is moved to the tail of queue 1.
• If it still does not complete, it is preempted and placed into queue 2.
• Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are empty.


Example:-

1. Three queues:

❖ Q0 – RR with time quantum 8 milliseconds


❖ Q1 – RR time quantum 16 milliseconds
❖ Q2 – FCFS

2. Scheduling

❖ A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
❖ At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

Figure: Scheduling States

2. Describe the critical section problems?(APRIL-2014,NOV-2016)

(OR)

What is Synchronization? Explain how semaphores can be used to deal with n-process
critical section problem?(MAY-2018)

• The critical section is a code segment where shared variables can be accessed. An atomic action is required in a critical section, i.e., only one process can execute in its critical section at a time. All the other processes have to wait to execute in their critical sections.

A diagram that demonstrates the critical section is as follows −


• In the above diagram, the entry section handles the entry into the critical section.
It acquires the resources needed for execution by the process. The exit section
handles the exit from the critical section. It releases the resources and also informs
the other processes that the critical section is free.

Solution to the Critical Section Problem


The critical section problem needs a solution to synchronize the different processes. The
solution to the critical section problem must satisfy the following conditions −

• Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at
any time. If any other processes require the critical section, they must wait until it
is free.

• Progress
Progress means that if a process is not using the critical section, then it should not
stop any other process from accessing it. In other words, any process can enter a
critical section if it is free.

• Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It should not wait endlessly to access the critical section.


Peterson’s Solution
Peterson's Solution is a classical software-based solution to the critical-section problem.
In Peterson's solution, we have two shared variables:
• boolean flag[2]: initialized to FALSE; initially no one is interested in entering the critical section.
• int turn: the process whose turn it is to enter the critical section.
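The idea can be made concrete with a minimal C sketch for two processes (process i and its peer j = 1 - i), assuming flag[] and turn are shared; note that on modern hardware a real implementation would also need memory barriers:

#include <stdbool.h>

bool flag[2] = { false, false };   /* flag[i]: process i wants to enter */
int turn;                          /* whose turn it is to enter */

void enter_region(int i)           /* entry section for process i */
{
    int j = 1 - i;
    flag[i] = true;                /* announce interest */
    turn = j;                      /* politely give the other process priority */
    while (flag[j] && turn == j)
        ;                          /* busy wait until it is safe to enter */
}

void leave_region(int i)           /* exit section for process i */
{
    flag[i] = false;               /* no longer interested */
}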

Peterson’s Solution preserves all three conditions :


• Mutual Exclusion is assured as only one process can access the critical section at any
time.
• Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
• Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution


• It involves Busy waiting
• It is limited to 2 processes.


TestAndSet
TestAndSet is a hardware solution to the synchronization problem. In TestAndSet, we have a shared lock variable which can take either of two values:

0 = Unlock
1 = Lock

Before entering the critical section, a process inquires about the lock. If it is locked, it keeps on waiting until it becomes free; if it is not locked, it takes the lock and executes the critical section.

In TestAndSet, mutual exclusion and progress are preserved, but bounded waiting cannot be preserved.
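A minimal C sketch of this lock; test_and_set() below only models the atomic hardware instruction (in real code the whole function would execute as one uninterruptible instruction):

#include <stdbool.h>

bool lock = false;                 /* shared: false = unlock (0), true = lock (1) */

/* models the hardware instruction: set the lock and return its old value */
bool test_and_set(bool *target)
{
    bool old = *target;
    *target = true;
    return old;
}

void acquire(void)
{
    while (test_and_set(&lock))
        ;                          /* spin until the old value was false */
}

void release(void)
{
    lock = false;
}

A process calls acquire() in its entry section, executes its critical section, and calls release() in its exit section.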
3.Describe about the Producer Consumer problem with bounded buffer using either message
passing or semaphores?(NOV-2014,NOV-2018)

In the bounded-buffer (or producer-consumer) problem, the buffer size is specified.

• The producer consumer problem is a synchronization problem. There is a fixed size buffer
and the producer produces items and enters them into the buffer. The consumer removes
the items from the buffer and consumes them.

• A producer should not produce items into the buffer when the consumer is consuming an item from the buffer, and vice versa. So the buffer should be accessed by only the producer or the consumer at a time.

• The producer consumer problem can be resolved using semaphores. The codes for the
producer and consumer process are given as follows −

1. N buffers, each can hold one item


2. Semaphore mutex initialized to the value 1
3. Semaphore full initialized to the value 0
4. Semaphore empty initialized to the value N.
5. The structure of the producer process

while (true) {
    // produce an item
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
}


6. The structure of the consumer process

while (true) {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the removed item
}
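For reference, a runnable POSIX sketch of the same structure, assuming a 5-slot circular buffer, one producer thread, and one consumer thread; sem_t plays the role of the counting semaphores full and empty, and a pthread mutex plays the role of mutex:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                         /* buffer size */

int buffer[N];
int in = 0, out = 0;                /* next slot to fill / to empty */
sem_t empty, full;                  /* counting semaphores */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg)
{
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);               /* wait (empty) */
        pthread_mutex_lock(&mutex);     /* wait (mutex) */
        buffer[in] = item;              /* add the item to the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);   /* signal (mutex) */
        sem_post(&full);                /* signal (full) */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int k = 0; k < 10; k++) {
        sem_wait(&full);                /* wait (full) */
        pthread_mutex_lock(&mutex);     /* wait (mutex) */
        int item = buffer[out];         /* remove an item from the buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);   /* signal (mutex) */
        sem_post(&empty);               /* signal (empty) */
        printf("consumed %d\n", item);  /* consume the removed item */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);             /* N empty slots initially */
    sem_init(&full, 0, 0);              /* no full slots initially */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}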

4.Discuss on:

a)Semaphores (NOV-2015)

(OR)

What are semaphore?Give its usage and implementation?(NOV-2017)

Definition
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation.

Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also called general semaphores) can assume any nonnegative value.

The P (or wait or sleep or down) operation on semaphores S, written as P(S) or wait (S),
operates as follows:

P(S): IF S > 0
THEN S := S - 1
ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal
(S), operates as follows:

V(S): IF (one or more process are waiting on S)


THEN (let one of these processes proceed)
ELSE S := S +1

Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore, S, is enforced within P(S) and V(S).


If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.

Semaphore as General Synchronization Tool

1. Counting semaphore – the integer value can range over an unrestricted domain.
2. Binary semaphore – the integer value can range only between 0 and 1; it can be simpler to implement. Also known as mutex locks.
3. A counting semaphore S can be implemented as a binary semaphore.
4. Provides mutual exclusion:

▪ Semaphore S; // initialized to 1
▪ wait (S);
  Critical Section
  signal (S);

Semaphore Implementation

1. Must guarantee that no two processes can execute wait () and signal () on the same
semaphore at the same time
2. Thus, implementation becomes the critical section problem where the wait and
signal code are placed in the critical section.

▪ Could now have busy waiting in the critical-section implementation

o But the implementation code is short
o There is little busy waiting if the critical section is rarely occupied

3. Note that applications may spend lots of time in critical sections, and therefore this is not a good solution.

Semaphore Implementation with no Busy waiting

1. With each semaphore there is an associated waiting queue. Each entry in a waiting
queue has two data items:

▪ value (of type integer)


▪ pointer to next record in the list

2. Two operations:

▪ block – place the process invoking the operation on the appropriate


waiting queue.
▪ wakeup – remove one of processes in the waiting queue and place it in
the ready queue.


->Implementation of wait:

wait (S) {
    value--;
    if (value < 0) {
        add this process to the waiting queue
        block();
    }
}

->Implementation of signal:

signal (S) {
    value++;
    if (value <= 0) {
        remove a process P from the waiting queue
        wakeup(P);
    }
}

b) Monitors(NOV-2015)

1. A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization.
2. Only one process may be active within the monitor at a time.

monitor monitor-name {

    // shared variable declarations

    procedure P1 (…) { …. }

    procedure Pn (…) { …… }

    initialization code (….) { … }
}


Figure: Monitors

Solution to Dining Philosophers

monitor DP
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING) self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

->Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

dp.pickup (i)
EAT
dp.putdown (i)

Monitor Implementation Using Semaphores

1. Variables

semaphore mutex; // (initially = 1)


semaphore next; // (initially = 0)
int next-count = 0;

2. Each procedure F will be replaced by

wait(mutex);

body of F;

if (next-count > 0)


signal(next)
else
signal(mutex);

3. Mutual exclusion within a monitor is ensured.

4. For each condition variable x, we have:

semaphore x-sem; // (initially = 0)

int x-count = 0;

5. The operation x.wait can be implemented as:

x-count++;
if (next-count > 0)
signal(next);
else
signal(mutex);
wait(x-sem);
x-count--;

6. The operation x.signal can be implemented as:

if (x-count > 0)
{
next-count++;
signal(x-sem);
wait(next);
next-count--;
}
5.Explain the Threading issues? (NOV-2016)

The fork( ) and exec( ) System Calls

• Q: If one thread forks, is the entire process copied, or is the new process single-threaded?
• A: System dependent.
• A: If the new process execs right away, there is no need to copy all the other threads. If it
doesn't, then the entire process should be copied.
• A: Many versions of UNIX provide multiple versions of the fork call for this purpose.

Signal Handling

• Q: When a multi-threaded process receives a signal, to what thread should that signal be
delivered?
• A: There are four major options:
1. Deliver the signal to the thread to which the signal applies.


2. Deliver the signal to every thread in the process.


3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals in a process.
• The best choice may depend on which specific signal is involved.
• UNIX allows individual threads to indicate which signals they are accepting and which
they are ignoring. However the signal can only be delivered to one thread, which is
generally the first thread that is accepting that particular signal.
• UNIX provides two separate system calls, kill( pid, signal ) and pthread_kill( tid, signal
), for delivering signals to processes or specific threads respectively.
• Windows does not support signals, but they can be emulated using Asynchronous
Procedure Calls ( APCs ). APCs are delivered to specific threads, not processes.

Thread Cancellation

• Threads that are no longer needed may be cancelled by another thread in one of two ways:
1. Asynchronous Cancellation cancels the thread immediately.
2. Deferred Cancellation sets a flag indicating the thread should cancel itself when
it is convenient. It is then up to the cancelled thread to check this flag periodically
and exit nicely when it sees the flag set.
• ( Shared ) resource allocation and inter-thread data transfers can be problematic with
asynchronous cancellation.
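A short pthreads sketch of deferred cancellation (the default cancel type in pthreads): the cancellation request takes effect only at a cancellation point, so the worker is never killed in the middle of an update:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg)
{
    while (1) {
        /* ... work that must not be interrupted half-way ... */
        pthread_testcancel();   /* explicit cancellation point: safe to exit here */
        sleep(1);               /* sleep() is also a cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(2);
    pthread_cancel(t);          /* request deferred cancellation */
    pthread_join(t, NULL);      /* returns once the thread has actually exited */
    printf("worker cancelled\n");
    return 0;
}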

Thread-Local Storage (Thread-Specific Data )

• Most data is shared among threads, and this is one of the major benefits of using threads
in the first place.
• However sometimes threads need thread-specific data also.
• Most major thread libraries (pThreads, Win32, Java) provide support for thread-specific data, known as thread-local storage or TLS. Note that this is more like static data than local variables, because it does not cease to exist when the function ends.
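A small sketch of thread-local storage using the C11 _Thread_local keyword: each thread gets its own copy of counter, which, like static data, survives across function calls but, unlike an ordinary global, is not shared:

#include <pthread.h>
#include <stdio.h>

_Thread_local int counter = 0;     /* one separate instance per thread */

void *worker(void *arg)
{
    for (int i = 0; i < 3; i++)
        counter++;                 /* touches only this thread's copy */
    printf("%s: counter = %d\n", (char *)arg, counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;                      /* each thread prints 3, not 6 */
}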

Scheduler Activations

• Many implementations of threads provide a virtual processor as an interface between the


user thread and the kernel thread, particularly for the many-to-many or two-tier models.
• This virtual processor is known as a "Lightweight Process", LWP.
o There is a one-to-one correspondence between LWPs and kernel threads.
o The number of kernel threads available, ( and hence the number of LWPs ) may
change dynamically.
o The application ( user level thread library ) maps user threads onto available
LWPs.
o kernel threads are scheduled onto the real processor(s) by the OS.
o The kernel communicates to the user-level thread library when certain events
occur ( such as a thread about to block ) via an upcall, which is handled in the
thread library by an upcall handler. The upcall also provides a new LWP for the
upcall handler to run on, which it can then use to reschedule the user thread that is
about to become blocked. The OS will also issue upcalls when a thread becomes
unblocked, so the thread library can make appropriate adjustments.


• If the kernel thread blocks, then the LWP blocks, which blocks the user thread.
• Ideally there should be at least as many LWPs available as there could be concurrently
blocked kernel threads. Otherwise if all LWPs are blocked, then user threads will have to
wait for one to become available.

Figure: Lightweight process (LWP)

6.Calculate the average waiting time and average turnaround time for FCFS CPU scheduling for the given data. Specify which is the best algorithm, and discuss why. Illustrate the scheduling with a Gantt chart?(APRIL-2017)

Process Burst Time Arrival Time

P1 12 0

P2 25 2

P3 13 1

P4 7 0

P5 11 5
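A worked answer, assuming the tie between P1 and P4 (both arriving at time 0) is broken by the listed order:

The Gantt chart is:

P1 P4 P3 P2 P5

0 12 19 32 57 68

Waiting time = start time - arrival time:
P1 = 0, P4 = 12 - 0 = 12, P3 = 19 - 1 = 18, P2 = 32 - 2 = 30, P5 = 57 - 5 = 52

Average waiting time = (0 + 12 + 18 + 30 + 52)/5 = 112/5 = 22.4

Turnaround time = completion time - arrival time:
P1 = 12, P4 = 19, P3 = 31, P2 = 55, P5 = 63

Average turnaround time = (12 + 19 + 31 + 55 + 63)/5 = 180/5 = 36

Among the basic algorithms, SJF would give the lowest average waiting time for this data, since SJF is provably optimal in that respect; FCFS is the simplest but performs worse, for the reasons below.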

What is the major problem that occurs in the FCFS algorithm?


• The scheduling method is non-preemptive; the process runs to completion.

• Due to the non-preemptive nature of the algorithm, the problem of starvation may occur.


• Although it is easy to implement, it gives poor performance, since the average waiting time is higher compared to other scheduling algorithms.

7.Write a C program to coordinate the Producer and Consumer Problem? (APRIL-2017)

#include <stdio.h>
#include <stdlib.h>

int buffer[5];          /* shared bounded buffer, treated as a 5-slot stack */
int count = 0;          /* number of items currently in the buffer */
int result = 0;         /* value of the last item produced */

/* produce one item if the buffer has room */
void producer(void)
{
    if (count < 5) {
        result = result + 1;
        buffer[count] = result;
        count++;
        printf("produced buffer = %d\n", buffer[count - 1]);
    } else {
        printf("buffer full, items still to be consumed\n");
    }
}

/* consume one item if the buffer is not empty */
void consumer(void)
{
    if (count > 0) {
        count--;
        printf("consumed buffer = %d\n", buffer[count]);
    } else {
        printf("buffer empty, items still to be produced\n");
    }
}

int main(void)
{
    int n;
    while (1) {
        printf("\nenter the choice:");
        printf("\n1.producer\n2.consumer\n3.exit\n");
        if (scanf("%d", &n) != 1 || n == 3)
            exit(0);
        if (n == 1)
            producer();
        else if (n == 2)
            consumer();
        else
            exit(0);
    }
}
8.Explain the classical problems for synchronization hardware and draw the process using
real time scheduling with suitable examples?(NOV-2018)

• To generalize the solution(s) expressed above, each process when entering their critical
section must set some sort of lock, to prevent other processes from entering their critical
sections simultaneously, and must release the lock when exiting their critical section, to
allow other processes to proceed. Obviously it must be possible to attain the lock only
when no other process has already set a lock. Specific implementations of this general
procedure can get quite complicated, and may include hardware solutions as outlined in
this section.
• One simple solution to the critical-section problem is to simply prevent a process from being interrupted while in its critical section, which is the approach taken by non-preemptive kernels. Unfortunately, this does not work well in multiprocessor environments, due to the difficulties in disabling and re-enabling interrupts on all processors. There is also a question as to how this approach affects timing if the clock interrupt is disabled.
• Another approach is for hardware to provide certain atomic operations. These operations
are guaranteed to operate as a single instruction, without interruption. One such operation


is the "Test and Set", which simultaneously sets a boolean lock variable and returns its
previous value, as shown in Figures:

• Another variation on the test-and-set is an atomic swap of two booleans, as sketched below:
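A C sketch of the swap-based lock, assuming lock is shared, each process has its own local key, and swap() models the atomic hardware instruction:

#include <stdbool.h>

bool lock = false;                 /* shared */

/* models the atomic instruction: exchange the two booleans */
void swap(bool *a, bool *b)
{
    bool temp = *a;
    *a = *b;
    *b = temp;
}

void enter_section(void)
{
    bool key = true;               /* local to each process */
    while (key == true)
        swap(&lock, &key);         /* spins until lock was false */
}

void leave_section(void)
{
    lock = false;
}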


• The above examples satisfy the mutual exclusion requirement, but unfortunately do not
guarantee bounded waiting. If there are multiple processes trying to get into their critical
sections, there is no guarantee of what order they will enter, and any one process could
have the bad luck to wait forever until they got their turn in the critical section. ( Since
there is no guarantee as to the relative rates of the processes, a very fast process could
theoretically release the lock, whip through their remainder section, and re-lock the lock
before a slower process got a chance. As more and more processes are involved vying for
the same resource, the odds of a slow process getting locked out completely increase. )
• The sketch below illustrates a solution using test-and-set that does satisfy this requirement, using two shared data structures, boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections:
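A C sketch of that algorithm, reusing the test_and_set() primitive modeled earlier; in the exit section a process hands the lock directly to the next waiting process, which bounds any process's wait to at most N - 1 turns:

#include <stdbool.h>

#define N 5                        /* number of contending processes */

bool test_and_set(bool *target);   /* the atomic primitive sketched earlier */

bool waiting[N];                   /* shared, all initially false */
bool lock = false;                 /* shared */

void process(int i)                /* code for process i */
{
    for (;;) {
        /* entry section */
        waiting[i] = true;
        bool key = true;
        while (waiting[i] && key)
            key = test_and_set(&lock);
        waiting[i] = false;

        /* critical section */

        /* exit section: scan for the next waiting process */
        int j = (i + 1) % N;
        while (j != i && !waiting[j])
            j = (j + 1) % N;
        if (j == i)
            lock = false;          /* nobody is waiting: free the lock */
        else
            waiting[j] = false;    /* pass the lock straight to process j */

        /* remainder section */
    }
}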

9.Discuss in detail about Classic problems for synchronization?(NOV-2019)


1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem

Bounded-Buffer Problem

In the bounded-buffer problem, the buffer size is specified.

1. N buffers, each can hold one item
2. Semaphore mutex initialized to the value 1
3. Semaphore full initialized to the value 0
4. Semaphore empty initialized to the value N
5. The structure of the producer process

while (true) {
    // produce an item
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
}

6. The structure of the consumer process

while (true) {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the removed item
}

Readers-Writers Problem


1. A data set is shared among a number of concurrent processes

o Readers – only read the data set; they do not perform any
updates
o Writers – can both read and write.

2. Problem – allow multiple readers to read at the same time. Only one single writer
can access the shared data at the same time.
3. Shared Data

o Data set
o Semaphore mutex initialized to 1.
o Semaphore wrt initialized to 1.
o Integer readcount initialized to 0.
4. The structure of a writer process

while (true) {
    wait(wrt);
    // writing is performed
    signal(wrt);
}

5. The structure of a reader process

while (true) {
    wait(mutex);
    readcount++;
    if (readcount == 1) wait(wrt);
    signal(mutex);

    // reading is performed

    wait(mutex);
    readcount--;
    if (readcount == 0) signal(wrt);
    signal(mutex);
}

Dining-Philosophers Problem



There are N philosophers sitting around a circular table, eating spaghetti and discussing philosophy. The problem is that each philosopher needs 2 chopsticks to eat, and there are only N chopsticks, one between each pair of adjacent philosophers.

1. Shared data

o Bowl of rice (data set)


o Semaphore chopstick [5] initialized to 1

2. The structure of Philosopher i:

while (true) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think
}

10.Write brief notes on Real-Time Scheduling and the various scheduling algorithms?(NOV-2019)

REAL-TIME SCHEDULING:

• Real-time computing is divided into two types:

1. Hard real-time computing

2. Soft real-time computing


• Hard real-time systems are required to complete a critical task within a guaranteed amount of time.

• Generally, a process is submitted along with a statement of the amount of time in which it needs to complete or perform I/O.

• The scheduler then either admits the process, guaranteeing that the process will complete on time, or rejects the request as impossible. This is known as RESOURCE RESERVATION.

• Such a guarantee requires that the scheduler know exactly how long each type of OS function takes to perform, and therefore each operation must be guaranteed to take a maximum amount of time.

• Hard real-time systems are therefore composed of special-purpose software running on hardware dedicated to their critical process, and lack the full functionality of modern computers and operating systems.

• SOFT REAL-TIME computing is less restrictive. It requires that critical processes receive priority over less fortunate ones.

• Implementing soft real-time functionality requires careful design of the scheduler and related aspects of the OS.

• First, the system must have priority scheduling, and real-time processes must have the highest priority. The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may.

• Second, the dispatch latency must be small.

• The smaller the latency, the faster a real-time process can start executing once it is runnable.

• To keep dispatch latency low, we need to allow system calls to be preemptible. There are several ways to achieve this.

• One is to insert preemption points in long-duration system calls, which check whether a high-priority process needs to be run; if so, a context switch takes place, and when the high-priority process terminates, the interrupted process continues with the system call.

• Preemption points can be placed only at "safe" locations in the kernel – only where kernel data structures are not being modified. Even with preemption points, dispatch latency can be large, because only a few preemption points can be practically added to a kernel.

• Another method for dealing with preemption is to make the entire kernel preemptible.

• With this approach the kernel can always be preemptible, because any kernel data being updated are protected from modification by the high-priority process. This is the most effective method.

• When a higher-priority process needs to read or modify kernel data currently being accessed by a lower-priority process, the higher-priority process must wait for the lower-priority one to finish. This situation is known as PRIORITY INVERSION.


• In fact, a chain of processes could all be accessing resources that the high-priority process needs. This problem can be solved via the PRIORITY-INHERITANCE PROTOCOL, in which all these processes (the ones accessing resources that the higher-priority process needs) inherit the high priority until they are done with the resource; when they are finished, their priority reverts to its original value.

DISPATCH LATENCY

• The conflict phase of dispatch latency has two components:

1. Preemption of any process running in the kernel.

2. Release by low-priority processes of resources needed by the high-priority process.

