Os Decode Unit 2
Ans. : The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. The act of assigning a processor to the first process on the ready list is called dispatching and is performed by a system entity called the dispatcher.

Q.3 Define process. Differentiate between a process and a program.

Ans. : • Process is an active entity that requires a set of resources, including a processor, program counter and registers, to perform its function. Multiple processes may be associated with one program.
• Process means a program in execution. Process execution must progress in sequential order.

Q.4 Define process. How many different states does a process have ? Explain when a process changes the state, with a state diagram. [JNTU : April-18, Marks 5]
OR Explain process states. [JNTU : Nov-15, Marks 5]

Ans. : • Each process has an execution state which indicates what the process is currently doing.
• The process descriptor is the basic data structure used to represent the specific state of each process.
• Fig. Q.4.1 shows a process state diagram.
Fig. Q.4.1 Process state diagram (transitions include : interrupted, event completion or input/output completion)
Operating Systems  2 - 2  Process and CPU Scheduling
• The process states are as follows :
1. New : Operating system creates a new process by using the fork( ) system call. These are newly created processes and resources are not yet allocated.
2. Ready : The process is competing for the CPU. The process reaches the head of the list (queue).
3. Running : The process that is currently being executed. The operating system allocates all the hardware and software resources to the process for execution.
4. Blocked/Waiting : A process is waiting until some event occurs, such as the completion of an input-output operation.
5. Exit/End : A process completes its operations and releases all its resources.

Q.5 What is process control block ? Explain various entries of it.

Ans. : Process Control Block
• Operating system keeps an internal data structure to describe each process it manages. When the OS creates a process, it creates this process descriptor. In some operating systems, it is called a Process Control Block (PCB).
• The process control block will change according to the operating system. PCB is also called task control block.
• The PCB is identified by an integer Process ID (PID). When a process is running, its current state is inside the CPU. When the OS stops running a process, it saves the register values in the PCB.
• When a process is created, the operating system allocates a PCB for it. The OS initializes the PCB and puts the PCB on the correct queue.
• Fig. Q.5.1 shows a process control block. Its fields are : process identification, priority number, program counter, memory allocation, I/O status information, list of open files, accounting information, number of registers and process state.

Fig. Q.5.1 Process control block

• The following information is stored in the process control block :
1. Process identification : Each process is uniquely identified by the user's identification and a pointer connecting it to its descriptor.
2. Priority number : Operating system allocates the priority number to each process. According to the priority number it allocates the resources.
3. Program counter : The PC indicates the address of the next instruction to be executed for this process.
4. Memory allocation : It contains the value of the base registers, limit registers and the page tables, depending on the memory system used by the operating system.
5. I/O status information : It maintains information about the open files, the list of I/O devices allocated to the process etc.
6. List of open files : A process uses a number of files for its operation. The operating system keeps track of all files opened by this process.
7. Process state : A process may be in any one of the states : new, ready, running, waiting or terminated.
• When a process changes state, the operating system must update the information in the process's process control block.
• The process control block maintains other information which is not included in the PCB block diagram. This information includes CPU scheduling and file management information.
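The PCB entries and the state transitions above can be sketched as one small data structure. This is only an illustrative model, not any real kernel's layout; the field names and the ALLOWED transition table are assumptions taken from the list of entries and the state diagram.

```python
from dataclasses import dataclass, field

# Legal state transitions, assumed from the process state diagram.
ALLOWED = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
}

@dataclass
class PCB:
    """Toy process control block holding the entries listed above."""
    pid: int                    # process identification
    priority: int = 0           # priority number
    program_counter: int = 0    # address of next instruction
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    state: str = "new"          # process state

    def set_state(self, new_state: str) -> None:
        # The OS must update the PCB whenever the process changes state.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

pcb = PCB(pid=42, priority=3)
pcb.set_state("ready")
pcb.set_state("running")
```

Calling `set_state` with a transition not in the diagram (for example running back to new) raises an error, mirroring the fact that only the arrows in Fig. Q.4.1 are legal.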
• Operating system maintains pointers to each process's PCB in a per-user process table or a system-wide process table. This information is used to access a PCB quickly.

Q.8 What is thread ? List advantages of thread.

Ans. : • Thread is a dispatchable unit of work. It consists of a thread ID, program counter, stack and register set.
• Thread is also called a Light Weight Process (LWP), because threads take fewer resources than a process. A thread is easy to create and destroy.
• Every program has at least one thread. Programs without multithreading execute sequentially; that is, after executing one instruction the next instruction in sequence is executed.
• Fig. Q.8.1 shows a thread : within a heavyweight process, the address space and global information are shared, while each thread has its own stack and local information.

Fig. Q.8.1 Thread

• When each user process is backed by a kernel thread and the process issues a system call to read a file, the process's thread will take over, find out which disk accesses to generate, and issue the low-level instructions required to start the transfer. It then suspends until the disk finishes reading the data. When a process starts up a remote TCP connection, its thread handles the low-level details of sending out network packets.

Advantages of thread :
1. Context switching time is minimized.
2. Threads support efficient communication.
3. Resource sharing is possible using threads.
4. A thread provides concurrency within a process.
5. It is more economical to create and context switch threads.
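The resource-sharing advantage can be seen in a few lines : threads of one process share its global data, so a lock is needed when several of them update it. A minimal sketch using Python's threading module (the `counter`/`worker` names are illustrative, not from the text) :

```python
import threading

counter = 0                      # global data shared by all threads
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, one thread of many."""
    global counter
    for _ in range(n):
        with lock:               # serialize updates to the shared value
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all four threads join, `counter` is 4000 : the threads shared one address space, which is exactly why the lock was required.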
TECHNICAL PUBLICATIONS® - An up thrust for knowledge
Q.9 Consider an environment in which there is a one-to-one mapping between user-level threads and kernel-level threads that allows one or more threads within a process to issue blocking system calls while other threads continue to run. Explain why this model can make multi-threaded programs run faster than their single-threaded counterparts on a uniprocessor machine. [JNTU : June-12]

Ans. : • User-level threads use user space for thread scheduling. These threads are transparent to the operating system. User-level threads are created by runtime libraries that cannot execute privileged instructions.
• In kernel-level threads, thread management is done by the kernel. Operating systems support kernel-level threads. Since the kernel manages the threads, the kernel can schedule another thread if a given thread blocks, rather than blocking the entire process.
• User-level threads are also called many-to-one mapping threads. Kernel-level threads support one-to-one thread mapping.
• The issue here is that a program spends much of its time waiting for I/O to complete. In a multi-threaded program, one kernel-level thread can make the blocking system call, while the other kernel-level threads can continue to run.
• Operating systems use user-level threads and kernel-level threads. User-level threads are managed without kernel support. The operating system supports and manages the kernel-level threads.
• Modern operating systems support kernel-level threads. The kernel performs multiple simultaneous tasks in the operating system. In most applications, user threads are mapped to kernel-level threads.
• On a uniprocessor machine, a process that would otherwise have to block for all these calls can continue to run its other threads.
• In a multi-threaded application, each task can be performed by a separate thread. If one thread is executing a long operation, it does not make the entire application wait for it to finish.
• If a multi-threaded application is being executed on a system that has multiple processors, the OS may execute separate threads simultaneously on separate processors.

Q.10 Difference between thread and process.

Ans. : Difference between thread and process :

Sr. No. | Thread | Process
1. | Thread is also called lightweight process. | Process is also called heavyweight process.
2. | Operating system is not required for thread switching. | Operating system interface is required for process switching.
3. | One thread can read, write or even completely clean another thread's stack. | Each process operates independently of the other processes.
4. | All threads can share the same set of open files and child processes. | In multiple processing, each process executes the same code but has its own memory and file resources.
5. | If one thread is blocked and waiting, then a second thread in the same task can run. | If one server process is blocked then another server process cannot execute until the first process is unblocked.
6. | Uses fewer resources. | Uses more resources.

Q.11 Explain thread lifecycle with diagram.

Ans. : Thread lifecycle
• Fig. Q.11.1 shows a thread lifecycle. A thread has one of the following states :
1. New : Thread is just created.
2. Ready : Thread's start( ) method has been invoked and it is ready to execute. The OS puts the thread into the ready queue.
3. Running : The highest-priority ready thread enters the running state. The thread is assigned a processor and is now running.
4. Blocked : This is the state when a thread is waiting for a lock to access an object.
5. Waiting : Here the thread is waiting indefinitely for another thread to perform an action.
Ans. : • Thread is a lightweight process. A thread is a flow of execution through the process code. It contains a program counter, stack pointer and stack for data storage.
• Because a thread is smaller than a process, thread creation typically uses fewer resources than process creation.
• Creating a process requires allocating a Process Control Block (PCB), a rather large data structure. The PCB includes a memory map, list of open files, and environment variables.
• Allocating and managing the memory map is typically the most time-consuming activity. Creating either a user or kernel thread involves allocating a small data structure to hold a register set, stack and priority.
• Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control.

Difference between user-level and kernel-level threads :

Sr. No. | User-level thread | Kernel-level thread
2. | Implemented by a thread library at the user level. | Operating system supports kernel threads directly.
3. | User-level threads can run on any operating system. | Kernel-level threads are specific to the operating system.
4. | Support provided at the user level is called user-level threads. | Support provided by the kernel is called kernel-level threads.
5. | A multithreaded application cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
6. | Example : POSIX Pthreads and Mach C-threads. | Example : Windows 95/98/NT, Sun Solaris and Digital UNIX.
7. | User-level threads are also called many-to-one mapping threads. | Kernel-level threads support one-to-one thread mapping.
Q.14 Write short note on multithreading.

Ans. : • Operating system uses user-level threads and kernel-level threads. User-level threads are managed without kernel support. The operating system supports and manages the kernel-level threads.
• Modern operating systems support kernel-level threads. The kernel performs multiple simultaneous tasks in the operating system. In most applications, user threads are mapped to kernel-level threads.
• Different methods of mapping used in operating systems are as follows :

1. One to one
• In this method, the operating system allocates a data structure that represents kernel threads. Here multiple threads run in parallel on a multiprocessor system.
• As the number of threads in the system increases, the amount of memory required also increases.
• Windows 95/XP and Linux operating systems use threads and this one-to-one thread mapping.
• The only overhead in this method is the creation of a kernel-level thread for each user-level thread. Because of this overhead, system performance is slowed down.

Fig. Q.14.1 One to one multithreading model (each user-level thread in the thread library maps to its own kernel-level thread)

2. Many to one
• Many-to-one mapping means many user threads map to one kernel thread. Fig. Q.14.2 shows many-to-one mapping.

Fig. Q.14.2 Many to one multithreading model (Process A : several user threads over one kernel thread in kernel space)

• The operating system blocks the entire multi-threaded process when a single thread blocks, because the entire multi-threaded process is a single thread of control. So when the operating system receives a blocking I/O request, it blocks the entire process.
• The thread library handles thread management in user space. The many-to-one model does not allow individual processes to be split across multiple CPUs.
• This type of relationship provides an effective context-switching environment, easily implementable even on simple kernels with no thread support.
• Green threads for Solaris and GNU portable threads implemented the many-to-one model in the past, but few systems continue to do so today.

3. Many to many
• In the many-to-many mapping, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. Fig. Q.14.3 shows many-to-many thread mapping.
• Many-to-many mapping is also called M-to-N thread mapping. Thread pooling is used to implement this method.

Fig. Q.14.3 Many to many multithreading model (thread library with scheduler in user space; kernel-level threads 1 to 5 in kernel space)

• Per-process attributes are as follows :
1. Address space
2. Global variables
3. Open files
4. Child processes
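Thread pooling, which the many-to-many model relies on, can be sketched with a worker pool : a fixed number of pool workers (playing the role of the kernel threads) service a larger number of submitted tasks (the user-level work items). The `task` function below is a stand-in, not part of the text :

```python
from concurrent.futures import ThreadPoolExecutor

def task(x):
    # stand-in for one unit of user-level work (e.g. a blocking request)
    return x * x

# 3 pool workers multiplex 8 tasks: an M-to-N (8-to-3) arrangement.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(task, range(8)))
```

The eight tasks complete even though only three workers ever exist, which is the essence of M-to-N multiplexing.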
Q.17 What is the role of dispatcher ? [JNTU : April-18, Marks 3]

Ans. : • Dispatcher gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call.
• The kernel dispatcher provides the foundation for the executive and the subsystems.

Q.18 What is bounded waiting ?

Ans. : After a process makes a request to enter its critical section and before it is granted that permission, there exists a bound on the number of times that other processes are allowed to enter their critical sections.
Ans. :

Sr. No. | Long term | Short term | Medium term
1. | It is job scheduler. | It is CPU scheduler. | It is swapping.
2. | Speed is less than short term scheduler. | Speed is very fast. | Speed is in between both.
3. | It controls the degree of multiprogramming. | Less control over degree of multiprogramming. | Reduces the degree of multiprogramming.
4. | Absent or minimal in time sharing system. | Minimal in time sharing system. | Time sharing system uses medium term scheduler.
5. | It selects processes from the pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | Process can be reintroduced into memory and its execution can be continued.
6. | Process state is (New → Ready). | Process state is (Ready → Running). | -
7. | Selects a good process, mix of I/O bound and CPU bound. | Selects a new process for the CPU quite frequently. | -
Fig. Q.21.1 Process scheduling queueing diagram (ready queue P5 P3 P4 P1 feeds the central processing unit via short-term scheduling; a process leaves at exit/end; it re-enters the ready queue after time out, I/O or event completion; wait queues exist for event wait, interrupt from a higher priority process and child termination; suspended processes are held aside)
• Long-term scheduler controls the degree of multiprogramming in multitasking systems. It ensures a balanced mix of jobs, such as I/O bound and CPU bound.
• Long-term scheduling is used in batch operating systems. A time-sharing operating system has no long-term scheduler.

Medium term scheduler
• Medium-term scheduler is part of the swapping function. Sometimes it removes a process from memory. It also reduces the degree of multiprogramming.
• If a process makes an I/O request and it is in memory, then the operating system takes this process into the suspended state.
• Once the process becomes suspended, it cannot make any progress towards completion.
• In this situation, the process is removed from memory to make free space for other processes. The suspended process is stored on the secondary storage device, i.e. hard disk. This process is called swapping.

Short term scheduler
• Short-term scheduler is also called CPU scheduler. It selects a process from the queue of processes that are ready to execute and allocates the CPU to it for execution.
• Short-term scheduler is faster than long-term scheduler. This scheduler makes scheduling decisions much more frequently than the long-term or medium-term schedulers. A scheduling decision will at a minimum have to be made after every time slice, and these are very short.
• It is also known as dispatcher.

Context switch
• The context of a process is the contents of the CPU's registers and program counter at any point in time.
• Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch.
• A context switch can mean a register context switch, a task context switch, a thread context switch or a process context switch.
• A register is a small amount of very fast memory inside a CPU that is used to speed the execution of computer programs by providing quick access to commonly used values.
• A program counter is a specialized register that indicates the position of the CPU in its instruction sequence and which holds either the address of the instruction being executed or the address of the next instruction to be executed, depending on the specific system.
• Context switching can be described in more detail as the kernel performing the following activities with regard to processes (including threads) on the CPU :
1. Suspending the progression of one process and storing the CPU's state (i.e., the context) for that process somewhere in memory.
2. Retrieving the context of the next process from memory and restoring it in the CPU's registers and
3. Returning to the location indicated by the program counter in order to resume the process.
• Context switches can occur only in kernel mode (system mode). Kernel mode is a privileged mode of the CPU in which only the kernel runs and which provides access to all memory locations and all other system resources.
Q.26 What system calls have to be executed by a command interpreter or shell in order to start a new process ? Explain briefly. [JNTU : Nov-15, Marks 5]

Ans. : • Command interpreter reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls.
• It is usually not part of the kernel, since the command interpreter is subject to changes.
• On UNIX systems, a fork system call followed by an exec system call needs to be performed to start a new process.
• The fork call clones the currently executing process, while the exec call overlays a new process based on a different executable over the calling process.
• All processes in UNIX are created using the fork( ) system call. The forking process is called the parent process. The new process is called the child process.
• A command line interpreter, or shell, program is a mechanism with which each interactive user can issue commands to the OS and by which the OS can respond directly to the user.
• Whenever a user has successfully logged into the computer, the OS causes the user process to execute a shell.

2.6 : Interprocess Communication

Q.27 Explain about inter process communication. [JNTU : April-18, Marks 5]

Ans. : Interprocess communication
• Exchange of data between two or more separate, independent processes/threads is possible using IPC. Operating systems provide facilities/resources for Inter-Process Communications (IPC), such as message queues, semaphores and shared memory.
• A complex programming environment often uses multiple cooperating processes to perform related operations.
• These processes must communicate with each other and share resources and information. The kernel must provide mechanisms that make this possible. These mechanisms are collectively referred to as interprocess communication.
• Distributed computing systems make use of these facilities/resources to provide an Application Programming Interface (API) which allows IPC to be programmed at a higher level of abstraction (e.g., send and receive).
• Five types of inter-process communication are as follows :
1. Shared memory permits processes to communicate by simply reading and writing to a specified memory location.
2. Mapped memory is similar to shared memory, except that it is associated with a file in the file system.
3. Pipes permit sequential communication from one process to a related process.
4. FIFOs are similar to pipes, except that unrelated processes can communicate because the pipe is given a name in the file system.
5. Sockets support communication between unrelated processes even on different computers.
• Purposes of IPC :
1. Data transfer : One process may wish to send data to another process.
2. Sharing data : Multiple processes may wish to operate on shared data, such that if a process modifies the data, that change will be immediately visible to other processes sharing it.
3. Event notification : A process may wish to notify another process or set of processes that some event has occurred.
4. Resource sharing : The kernel provides default semantics for resource allocation; they are not suitable for all applications.
5. Process control : A process such as a debugger may wish to assume complete control over the execution of another process.

2.7 : CPU Scheduling

Q.28 What do you mean by turnaround time ? [JNTU : Dec-17, Marks 2]

Ans. : The interval from the time of submission of a process to the time of completion is the turnaround time.
4. Response time : It is the time from the submission of a request until the first response is produced.

Q.34 Consider the following four processes, with the length of the CPU burst time given in milliseconds.

Process | Arrival time (ms) | Burst time (ms)
P1 | 0 | 6
P2 | 0 | 5
P3 | 1 | 5
P4 | 1 | 3

Find average waiting time and turnaround time for the given processes using FCFS and SJF algorithms. [JNTU : Dec-19, Marks 10]

Ans. : Gantt chart for FCFS :

| P1 | P2 | P3 | P4 |
0    6    11   16   19

Gantt chart for SJF :

| P2 | P4 | P3 | P1 |
0    5    8    13   19

Process | Waiting (FCFS) | Waiting (SJF) | Turnaround (FCFS) | Turnaround (SJF)
P1 | 0 | 13 | 6 | 19
P2 | 6 | 0 | 11 | 5
P3 | 10 | 7 | 15 | 12
P4 | 15 | 4 | 18 | 7

Average waiting time in FCFS = (0 + 6 + 10 + 15)/4 = 7.75
Average waiting time in SJF = (13 + 0 + 7 + 4)/4 = 6
Average turnaround time in FCFS = (6 + 11 + 15 + 18)/4 = 12.5
Average turnaround time in SJF = (19 + 5 + 12 + 7)/4 = 10.75

Q.35 Explain round robin scheduling. Is it useful for a single user system or not ? [JNTU : Dec-19, Marks 6]

Ans. : • Round robin is a preemptive scheduling algorithm. It is used in interactive systems.
• Here processes are given a limited amount of processor time called a time slice or time quantum. If a process does not complete before its quantum expires, the system preempts it and gives the processor to the next waiting process. The system then places the preempted process at the back of the ready queue.
• Processes are placed in the ready queue using a FIFO scheme. With the RR algorithm, the principal design issue is the length of the time quantum to be used.
• For a short time slice, processes will move through the system relatively quickly, but it increases the processing overheads.
• The RR scheduling algorithm is designed especially for timesharing systems.
• Round robin is not useful for a single user system because it is designed especially for timesharing systems.
• Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds. Calculate the average waiting time and average turnaround time. Provide the Gantt chart for the same. (Time slice = 2)

Process | Burst time
P1 | 5
P2 | 2
P3 | 6
P4 | 4

We can draw the Gantt chart for this :

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 |
0    2    4    6    8    10   12   14   15   17

Waiting time :
P1 = 15 - 5 = 10
P2 = 2 - 0 = 2
P3 = 4 - 0 + 10 - 6 + 15 - 12 = 11
P4 = 6 - 0 + 12 - 8 = 10

Average waiting time = (10 + 2 + 11 + 10)/4 = 33/4 = 8.25
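The Q.34 result can be checked mechanically. The helpers below are a sketch, assuming the arrival/burst values used above (arrivals 0, 0, 1, 1 and bursts 6, 5, 5, 3 ms); the function names are illustrative :

```python
def fcfs_waits(arrival, burst):
    """Waiting times when processes are served in list order."""
    t, wait = 0, []
    for a, b in zip(arrival, burst):
        t = max(t, a)            # CPU may idle until the process arrives
        wait.append(t - a)
        t += b
    return wait

def sjf_waits(arrival, burst):
    """Non-preemptive shortest-job-first waiting times."""
    n = len(burst)
    done, wait, t = [False] * n, [0] * n, 0
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        if not ready:            # jump to the next arrival if CPU is idle
            t = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= t]
        i = min(ready, key=lambda j: burst[j])   # shortest next burst
        wait[i] = t - arrival[i]
        t += burst[i]
        done[i] = True
    return wait

arrival, burst = [0, 0, 1, 1], [6, 5, 5, 3]
f = fcfs_waits(arrival, burst)   # [0, 6, 10, 15]
s = sjf_waits(arrival, burst)    # [13, 0, 7, 4]
```

The averages come out to 7.75 ms for FCFS and 6 ms for SJF, matching the worked answer.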
Turnaround time : Turnaround time = Waiting time + Burst time
P1 = 10 + 5 = 15
P2 = 2 + 2 = 4
P3 = 11 + 6 = 17
P4 = 10 + 4 = 14
Consider processes with CPU burst times P1 = 10, P2 = 29, P3 = 3, P4 = 7 and P5 = 12 milliseconds, all arriving at time 0.

I) FCFS : Gantt chart :

| P1 | P2 | P3 | P4 | P5 |
0    10   39   42   49   61

Waiting time : P1 = 0 ; P2 = 10 ; P3 = 39 ; P4 = 42 ; P5 = 49
Average waiting time = (0 + 10 + 39 + 42 + 49)/5 = 28 milliseconds

II) Non-preemptive SJF : Gantt chart :

| P3 | P4 | P1 | P5 | P2 |
0    3    10   20   32   61

Waiting time : P1 = 10 ; P2 = 32 ; P3 = 0 ; P4 = 3 ; P5 = 20
Average waiting time = (10 + 32 + 0 + 3 + 20)/5 = 65/5 = 13 milliseconds

III) RR (quantum = 10) : Gantt chart :

| P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
0    10   20   23   30   40   50   52   61

Waiting time :
P1 = 0 ; P2 = 10 + (40 - 20) + (52 - 50) = 32 ; P3 = 20 ; P4 = 23 ; P5 = 30 + (50 - 40) = 40
Average waiting time = (0 + 32 + 20 + 23 + 40)/5 = 115/5 = 23 milliseconds

Non-preemptive SJF gives the minimum average waiting time as compared to the FCFS and RR methods. In non-preemptive SJF, the CPU is assigned to the process that has the smallest next CPU burst.
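A short round-robin simulation (all processes arriving at time 0) reproduces the RR waiting times above; the function name is illustrative :

```python
from collections import deque

def rr_waits(bursts, quantum):
    """Round-robin waiting times for processes that all arrive at t = 0."""
    rem, t = list(bursts), 0
    last = [0] * len(bursts)      # when each process last left the CPU
    wait = [0] * len(bursts)
    q = deque(range(len(bursts)))
    while q:
        i = q.popleft()
        wait[i] += t - last[i]    # time spent queued since last run
        run = min(quantum, rem[i])
        t += run
        rem[i] -= run
        last[i] = t
        if rem[i]:                # unfinished: back of the ready queue
            q.append(i)
    return wait

bursts = [10, 29, 3, 7, 12]
waits = rr_waits(bursts, 10)      # -> [0, 32, 20, 23, 40]
avg = sum(waits) / len(waits)     # -> 23.0 ms, as computed above
```

The same helper with a different quantum lets one explore the time-slice design issue mentioned in Q.35 : smaller quanta raise the number of switches without changing total work.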
Q.45 State and explain deterministic modeling. [JNTU : June-12, Marks 7]

Ans. : Deterministic modeling uses an analytic method. It takes a particular predetermined workload and defines the performance of each algorithm for that workload. It gives an exact calculation for each case.
Solution : For FCFS the processes are executed as follows :

| P1 | P2 | P3 | P4 | P5 |
0    8    9    12   14   20

For SJF the processes are executed as follows :

| P1 | P2 | P4 | P3 | P5 |
0    8    9    11   14   20

Process | Waiting time | Turnaround time
P1 | 0 | 8
P2 | 7 | 8
P3 | 9 | 12
P4 | 6 | 8
P5 | 10 | 16

Average turnaround time = (8 + 8 + 12 + 8 + 16)/5 = 10.4

• For the non-preemptive priority scheduling the processes are executed the same as in the FCFS algorithm, so the average waiting time is the same as the FCFS waiting time.
• The average waiting time of SJF is small as compared to the FCFS and priority scheduling algorithms.
• Deterministic modeling is simple to implement and accurate also.
Q.48 Consider the following set of processes, with the length of the CPU burst given in milliseconds.

Process | Burst time | Priority
P1 | 10 | 3
P2 | 1 | 1
P3 | 2 | 3
P4 | 1 | 4
P5 | 5 | 2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0. What is the turnaround time of each process by applying the priority scheduling algorithm ? [JNTU : Nov-15, Marks 10]

Ans. :
1. FCFS

| P1 | P2 | P3 | P4 | P5 |
0    10   11   13   14   19

2. SJF

| P2 | P4 | P3 | P5 | P1 |
0    1    2    4    9    19

3. Non-preemptive priority (lower number = higher priority)

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

4. RR (quantum = 2)

| P1 | P2 | P3 | P4 | P5 | P1 | P5 | P1 | P5 | P1 |
0    2    3    5    6    8    10   12   14   15   19

(b) Turnaround time :

Process | FCFS | SJF | Non-preemptive priority | RR
P1 | 10 | 19 | 16 | 19
P2 | 11 | 1 | 1 | 3
P3 | 13 | 4 | 18 | 5
P4 | 14 | 2 | 19 | 6
P5 | 19 | 9 | 6 | 15

Average turnaround time :
FCFS = (10 + 11 + 13 + 14 + 19)/5 = 13.4
SJF = (19 + 1 + 4 + 2 + 9)/5 = 7
Non-preemptive priority = (16 + 1 + 18 + 19 + 6)/5 = 12
RR = (19 + 3 + 5 + 6 + 15)/5 = 9.6

Waiting time (P5 under RR = 6 + (10 - 8) + (14 - 12) = 10) :

Average waiting time :
FCFS = (0 + 10 + 11 + 13 + 14)/5 = 9.6
SJF = (9 + 0 + 2 + 1 + 4)/5 = 3.2
Non-preemptive priority = (6 + 0 + 16 + 18 + 1)/5 = 8.2
RR = (9 + 2 + 3 + 5 + 10)/5 = 5.8
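Because all five processes arrive at time 0, each non-preemptive schedule is just an ordering of the jobs, and the turnaround averages above can be verified in a few lines (the helper name is illustrative) :

```python
burst = [10, 1, 2, 1, 5]
prio  = [3, 1, 3, 4, 2]           # lower number = higher priority

def turnaround(order, burst):
    """Run jobs to completion in the given order; return turnaround times."""
    t, tat = 0, [0] * len(burst)
    for i in order:
        t += burst[i]
        tat[i] = t                # completion time = turnaround (arrival 0)
    return tat

fcfs = turnaround(range(5), burst)
sjf = turnaround(sorted(range(5), key=lambda i: (burst[i], i)), burst)
pri = turnaround(sorted(range(5), key=lambda i: (prio[i], i)), burst)
```

The averages come out to 13.4 (FCFS), 7 (SJF) and 12 (priority), matching the table; ties are broken by submission order, as in the worked answer.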
1. FCFS

| P1 | P2 | P3 | P4 | P5 |
0    75   115  140  160  205

2. SJF

| P1 | … |
0    75   100  120  160  205

3. SRTF

| P1 | P3 | P2 | … |
0    10   35   75   80   100  145  205

4. RR (q = 15)

| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P2 | P4 | P5 | P1 | P4 | P5 | P1 | P5 |
0  15  30  45  60  75  85  100  110  125  140  155  160  175  190  205

Waiting time :

Process | FCFS | SJF | SRTF | RR
P1 | 0 | 0 | 130 | 115
P2 | 65 | 110 | 25 | 60
P3 | 105 | 65 | 0 | 50
P4 | 60 | 20 | 0 | 60
P5 | 75 | 75 | 35 | 75
Q.48 Construct the Gantt chart with a) Shortest job first b) Round robin with q = 3 c) Round robin with q = 4 d) Shortest remaining time first scheduling for the following : [JNTU : …]

| Process | P1 | P2 | P3 | P4 | P5 |
| Arrival time | 0 | 2 | 1 | 3 | 0 |
| CPU burst time (ms) | 10 | 6 | 12 | 8 | 5 |

Ans. : Gantt charts :

a) SJF (non-preemptive) : P5 is the shortest job available at time 0, then the remaining jobs run in order of burst length :

| P5 | P2 | P4 | P1 | P3 |
0    5    11   19   29   41

b) SRTF : here SRTF gives the same schedule, since P5 always has the shortest remaining time while it runs :

| P5 | P2 | P4 | P1 | P3 |
0    5    11   19   29   41

c) RR (q = 3) :

| P1 | P5 | P3 | P2 | P4 | P1 | P5 | P3 | P2 | P4 | P1 | P3 | P4 | P1 | P3 |
0  3  6  9  12  15  18  20  23  26  29  32  35  37  38  41

d) RR (q = 4) :

| P1 | P5 | P3 | P2 | P4 | P1 | P5 | P3 | P2 | P4 | P1 | P3 |
0  4  8  12  16  20  24  25  29  31  35  37  41
Q.49 Assume the following processes arrive for execution at the times indicated, with the length of the CPU-burst time given in milliseconds :

| Job | Burst time (ms) | Priority | Arrival time (ms) |
| A | 10 | 5 | 0 |
| B | 6 | 2 | 0 |
| C | 7 | 4 | 1 |
| D | 4 | 1 | 1 |
| E | 5 | 3 | 2 |

i) Show a Gantt chart illustrating the execution of these processes using FCFS, Round Robin (quantum = 5), and Priority (Preemptive and Non-preemptive).
ii) Calculate the average waiting time and average turnaround time for each of the above scheduling algorithms.

Ans. :
b) RR (quantum = 5)

| A | B | C | D | E | A | B | C |
0   5   10  15  19  24  29  30  32

Waiting time :

Job | FCFS | RR | Preemptive priority | Non-preemptive priority
A | 0 | 19 | 22 | 22
B | 10 | 24 | 4 | 0
C | 15 | 24 | 14 | 14
D | 22 | 14 | 0 | 5
E | 25 | 17 | 8 | 8

Turnaround time :

Job | FCFS | RR | Preemptive priority | Non-preemptive priority
A | 10 | 29 | 32 | 32
B | 16 | 30 | 10 | 6
C | 22 | 31 | 21 | 21
D | 26 | 18 | 4 | 9
E | 30 | 22 | 13 | 13

Average waiting time and average turnaround time :

Method | Average waiting time | Average turnaround time
FCFS | (0 + 10 + 15 + 22 + 25)/5 = 14.4 | (10 + 16 + 22 + 26 + 30)/5 = 20.8
RR | (19 + 24 + 24 + 14 + 17)/5 = 19.6 | (29 + 30 + 31 + 18 + 22)/5 = 26
Preemptive priority | (22 + 4 + 14 + 0 + 8)/5 = 9.6 | (32 + 10 + 21 + 4 + 13)/5 = 16
Non-preemptive priority | (22 + 0 + 14 + 5 + 8)/5 = 9.8 | (32 + 6 + 21 + 9 + 13)/5 = 16.2
• Classification of multiprocessors by their coupling levels is as follows :
1. Loosely coupled : It consists of a collection of relatively autonomous systems, with each processor having its own memory and I/O channels, as in a distributed multiprocessor.
2. Functionally specialized processors : It consists of a master, general-purpose processor; specialized processors are controlled by the master processor and provide services to it. An example is an I/O processor.
3. Tightly coupled multiprocessor : It consists of a set of processors that share a common main memory. The popular multi-core architecture falls into this category.

Issues relating to the scheduling :
b. Coarse and very coarse-grained parallelism : Minimum synchronizations among the processes. A set of concurrent processes running on a multi-programmed uni-processor system uses this type of parallelism.
c. Medium-grained parallelism : A high degree of coordination is needed among the threads, which leads to medium-grained parallelism.
d. Fine-grained parallelism represents a much more complex use of parallelism, and remains a very difficult area.

2.10 : System Call Interface for Process Management

Q.51 What is fork ? Explain process creation in UNIX.

Ans. :
• The UNIX system call for process creation is called fork( ).
• The fork system call creates a copy of the process that calls it. The calling process is the parent and the new process is its child :
1. The child gets an exact (mostly) copy of the parent's memory.
2. The child gets a process ID number different from that of the parent.
3. The child knows it is the child, because fork returns 0 in the child and the child's PID in the parent.
4. The child begins life with the same registers as the parent.

Example : process creation :

$ firstfork
firstfork : pid = …
Did a fork. It returned … My pid = …
Did a fork. It returned 0. My pid = …
$

Q.52 Describe sequence of operations performed by kernel on fork.

Ans. : • The kernel does the following sequence of operations for fork :
1. It allocates a slot in the process table for the new process.
2. It assigns a unique ID number to the child process.
3. It makes a logical copy of the context of the parent process.
4. It increments file and inode table counters for files associated with the process.
5. It returns the ID number of the child to the parent process and a value of 0 to the child process.
Q.53 What are the reasons for fork to fail ? Also give the uses of fork.
Ans. : • The two main reasons for fork to fail are :
1. If too many processes are already in the system, which usually means that something else is wrong.
2. If the total number of processes for this real user ID exceeds the system's limit.
• There are two uses for fork :
1. When a process wants to duplicate itself so that the parent and child can each execute different sections of code at the same time. This is common for network servers : the parent waits for a service request from a client. When the request arrives, the parent calls fork and lets the child handle the request. The parent goes back to waiting for the next service request to arrive.
2. When a process wants to execute a different program. This is common for shells. In this case, the child does an exec right after it returns from the fork.

Q.54 Explain task performed by exec system call.
Ans. : The exec system call must perform the following tasks :
1. Parse the path name and access the executable file.
2. Verify the execute permission for the file.
3. Read the header and check if it is a valid executable.
4. If the file has SUID or SGID bits set in its mode, change the caller's effective UID or GID to that of the owner.
5. Copy the arguments to exec and the environment variables into the kernel.

Q.55 Explain exec system call.
Ans. :
• The exec system call is used to execute a file which is residing in an active process. When exec is called, the previous executable file is replaced and the new file is executed.
• The exec system call causes a calling process to change its context and execute a different program.
• The user data segment of the process that executes the exec() system call is replaced with the data of the file whose name is provided in the argument while calling exec().
• When a process calls one of the exec functions, that process is completely replaced by the new program and the new program starts executing at its main function.
• The process ID does not change across an exec, because a new process is not created; exec merely replaces the current process (its text, data, heap and stack segments) with a brand new program from disk.
• The new program is loaded into the same process space. The current process is just turned into a new process, and hence the process ID (PID) is not changed; this is because we are not creating a new process, we are just replacing a process with another process in exec.
• The prototypes of the exec functions are :
#include <unistd.h>
int execl(const char *pathname, const char *arg0, ... /* (char *)0 */);
int execv(const char *pathname, char *const argv[]);
int execle(const char *pathname, const char *arg0, ... /* (char *)0, char *const envp[] */);
int execve(const char *pathname, char *const argv[], char *const envp[]);
where pid is the process ID of the zombie child and stat_addr is the address in user space of an integer that will contain the exit status code of the child.

Q.57 Explain exit system call.
Ans. :
• Processes on a UNIX system terminate by executing the exit system call. The syntax for the call is exit(status); where the value of status is returned to the parent process for its examination.
• The exit() function terminates the calling process.

Q.11 Thread is a _ _ process.
Q.12 Process control block is also called as _ _ .
Q.13 A program is a _ _ entity.
Q.14 Process control block is also called a _ _ .
Q.15 The list of processes waiting for a particular I/O device is called a _ _ .
Q.16 The _ _ scheduler controls the degree of multiprogramming.
Q.17 _ _ time is the sum of the periods spent waiting in the ready queue.
Q.18 FCFS scheduling algorithm is _ _ .