
OPERATING SYSTEMS
UNIT 1: INTRODUCTION & PROCESS MANAGEMENT
Notes by Vibha Masti

OPERATING SYSTEMS: INTRODUCTION

NEED FOR OS
- Access hardware interfaces
- Manage multiple processes
- Storage management (files)
- Protection and security

DEFINITION
- Intermediary between the user and the hardware
- Provides a user-friendly application environment
- Acts as a resource allocator and a control program
- Compromise between usability and resource utilisation

COMPUTER COMPONENTS: TOP-LEVEL VIEW
Hardware:
- Processor
- Memory
- I/O modules
- System bus

The CPU and main memory exchange instructions and data over the system bus; the I/O module has its own buffers and registers.
- PC: program counter (address of the next instruction)
- IR: instruction register (instruction currently being executed)
- MAR: memory address register
- MBR: memory buffer register
- I/O AR: input/output address register
- I/O BR: input/output buffer register
COMPUTER SYSTEM ORGANISATION
- CPU(s) and device controllers access shared memory via a common bus
- CPU(s) and device controllers compete for memory cycles and can run concurrently
- A memory controller is provided to synchronise access to memory
- Typical devices: disks (disk controller), mouse/keyboard (USB controller), printer, monitor (graphics adapter)

COMPUTER SYSTEM OPERATION
- I/O devices and the CPU execute concurrently
- Each device controller is in charge of a particular device type and has local memory
- Device controller has registers for each action (e.g. keyboard input)
- CPU loads data from main memory into the controller's local buffer
- Device controller sends an interrupt to the CPU when its task is completed
BOOTSTRAP
- When the system is booted, the first program to be executed is the bootstrap program, stored in ROM or EEPROM
- Also referred to as firmware
- Initialises all aspects of the system (CPU registers, device controllers, main memory etc.)
- Loads the OS kernel into memory
- After booting, the first process created is init; it then waits for an event (interrupt) to occur
INTERRUPT
- Transfers control to the interrupt service routine (ISR) through the interrupt vector; the return address must be saved
- The interrupt vector table contains the addresses of all service routines
- The OS is an interrupt-driven program
- The state of the CPU is saved by the OS by storing the registers and the PC onto the stack
- Determining the type of interrupt: polling the devices (I/O) or a vectored interrupt system (timer)
- The action to be taken for each interrupt is determined by a separate code segment
- After servicing, execution resumes at the interrupted task

STORAGE STRUCTURE
- Hierarchy of memory
- RAM: volatile main memory, directly accessible by the CPU
  - implemented with semiconductor technology
  - DRAM: information stored as charge on a capacitor; frequent recharging required
- ROM / EEPROM: factory-installed programs (e.g. in mobile phones)
- Von Neumann model: fetch, decode, execute cycle (control unit and ALU; instructions fetched from and results stored to RAM)

- Secondary memory: nonvolatile
  - Hard disk
    - disk surface: tracks, sectors
    - disk controller: handles the interaction
  - SSD (solid state disk): faster; flash memory; smaller size
- Caching
  - level 1 and level 2 caches: faster storage for frequently accessed data
  - information is copied temporarily from slower to faster storage
- Device driver: interface between the controller and the kernel

I/O STRUCTURE
- Synchronous I/O: after I/O starts, control returns to the user program only after the I/O completes
  - the CPU idles until the next interrupt (wait instruction)
  - no simultaneous I/O processing can occur
- Asynchronous I/O: after I/O starts, control returns to the user program without waiting for I/O completion
  - system call: request to the OS to let the user wait for I/O completion
  - device status table: one entry per I/O device (type, address, state)
  - the OS indexes into the device status table (checks if the device is busy or idle, assigns a new request if idle)
DIRECT MEMORY ACCESS (DMA) STRUCTURE
- Used for high-speed I/O devices (speeds close to memory speeds)
- The device controller transfers a block of data from the device directly to main memory without CPU intervention
- Only one interrupt is generated per block (instead of one interrupt per byte)

COMPUTER ARCHITECTURE
- The way hardware components are connected together to form a computer system

COMPUTER ORGANISATION
- The structure and behaviour of the computer system as seen by the user

COMPUTER SYSTEM ARCHITECTURE
Single-processor system
- One general-purpose processor
- Special-purpose processors: disk controller, keyboard controller (device-specific), graphics controller - run a limited set of instructions, managed by the OS
- e.g. a disk-controller microprocessor receives a sequence of requests from the CPU, implements its own queue and scheduling algorithm, and relieves the main CPU
MULTIPROCESSOR SYSTEMS
- Also called parallel systems or tightly coupled systems (multiple processors)
- Advantages
  - increased throughput
  - economy of scale (cheaper than n single-processor systems)
  - increased reliability (fault-tolerant systems)
1. Asymmetric multiprocessing: each processor is assigned a specific task (boss-subordinate)
2. Symmetric multiprocessing: each processor performs all tasks (symmetric multiprocessor architecture)
DUAL-CORE DESIGN
- Multi-chip and multicore designs
- Ranges from systems with all cores on a single chip to a chassis containing multiple separate systems
- Linux command to inspect the CPUs:

  cat /proc/cpuinfo | more

BLADE SERVERS
- Multiple processor boards, I/O boards and networking boards in the same chassis
- Each board boots independently and runs its own OS
- Some blade-server boards are themselves multiprocessor: multiple independent multiprocessor systems in one chassis
CLUSTERED SYSTEMS
- Multiple systems working together (over a network)
- Share storage via a storage-area network (SAN)
- Provide a high-availability service that survives failures
  - Asymmetric clustering: one machine is in hot-standby mode and takes over if the active machine fails
  - Symmetric clustering: multiple nodes run applications and monitor each other
- Some clusters are used for high-performance computing (HPC); applications must be written to use parallelisation
- Some have a distributed lock manager (DLM) to avoid conflicts over shared data
- A SAN allows many systems to attach to a pool of storage
OS STRUCTURE - MULTIPROGRAMMING
- Multiprogramming (batch) is needed for efficiency
- A single user cannot keep the CPU and I/O devices busy at all times
- Organises jobs (code and data) so that the CPU always has one job to execute
- A subset of the total jobs is kept in memory
- One job is selected and executed via job scheduling
- When it has to wait (for I/O etc.), the OS switches to another job
- Reduces CPU idling
OS STRUCTURE - MULTITASKING
- Timesharing (multitasking): the CPU switches jobs so frequently that users can interact with each job while it is running (interactive computing)
- Response time < 1 second
- Each user has at least one program executing in memory
- CPU scheduling is needed if several jobs are ready to run at the same time
- Swapping moves processes in and out of memory
- Virtual memory: allows execution of processes that are not completely in memory

INTERRUPT DRIVEN
- Hardware interrupt: raised by one of the devices
- Software interrupt (exception or trap)
  - software error (divide by 0)
  - request for an OS service
  - problems such as infinite loops or processes modifying the OS
DUAL-MODE AND MULTIMODE OPERATION
- User mode and kernel mode
- A mode bit provided by the hardware distinguishes between them
- Privileged instructions: executable in kernel mode only
- A system call changes the mode to kernel; the return resets it to user
- Multimode: CPUs that support virtual machines add a mode for the VM manager and the guest OSes

TIMER
- Interrupts the computer after a specified period
- A variable timer is implemented with a fixed-rate clock and a counter
- On every clock tick, the counter decrements
- An interrupt occurs when the counter reaches 0; this prevents a program from running for too long
KERNEL DATA STRUCTURES

(a) Array
- Each element can be accessed directly in main memory
- For multi-byte items, the offset is (no. of bytes per item x index)
- Problem: items of varying size?

(b) Linked list
- Singly linked (SLL), doubly linked (DLL), circularly linked (CLL)
- Advantages: items of varying size; easy insertion and deletion
- Disadvantage: retrieval is O(n) for size n
- Used in kernel algorithms and to build stacks and queues

(c) Stack
- LIFO
- OS use: stack of function calls
- Parameters, local variables and the return address are pushed onto the stack
- Returning from a function call pops items from the stack

(d) Queue
- FIFO
- CPU task scheduling, printer print jobs
(e) Trees
- BST: search is O(h)
- Balanced BST: search is O(log n)
- Linux uses red-black trees for CPU scheduling (choosing the next task)
- stat <filename> (Unix command): its output includes the I/O block size (e.g. 4096)

(f) Hash Functions and Maps
- Hash functions are used to implement hash maps
- Key-value pairs
- Constant search time

(g) Bitmap
- String of n binary digits representing the status of n items
- Availability of each resource: 0 or 1
  - 0: available
  - 1: not available
- e.g. bitmap 001011101: resources 0, 1, 3, 7 are available; resources 2, 4, 5, 6, 8 are unavailable
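A minimal C sketch of tracking resource availability with a bitmap, mirroring the example above (the byte-array layout and helper names are illustrative assumptions, not from the notes):

#include <stdio.h>
#include <string.h>

#define NUM_RESOURCES 9

/* One bit per resource: 0 = available, 1 = not available. */
static unsigned char bitmap[(NUM_RESOURCES + 7) / 8];

static void set_unavailable(int i) { bitmap[i / 8] |= (1u << (i % 8)); }
static void set_available(int i)   { bitmap[i / 8] &= ~(1u << (i % 8)); }
static int  is_available(int i)    { return !(bitmap[i / 8] & (1u << (i % 8))); }

int main(void) {
    memset(bitmap, 0, sizeof(bitmap));        /* all resources start available */

    int used[] = {2, 4, 5, 6, 8};             /* mark the same resources as in the example */
    for (int k = 0; k < 5; k++)
        set_unavailable(used[k]);

    for (int i = 0; i < NUM_RESOURCES; i++)
        printf("resource %d: %s\n", i, is_available(i) ? "available" : "unavailable");

    set_available(4);                          /* release resource 4 again */
    return 0;
}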
COMPUTING ENVIRONMENTS
- Where the task is being performed

1. Traditional
- Stand-alone general-purpose machines
- Boundaries blurred by the internet
- Portals provide web access, e.g. company servers

2. Mobile
- Handheld smartphones, tablets
- Extra hardware: GPS, gyroscope
- Augmented reality (AR) applications
- IEEE 802.11 wireless, cellular data networks
3. Distributed
- A collection of separate, networked computers communicating over TCP/IP
- LAN, WAN, MAN (metropolitan), PAN (personal, e.g. Bluetooth)
- Network OS

4. Client-Server
- Servers respond to requests from clients
- Compute-server systems, file-server systems

5. Peer-to-Peer
- P2P: no fixed clients and servers
- A peer can act as a client, a server, or both
- Nodes register with a central lookup table, or use a discovery protocol (requests and responses)
- e.g. Skype (VoIP), Napster
6. Virtualisation
- A host OS runs guest OSes as applications
- Emulation: the source CPU type differs from the target CPU (e.g. PowerPC to Intel x86); code is not compiled to native code, so it is interpreted
- Virtualisation: the host OS is natively compiled for the CPU and runs guest OSes that are also natively compiled
- VMM: virtual machine manager
- JVM: the bytecode generated is not specific to any hardware
(Figure: (a) no virtual machine vs (b) virtual machine)


7. Cloud Computing
- Computing, storage and applications delivered as a service across a network
- A logical extension of virtualisation
- e.g. Amazon Elastic Compute Cloud (EC2): thousands of servers, millions of VMs
- Public cloud: available via the internet to anyone willing to pay
- Private cloud: run by a company for its own use
- Hybrid cloud: both public and private components
- Services
  - SaaS: Software as a Service (e.g. a word processor)
  - PaaS: Platform as a Service (e.g. a database server or software stack)
  - IaaS: Infrastructure as a Service (e.g. storage available for backup)
  - e.g. AWS offers services at these levels
- Cloud computing environments are composed of traditional OSes, VMMs and cloud management tools
- Load balancers spread traffic across application servers
8. Real-Time Embedded Systems
- The most prevalent form of computers
- Real-time OSes; special computing environments
- Some have a full OS, some have no OS
- Real-time OS: well-defined time constraints
  - soft real-time systems: a small delay does not invalidate the results
  - hard real-time systems: a small delay invalidates the results (deadlines must be met)

OS SERVICES
- The OS provides an environment for the execution of programs, and services to programs and users
- Services helpful to the user:

1) User Interface
- CLI, GUI, batch

2) Program Execution
- The system loads a program into memory, executes it and terminates it (normally or abnormally)
- Failure cases: errors, exceptions, abort, interrupt, exit()

3) I/O Operations
- A running program may require I/O involving a file or an I/O device
(Figure: a view of OS services)

OS DESIGN AND IMPLEMENTATION

- Policy: what to do
- Mechanism: how to do it
- Separating policy from mechanism is important; it allows maximum flexibility
- OS design is a creative task
- Implementation of OSes
  - earlier: assembly
  - then: system programming languages (Algol, PL/1)
  - now: C, C++
- Usually a mix of languages
  - lowest levels in assembly
  - main body in C
  - system programs in C, C++ and scripting languages like Perl, Python, shell scripts
- Higher-level languages are easier to port to other hardware, but slower
- Emulation: run an OS on non-native hardware

PROCESS CONCEPTS
- The OS executes various programs
  - batch systems: jobs
  - time-shared systems: user programs or tasks
- Process: a program in execution; executes sequentially
- Parts of a process (the stack and heap grow towards each other):
  - code (text) section
  - current activity: PC, processor registers
  - stack: function parameters, local variables, return address
  - data section: global variables
  - heap: dynamically allocated memory
- Program: passive entity; a file stored on disk
- Process: active entity; created when the executable file is loaded into memory
- A program is executed via the CLI, GUI etc.
- One program can correspond to several processes (multiple users executing the same program)

Example: the segments used by a small program (data segment, stack for local variables, heap for malloc):

#include <stdio.h>
#include <stdlib.h>

int x;          /* uninitialised global: bss segment */
int y = 15;     /* initialised global: data segment */

int main(int argc, char *argv[]) {
    int *values;                                /* pointer on the stack */

    values = (int *) malloc(sizeof(int) * 5);   /* heap allocation */

    for (int i = 0; i < 5; ++i) {
        values[i] = i;
    }

    free(values);
    return 0;
}

- In Linux, uninitialised static/global data is placed in the bss segment
- Output of the size command on the compiled binary:

  ➜ Desktop size a.out
  __TEXT   __DATA   __OBJC   others        dec          hex
  16384    16384    0        4295000064    4295032832   100010000
PROCESS STATES
- New: the process is being created
- Running: instructions are being executed
- Waiting: the process is waiting for an event
- Ready: the process is waiting to be assigned to a processor
- Terminated: the process has finished execution

Shell job control:
- Ctrl-Z: suspend the process and push it into the background
- Ctrl-C: abort the process
- A suspended background process can later be brought back to the foreground

(Figure: process state diagram - new → ready → running; running → waiting on an event, back to ready on an interrupt/Ctrl-Z, and to terminated on completion/Ctrl-C; the scheduler assigns a ready process to the CPU)
PROCESS CONTROL BLOCK (PCB)
- Every process has a PCB: the information associated with each process (also called a task control block)
- Process state: running, waiting etc.
- Process ID (PID)
- PC: address of the next instruction
- CPU registers: contents of all process-centric registers
- Memory-management information: memory allocated to the process
- Accounting information: CPU time used, clock time elapsed since start, time limits
- I/O status information: I/O devices allocated to the process, list of open files
- Linux: ps -aux lists the processes
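A minimal C sketch of what a PCB might contain, based on the fields listed above (the field names and types are illustrative assumptions, not the actual Linux task_struct):

#include <sys/types.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Simplified process control block: one per process. */
struct pcb {
    pid_t         pid;            /* process identifier */
    proc_state_t  state;          /* new, ready, running, waiting, terminated */
    void         *pc;             /* saved program counter */
    unsigned long registers[16];  /* saved CPU registers */
    void         *mem_base;       /* memory-management info (simplified) */
    size_t        mem_limit;
    unsigned long cpu_time_used;  /* accounting information */
    int           open_files[16]; /* I/O status: open file descriptors */
    struct pcb   *next;           /* link for the ready/device queues */
};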

(Figure: CPU switch from process to process under multiprogramming - on an interrupt or system call, the state of the running process is saved into its PCB and it is moved to the ready queue; the saved state of another process is reloaded and that process is assigned to the CPU)

PROCESS SCHEDULING
- The process scheduler selects among the available processes
- It maintains scheduling queues of processes; processes migrate among the queues
  - Job queue: set of all processes in the system
  - Ready queue: set of processes in memory, ready and waiting to execute
  - Device queues: sets of processes waiting for I/O devices
- Each queue is a linked list of PCBs

(Figure: ready queue and the various I/O device queues; processes cycle between the ready queue, the CPU and the device queues, often in round-robin fashion)
SCHEDULERS
- Short-term scheduler (CPU scheduler): selects the next process to be executed and assigns it to the CPU (from the ready queue)
  - invoked frequently (milliseconds)
  - sometimes the only scheduler in the system
- Long-term scheduler (job scheduler): selects which processes should be brought into the ready queue (from the job queue)
  - invoked infrequently (seconds, minutes)
  - controls the degree of multiprogramming
- Medium-term scheduler: used if the degree of multiprogramming needs to decrease
  - swap out: remove a process from memory and store it on disk (the backing store)
  - swap in: bring it back into memory from disk to continue execution
  - this is called swapping
- Processes
  - I/O bound: more time spent on I/O, less on the CPU
  - CPU bound: more time on the CPU (long, infrequent CPU bursts)
- The long-term scheduler should select a good process mix


CONTEXT SWITCHING
- When the CPU switches between processes, it must save the old process's state and load the saved state when switching back to it
- Context: represented in the PCB
- Time spent on context switching is wasted; it is pure overhead
- A more complex OS and PCB means a longer context switch
- The time also depends on available hardware support

OPERATIONS ON PROCESSES
- Creation: CreateProcess() on Windows, fork() on Linux
- Termination

Creation
- Parents create children, forming a tree of processes
- PID: process identifier
- Resource sharing options
  - children share all of the parent's resources
  - children share a subset of the parent's resources
  - no sharing
- Execution options
  - parent and children execute simultaneously
  - parent waits until the children terminate (sequential)

TREE OF PROCESSES IN LINUX
- daemon: a process that cannot be controlled by a terminal
- login shell: Bourne-Again shell (bash), zsh etc.
(Figure: tree of processes rooted at init, with daemons and login shells such as bash and zsh as descendants)
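On a Linux machine, the process tree described above can usually be inspected directly (assuming the pstree utility is installed):

  pstree -p

ps -el lists the same processes in flat form, with the PPID column showing each process's parent.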
fork(): PROCESS CREATION
- fork() system call: creates a new process
- exec() system call (6 variations): used after fork() to replace the process's memory space with a new program

Termination
- exit() system call: asks the OS to delete the process
  - wait() returns status data from the child to the parent
  - resources are deallocated by the OS
- A parent can terminate the execution of its children using abort()
  - the child has exceeded its allocated resources
  - the task is no longer required
  - the parent itself is terminating (exiting)
- Some OSes: if a parent terminates, all its children must terminate
  - cascading termination (children, grandchildren, ...)
- wait(): the parent waits for a child to finish executing and terminate
  - returns status info and the pid of the terminated child
  - Linux: #include <sys/wait.h>; wait(&status) stores the child's status at the given memory location
- Zombie: the child has terminated but no parent is waiting (the parent has not yet called wait())
- Orphan: the parent terminated without calling wait() while the child is still executing

/*
 * With wait(), the parent always prints after the child; without wait(),
 * the order of the two messages is not guaranteed. See man fork, man wait.
 */
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/wait.h>

int main() {
    int pid;
    pid = fork();

    if (pid < 0) {
        /*
        Fork failure:
        1. Too many processes in memory
        2. Maximum number of child processes reached
        */
        printf("Forking error\n");
        exit(1);
    }

    else if (pid == 0) {
        /* Child process */
        printf("Child process\n");
    }

    else {
        /*
        Wait for the child process to finish executing.
        NULL - irrespective of the status of the child process
        */
        wait(NULL);
        /* Parent process - pid holds the ID of the child process */
        printf("Parent process\n");
    }

    return 0;
}
exec commands

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/wait.h>

int main() {
    int pid;
    pid = fork();

    if (pid < 0) {
        /*
        1. Too many processes in memory
        2. Maximum number of child processes reached
        */
        printf("Forking error\n");
        exit(1);
    }

    else if (pid == 0) {
        /* Child process: replace its image with /bin/ls */
        printf("Child process\n");
        execl("/bin/ls", "ls", NULL);
        /* anything after a successful execl() is not run */
    }

    else {
        /*
        Wait for the child process to finish executing.
        NULL - irrespective of the status of the child process
        */
        wait(NULL);
        printf("Parent process\n");
    }

    return 0;
}

Arguments to execl(): the path, the program name, then its arguments, terminated by NULL:

    execl("/bin/ls", "ls", "-l", NULL);
CPU SCHEDULING
- Decides the order in which processes move from the ready queue to the CPU
- Single-core system: only one process runs at a time; once the CPU is free, the next process is assigned
- Multiprogramming: maximise CPU utilisation and minimise idling
  - several processes are kept in memory at once
  - when one process is waiting (e.g. for I/O), the OS takes the CPU away from it and assigns it to another process
- Multicore: keeping the CPU busy is extended to all cores
- A process alternates between CPU bursts and I/O bursts; the goal is to maximise CPU utilisation
- CPU burst time histogram: many short CPU bursts, few long CPU bursts

CPU SCHEDULER
- The short-term scheduler
- The ready queue may be a FIFO queue, an unordered queue, a linked list or a tree
- The records in the queue are PCBs

PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING
Scheduling decisions take place when a process:
1. switches from running to waiting (I/O, event wait)
2. switches from running to ready (interrupt)
3. switches from waiting to ready
4. terminates

- Scheduling under 1 and 4 only: non-preemptive - a process runs until it terminates or voluntarily switches to the wait state
- Otherwise: preemptive - a process can be asked to release the CPU involuntarily
  - race conditions when accessing shared data
  - interrupts during crucial OS tasks in kernel mode
- Preemptive scheduling is used by most OSes
- Mutex locks: prevent race conditions while accessing shared kernel data structures


DISPATCHER
- Gives control of the CPU to the process selected by the short-term scheduler
  - context switching
  - switching to user mode
  - jumping to the proper location in the user program to restart it
- Dispatch latency: the time taken by the dispatcher to stop one process and start another

SCHEDULING CRITERIA
- CPU utilisation: keep it in the range of roughly 40% to 90% (maximise)
- Throughput: number of processes completed per unit time (maximise)
- Turnaround time: time taken to execute a particular process = waiting time + CPU burst time (minimise); a key performance metric
- Waiting time: time for which a process waits in the ready queue (minimise)
- Response time: time from when a request is submitted until the first response is produced (minimise)
SCHEDULING ALGORITHMS

1. First Come, First Serve (FCFS)
- Processes are executed in order of arrival
- Convoy effect: short tasks get stuck waiting behind long tasks
- Example: burst times P1 = 24, P2 = 3, P3 = 3

If the arrival order is P1, P2, P3 (Gantt chart: P1 0-24, P2 24-27, P3 27-30):
- waiting time of P1 = 0, P2 = 24, P3 = 27
- average waiting time = (0 + 24 + 27)/3 = 17
- average turnaround time = (24 + 27 + 30)/3 = 27

If the arrival order is P2, P3, P1 (Gantt chart: P2 0-3, P3 3-6, P1 6-30):
- waiting time of P1 = 6, P2 = 0, P3 = 3; average waiting time = 3
- turnaround time of P1 = 30, P2 = 3, P3 = 6; average turnaround time = 13
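A small sketch that reproduces the FCFS averages above (burst times and arrival order are taken from the example; everything else is illustrative):

#include <stdio.h>

/* FCFS: a process waits for the total burst time of all earlier arrivals. */
int main(void) {
    int burst[] = {24, 3, 3};              /* P1, P2, P3 in arrival order */
    int n = 3;
    double total_wait = 0, total_turnaround = 0;
    int elapsed = 0;

    for (int i = 0; i < n; i++) {
        total_wait += elapsed;              /* process waits until the CPU is free */
        elapsed += burst[i];                /* process then runs to completion */
        total_turnaround += elapsed;        /* turnaround = finish time (all arrive at 0) */
    }

    printf("average waiting time    = %.2f\n", total_wait / n);        /* 17.00 */
    printf("average turnaround time = %.2f\n", total_turnaround / n);  /* 27.00 */
    return 0;
}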
2. Shortest Job First (SJF) Scheduling
- The process with the shortest next CPU burst is executed first
- Optimal: gives the minimum average waiting time
- Difficulty: knowing the length of the next CPU burst
- Example Gantt chart for SJF: average wait time = (0 + 3 + 9 + 16)/4 = 7

PREDICTING THE LENGTH OF THE NEXT CPU BURST
- Only an estimate; assumed to be similar to previous bursts
- Uses the lengths of previous bursts, combined by exponential averaging
- t_n: actual length of the nth CPU burst; τ_n: predicted value
- τ_(n+1) = α·t_n + (1 − α)·τ_n, usually with α = 0.5
Q: The algorithm is SJF with exponential averaging, τ1 = 10 and α = 0.5; the previous run times are 8, 7, 4, 16. Calculate the predictions.

- Since SJF runs the shortest job first, a possible execution order is 4, 7, 8, 16
- τ_(n+1) = α·t_n + (1 − α)·τ_n
  - τ1 = 10
  - τ2 = 0.5×4 + 0.5×10 = 7
  - τ3 = 0.5×7 + 0.5×7 = 7
  - τ4 = 0.5×8 + 0.5×7 = 7.5
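A quick sketch of the exponential-averaging update using the values from the worked example above (the loop structure is illustrative):

#include <stdio.h>

/* tau_(n+1) = alpha * t_n + (1 - alpha) * tau_n */
int main(void) {
    double alpha = 0.5;
    double tau = 10.0;                 /* tau1: initial prediction */
    double bursts[] = {4, 7, 8, 16};   /* actual burst lengths, in SJF order */

    for (int n = 0; n < 4; n++) {
        printf("tau%d = %.2f\n", n + 1, tau);
        tau = alpha * bursts[n] + (1.0 - alpha) * tau;  /* update the prediction */
    }
    printf("tau5 = %.2f\n", tau);
    return 0;
}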

EFFECT OF α ON THE PREDICTION
- α = 0: τ_(n+1) = τ_n - the most recent actual burst has no effect
- α = 1: τ_(n+1) = t_n - only the most recent actual CPU burst counts
- These are the simplified limiting cases of exponential averaging

Q: What is the average turnaround time with (i) FCFS and (ii) SJF, for burst times P1 = 8, P2 = 4, P3 = 1 (all arriving at time 0)?

(i) FCFS: order P1, P2, P3; completion times 8, 12, 13
    average turnaround time = (8 + 12 + 13)/3 = 11

(ii) SJF: order P3, P2, P1; completion times 1, 5, 13
    average turnaround time = (1 + 5 + 13)/3 = 6.33
3. Shortest Remaining Time First (SRTF) Scheduling
- Preemptive version of SJF
- Arrival time is taken into account (remaining time = burst time - time already executed)
- The currently executing process is preempted if a newly arrived process has a shorter remaining burst time

Example (arrival, burst): P1 (0, 8), P2 (1, 4), P3 (2, 9), P4 (3, 5)
- Gantt chart: P1 0-1, P2 1-5, P4 5-10, P1 10-17, P3 17-26 (P1 is preempted at t = 1)
- waiting times: P1 = 9, P2 = 0, P3 = 15, P4 = 2
- average waiting time = (9 + 0 + 15 + 2)/4 = 6.5
- turnaround time = exit time - arrival time
- waiting time = turnaround time - burst time
4. Priority Scheduling
- Priority is defined by an integer (smaller integer = higher priority)
- Preemptive (takes arrival time into account, like shortest-remaining-time) or non-preemptive
- SJF is a priority scheduling algorithm where the priority is the inverse of the (predicted) CPU burst time
- Problem: starvation - a process with very low priority may never execute
- Solution: ageing - as waiting time increases, the priority increases

Q: Non-preemptive priority scheduling (no arrival times). Find the average waiting time.
- Gantt chart: P2 0-1, P5 1-6, P1 6-16, P3 16-18, P4 18-19
- average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2
Q: Preemptive priority scheduling with arrival times.
- Gantt chart: P1 0-2, P2 2-3, P1 3-7, P5 7-12, P1 12-16, P3 16-18, P4 18-19
- waiting time of each process = completion time - arrival time - burst time
- average waiting time = [(16-0-10) + (3-2-1) + (18-4-2) + (19-5-1) + (12-7-5)]/5 = 31/5 = 6.2
Q: SJF with arrival times

Process   Burst Time   Arrival Time
P1        10           0
P2        20           2
P3        30           6

- Gantt chart: P1 0-10, P2 10-30, P3 30-60
- The shorter jobs arrive first, so no preemptions occur (0 extra context switches); preemptive SJF gives the same schedule

Q: Preemptive SJF (SRTF) for P0 (arrival 0, burst 9), P1 (arrival 1, burst 4), P2 (arrival 2, burst 9)
- Gantt chart: P0 0-1, P1 1-5, P0 5-13, P2 13-22
- waiting times: P0 = 13-0-9 = 4, P1 = 5-1-4 = 0, P2 = 22-2-9 = 11
- average waiting time = 15/3 = 5
5. Round Robin (RR)
- Each process gets a small unit of CPU time (a time quantum q), usually 10-100 ms
- After this time has elapsed, the process is preempted and added to the end of the ready queue
- If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units
- No process waits more than (n-1)q time units
- A timer interrupts every quantum to schedule the next process
- Performance depends on q
  - q very large → behaves like FIFO
  - q too small → too many context switches, high overhead
  - q should be large compared to the context-switch time

Q: q = 4, burst times P1 = 24, P2 = 3, P3 = 3
- Gantt chart: P1 0-4, P2 4-7, P3 7-10, P1 10-30
- waiting times: P1 = 6, P2 = 4, P3 = 7
- average waiting time = (6 + 4 + 7)/3 = 5.67
- RR gives a higher average turnaround time than SJF, but better response time

Q: q = 2 for the previous question
- Gantt chart: P1 0-2, P2 2-4, P3 4-6, P1 6-8, P2 8-9, P3 9-10, then P1 runs alone in 2-unit slices until 30

TIME QUANTUM AND CONTEXT SWITCHING
- The shorter q is, the more context switches occur for a given burst
- (Figure: average turnaround time vs. time quantum; with a very large quantum, RR degenerates to FIFO)
- Windows and Linux use RR-based scheduling
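A compact sketch that simulates the q = 4 round-robin example above and prints the average waiting time (process data from the example; the loop is a simplified stand-in for a real ready queue, with all processes arriving at time 0):

#include <stdio.h>

#define N 3

int main(void) {
    int burst[N]     = {24, 3, 3};   /* P1, P2, P3 */
    int remaining[N] = {24, 3, 3};
    int quantum = 4, time = 0, done = 0;
    int finish[N] = {0};

    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                 /* run the process for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {       /* process finished */
                finish[i] = time;
                done++;
            }
        }
    }

    double total_wait = 0;
    for (int i = 0; i < N; i++)
        total_wait += finish[i] - burst[i];   /* waiting = turnaround - burst, arrival = 0 */
    printf("average waiting time = %.2f\n", total_wait / N);  /* 5.67 */
    return 0;
}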

6. Multilevel Queue Scheduling
- The ready queue is partitioned into separate queues, e.g.
  - foreground (interactive)
  - background (batch)
- A process is permanently assigned to one queue
- Each queue has its own scheduling algorithm
  - foreground: RR
  - background: FCFS
- Scheduling must also be done between the queues
  - fixed priority scheduling (possibility of starvation)
  - time slicing between queues (e.g. 80% of CPU time to the foreground queue)
7. Multilevel Feedback Queue Scheduling
- A process can move between the various queues; ageing can be implemented this way
- A multilevel feedback queue scheduler is defined by:
  - the number of queues
  - the scheduling algorithm for each queue
  - the method used to determine when to upgrade a process
  - the method used to determine when to demote a process
  - the method used to determine which queue a process enters when it needs service

Example with three queues:
- Q0: RR with quantum 8 ms
- Q1: RR with quantum 16 ms
- Q2: FCFS

Scheduling:
- A new job enters Q0, which is served FCFS
- The job receives 8 ms; if it does not complete, it is moved to Q1
- Q1 is also served FCFS; jobs receive an additional 16 ms
- If still not complete, the job is preempted and moved to Q2
- Q2 is FCFS
MULTIPLE PROCESSOR SCHEDULING
- Asymmetric scheduling: a single master processor assigns processes to the other processors; only the master accesses the system data structures (communicates with the OS)
- Symmetric scheduling (SMP): each processor has its own ready queue and scheduling algorithm, or all processors share a common ready queue
- Modern OSes (Windows, Linux, MacOS) support SMP

SYMMETRIC MULTIPROCESSING (SMP)
- Each processor has its own cache, populated with the running process's data
- If a process migrates to another core, the old cache contents are flushed and the new core's cache must be refilled
(Figure: core 1's cache, populated with P1's data, is flushed when P1 migrates to another core)

PROCESSOR AFFINITY
- A process has affinity for the processor on which it is currently running
- Soft affinity: the OS tries to keep the process on the same core (tries not to migrate it), but this is not guaranteed
- Hard affinity: a process can specify the set of processors on which it may run, and will not be migrated off them
- Linux provides soft affinity by default; the sched_setaffinity() system call supports hard affinity
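A minimal sketch of pinning the calling process to CPU 0 with Linux's sched_setaffinity() (Linux-specific; error handling kept short):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);        /* clear the CPU set */
    CPU_SET(0, &mask);      /* allow only CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("process pinned to CPU 0\n");
    return 0;
}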

ACCESS TO MEMORY
- The main memory architecture (e.g. memory local to a chip) can affect processor affinity
- Scheduling and memory-placement algorithms work together

LOAD BALANCING
- In SMP, the workload should be balanced across all CPUs (each CPU runs its own tasks)
- Push migration: a periodic task checks the load on each CPU and pushes tasks from an overloaded CPU to other CPUs
- Pull migration: an idle processor pulls a waiting task from a busy processor
- Load balancing counteracts the benefits of processor affinity
MULTICORE PROCESSORS
- Multiple cores on a single chip: faster, less power
- Multiple hardware threads per core
- Memory stall: a request to memory for data takes time and would leave the CPU idle; another hardware thread can compute while the previous thread is fetching from memory
(Figure: with C = compute cycle and M = memory-stall cycle, a single thread alternates C M C M ...; with two hardware threads, the core interleaves thread 0's and thread 1's compute cycles so it is rarely idle)

CHIP MULTITHREADING (CMT)
- Each core is assigned multiple hardware threads
- Intel calls this hyperthreading
- A quad-core system with 2 hardware threads per core appears to the OS as 8 logical cores
- cat /proc/cpuinfo | more shows the logical cores and their physical core IDs

MULTITHREADING
1. Coarse-grained
- A thread executes on the processor until a long-latency event such as a memory stall occurs
- The cost of switching is high: the state must be saved
2. Fine-grained
- Switching happens at a finer level of granularity
- The cost of switching is lower
MULTITHREADED MULTICORE PROCESSOR
- Two levels of scheduling
  1. The OS decides which software thread runs on a logical CPU
  2. Each core decides which hardware thread to run on the physical core

REAL-TIME CPU SCHEDULING
- Embedded systems, real-time CPUs
1) Soft real-time systems: no guarantee as to when a critical task will be scheduled
2) Hard real-time systems: a task must be serviced by its deadline
- Interrupt latency: time from the arrival of an interrupt to the start of its service routine
- Dispatch latency: time to switch the CPU from one process to another
  - conflict phase of dispatch latency:
    - preemption of any process running in kernel mode
    - release by low-priority processes of resources needed by the high-priority process

REAL-TIME SCHEDULING ALGORITHMS

1. Priority-Based Scheduling
- Preemptive
- Guarantees only soft real-time, not hard real-time
- Processes require the CPU at constant intervals
  - t: processing time, d: deadline, p: period
  - 0 ≤ t ≤ d ≤ p
  - rate of a periodic task = 1/p

2. Rate Monotonic Scheduling
- Priority is the inverse of the period (shorter period = higher priority)

Q: The periods of P1 and P2 are 50 and 100 respectively (p1 = 50, p2 = 100). The processing times are t1 = 20 for P1 and t2 = 35 for P2. The deadline for each process requires that it complete its CPU burst by the start of its next period.

- CPU utilisation = Σ t_i / p_i
  - U1 = 20/50 = 0.4, U2 = 35/100 = 0.35, total = 0.75

Case 1: P2 given higher priority than P1 (priority not based on period)
- Gantt chart: P2 0-35, P1 35-55
- P1 needs to complete before 50, so it misses its deadline

Case 2: P1 given higher priority than P2 (rate monotonic)
- Gantt chart: P1 0-20, P2 20-50, P1 50-70, P2 70-75, idle 75-100, P1 100-120, ...
- No missed deadlines

Q: Consider processes with p1 = 50, t1 = 25 and p2 = 80, t2 = 35.
- With P2 given higher priority: P2 0-35, P1 35-60; P1 misses its deadline at 50
- With P1 given higher priority (rate monotonic): P2 cannot finish its 35 ms before its deadline at 80 and misses it
- Both priority orders fail: the CPU utilisation (25/50 + 35/80 = 0.94) is too high

- Worst-case schedulable CPU utilisation for N processes under rate monotonic scheduling = N(2^(1/N) − 1)
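A small check of the rate-monotonic utilisation bound N(2^(1/N) − 1) against the two examples above (a sketch; the bound and utilisations are computed exactly as in the notes). Compile with -lm for the math library.

#include <stdio.h>
#include <math.h>

int main(void) {
    int N = 2;
    double bound = N * (pow(2.0, 1.0 / N) - 1.0);    /* ~0.828 for N = 2 */

    double u_example1 = 20.0 / 50 + 35.0 / 100;      /* 0.75: within the bound      */
    double u_example2 = 25.0 / 50 + 35.0 / 80;       /* ~0.94: exceeds the bound    */

    printf("RM bound (N=2)        = %.3f\n", bound);
    printf("example 1 utilisation = %.3f (%s)\n", u_example1,
           u_example1 <= bound ? "guaranteed schedulable" : "no RM guarantee");
    printf("example 2 utilisation = %.3f (%s)\n", u_example2,
           u_example2 <= bound ? "guaranteed schedulable" : "no RM guarantee");
    return 0;
}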

3. Earliest Deadline First (EDF) Scheduling
- Priorities are assigned dynamically according to deadlines
- The earlier the deadline, the higher the priority
- Soft real-time

Q: Consider the same processes with p1 = 50, t1 = 25 and p2 = 80, t2 = 35; under EDF the priorities are recomputed at each release according to the nearest deadline.

4. Proportional Share Scheduling
- T shares are allocated among all processes
- An application with N shares receives N/T of the total processor time
- e.g. total T = 100 shares and 3 processes A, B, C
  - A → 50, B → 15, C → 20 (85 shares allocated)
  - no more than 15 additional shares can be allocated to new processes

5. POSIX Real-Time Scheduling
- POSIX.1b standard
- The API provides functions for managing real-time threads
- Scheduling classes
  1. SCHED_FIFO: threads are scheduled FCFS using a FIFO queue; no time slicing among threads of equal priority
  2. SCHED_RR: similar to SCHED_FIFO, but time slicing occurs among threads of equal priority
- Get / set the POSIX scheduling policy of a thread-attributes object:

  pthread_attr_getschedpolicy(pthread_attr_t *attr, int *policy)
  pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)

- $ cat /proc/1/limits shows a process's resource limits, including its maximum real-time priority
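A minimal sketch using these calls to query the default policy and request SCHED_FIFO for a new thread (setting a real-time policy typically needs root privileges, so the sketch falls back to a default thread if creation fails):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

void *worker(void *arg) {
    printf("thread running\n");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    struct sched_param param;
    pthread_t tid;
    int policy;

    pthread_attr_init(&attr);
    pthread_attr_getschedpolicy(&attr, &policy);          /* default is usually SCHED_OTHER */
    printf("default policy = %d\n", policy);

    /* request FCFS real-time scheduling for threads created with this attr */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    param.sched_priority = 1;                              /* SCHED_FIFO priorities start at 1 */
    pthread_attr_setschedparam(&attr, &param);

    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        /* falls back here without real-time privileges */
        printf("could not create SCHED_FIFO thread; creating a default thread\n");
        pthread_create(&tid, NULL, worker, NULL);
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}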

LINUX AND WINDOWS SCHEDULING
- Linux: Completely Fair Scheduler (CFS) - see slides; implemented using red-black trees
- Windows: see slides
INTER-PROCESS COMMUNICATION (IPC)
- Processes are either independent (do not affect each other) or cooperating (can affect each other)
- Reasons for cooperating processes
  - information sharing
  - computation speedup: divide a task across cores
  - modularity
  - convenience
- Two communication models
  - message passing
  - shared memory: a common shared segment (buffer) between the processes

PRODUCER-CONSUMER MODEL
- Producer: the process that writes to the buffer
- Consumer: the process that reads from the buffer

1. Unbounded buffer
- The producer can keep producing data and writing it into the buffer
- The consumer cannot read from an empty buffer and must wait
- No practical limit on the buffer size

2. Bounded buffer
- The producer waits when the buffer is full
- The consumer waits when the buffer is empty
- Implemented as a circular array

#define BUFFER_SIZE 10

typedef struct {
...
} item;

item buffer[BUFFER_SIZE];

int in = 0;
int out = 0;

Circular buffer of BUFFER_SIZE slots (indices 0 to 9):
- empty: in == out
- full: (in + 1) % BUFFER_SIZE == out (one slot is always left unused to distinguish full from empty)

Producer

item next_produced;

while (true) {
/* produce an item in next_produced */

while (((in + 1) % BUFFER_SIZE) == out); /* do nothing */

buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}

consumer

item next_consumed;

while (true) {
while (in == out); /* do nothing */

next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
/* consume the item in next_consumed */
}
SHARED MEMORY
- Two or more processes have access to the same region of memory
- Fastest form of IPC
- Access must be synchronised (e.g. between a client and a server)
- Semaphores are used to synchronise shared memory access

MESSAGE PASSING
- Processes communicate with each other and synchronise their actions
- IPC operations: send(message), receive(message)
- If P and Q wish to communicate, they must
  1. establish a communication link between them
  2. exchange messages via send and receive
- Implementation issues for the link: capacity, direction of messages etc.
- Physical link: shared memory, hardware bus, network
- Logical link properties: direct or indirect, synchronous or asynchronous, automatic or explicit buffering

DIRECT COMMUNICATION
- Processes are named explicitly
  - send(P, message)
  - receive(Q, message)
- Links are established automatically; one link per pair of processes
- Usually bidirectional
unique ID
Indirect COMMUNICATION
I

messages sent ( received to/from maiboxes (


ports)

if share mailbox
only processes a
they communicate

,
can

each have several links and each link


pair may can
.

connect several
process


link : Uni or bidirectional

create new
port / mailbox

°
issues : P, sends to shared mailbox with Rz
, Rz ; who

gets message?
BLOCKING AND NON-BLOCKING
- Blocking (synchronous)
  - blocking send: the sender is blocked until the message is received
  - blocking receive: the receiver is blocked until a message is available
- Non-blocking (asynchronous)
  - non-blocking send: the sender sends the message and continues
  - non-blocking receive: the receiver receives either a valid message or null
- If both send and receive are blocking, there is a rendezvous between sender and receiver

BUFFERING
- A queue of messages is attached to the link; a temporary queue for messages in transit
- Implemented in one of three ways
  1. zero capacity: no queue; the sender waits for the receiver (rendezvous)
  2. bounded capacity: finite length of n messages; the sender waits if the queue is full (automatic buffering)
  3. unbounded capacity: infinite length; the sender never waits
PIPES
- Half-duplex IPC
- Ordinary (unnamed) pipes require a parent-child relationship; named pipes do not
- Two kinds: ordinary (unnamed) pipes and named pipes

ORDINARY PIPES
- Producer-consumer style: the producer writes to one end, the consumer reads from the other
- fd[0]: read end, fd[1]: write end
- (File descriptors 0, 1, 2 are stdin, stdout, stderr)
- Half-duplex (unidirectional); two-way communication requires two pipes
- Linux: pipe(); Windows: anonymous pipes

NAMED PIPES
- More powerful than ordinary pipes
- Communication can be bidirectional and no parent-child relationship is required
- Several processes can use the same pipe
- Two-way communication may still use two pipes
- FIFO semantics: once data is read, it is removed from the pipe
- UNIX (FIFOs): mkfifo(), open(), read(), write(), close()
  - byte-oriented
  - half-duplex
  - same machine only
- Windows: CreateNamedPipe(), ConnectNamedPipe(), ReadFile(), WriteFile(), DisconnectNamedPipe()
  - full-duplex
  - same or different machines
  - byte- or message-oriented
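A minimal sketch of a UNIX named pipe using the calls above; the FIFO path /tmp/demo_fifo and the forked reader are illustrative assumptions (normally a separate program would open the other end):

#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";   /* illustrative FIFO name */
    char buf[6] = {0};

    mkfifo(path, 0666);                    /* create the named pipe in the filesystem */

    if (fork() == 0) {
        /* Child: reader end */
        int fd = open(path, O_RDONLY);     /* blocks until a writer opens the FIFO */
        read(fd, buf, 5);
        printf("child read: %s\n", buf);
        close(fd);
        _exit(0);
    }

    /* Parent: writer end */
    int fd = open(path, O_WRONLY);         /* blocks until a reader opens the FIFO */
    write(fd, "hello", 5);
    close(fd);
    wait(NULL);                            /* let the child finish */
    unlink(path);                          /* remove the FIFO once done */
    return 0;
}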

- The shell pipeline operator | connects the write end of one command to the read end of the next (e.g. searching for a pattern with grep)
- man 2 pipe documents the pipe() system call; man 1 entries document shell commands
- A named pipe (FIFO) appears as a fifo file in the filesystem, unlike a normal (anonymous) pipe
- The following examples were compiled and run on Ubuntu/MacOS
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main() {
    int fd[2];                 /* fd[0]: read end, fd[1]: write end */
    char buf[5];
    pipe(fd);
    write(fd[1], "hello", 5);  /* write into the pipe */
    read(fd[0], buf, 5);       /* read it back */
    write(1, buf, 5);          /* echo to stdout */
    printf("\n");
    return 0;
}
One-way communication between parent and child:

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main() {
    int fd[2];
    char buf[12];
    int pid;

    pipe(fd);
    pid = fork();

    if (pid < 0) {
        printf("error\n");
        exit(1);
    }

    else if (pid == 0) {
        /* Child: write into the pipe */
        write(fd[1], "I am child\n", 12);
    }

    else {
        /* Parent: read from the pipe and echo to stdout */
        read(fd[0], buf, 12);
        write(1, buf, 12);
    }
    return 0;
}

Two-way communication (two pipes):

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main() {
    int fd[2], fd1[2];
    char buf[12], buf1[7];
    int pid;

    pipe(fd);      /* child -> parent */
    pipe(fd1);     /* parent -> child */
    pid = fork();

    if (pid < 0) {
        printf("error\n");
        exit(1);
    }

    else if (pid == 0) {
        /* Child: write on fd, read the reply on fd1 */
        close(fd[0]);
        write(fd[1], "I am child\n", 12);
        read(fd1[0], buf1, 7);
        write(1, buf1, 7);
    }

    else {
        /* Parent: read from fd, reply on fd1 */
        close(fd[1]);
        read(fd[0], buf, 12);
        write(fd1[1], "parent\n", 7);
        write(1, buf, 12);
    }
    return 0;
}
