OS slides
Introduction
Dr. Barsha Mitra
BITS Pilani CSIS Dept., BITS Pilani, Hyderabad Campus
What is an Operating System
Reference Books:
R1. W. Stallings, “Operating Systems: Internals and Design Principles”, 6th
edition, Pearson, 2009.
R2. Tanenbaum, Woodhull, “Operating Systems Design & Implementation”,
3rd edition, Pearson, 2006.
R3. Dhamdhere, “Operating Systems: A Concept-Based Approach”, 2nd
edition, McGraw-Hill, 2009.
R4. Robert Love, “Linux Kernel Development”, 3rd edition, Pearson, 2010.
BITS Pilani, Hyderabad Campus
Topics to be covered
• Introduction
• OS Structures
• Processes
• Threads
• CPU Scheduling
• Process Synchronization
• Deadlocks
• Main Memory Management
• Virtual Memory
• Mass Storage
• File System Interface
• File System Implementation
• I/O Systems
• Protection
• Chamber Consultation
• Notices
• Make-up Policy
OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair resource use
OS is a control program
Controls execution of user programs to prevent errors and improper use of
the computer (e.g., a program should not be able to delete part of the hard
drive or interfere with another program)
Several jobs are kept in memory
One job is selected and run via job scheduling
When it has to wait for I/O, the OS switches to another job
Peer-to-peer Computing
does not distinguish between clients and servers
nodes join and may also leave the P2P network
advantage over client-server systems
Choice of Interface
• smaller in size
• slower message passing
exec()
loads a binary file into memory and starts execution
destroys previous memory image
It allows the child to perform tasks different from its parent
A call to exec() does not return unless an error occurs, hence any statement written after execlp()
will not be executed in that process
wait()
parent can issue wait() to move out of the ready queue until a child is done
NOTE: Assume a parent has 3 child processes and it has called wait(). As soon as any one
child terminates, the parent resumes execution, i.e., wait() does not wait for all children to
terminate; even one child termination is enough for wait() to return. In case two or more
children have terminated, the pid of one of those child processes is returned to the parent
(which one is unspecified)
Process Creation
#include <sys/types.h> //pid_t datatype is in this header file
#include <stdio.h>
#include <unistd.h>
#include<sys/wait.h>
int main()
{
pid_t pid;
pid = fork(); /* fork a child process: this creates a child process,
returns 0 to the child and returns the pid of the child to the parent */
if (pid < 0) {
    fprintf(stderr, "fork failed\n");
    return 1;
}
if (pid == 0)
    printf("Child Process : %d\n", pid);
else
    printf("I am Parent : %d\n", pid);
return 0;
}
Output:
Child Process : 0
I am Parent : 1234
Another Possible output:
I am Parent : 1234
Child Process : 0
Example 3
Output:
Hello
Hello
Hello
Hello
Hello
Hello
Hello
Hello
In shared memory:
• One process creates a shared memory segment and other processes link/attach to that segment to
communicate with each other (creation, attaching and detaching require system calls, but writing to and
reading from shared memory do not)
• Two processes are not allowed to write simultaneously to the shared memory segment, but two or more can
read simultaneously
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

/* Consumer */
item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
// READER
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct my_msgbuf {
    long mtype;
    char mtext[200];
};

int main(void)
{
    struct my_msgbuf buf;
    int msqid;
    long m;
    key_t key;
    if ((key = ftok(".", 'M')) == -1) {   /* same key as the writer */
        perror("ftok");
        exit(1);
    }
    if ((msqid = msgget(key, 0644 | IPC_CREAT)) == -1) {
        perror("msgget");
        exit(1);
    }
    while (1) {
        /* type 0: receive the first message on the queue, whatever its type */
        if (msgrcv(msqid, &buf, sizeof(buf.mtext), 0, 0) == -1) {
            perror("msgrcv");
            exit(1);
        }
        m = buf.mtype;
        printf("Reader: %ld %s\n", m, buf.mtext);
    }
    return 0;
}
// WRITER
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
...
printf("Enter lines of text, ^D to quit:\n");
buf.mtype = 1; /* setting msg type */
Data parallelism – distributes subsets of the same data across multiple cores,
same operation on each subset
Many-to-One
One-to-One
Many-to-Many
Signals are used in UNIX systems to notify a process that a particular event has
occurred
The signal is delivered to a process
When delivered, signal handler is used to process signals
Synchronous and asynchronous signals
Synchronous signals
illegal memory access, div. by 0
delivered to the same process that performed the operation generating the signal
Asynchronous signals
generated by an event external to a running process
the running process receives the signal asynchronously
Ctrl + C, timer expiration
If a thread has cancellation disabled, cancellation remains pending until the thread enables it
Deferred cancellation is the default type
Cancellation only occurs when the thread reaches a cancellation point
Establish a cancellation point by calling pthread_testcancel()
If cancellation request is pending, cleanup handler is invoked to release any
acquired resources
Thread-local storage (TLS) allows each thread to have its own copy
of data
TLS data are unique to each thread
Process  AT  BT  FT  TAT  WT           RT
P1       0   7   7   7    0            0
P2       8   3   20  12   17 - 8 = 9   9
P3       3   4   11  8    7 - 3 = 4    4
P4       5   6   17  12   11 - 5 = 6   6
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
Use FCFS in case of tie
SJF is optimal – gives minimum average waiting time for a given
set of processes
The difficulty is knowing the length of the next CPU request
For long-term (job) scheduling in a batch system, use the process time
limit that a user specifies when the job is submitted
Process AT BT FT TAT WT RT
P1 0 7
P2 0 3
P3 0 4
P4 0 6
Process AT BT FT TAT WT RT
P1 0 7
P2 8 3
P3 3 4
P4 5 6
Process AT BT FT TAT WT RT
P1 0 7
P2 8 3
P3 3 2
P4 5 6
fixed priority preemptive scheduling among queues
Note: only when queue 1 is empty will lower queues be executed;
a process arriving in q0 can preempt a process running in lower queues
API allows specifying either PCS or SCS during thread creation, contention
scope values
PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
Can be limited by OS – Linux and Mac OS X only allow
PTHREAD_SCOPE_SYSTEM
• On systems implementing the M:M model, PTHREAD_SCOPE_PROCESS
policy schedules user-level threads onto available LWPs
• PTHREAD_SCOPE_SYSTEM policy will create and bind an LWP for each
user-level thread effectively mapping a user-level thread to a kernel level
thread
Pthread Scheduling
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with counter = 5 initially:
S0 producer execute register1 = counter register1 = 5
S1 producer execute register1 = register1 + 1 register1 = 6
S2 consumer execute register2 = counter register2 = 5
S3 consumer execute register2 = register2 – 1 register2 = 4
S4 producer execute counter = register1 counter = 6
S5 consumer execute counter = register2 counter = 4
Critical Section Problem
Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);

Pj:
do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);
    /* critical section */
    flag[j] = false;
    /* remainder section */
} while (true);
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block(); /* move this process from running to the waiting state */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P); /* move P from waiting to the ready queue */
    }
}
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
Producer:
do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);
    wait(mutex);
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);
    signal(full);
} while (true);

Consumer:
do {
    wait(full);
    wait(mutex);
    ...
    /* remove an item from buffer to next_consumed */
    ...
    signal(mutex);
    signal(empty);
    ...
    /* consume the item in next_consumed */
    ...
} while (true);
Integer read_count initialized to 0, keeps track of how many processes are reading
the shared object

/* reader exit section */
wait(mutex);
read_count--;
if (read_count == 0)
    signal(rw_mutex);
signal(mutex);
} while (true);
Dining-Philosophers Problem
wait(chopstick[i]);
wait(chopstick[(i + 1) % 5]);
// eat
signal(chopstick[i]);
signal(chopstick[(i + 1) % 5]);
// think
} while (TRUE);
What is a knot?
A collection of vertices and edges s.t. every vertex in the knot has outgoing
edges and all such edges terminate at other vertices in the knot
A strongly connected subgraph of a directed graph s.t. starting from any
node in the subset it is impossible to leave the knot by following the edges
of the graph
Deadlock Handling
No Preemption –
If a process that is holding some resources requests another resource that cannot
be immediately allocated to it, then all resources currently being held are released
Preempted resources are added to the list of resources for which the process is
waiting
Process will be restarted only when it can regain its old resources, as well as the
new ones that it is requesting
When a process P1 requests some resources and they are allocated to some other
waiting process P2, then preempt the desired resources from P2 and give them to P1
If the resources are not allocated to a waiting process, then P1 must wait
While waiting P1’s resources may be preempted
Circular Wait – impose a total ordering of all resource types, and require
that each process requests resources in an increasing order of enumeration
define a 1:1 function F : R → N (N is the set of natural numbers)
Say a process Pi has requested a number of instances of Ri
Later, Pi can request resources of type Rj iff F(Rj) > F(Ri)
Alternatively, if Pi requests an instance of Rj then Pi must have released all
instances of Ri s.t. F(Ri) >= F(Rj)
Several instances of the same resource type must be requested in a single
request
Proof by contradiction
Multiple instances
Safety algorithm:
1. Let Work and Finish be vectors of length m and n. Initialize Work = Available and
Finish[i] = false for i = 0, 1, ..., n-1
2. Find an i such that Finish[i] == false and Needi <= Work. If no such i exists, go to step 4
3. Work = Work + Allocationi; Finish[i] = true; go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
Banker’s Algorithm Example
5 processes P0 through P4;
3 resource types: A (10 instances), B (5 instances), and C (7 instances)
Snapshot at time T0:
The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety
criteria
Resource-Request Algorithm
Requesti = request vector for process Pi. If Requesti [j] = k then process Pi wants k instances
of resource type Rj
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim
2. If Requesti <= Available, go to step 3. Otherwise Pi must wait, since resources are not
available
3. Pretend to allocate requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
If safe, the resources are allocated to Pi
If unsafe, Pi must wait, and the old resource-allocation state is restored
Process   Allocation   Max      Need
          A B C        A B C    A B C
P3        2 1 1        2 2 2    0 1 1
P4        0 0 2        4 3 3    4 3 1
Detection algorithm
Recovery scheme
Rollback – return to some safe state, restart the process from that state; total
rollback or partial rollback
Worst-fit: Allocate the largest hole; must also search entire list
Produces the largest leftover hole
First-fit and best-fit better than worst-fit in terms of speed and storage
utilization
Fragmentation
First Fit:
300KB 600KB 350KB 200KB 750KB 125KB
Best Fit:
300KB 600KB 350KB 200KB 750KB 125KB
Worst Fit
300KB 600KB 350KB 200KB 750KB 125KB
n = 2 and m = 4
32-byte memory and 4-byte pages
• Memory structures for paging can get huge using straightforward methods
• Costly in terms of memory
• Don't want to allocate the page table contiguously in main memory
• Hierarchical Paging
• Hashed Page Tables
• Inverted Page Tables
Pager guesses which pages will be used before swapping out again
Pager brings in only those pages into memory
How to determine that set of pages?
Need new MMU functionality to implement demand paging
If pages needed are already memory resident
If page needed and not memory resident
Need to detect and load the page into memory from storage
Reference string (3 frames): 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

Optimal replacement (9 page faults):
7  7  7  2  2  2  2  2  7
   0  0  0  0  4  0  0  0
      1  1  3  3  3  1  1

LRU replacement (12 page faults):
7  7  7  2  2  4  4  4  0  1  1  1
   0  0  0  0  0  0  3  3  3  0  0
      1  1  3  3  2  2  2  2  2  7
Equal allocation – For example, if there are 100 frames (after allocating
frames for the OS) and 5 processes, give each process 20 frames
If a process does not have “enough” pages, the page-fault rate is
very high
Page fault to get page
Replace existing frame
But quickly need replaced frame back
This leads to:
Low CPU utilization
Operating system thinking that it needs to increase the degree of
multiprogramming
Another process added to the system
Page-Fault Frequency (PFF)
Define upper and lower limits on the page-fault rate
If the page-fault rate is too low, take away a frame from the process
If the page-fault rate is too high, allocate one more frame to the process
Note:
1) allocate buddy best fit, then from left chunk to right chunk
2) If you have an appropriately sized chunk currently available, do not split or merge any other
chunks; just allocate the available chunk (example in next slide when allocating F)
Wrong!
4K (A) | 2K (B) | 4K | 2K (E) | 4K
BITS Pilani, Pilani Campus
Advantages and Disadvantages
Advantages –
• Easy to implement a buddy system (Linux)
• Allocates a block of the correct size
• It is easy to merge adjacent holes
• Fast to allocate and de-allocate memory
Disadvantages –
• It requires all allocation units to be powers of two
• It leads to internal fragmentation
Total head movement = 236 cylinders

C-SCAN: (199 – 53) + (199 – 0) + (37 – 0) = 382 cylinders
Usually better performance/response time than SCAN
Disk Management
To use a disk to hold files, the operating system still needs to record its own
data structures on the disk
Partition the disk into one or more groups of cylinders, each treated as
a logical disk
Logical formatting – creation of a file system; the OS stores the initial file
system data structures on the disk; these data structures include maps of free
and allocated space and an initial empty directory
• Naming problem
• Grouping problem
index table
• Cluster pointers into a single block called the index block
• Need index table
• Random access
• Dynamic access without external fragmentation, but has the
overhead of the index block
Bit vector for n blocks (blocks 0, 1, 2, ..., n-1):
bit[i] = 1 if block[i] is free
bit[i] = 0 if block[i] is occupied
Counting
Because space is frequently contiguously used and freed
Keep address of first free block and count of following free blocks
Free space list then has entries containing addresses and counts
OPERATING SYSTEMS (CS F372)
I/O Systems
Dr. Barsha Mitra
BITS Pilani CSIS Dept., BITS Pilani, Hyderabad Campus
Overview
• I/O management is a major component
• Ports, buses, device controllers connect to various devices
• Device drivers encapsulate device details
• Common concepts – signals from I/O devices interface with the computer
• Port – connection point for device
• Bus – daisy chain (device A connected to B, B to C, C to the computer) or shared direct access
• PCI bus common in PCs
• expansion bus connects relatively slow devices
• SCSI BUS
• Controller (host adapter) – electronics that operate port, bus, device
• Sometimes integrated on a chip
• Sometimes separate circuit board plugging into the computer; contains processor,
microcode, private memory
I/O Hardware
Types of I/O
• Blocking
• Nonblocking – e.g., input from keyboard and mouse
• Asynchronous – e.g., disk and network I/O
Other Aspects
• Error Handling
• OS can recover from disk read, device unavailable, transient write failures
• Retry a read or write, for example
• Most return an error number or code when I/O request fails
• System error logs hold problem reports
• I/O Protection
• User process may accidentally or purposefully attempt to disrupt normal
operation via illegal I/O instructions
• All I/O instructions defined to be privileged
• I/O must be performed via system calls
OPERATING SYSTEMS (CS F372)
Protection
Dr. Barsha Mitra
BITS Pilani CSIS Dept., BITS Pilani, Hyderabad Campus
Goals of Protection