OS Unit 1
MISSION:
M1: To impart outcome-based education for emerging technologies in the field of computer science and engineering.
M2: To provide opportunities for interaction between academia and industry.
M3: To provide a platform for lifelong learning by embracing change in technologies.
M4: To develop an aptitude for fulfilling social responsibilities.
CO–PO/PSO mapping:
      PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1:   3   2   2   2   2   1   1   1   1   1    1    3    1    2
CO2:   3   3   3   3   2   1   1   1   1   2    1    3    2    2
CO3:   3   3   3   2   3   1   1   1   1   2    1    3    2    2
CO4:   3   3   3   3   2   1   1   1   2   2    2    3    2    2
Syllabus (extract):
1. Introduction: Objective, scope and outcome of the course.
Introduction and History of Operating Systems: Structure and operations; processes and files; processor management.
4. File management: file concept, types and structures, directory structure.
■ Kernel – the one program running at all times (all else being application
programs).
Simple/Monolithic Structure
In this case, the operating system has no well-defined internal structure.
It is written for functionality and efficiency (in terms of time and space).
DOS and UNIX are examples of such systems.
Layered Approach
The modularization of a system can be done in many ways.
In the layered approach, the operating system is broken up into a number of layers
or levels, each built on top of the layer below it.
The bottom layer is the hardware; the highest layer is the user interface.
A typical OS layer consists of data structures and a set of routines that can be
invoked by higher-level layers.
Er.pushpendra singh chundawat
Virtual Machines
• The computer system is made up of layers.
• The hardware is the lowest level in all such systems.
• The kernel running at the next level uses the hardware instructions to
create a set of system call for use by outer layers.
• The system programs above the kernel can therefore use either system calls or
hardware instructions; in effect they treat the hardware and the system calls as
though they were both at the same level.
• In some systems, application programs can call the system programs. The
application programs view everything below them in the hierarchy as though it
were part of the machine itself.
• This layered approach is taken to its logical conclusion in the concept of a
virtual machine (VM).
• The VM operating system for IBM systems is the best example of VM
concept.
Virtual Machines Cont..
There are two primary advantages to using virtual machines:
First, by completely protecting system resources, the virtual machine
provides a robust level of security.
Second, the virtual machine allows system development to be done
without disrupting normal system operation.
Batch Systems
The user did not interact directly with the system; instead, the user
prepared a job (which consisted of the program, the data, and some control
information about the nature of the job in the form of control cards) and
submitted it to the computer operator.
The job was in the form of punch cards, and at some later time, the output
was generated by the system. The output consisted of the result of the
program, as well as a dump of the final memory and register contents for
debugging.
Such systems in which the user does not get to interact with his jobs and
jobs with similar needs are executed in a “batch”, one after the other, are
known as batch systems.
The operating system picks and executes from amongst the available jobs
in memory.
The job has to wait for some task such as an I/O operation to complete.
In this system, a user can run one or more processes at the same time.
Unix was initially written in assembly language. It was later rewritten in C,
and Unix developed into a large, complex family of inter-related operating
systems. The major categories include BSD and Linux.
“UNIX” is a trademark of The Open Group which licenses it for use with
any operating system that has been shown to conform to their definitions.
Examples of Operating System Cont..
macOS
macOS is developed by Apple Inc. and is available on all Macintosh
computers.
It was formerly called "Mac OS X" and later "OS X".
macOS is based on an operating system developed in the 1980s by NeXT, a
company purchased by Apple in 1997.
Linux
Linux is a Unix-like operating system that was developed without any Unix
code. Linux uses an open-license model, and its code is available for study
and modification. It has superseded Unix on many platforms. Linux is
commonly used on smartphones and smartwatches.
Examples of Operating System Cont..
Microsoft Windows
Microsoft Windows is the most popular and most widely used operating system.
It was designed and developed by Microsoft Corporation.
The current version of the operating system is Windows 10.
Microsoft Windows was first released in 1985.
Windows 95, released in 1995, used MS-DOS only as a bootstrap.
Er.pushpendra singh chundawat
Cont..
Note that the stack and the heap start at opposite ends of the process's
free space and grow towards each other.
If they should ever meet, then either a stack overflow error will occur, or
a call to new or malloc will fail due to insufficient available memory.
The PCB also records the CPU state of the process; key among these items
are the program counter and the values of all program registers.
CPU registers: like the program counter, the CPU registers must be saved
and restored when a process is swapped into and out of the CPU.
Process vs Thread?
The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.
Threads are not independent of one another like processes are, and as a result
threads share with other threads their code section, data section, and OS resources
(like open files and signals).
But, like process, a thread has its own program counter (PC), register set, and stack
space.
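The sharing described above can be seen directly with POSIX threads: two threads touch the same global variable because they live in one address space. This is a minimal sketch; the names `counter`, `worker`, and `run_shared_counter` are illustrative, not from the notes.

```c
/* Sketch: two POSIX threads sharing one global counter.
 * Because threads share the process's data section, both
 * see the same `counter`; the mutex keeps increments atomic. */
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                 /* shared data section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Runs both threads and returns the final counter value. */
long run_shared_counter(void)
{
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

Had the two workers been separate processes instead of threads, each would have incremented its own private copy of `counter`.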
( Note that these objectives can be conflicting. In particular, every time the
system steps in to swap processes it takes up time on the CPU to do so,
which is thereby "lost" from doing any useful productive work. )
For servers (or old mainframes), scheduling is indeed important and these
are the systems you should think of.
Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU
using time multiplexing.
The OS maintains a separate queue for each of the process states and
PCBs of all processes in the same execution state are placed in the same
queue.
When the state of a process is changed, its PCB is unlinked from its
current queue and moved to its new state queue.
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
Not Running
Processes that are not running are kept in queue, waiting for their turn to execute.
Process Scheduling
1. New: a newly created process, i.e. a process that is being created.
2. Ready: After creation process moves to Ready state, i.e. the process is ready
for execution.
3. Run: Currently running process in CPU (only one process at a time can be
under execution in a single processor).
6. Suspended Ready: When the ready queue becomes full, some processes are
moved to suspended ready state
In simple terms, a context switch is like unloading a process from the running
state and loading another from the ready state. A context switch occurs, for
example, when an interrupt arrives.
Context Switch vs Mode Switch
A mode switch occurs when CPU privilege level is changed, for example when a
system call is made or a fault occurs.
The kernel works in a more privileged mode than a standard user task.
If a user process wants to access things which are only accessible to the kernel, a
mode switch must occur.
The currently executing process need not be changed during a mode switch.
A mode switch must typically occur before a context switch can occur. Only the
kernel can cause a context switch.
CPU-Bound vs I/O-Bound Processes:
A CPU-bound process requires more CPU time or spends more time in the
running state.
An I/O-bound process requires more I/O time and less CPU time.
Schedulers
The schedulers' main task is to select the jobs to be submitted into the system
and to decide which process to run. There are three types:
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Comparison among Schedulers
Context Switching
• A context switch is the mechanism to store and restore the state or context of a CPU
in Process Control block so that a process execution can be resumed from the same
point at a later time.
• Using this technique, a context switcher enables multiple processes to share a single
CPU.
• When the scheduler switches the CPU from executing one process to executing
another, the state of the currently running process is stored into its process
control block.
• After this, the state for the process to run next is loaded from its own PCB and used to
set the PC, registers, etc. At that point, the second process can start executing.
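The save-and-restore described in the bullets above can be sketched as a toy model; the struct fields and the `demo_switch` helper are illustrative inventions, not a real kernel's data structures.

```c
/* Toy model of a context switch: the "CPU" state is copied into
 * the outgoing process's PCB, then the incoming PCB's saved state
 * is loaded onto the CPU.  Fields are illustrative only. */
struct cpu_state { unsigned long pc; unsigned long regs[8]; };
struct pcb       { int pid; struct cpu_state ctx; };

static struct cpu_state cpu;       /* the single CPU's registers */

void context_switch(struct pcb *out, struct pcb *in)
{
    out->ctx = cpu;                /* save state of running process */
    cpu = in->ctx;                 /* restore state of next process */
}

/* Demo: switch from P1 (running at pc = 100) to P2 (saved pc = 200).
 * Returns the CPU's pc after the switch; P1's pc is preserved in
 * its PCB so it can resume later from the same point. */
unsigned long demo_switch(void)
{
    struct pcb p1 = { .pid = 1 };
    struct pcb p2 = { .pid = 2, .ctx = { .pc = 200 } };
    cpu.pc = 100;
    context_switch(&p1, &p2);
    return cpu.pc;                 /* now 200; p1.ctx.pc holds 100 */
}
```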
Priority Scheduling
Non-preemptive algorithms are designed so that once a process enters the running
state, it cannot be preempted until it completes its allotted time.
The processor should know in advance how much time the process will take.
Cont
Processes with same priority are executed on first come first served basis.
Priority can be decided based on memory requirements, time requirements
or any other resource requirement.
Shortest Remaining Time First: the processor is allocated to the job closest to
completion, but it can be preempted by a newer ready job with a shorter time to
completion.
This is impossible to implement in interactive systems where the required CPU
time is not known.
It is often used in batch environments where short jobs need to be given preference.
Once a process is executed for a given time period, it is preempted and other
process executes for a given time period.
They make use of other existing algorithms to group and schedule jobs with
common characteristics.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound
jobs in another queue. The Process Scheduler then alternately selects jobs from
each queue and assigns them to the CPU based on the algorithm assigned to the
queue.
CPU Scheduling in OS
Arrival Time: Time at which the process arrives in the ready queue.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
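The two formulas can be captured directly in code. This is a minimal sketch; `struct job` and the sample values in the comment are illustrative, not from the notes.

```c
/* Sketch: turnaround and waiting time from the formulas above.
 *   Turnaround Time = Completion Time - Arrival Time
 *   Waiting Time    = Turnaround Time - Burst Time */
struct job { int arrival, burst, completion; };

int turnaround_time(struct job j) { return j.completion - j.arrival; }
int waiting_time(struct job j)    { return turnaround_time(j) - j.burst; }
```

For example, a job that arrives at time 0 with burst time 4 and completes at time 9 has a turnaround time of 9 and a waiting time of 5.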
Comparison among Scheduling Algorithm
FCFS can cause long waiting times, especially when the first job takes too
much CPU time.
Both SJF and Shortest Remaining time first algorithms may cause
starvation. Consider a situation when the long process is there in the
ready queue and shorter processes keep coming.
If the time quantum for Round Robin scheduling is very large, then it behaves
the same as FCFS scheduling.
What is Thread
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, data
segment, and open files. When one thread alters a shared memory item, all other
threads see the change.
Threads have been successfully used in implementing network servers and web servers.
They also provide a suitable foundation for parallel execution of applications on shared
memory multiprocessors.
Process vs Thread:
1. A process is heavyweight, or resource intensive. A thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system. Thread switching does not need to interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own memory and file resources. All threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked. While one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources. Multiple threaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others; with threads, one thread can read, write or change another thread's data, making communication efficient.
Types of Thread
Threads are implemented in two ways: user level threads and kernel level threads.
User Level Threads
In this case, thread management is done in user space by a thread library. The
thread library contains code for creating and destroying threads, for passing
messages and data between threads, for scheduling thread execution, and for
saving and restoring thread contexts.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel.
All of the threads within an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for
individual threads within the process.
Scheduling by the Kernel is done on a thread basis. The Kernel performs thread
creation, scheduling and management in Kernel space. Kernel threads are generally
slower to create and manage than the user threads.
If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating system provide a combined user level thread and Kernel
level thread facility.
In a combined system, multiple threads within the same application can run in
parallel on multiple processors and a blocking system call need not block the
entire process.
Interprocess Communication (IPC)
• This allows a program to handle many user requests at the same time.
• Since even a single user request may result in multiple processes running in the
operating system on the user's behalf, the processes need to communicate with
each other.
• Each IPC method has its own advantages and limitations so it is not unusual for
a single program to use all of the IPC methods.
Approaches to IPC
File : A record stored on disk, or a record synthesized on demand by a file
server, which can be accessed by multiple processes.
Approaches Cont..
Pipe :
A unidirectional data channel.
Data written to the write end of the pipe is buffered by the operating system until it is
read from the read end of the pipe.
Two-way data streams between processes can be achieved by creating two pipes,
one for each direction.
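The pipe mechanism just described can be sketched with the POSIX `pipe()`, `fork()`, and `read()`/`write()` calls. This is a minimal sketch with deliberately thin error handling; the `pipe_demo` name and the "hello" payload are illustrative.

```c
/* Sketch: a unidirectional pipe between parent and child.
 * The parent writes to fd[1]; the kernel buffers the data
 * until the child consumes it from fd[0]. */
#include <unistd.h>
#include <string.h>
#include <sys/wait.h>

int pipe_demo(void)
{
    int fd[2];
    char buf[16] = {0};
    if (pipe(fd) < 0) return -1;       /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                 /* child: the reader */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        _exit(n == 5 && strcmp(buf, "hello") == 0 ? 0 : 1);
    }
    close(fd[0]);                      /* parent: the writer */
    write(fd[1], "hello", 5);
    close(fd[1]);
    int status = 0;
    wait(&status);
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```

Note the child closes the unused write end first: a reader sees end-of-file only when every write descriptor for the pipe has been closed.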
Shared Memory :
Multiple processes are given access to the same block of memory which creates a
shared buffer for the processes to communicate with each other.
Message queue :
A data stream similar to a socket, but which usually preserves message boundaries.
Typically implemented by the operating system, they allow multiple processes to read
and write to the message queue without being directly connected to each other.
The message-queue abstraction provides two operations:
• send(message)
• receive(message)
Why IPC
Unicast and Multicast IPC
• int shmctl(int shmid, int cmd, struct shmid_ds *buf); where cmd is one of the
following:
• IPC_STAT
• IPC_SET
• IPC_RMID
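The System V calls above fit together as follows: `shmget` creates a segment, `shmat` maps it into the address space, and `shmctl(IPC_RMID)` marks it for removal. A minimal sketch, assuming a Linux-like system and omitting most error handling; `shm_demo` is an illustrative name.

```c
/* Sketch: create, attach, use, detach, and remove a System V
 * shared-memory segment. */
#include <sys/ipc.h>
#include <sys/shm.h>
#include <string.h>

int shm_demo(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid < 0) return -1;
    char *mem = shmat(shmid, NULL, 0);     /* attach the segment */
    if (mem == (char *)-1) return -1;
    strcpy(mem, "shared");                 /* visible to any process attached */
    int ok = (strcmp(mem, "shared") == 0);
    shmdt(mem);                            /* detach from this process */
    shmctl(shmid, IPC_RMID, NULL);         /* mark segment for removal */
    return ok ? 0 : -1;
}
```

In a real IPC scenario a second process would attach the same `shmid` and read the buffer; here both steps happen in one process to keep the sketch short.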
Characteristics of Semaphores
A semaphore is a mechanism that can be used to provide synchronization of tasks.
Semaphores can be implemented using atomic test-and-set operations or by briefly
disabling interrupts, so that the wait and signal operations themselves execute
atomically.
Types of Semaphores
The two common kinds of semaphores are
Counting semaphores
Binary semaphores.
Counting Semaphores
This type of semaphore uses a count that allows a resource to be acquired or
released numerous times.
If the initial count = 0, the counting semaphore is created in the unavailable
state.
However, if the count is > 0, the semaphore is created in the available state,
and the number of tokens it has equals its count.
Binary Semaphores
Binary semaphores are quite similar to counting semaphores, but their value is
restricted to 0 and 1.
In this type of semaphore, the wait operation succeeds only if semaphore = 1, and
the signal operation succeeds when semaphore = 0. Binary semaphores are easier to
implement than counting semaphores.
Signal operation
This type of semaphore operation is used to control the exit of a task from a
critical section. It increases the value of the argument by 1, and is denoted
as V(S). The classical busy-wait definitions of the two operations are:

P(S)                  // wait: block until S > 0, then take a token
{
    while (S <= 0)
        ;             // busy-wait
    S--;
}

V(S)                  // signal: return a token
{
    S++;
}
Synchronization Hardware and Software
Sometimes the problems of the critical section are also resolved by hardware. Some
operating systems offer a lock functionality: a process acquires a lock when
entering the critical section and releases the lock after leaving it.
So when another process tries to enter the critical section, it will not be able
to enter, as the section is locked. It can do so only when the lock is free, by
acquiring the lock itself.
Mutex Locks
In this approach, in the entry section of code, a LOCK is acquired over the
critical resources used inside the critical section. In the exit section, that
lock is released.
It uses two atomic operations, 1) wait and 2) signal, for process synchronization.
While the lock is held by a higher priority task, a lower priority task waits and
resumes when the higher priority task finishes its execution.
In non-preemptive scheduling, the process that keeps the CPU busy releases the
CPU either by switching context or by terminating.
It is the only method usable across various hardware platforms, because it does
not need specialized hardware (for example, a timer) the way preemptive
scheduling does.
Non-preemptive scheduling occurs when a process voluntarily enters the wait
state or terminates.
Advantages of non-preemptive scheduling:
• Offers low scheduling overhead.
• Tends to offer high throughput.
• It is a conceptually very simple method.
• Needs fewer computational resources for scheduling.
Disadvantages:
• It can lead to starvation, especially for real-time tasks.
• Bugs can cause a machine to freeze up.
• It can make real-time and priority scheduling difficult.
Example: Shortest Job First
Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1
continues execution.
Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1
continues execution.
Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is
compared. Process P2 is executed because its burst time is the lowest.
Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.
Example: Round Robin (time quantum = 2)
Step 1) The execution begins with process P1, which has a burst time of 4. Here,
every process executes for 2 seconds. P2 and P3 are still in the waiting queue.
Step 2) At time = 2, P1 is added to the end of the queue and P2 starts executing.
Step 3) At time = 4, P2 is preempted and added to the end of the queue. P3 starts executing.
Step 4) At time = 6, P3 is preempted and added to the end of the queue. P1 starts executing.
Step 5) At time = 8, P1 (burst time 4) has completed execution. P2 starts executing.
Step 6) P2 has a burst time of 3 and has already executed for 2 time units. At
time = 9, P2 completes execution. Then P3 executes until it completes.
Step 7) Let's calculate the average waiting time for above example.
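The Round Robin walkthrough above (P1 = 4, P2 = 3, quantum = 2, all arriving at t = 0) can be checked with a small simulation; assuming P3 also has a burst time of 3, the completion times come out 8, 9, and 10, matching the steps. The `round_robin` helper is an illustrative sketch, not part of the notes.

```c
/* Sketch: Round Robin simulation for processes that all arrive
 * at t = 0.  completion[i] receives the finish time of process i.
 * Assumes n <= 16. */
void round_robin(const int burst[], int n, int quantum, int completion[])
{
    int remaining[16];
    int t = 0, done = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {   /* rotate through the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;                 /* run for one quantum (or less)  */
            remaining[i] -= slice;
            if (remaining[i] == 0) { completion[i] = t; done++; }
        }
    }
}
```

With burst times {4, 3, 3} and quantum 2, the schedule is P1, P2, P3, P1, P2, P3, exactly as in the steps above.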
KEY DIFFERENCES
In Preemptive Scheduling, the CPU is allocated to each process for a specific
time period, while in Non-preemptive Scheduling the CPU is allocated to a
process until it terminates.
Preemptive scheduling has the overhead of switching processes between the ready
state and the running state, while non-preemptive scheduling has no such
switching overhead.
This can lead to inconsistency of shared data: a change made by one process is
not necessarily reflected when other processes access the same shared data.
How Process Synchronization Works?
Sections of a Program
Here, are four essential elements of the critical section:
Entry Section: The part of the process which decides the entry of a particular process.
Critical Section: This part allows one process to enter and modify the shared variable.
Exit Section: The exit section allows the other processes that are waiting in the
entry section to enter the critical section. It also ensures that a process that
has finished its execution is removed through this section.
Remainder Section: All other parts of the code, which are not in the critical,
entry, or exit sections, are known as the remainder section.
The entry to the critical section is handled by the wait() function, and it is represented as
P().
The exit from a critical section is controlled by the signal() function, represented as V().
Other processes, waiting to execute their critical section, need to wait until the current
process completes its execution.
Progress: If no process is in its critical section and some processes want to
enter, then only the processes not in their remainder section may take part in
deciding who goes in, and the decision must be made in finite time.
Bounded Waiting: When a process requests entry to its critical section, there is
a bound on the number of times other processes may enter their critical sections
before that request is granted; once the limit is reached, the system must allow
the waiting process to enter its critical section.
Solutions To The Critical Section
In process synchronization, the critical section plays the main role, so the
critical-section problem must be solved.
Here are some widely used methods to solve the critical section problem.
Peterson Solution
Peterson's solution is a widely used solution to the critical section problem.
The algorithm was developed by the computer scientist Gary L. Peterson, which is
why it is named Peterson's solution.
In this solution, when one process is executing in its critical section, the
other process executes only the rest of its code, and vice versa. This method
ensures that only a single process runs in its critical section at a given time.
A process that enters the critical section would, while exiting, change TURN to
another number from the list of ready processes.
Example: if turn is 2, then P2 enters the critical section, and while exiting
sets turn = 3, so P3 breaks out of its wait loop.
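The text above describes a multi-process TURN variant; the classic Peterson algorithm is for two processes. The sketch below uses C11 atomics because plain shared variables are not safe on modern hardware; the names `peterson_worker` and `run_peterson` are illustrative.

```c
/* Sketch: Peterson's two-process mutual exclusion with C11
 * seq_cst atomics standing in for the textbook's shared
 * flag[] and turn variables. */
#include <stdatomic.h>
#include <pthread.h>

static atomic_int flag[2];      /* flag[i] = 1: process i wants in */
static atomic_int turn;         /* whose turn it is to wait        */
static long shared = 0;         /* protected by the algorithm      */

static void *peterson_worker(void *arg)
{
    int me = *(int *)arg, other = 1 - me;
    for (int i = 0; i < 50000; i++) {
        atomic_store(&flag[me], 1);              /* entry section */
        atomic_store(&turn, other);
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                                    /* busy-wait     */
        shared++;                                /* critical section */
        atomic_store(&flag[me], 0);              /* exit section  */
    }
    return NULL;
}

/* Runs both competitors; returns the final shared count. */
long run_peterson(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    shared = 0;
    pthread_create(&t0, NULL, peterson_worker, &id0);
    pthread_create(&t1, NULL, peterson_worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared;
}
```

If mutual exclusion failed, some of the 100,000 increments of `shared` would be lost; the algorithm guarantees none are.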