OS Full Notes

UNIT 1: AN INTRODUCTION TO OPERATING SYSTEMS

Application software performs specific tasks for the user.


System software operates and controls the computer system and provides a platform to run
application software.

An operating system is a piece of software that manages all the resources of a computer
system, both hardware and software, and provides an environment in which the user can
execute his/her programs in a convenient and efficient manner, by hiding the underlying
complexity of the hardware and acting as a resource manager.

Why OS?
1. What if there is no OS?
   a. Bulky and complex applications (hardware-interaction code must live in each app's code base).
   b. Resource exploitation by a single app.
   c. No memory protection.
2. What is an OS made up of?
   a. A collection of system software.

Functions of an operating system:
- Provides access to the computer hardware.
- Acts as an interface between the user and the computer hardware.
- Resource management (aka arbitration): memory, device, file, security, process, etc.
- Hides the underlying complexity of the hardware (aka abstraction).
- Facilitates execution of application programs by providing isolation and protection.

Layering: User -> Application programs -> Operating system -> Computer hardware.

The operating system provides the means for proper use of the resources in the operation of
the computer system.
LEC-2: Types of OS

OS goals –

• Maximum CPU utilization

• Less process starvation

• Higher priority job execution

Types of operating systems:
- Single-process operating system [MS-DOS, 1981]
- Batch-processing operating system [ATLAS, Manchester Univ., late 1950s – early 1960s]
- Multiprogramming operating system [THE, Dijkstra, early 1960s]
- Multitasking operating system [CTSS, MIT, early 1960s]
- Multi-processing operating system [Windows NT]
- Distributed system [LOCUS]
- Real-time OS [ATCS]

Single-process OS: only 1 process executes at a time from the ready queue. [Oldest]

Batch-processing OS:
1. Firstly, the user prepares his job using punch cards.
2. Then, he submits the job to the computer operator.
3. The operator collects the jobs from different users and sorts the jobs into batches with
   similar needs.
4. Then, the operator submits the batches to the processor one by one.
5. All the jobs of one batch are executed together.

Drawbacks:
- Priorities cannot be set; a job arriving with higher priority cannot jump ahead.
- May lead to starvation. (A batch may take a long time to complete.)
- CPU may become idle during I/O operations.

Multiprogramming increases CPU utilization by keeping multiple jobs (code and data)
in memory, so that the CPU always has one to execute in case some job gets busy with I/O.
- Single CPU.
- Context switching for processes.
- Switch happens when the current process goes to a wait state.
- CPU idle time is reduced.

Multitasking is a logical extension of multiprogramming.
- Single CPU.
- Able to run more than one task seemingly simultaneously.
- Context switching and time sharing used.
- Increases responsiveness.
- CPU idle time is further reduced.

Multi-processing OS: more than 1 CPU in a single computer.
- Increases reliability: if 1 CPU fails, another can work.
- Better throughput.
- Less process starvation (if 1 CPU is working on some process, another process can be
  executed on another CPU).
Distributed OS:
- OS manages many bunches of resources: >=1 CPUs, >=1 memories, >=1 GPUs, etc.
- Loosely coupled, autonomous, interconnected computer nodes.
- A collection of independent, networked, communicating, and physically separate
  computational nodes.

RTOS (Real-time OS):
- Real-time, error-free computations within tight time boundaries.
- E.g., air traffic control systems, robots.
LEC-3: Multi-Tasking vs Multi-Threading

Program: A program is an executable file which contains a certain set of instructions written
to complete a specific job or operation on your computer.
• It's compiled code, ready to be executed.
• Stored on disk.

Process: A program under execution. Resides in the computer's primary memory (RAM).

Thread:
• Single sequence stream within a process.
• An independent path of execution in a process.
• Light-weight process.
• Used to achieve parallelism by dividing a process's tasks into independent paths of execution.
• E.g., multiple tabs in a browser; a text editor (when you are typing in an editor, spell-checking,
  formatting of text and saving the text are done concurrently by multiple threads).

Multi-Tasking vs Multi-Threading:

Multi-Tasking:
- The execution of more than one task simultaneously is called multitasking.
- Concept of more than 1 process being context switched.
- No. of CPUs: 1.
- Isolation and memory protection exist. The OS must allocate separate memory and
  resources to each program that the CPU is executing.

Multi-Threading:
- A process is divided into several different sub-tasks called threads, each of which has its
  own path of execution. This concept is called multithreading.
- Concept of more than 1 thread; threads are context switched.
- No. of CPUs: >= 1. (Better to have more than 1.)
- No isolation and memory protection; resources are shared among the threads of a process.
  The OS allocates memory to a process; multiple threads of that process share the same
  memory and resources allocated to the process.

Thread Scheduling:
Threads are scheduled for execution based on their priority. Even though threads are
executing within the runtime, all threads are assigned processor time slices by the operating
system.

Difference between Thread Context Switching and Process Context Switching:

Thread context switching:
- OS saves the current state of the thread & switches to another thread of the same process.
- Doesn't include switching of the memory address space. (But program counter, registers
  & stack are switched.)
- Fast switching.
- CPU's cache state is preserved.

Process context switching:
- OS saves the current state of the process & switches to another process by restoring its state.
- Includes switching of the memory address space.
- Slow switching.
- CPU's cache state is flushed.
LEC-4: Components of OS

1. Kernel: A kernel is that part of the operating system which interacts directly with
the hardware and performs the most crucial tasks.
a. Heart of OS/Core component
b. Very first part of OS to load on start-up.
2. User space: where application software runs; apps don't have privileged access to the
   underlying hardware. They interact with the kernel.
   a. GUI
   b. CLI

A shell, also known as a command interpreter, is that part of the operating system that receives
commands from the users and gets them executed.

Functions of the Kernel:

1. Process management:
   a. Scheduling processes and threads on the CPUs.
   b. Creating & deleting both user and system processes.
   c. Suspending and resuming processes.
   d. Providing mechanisms for process synchronization and process communication.
2. Memory management:
   a. Allocating and deallocating memory space as per need.
   b. Keeping track of which parts of memory are currently being used and by which process.
3. File management:
   a. Creating and deleting files.
   b. Creating and deleting directories to organize files.
   c. Mapping files onto secondary storage.
   d. Backup support onto stable storage media.
4. I/O management: to manage and control I/O operations and I/O devices.
   a. Buffering (data copy between two devices), caching and spooling.
      i. Spooling
         1. Between two jobs/devices of differing speeds.
         2. E.g., print spooling and mail spooling.
      ii. Buffering
         1. Within one job.
         2. E.g., YouTube video buffering.
      iii. Caching
         1. Memory caching, web caching, etc.

Types of Kernels:
1. Monolithic kernel
   a. All OS functions are in the kernel itself.
   b. Bulky in size.
   c. Memory required to run is high.
   d. Less reliable; if one module crashes, the whole kernel is down.
   e. High performance, as communication is fast (fewer user-mode/kernel-mode switches).
   f. E.g., Linux, Unix, MS-DOS.
2. Microkernel
   a. Only major functions are in the kernel:
      i. Memory mgmt.
      ii. Process mgmt.
   b. File mgmt. and I/O mgmt. are in user space.
   c. Smaller in size.
   d. More reliable.
   e. More stable.
   f. Performance is slower.
   g. Overhead of switching between user mode and kernel mode.
   h. E.g., L4Linux, Symbian OS, MINIX, etc.
3. Hybrid kernel:
   a. Advantages of both worlds. (E.g., file mgmt. in user space and the rest in kernel space.)
   b. Combined approach.
   c. Speed and design of monolithic.
   d. Modularity and stability of micro.
   e. E.g., macOS, Windows NT/7/10.
   f. IPC also happens, but with lower overheads.
4. Nano/Exo kernels…

Q. How will communication happen between user mode and kernel mode?
Ans. Inter-process communication (IPC).
1. Two processes execute independently, with independent memory spaces (memory
   protection), but some may need to communicate to work.
2. Done by shared memory and message passing.
LEC-5: System Calls

How do apps interact with the kernel? -> Using system calls. The system call is the medium
through which we switch from user mode to kernel mode.

E.g., mkdir laks
- mkdir indirectly calls the kernel and asks the file mgmt. module to create a new
  directory (aim: create a directory).
- mkdir is just a wrapper over the actual system call. (The GUI "right click -> New Folder"
  button makes the same thing easy; in the CLI, the mkdir command runs instead.)
- mkdir interacts with the kernel using system calls.

E.g., creating a process.
- User executes a program. (User space)
- It makes a system call. (US)
- The exec system call creates the process. (Kernel space)
- Control returns to US.
(Between user space and kernel space sits a layer called the system call interface (SCI).
The command goes from user space through the SCI, which locates the corresponding
system call function sitting in kernel space and runs it; for mkdir, a node for your folder
then gets added under the parent directory.)

Transitions from US to KS are done by software interrupts.

System calls are implemented in C.

A system call is a mechanism using which a user program can request a service from the kernel
which it does not itself have the permission to perform.
User programs typically do not have permission to perform operations like accessing I/O devices
or communicating with other programs.

System calls are the only way through which a process can go into kernel mode from user mode.

Full picture for mkdir: the user app issues the mkdir command; glibc wraps it into the actual
system call; a software interrupt triggers the switch to kernel mode; the SCI finds the system
call corresponding to mkdir, which is mapped in kernel space, and executes it; the folder is
then created on disk.
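A minimal C sketch of the "mkdir is just a wrapper" idea, assuming Linux on x86-64 (some
newer architectures only provide SYS_mkdirat); the directory names are arbitrary:

    #include <stdio.h>
    #include <sys/stat.h>    /* mkdir() glibc wrapper */
    #include <unistd.h>      /* syscall() */
    #include <sys/syscall.h> /* SYS_mkdir number */

    int main(void) {
        /* The usual way: the glibc wrapper issues the system call for us. */
        if (mkdir("laks", 0755) == -1)
            perror("mkdir wrapper");

        /* Equivalent, without the wrapper: trap into the kernel directly. */
        if (syscall(SYS_mkdir, "laks2", 0755) == -1)
            perror("raw SYS_mkdir");
        return 0;
    }

Both calls end up in the same kernel code path; the wrapper only sets up the arguments and
the software interrupt for you.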
Types of System Calls: (just read through these lists once)
1) Process Control
a. end, abort
b. load, execute
c. create process, terminate process
d. get process attributes, set process attributes
e. wait for time
f. wait event, signal event
g. allocate and free memory

2) File Management
a. create file, delete file
b. open, close
c. read, write, reposition
d. get file attributes, set file attributes

3) Device Management
a. request device, release device
b. read, write, reposition
c. get device attributes, set device attributes
d. logically attach or detach devices

4) Information maintenance
a. get time or date, set time or date
b. get system data, set system data
c. get process, file, or device attributes
d. set process, file, or device attributes

5) Communication Management
a. create, delete communication connection
b. send, receive messages related to inter process communication
c. transfer status information
d. attach or detach remote devices

Examples of Windows & Unix system calls:

Category                  Windows                          Unix
Process Control           CreateProcess()                  fork()
                          ExitProcess()                    exit()
                          WaitForSingleObject()            wait()

File Management           CreateFile()                     open()
                          ReadFile()                       read()
                          WriteFile()                      write()
                          CloseHandle()                    close()
                          SetFileSecurity()                chmod()  (changes a file's mode)
                          InitializeSecurityDescriptor()   umask()
                          SetSecurityDescriptorGroup()     chown()

Device Management         SetConsoleMode()                 ioctl()
                          ReadConsole()                    read()
                          WriteConsole()                   write()

Information Management    GetCurrentProcessID()            getpid()
                          SetTimer()                       alarm()
                          Sleep()                          sleep()

Communication             CreatePipe()                     pipe()
                          CreateFileMapping()              shmget()
                          MapViewOfFile()                  mmap()

(kill -9 <pid> -> forceful kill of a process.)
LEC-6: What happens when you turn on your computer?

A 5-step process:
i. PC on: the power supply sends electricity to the components (motherboard, hard disk, etc.).
ii. The CPU initializes itself and looks for a firmware program (BIOS) stored in the BIOS chip.
   (A Basic Input-Output System chip is a ROM chip found on the motherboard that allows the
   user to access & set up the computer system at the most basic level.)
   1. In modern PCs, the CPU loads UEFI (Unified Extensible Firmware Interface) instead;
      BIOS was used earlier, UEFI is used now.
iii. The CPU runs the BIOS, which tests and initializes system hardware. The BIOS loads
   configuration settings (from a memory area backed by the CMOS battery). If something is
   not appropriate (like missing RAM), an error is thrown and the boot process is stopped.
   This is called the POST (Power-On Self-Test) process: the essential hardware pieces each
   get their corresponding tests.
   (UEFI can do a lot more than just initialize hardware; it's really a tiny operating system.
   For example, Intel CPUs have the Intel Management Engine. This provides a variety of
   features, including powering Intel's Active Management Technology, which allows for
   remote management of business PCs.)
iv. The BIOS/UEFI hands off responsibility for booting your PC to your OS's bootloader,
   i.e., it now looks for the program that will actually boot the OS.
   1. The BIOS looks at the MBR (master boot record), a special boot sector at the beginning
      of a disk. The MBR contains code that loads the rest of the operating system, known as
      a "bootloader." The BIOS executes the bootloader, which takes it from there and begins
      booting the actual operating system—Windows or Linux, for example. (UEFI uses an EFI
      system partition instead: rather than keeping the code at the 0th index of the disk,
      the disk is partitioned—like C and D drives—and the bootloader is kept on its own
      partition.)
   In other words, the BIOS or UEFI examines a storage device on your system to look for a
   small program, either in the MBR or on an EFI system partition, and runs it.
v. The bootloader is a small program that has the large task of booting the rest of the
   operating system (it boots the kernel, then user space). Windows uses a bootloader named
   Windows Boot Manager (Bootmgr.exe), most Linux systems use GRUB, and Macs use something
   called boot.efi. Each OS has its own bootloader program; the OS initializes from it and
   takes over the remaining work.
Lec-7: 32-Bit vs 64-Bit OS

(1 byte = 8 bits.)

1. A 32-bit OS has 32-bit registers, and it can access 2^32 unique memory addresses. i.e., 4GB of
physical memory.
2. A 64-bit OS has 64-bit registers, and it can access 2^64 unique memory addresses. i.e.,
17,179,869,184 GB of physical memory.
3. 32-bit CPU architecture can process 32 bits of data & information.
4. 64-bit CPU architecture can process 64 bits of data & information.
5. Advantages of 64-bit over the 32-bit operating system:
a. Addressable Memory: 32-bit CPU -> 2^32 memory addresses, 64-bit CPU -> 2^64
memory addresses.
   b. Resource usage: Installing more RAM on a system with a 32-bit OS doesn't impact
      performance beyond 4 GB, since the extra memory can't be addressed. However, upgrade
      that system with excess RAM to the 64-bit version of Windows, and you'll notice a
      difference.
   c. Performance: All calculations take place in the registers. When you're performing math
      in your code, operands are loaded from memory into registers. So, having larger
      registers allows you to perform larger calculations at the same time.
      A 32-bit processor can process 4 bytes of data in 1 instruction cycle, while a 64-bit
      processor can process 8 bytes of data in 1 instruction cycle.
      (In 1 sec, there can be thousands to billions of instruction cycles, depending upon the
      processor design.)
   d. Compatibility: A 64-bit CPU can run both a 32-bit and a 64-bit OS, while a 32-bit CPU
      can only run a 32-bit OS.
   e. Better graphics performance: 8-byte graphics calculations make graphics-intensive apps
      run faster.

Summary notes:
- A 32-bit register is like an array of bits 0 to 31; a 64-bit register, 0 to 63.
1. A 64-bit system can address far more unique memory locations than a 32-bit one.
2. Adding two 64-bit numbers is hard for a 32-bit CPU: it must process the lower 32 bits
   first, then the upper 32 bits, whereas a 64-bit CPU does it in 1 CPU cycle, so 64-bit is
   better.
3. The last address of a 32-bit address space is 8 hex F's (0xFFFFFFFF); a 64-bit address
   space extends far beyond that.
4. On a 64-bit system, all installed RAM beyond 4 GB remains usable, while a 32-bit system
   can only ever address 4 GB of it.
5. Performance-wise, 64-bit is also better, as it takes fewer CPU cycles: it does double the
   amount of work per CPU cycle.
6. Graphics calculations are better on a 64-bit system as compared to 32-bit.
Lec-8: Storage Devices Basics

What are the different memories present in the computer system?

(Figure: memory hierarchy — CPU at the top, then registers, cache, main memory (RAM), and
secondary storage at the bottom. The closer to the CPU, the faster and more expensive per
byte: registers are the costliest, built from the best materials, with the highest access
speed and the smallest storage size; cache is additional memory holding temporary data and
frequently re-used programs; RAM is the main memory, where the CPU finds its instructions;
files and programs live in secondary storage. Primary storage is volatile — everything in it
is lost when the computer shuts down — while secondary memory retains everything.)
eH
1. Register: Smallest unit of storage. It is a part of CPU itself.
A register may hold an instruction, a storage address, or any data (such as bit sequence or individual
characters).
Registers are a type of computer memory used to quickly accept, store, and transfer data and
instructions that are being used immediately by the CPU.
2. Cache: Additional memory system that temporarily stores frequently used instructions and
   data for quicker processing by the CPU.
3. Main memory: RAM. The CPU fetches its instructions and working data from here.
4. Secondary memory: Storage media on which the computer can store data & programs (files),
   e.g., disk.

Comparison:
1. Cost:
   a. Primary storage is costly.
   b. Registers are the most expensive, due to expensive semiconductors & labour.
   c. Secondary storage is cheaper than primary.
2. Access speed:
   a. Primary has a higher access speed than secondary memory.
   b. Registers have the highest access speed, then comes cache, then main memory.
3. Storage size:
   a. Secondary has more space.
4. Volatility:
   a. Primary memory is volatile: its contents are lost on power-off.
   b. Secondary is non-volatile.
Lec-9: Introduction to Process

1. What is a program? Compiled code that is ready to execute. (A .cpp file is compiled into
   an executable; that executable is the program. Any app is a program, and a program under
   execution is a process.)
2. What is a process? A program under execution. (A process is the way the user gets some
   piece of work done.)
3. How does the OS create a process? By converting a program into a process.
   STEPS:
   a. Load the program & static data into memory. (Static data: e.g., the initialized
      variables and arrays in the program.)
   b. Allocate the runtime stack: the part of memory needed for local variables, function
      arguments and return values.
   c. Heap memory allocation: the part of memory used for dynamic allocation.
   d. I/O tasks: set up the standard handles (e.g., stderr is the error handle).
   e. OS hands off control to main(). (The OS only knows about main, so it gives control to
      main and its work is done. Every process has a parent, and the OS needs to know whether
      the program executed fully; that is why we return 0 at the end, signalling that the
      process executed successfully.)
4. Architecture of a process: stack at one end growing toward the heap, heap growing toward
   the stack, plus the static data and code regions. If the stack grows so much that it
   touches the heap, a stack overflow error occurs (fix: add a proper base condition so the
   recursion returns). Similarly, if the heap keeps growing because memory is never cleared,
   a memory-insufficient error occurs, since RAM (random access memory) is limited (fix:
   deallocate the unnecessary objects).
(The process table keeps track of all processes; every entry of the table is called a PCB,
a process control block.)
5. Attributes of a process:
   a. Features that allow identifying a process uniquely.
   b. Process table:
      i. All processes are tracked by the OS using a table-like data structure.
      ii. Each entry in that table is a process control block (PCB).
   c. PCB: stores info/attributes of a process.
      i. A data structure maintained for each process that stores information such as
         process id, program counter, process state, priority, etc.
6. PCB structure:

(Fields include: process id, process state, program counter (fetch the instruction, pc++,
execute the instruction, since the fetch already happened at the old pc value), registers
(e.g., SP -> stack pointer register, BP -> base pointer register; the process's register
values are saved here), the list of open files, and the list of open devices. All of this
information is needed so the process can be resumed correctly if another process gets the
CPU in between.)

Registers in the PCB: when a process is running and its time slice expires, the current
values of the process-specific registers are stored in the PCB and the process is swapped
out. When the process is scheduled to run again, the register values are read from the PCB
and written back to the CPU registers. This is the main purpose of the registers field in
the PCB.
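A minimal C sketch of what a PCB might hold; the struct and field names are illustrative,
not taken from any real kernel (Linux's actual equivalent is struct task_struct):

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;             /* unique process id                  */
        enum proc_state state;           /* current process state              */
        int             priority;        /* scheduling priority                */
        uint64_t        program_counter; /* saved PC at context switch         */
        uint64_t        regs[16];        /* saved registers (SP, BP, ...)      */
        void           *open_files;      /* list of the process's open files   */
        void           *open_devices;    /* list of the process's open devices */
    };

On a context switch, the kernel fills this structure for the outgoing process and restores
the CPU from the structure of the incoming one.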
Lec-10: Process States | Process Queues

(A process's journey from creation to termination.)

1. Process states: As a process executes, it changes state. Each process may be in one of
   the following states:
   a. New: the OS is about to pick the program & convert it into a process, i.e., the
      process is being created (program -> process).
   b. Run: instructions are being executed; the CPU is allocated.
   c. Waiting: waiting for I/O.
   d. Ready: the process is in memory, waiting to be assigned to a processor.
   e. Terminated: the process has finished execution; its PCB entry is removed from the
      process table.

(The scheduler/dispatcher picks processes up from the ready queue and sends them to the
running state on the basis of priority. If a process, say P3, goes into the wait state, it
later returns to the ready queue and goes through the scheduler into the running state
again.)

2. Process queues:
   a. Job queue:
      i. Processes in the new state.
      ii. Present in secondary memory.
      iii. The Job Scheduler (long-term scheduler, LTS) picks processes from the pool on
           disk and loads them into memory for execution (new -> ready). It periodically
           checks whether any new jobs have arrived.
   b. Ready queue:
      i. Processes in the ready state.
      ii. Present in main memory.
      iii. The CPU Scheduler (short-term scheduler, STS) picks a process from the ready
           queue and dispatches it to the CPU (ready -> running). It is "short-term" because
           it must act with minimal delay: if a process goes into wait, another must be
           brought in quickly, as the CPU should not sit idle.
   c. Waiting queue:
      i. Processes in the wait state.
3. Degree of multiprogramming: the number of processes in memory at a time (i.e., how many
   processes the ready queue can hold at once).
   a. The LTS controls the degree of multiprogramming.
4. Dispatcher: the module of the OS that gives control of the CPU to the process selected by
   the STS. (The dispatcher is different from the STS: the STS chooses the process, the
   dispatcher actually hands the CPU over to it.)
(The job queue sits in secondary storage. We also want a good mix of processes: some
I/O-bound, some CPU-bound.
MTS, the medium-term scheduler: if the degree of multiprogramming grows too high and some
processes take so much memory that memory runs out, we need to swap some out. We pick some
processes and save them in swap space, which is generally on secondary storage. The ready
queue then runs fine; if, say, P1 terminates, the swapped-out P3 and P4 can be picked back
up from swap space and brought into the ready queue again.)
LEC-11: Swapping | Context-Switching | Orphan process | Zombie process

1. Swapping
   a. A time-sharing system may have a medium-term scheduler (MTS).
   b. It removes processes from memory to reduce the degree of multiprogramming.
   c. These removed processes can be reintroduced into memory, and their execution can be
      continued where it left off. This is called swapping.
   d. Swap-out and swap-in are done by the MTS.
   e. Swapping is necessary to improve the process mix, or because a change in memory
      requirements has overcommitted available memory, requiring memory to be freed up.
   f. Swapping is a mechanism in which a process can be swapped temporarily out of main
      memory (moved) to secondary storage (disk), making that memory available to other
      processes. At some later time, the system swaps the process back from secondary
      storage to main memory.

2. Context switching
   a. Switching the CPU to another process requires performing a state save of the current
      process and a state restore of a different process. (Going P1 -> P2, the kernel saves
      everything in P1's PCB: PC, registers, state, file descriptors.)
   b. When this occurs, the kernel saves the context of the old process in its PCB and loads
      the saved context of the new process scheduled to run.
   c. It is pure overhead, because the system does no useful work while switching.
   d. Speed varies from machine to machine, depending on the memory speed, the number of
      registers that must be copied, etc.
3. Orphan process
   a. A process whose parent process has terminated while it is still running.
   b. Orphan processes are adopted by the init process.
   c. Init is the first process of the OS; every process is ultimately a child of it.
      (E.g., P1 creates P2; if P1 terminates due to an exception, P2 has no parent left, so
      the OS makes init P2's parent.)
4. Zombie process / Defunct process
   a. A zombie process is a process whose execution is completed but which still has an
      entry in the process table.
   b. Zombie processes usually occur for child processes, as the parent process still needs
      to read its child's exit status. Once this is done using the wait system call, the
      zombie process is eliminated from the process table. This is known as reaping the
      zombie process. (wait() is how a parent reads the exit status of a child; the parent
      waits until the child exits, and only then is the child's entry deleted from the
      process table.)
   c. It happens because the parent process may call wait() on the child process after a
      long time, while the child process terminated much earlier. From the moment the child
      exits until the parent reaps it, the OS has already taken back all its resources; only
      its process-table entry remains, so it is a zombie for that time.
   d. As the entry in the process table can only be removed after the parent process reads
      the exit status of the child, the child process remains a zombie till it is removed
      from the process table. (If the parent never calls wait() but its children keep
      exiting, a lot of zombie processes pile up — a real problem. A sketch of reaping
      follows.)
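A minimal C sketch of reaping on a POSIX system: the child exits immediately and stays a
zombie until the parent's waitpid() call collects its exit status.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();           /* create a child process            */
        if (pid == 0) {               /* child: finish right away          */
            exit(42);                 /* child becomes a zombie here...    */
        } else if (pid > 0) {         /* parent                            */
            sleep(5);                 /* ...ps shows it as <defunct>...    */
            int status;
            waitpid(pid, &status, 0); /* ...until the parent reaps it      */
            if (WIFEXITED(status))
                printf("child %d exited with status %d\n",
                       (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }

During the sleep, the child's PCB entry is all that remains of it; the waitpid() call
removes it from the process table.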
LEC-12: Intro to Process Scheduling | FCFS | Convoy Effect

(Processes go from the ready queue to the CPU via the STS and the dispatcher; which process
goes next is decided by the scheduling algorithm.)

1. Process scheduling
   a. The basis of a multiprogramming OS.
   b. By switching the CPU among processes, the OS can make the computer more productive.
   c. Many processes are kept in memory at a time; when a process must wait or its time
      quantum expires, the OS takes the CPU away from that process & gives the CPU to
      another process, and this pattern continues.
2. CPU Scheduler
   a. Whenever the CPU becomes idle, the OS must select one process from the ready queue to
      be executed.
   b. Done by the STS.
3. Non-preemptive scheduling
   a. Once the CPU has been allocated to a process, the process keeps the CPU until it
      releases it, either by terminating or by switching to the wait state. (If P1 gets the
      CPU, it won't let go until it terminates or goes to wait.)
   b. Starvation: a process with a long burst time may starve processes with shorter burst
      times.
   c. Low CPU utilization.
4. Preemptive scheduling
   a. The CPU is taken away from a process when its time quantum expires, in addition to the
      process terminating or switching to the wait state. (P1 gives up the CPU as soon as
      its time quantum runs out.)
   b. Less starvation: the other processes get their turn regularly.
   c. High CPU utilization, but more overhead: e.g., 10 process changes may happen within
      10 seconds.
5. Goals of CPU scheduling
   a. Maximum CPU utilization (the CPU should not sit idle).
   b. Minimum turnaround time (TAT): once a process enters the ready queue, it should finish
      as quickly as possible.
   c. Minimum wait time.
   d. Minimum response time (the time from entering the ready queue until the process first
      gets the CPU).
   e. Maximum throughput of the system.
6. Throughput: No. of processes completed per unit time.
7. Arrival time (AT): Time when process is arrived at the ready queue.
8. Burst time (BT): The time required by the process for its execution.
od

9. Turnaround time (TAT): Time taken from first time process enters ready state till it terminates. (CT - AT)
10. Wait time (WT): Time process spends waiting for CPU. (WT = TAT – BT)
11. Response time: Time duration between process getting into ready queue and process getting CPU for the
first time.
12. Completion Time (CT): Time taken till process gets terminated.
13. FCFS (First come-first serve):
a. Whichever process comes first in the ready queue will be given CPU first.
C

    b. In this, if one process has a much longer BT, it has a major effect on the average WT
       of the other processes; this is called the Convoy effect.
    c. Convoy effect: a situation where many processes that need a resource for only a short
       time are blocked by one process holding that resource for a long time.
       i. This causes poor resource management.

(Gantt chart: the timeline of the sequence in which processes get scheduled. If a heavy job
arrives first, all the other jobs have to wait a long time behind it, so the average waiting
time shoots up.)
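A small worked example of FCFS and the convoy effect (process names and times made up for
illustration). Three processes, all arriving at t=0: P1 (BT=20), P2 (BT=2), P3 (BT=2).

    FCFS order P1, P2, P3 — Gantt: | P1: 0-20 | P2: 20-22 | P3: 22-24 |
      WT(P1)=0, WT(P2)=20, WT(P3)=22  ->  average WT = (0+20+22)/3 = 14.
    If the short jobs ran first (order P2, P3, P1):
      WT(P2)=0, WT(P3)=2, WT(P1)=4    ->  average WT = (0+2+4)/3 = 2.

The single long job at the head of the queue is exactly what creates the convoy.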
LEC-13: CPU Scheduling | SJF | Priority | RR
1. Shortest Job First (SJF) [Non-preemptive]
   a. The process with the least BT is dispatched to the CPU first.
   b. Must estimate the BT for each process in the ready queue beforehand; correct
      estimation of BT is (ideally) an impossible task.
   c. Run the lowest-BT process to completion, then choose the job having the lowest BT at
      that instant.
   d. This can still suffer from the convoy effect, if the very first process to arrive in
      the ready state has a large BT.
   e. Process starvation might happen.
   f. Criteria for SJF algos: AT + BT.
2. SJF [Preemptive]
   a. Less starvation.
   b. No convoy effect.
   c. Gives a lower average WT for a given set of processes, as scheduling a short job
      before a long one decreases the WT of the short job more than it increases the WT of
      the long process.

3. Priority Scheduling [Non-preemptive]
   a. A priority is assigned to a process when it is created.
   b. SJF is a special case of general priority scheduling, with priority inversely
      proportional to BT.
4. Priority Scheduling [Preemptive]
   a. The currently running job is preempted if the next job has a higher priority.
   b. May cause indefinite waiting (starvation) for lower-priority jobs. (Possibly they
      never get executed; true for both the preemptive and non-preemptive versions.)
      i. Solution: ageing.
      ii. Gradually increase the priority of processes that have waited a long time, e.g.,
          increase the priority by 1 every 15 minutes.
5. Round robin scheduling (RR)
   a. Most popular.
   b. Like FCFS, but preemptive.
   c. Designed for time-sharing systems.
   d. Criteria: AT + time quantum (TQ); doesn't depend on BT.
   e. No process waits forever, hence very low starvation. [No convoy effect.]
   f. Easy to implement.
   g. If the TQ is small, there are more context switches (more overhead).
(System processes are created by the OS itself, needed to run the OS/kernel; interactive
processes wait for user input (foreground); batch processes run in the background with no
user input needed.)

LEC-14: MLQ | MLFQ

1. Multi-level queue scheduling (MLQ)
   a. The ready queue is divided into multiple queues depending upon priority.
   b. A process is permanently assigned to one of the queues (inflexible), based on some
      property of the process: memory size, process priority or process type.
   c. Each queue has its own scheduling algorithm, e.g., SP -> RR, IP -> RR & BP -> FCFS.
      (Each queue gets its own algorithm; a process is placed into a queue according to its
      nature, and then it stays there.)
   d. System process: created by the OS (highest priority).
      Interactive process (foreground process): needs user input (I/O).
      Batch process (background process): runs silently, no user input required.
   e. Scheduling among the different sub-queues is implemented as fixed-priority preemptive
      scheduling, e.g., the foreground queue has absolute priority over the background
      queue. (The queues are sorted by priority: until the SP queue's processes terminate,
      the lower queues don't execute, so SP processes get much more CPU time.)
   f. If an interactive process arrives while a batch process is currently executing, the
      batch process will be preempted.
   g. Problem: only after completion of all the processes in the top-level ready queue will
      the lower-level ready queues be scheduled.
      This causes starvation for lower-priority processes.
   h. The convoy effect is present, so the average waiting time is higher.

2. Multi-level feedback queue scheduling (MLFQ)
   a. Multiple sub-queues are present.
   b. Allows processes to move between queues. The idea is to separate processes according
      to the characteristics of their BT. If a process uses too much CPU time, it is moved
      to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in
      the higher-priority queues.
      In addition, a process that waits too long in a lower-priority queue may be moved to a
      higher-priority queue. This form of ageing prevents starvation.
   c. Less starvation than MLQ.
   d. It is flexible.
   e. Can be configured to match a specific system design requirement.

Sample MLFQ design: multiple sub-queues with increasing time quanta. E.g., a process that
doesn't finish within the TQ=2 queue is moved to the TQ=4 queue, then to the TQ=8 queue, and
finally to an FCFS queue.

Designing an MLFQ requires choosing:
- the number of queues;
- the scheduling algorithm for each queue;
- the method used to upgrade a process to a higher queue (ageing);
- the method used to demote a process to a lower queue (e.g., exceeding its time quantum);
- the queue a process will enter when it arrives.
3. Comparison:

                FCFS    SJF      PSJF     Priority  P-Priority  RR      MLQ      MLFQ
Design          Simple  Complex  Complex  Complex   Complex     Simple  Complex  Complex
Preemption      No      No       Yes      No        Yes         Yes     Yes      Yes
Convoy effect   Yes     Yes      No       Yes       Yes         No      Yes      Yes
Overhead        No      No       Yes      No        Yes         Yes     Yes      Yes

(Preemption implies context switching at every time quantum; where there is no preemption,
there is no such overhead.)
(Executing multiple instruction sequences at the same time is concurrency. Threads are
light-weight processes: if one process has 2 independent tasks, we create 2 threads for
them.)

LEC-15: Introduction to Concurrency

1. Concurrency is the execution of multiple instruction sequences at the same time. It
   happens in the operating system when there are several process threads running in
   parallel.
2. Thread:
   • Single sequence stream within a process.
   • An independent path of execution in a process.
   • Light-weight process.
   • Used to achieve parallelism by dividing a process's tasks into independent paths of
     execution.
   • E.g., multiple tabs in a browser; a text editor (when you are typing in an editor,
     spell checking, formatting of text and saving the text are done concurrently by
     multiple threads).
   (MS Word, for instance, has several tasks: text editing, spell checking, text formatting.
   Dividing them into threads increases responsiveness: all three happen in parallel, and
   all three threads use one shared memory space.)
3. Thread scheduling: Threads are scheduled for execution based on their priority. Even
   though threads are executing within the runtime, all threads are assigned processor time
   slices by the operating system.
4. Thread context switching
   • The OS saves the current state of a thread & switches to another thread of the same
     process.
   • Doesn't include switching of the memory address space. (But program counter, registers
     & stack are switched.)
   • Fast switching as compared to process switching (the memory space doesn't change and
     the cache stays warm).
   • CPU's cache state is preserved.
5. How does each thread get access to the CPU?
   • Each thread has its own program counter. (The process architecture stays the same; a
     new stack is simply created per thread: st1, st2, etc.)
   • Depending upon the thread scheduling algorithm, the OS schedules these threads.
   • The OS fetches the instructions corresponding to the PC of that thread and executes
     them.
6. I/O- or TQ-based context switching is done here as well.
   • We have a TCB (thread control block), like the PCB, for state storage and management
     while performing context switching.
7. Would a single-CPU system gain by the multi-threading technique?
   • Never.
   • The two threads have to context switch on that single CPU, so execution is effectively
     sequential.
   • This won't give any gain, so use multithreading where there are multiple CPUs.
8. Benefits of multi-threading:
   • Responsiveness: an interactive app has an independent path for each task, so if one
     thread is stuck in I/O, the remaining work still proceeds in parallel.
   • Resource sharing: efficient resource sharing.
   • Economy: it is more economical to create and context switch threads.
     1. Also, allocating memory and resources for process creation is costly, so it is
        better to divide tasks into threads of the same process. (Creating another process
        for the new work would require inter-process communication, which is a big overhead
        since isolation must be maintained; threads avoid that.)
   • Threads allow utilization of multiprocessor architectures to a greater scale and
     efficiency. (With 8 cores, i.e., 8 CPUs, all of them are put to good use.)

(If process P is divided into threads T1 and T2, three things are running: P's main thread
plus the two new threads. If main doesn't wait, the main thread exits early and an error
results; we resolve this using t1.join() and t2.join() — the main thread waits until T1
completes, and similarly for T2. A pthread sketch follows. There is also a chance that
different threads try to access the same resource at the same time, which brings us to the
critical section problem, next.)
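A minimal POSIX-threads sketch of the create/join idea above (the thread bodies are trivial
placeholders):

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread is an independent path of execution within the process. */
    static void *work(void *arg) {
        printf("thread %s running\n", (const char *)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, work, "T1");
        pthread_create(&t2, NULL, work, "T2");

        /* Without these joins, main could return and tear the whole
         * process down while T1/T2 are still running. */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }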

LEC-16: Critical Section Problem and How to address it

1. Process synchronization techniques play a key role in maintaining the consistency of
   shared data.
2. Critical section (C.S.)
   a. The critical section refers to the segment of code where processes/threads access
      shared resources, such as common variables and files, and perform write operations on
      them. Since processes/threads execute concurrently, any process can be interrupted
      mid-execution.
3. Major thread scheduling issue:
   a. Race condition
      i. A race condition occurs when two or more threads can access shared data and they
         try to change it at the same time. Because the thread scheduling algorithm can swap
         between threads at any time, you don't know the order in which the threads will
         attempt to access the shared data. Therefore, the result of the change in data is
         dependent on the thread scheduling algorithm, i.e., both threads are "racing" to
         access/change the data. Also known as the critical section problem.
      (E.g., count++ is really temp = count + 1 followed by count = temp; if two threads
      interleave between the two steps, an update is lost. If the increment is made atomic —
      done in one CPU cycle — the race disappears; in C++ you can declare an atomic
      variable: atomic<int>.)
4. Solutions to the race condition:
   a. Atomic operations: make the critical code section an atomic operation, i.e., executed
      in one CPU cycle.
   b. Mutual exclusion using locks: first T1 runs the C.S., then T2. With a mutex, T1 calls
      lock.acquire(); T2 cannot enter until lock.release().
   c. Semaphores.
5. Can we use a simple flag variable to solve the problem of race condition?
   a. No.
6. Peterson's solution can be used to avoid race conditions, but it holds good for only 2
   processes/threads.
7. Mutex/Locks
   a. Locks can be used to implement mutual exclusion and avoid race conditions by allowing
      only one thread/process into the critical section.
   b. Disadvantages:
      i. Contention: while one thread holds the lock, the other threads busy-wait, burning
         CPU cycles; and if the thread that acquired the lock dies, all the other threads go
         into infinite waiting.
      ii. Deadlocks: e.g., P1 holds R1 and needs R2, while P2 holds R2 and needs R1 — a
          cycle where neither can proceed until the other does.
      iii. Debugging is an issue, since execution is not sequential.
      iv. Starvation of high-priority threads: if a low-priority thread has taken the C.S.,
          a high-priority thread still has to wait.

A solution to the critical section problem should satisfy 3 conditions:
1. Mutual exclusion.
2. Progress: no forced fixed order; if the critical section is free, any waiting thread may
   enter.
3. Bounded waiting: no indefinite waiting; every thread should have a limited waiting time.
The first two are mandatory.

If we use a single flag variable, condition 1 holds but condition 2 fails, since who enters
first depends on the flag's initial value. Peterson's solution fixes this with a flag[2]
array plus a turn variable (memorize the code):

T1:                                          T2:
while (1) {                                  while (1) {
    flag[0] = T;                                 flag[1] = T;
    turn = 1;                                    turn = 0;
    while (turn == 1 && flag[1] == T);           while (turn == 0 && flag[0] == T);
    // critical section                          // critical section
    flag[0] = F;                                 flag[1] = F;
}                                            }

(If a context switch to T2 happens right after T1's first statement, T2 runs its entry code
and T1 cannot enter until T2 leaves, and vice versa; so mutual exclusion and progress both
hold. But this is only safe for two threads.)

Solutions in increasing order of sophistication:
1. Single flag.
2. Peterson's solution.
3. Locks/mutex.
4. Semaphores (better solution).
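A minimal pthread-mutex sketch of protecting the count++ critical section discussed above
(the counter and iteration count are arbitrary):

    #include <pthread.h>
    #include <stdio.h>

    static long count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *incr(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);   /* enter critical section         */
            count++;                     /* temp = count + 1; count = temp */
            pthread_mutex_unlock(&lock); /* leave critical section         */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, incr, NULL);
        pthread_create(&t2, NULL, incr, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Always 200000 with the mutex; without it, often less. */
        printf("count = %ld\n", count);
        return 0;
    }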

LEC-17: Condition Variables and Semaphores for Thread Synchronization

1. Condition variable
   a. A condition variable is a synchronization primitive that lets a thread wait until a
      certain condition occurs. (Basically, one thread waits on the condition variable until
      another thread signals that the condition is met.)
   b. Works with a lock.
   c. A thread can enter the wait state only when it has acquired the lock. When a thread
      enters the wait state, it releases the lock and waits until another thread notifies it
      that the event has occurred. Once the waiting thread enters the running state, it
      immediately re-acquires the lock and starts executing. (T1 stays blocked until T2
      comes and tells it to wake up, after which it resumes its work.)
   d. Why use a condition variable?
      i. To avoid busy waiting: after entering the wait state, the thread releases the lock
         and sleeps instead of spinning (see the sketch after this section).
   e. Contention is not an issue here.
2. Semaphores
   a. A synchronization method.
   b. An integer whose value equals the number of available resource instances.
   c. Multiple threads can go and execute the C.S. concurrently — as many threads as the
      semaphore's value get to run in parallel.
   d. Allows multiple program threads to access a finite number of instances of a resource,
      whereas a mutex allows multiple threads to access a single shared resource only one at
      a time.
   e. Binary semaphore: value can be 0 or 1.
      i. Aka mutex locks (a mutex is internally implemented much like a binary semaphore).
   f. Counting semaphore:
      i. Can range over an unrestricted domain.
      ii. Can be used to control access to a given resource consisting of a finite number of
          instances. (E.g., if the semaphore's value is 3 and T1 takes an instance, the
          value becomes 2; when T1 exits, it becomes 3 again.)
   g. The operations are wait() and signal(). Whenever a thread calls wait(), the value is
      decremented (sem--); if it goes negative, the thread blocks, since no instance is
      free. When a thread finishes its work in the critical section, it calls signal(),
      which increments the value (sem++); if the value is still <= 0, one blocked process P
      is removed from the blocked queue and woken with wakeup(P).
      To overcome the need for busy waiting, we can modify the definition of the wait() and
      signal() semaphore operations. When a process executes the wait() operation and finds
      that the semaphore value is not positive, it must wait. However, rather than engaging
      in busy waiting, the process can block itself. The block operation places the process
      into a waiting queue associated with the semaphore, and the state of the process is
      switched to the waiting state. Then control is transferred to the CPU scheduler, which
      selects another process to execute.
   h. A process that is blocked, waiting on a semaphore S, should be restarted when some
      other process executes a signal() operation. The process is restarted by a wakeup()
      operation, which changes the process from the waiting state to the ready state; the
      process is then placed in the ready queue, enters the critical section when scheduled,
      and signals when it completes.
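Returning to item 1, a minimal pthread condition-variable sketch of the wait/notify pattern
(the "ready" flag is an arbitrary condition chosen for illustration):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int ready = 0;                /* the condition we wait on */

    static void *waiter(void *arg) {
        pthread_mutex_lock(&m);          /* must hold the lock to wait        */
        while (!ready)                   /* re-check: wakeups can be spurious */
            pthread_cond_wait(&cv, &m);  /* releases the lock while sleeping  */
        printf("condition met, proceeding\n");
        pthread_mutex_unlock(&m);
        return NULL;
    }

    static void *notifier(void *arg) {
        pthread_mutex_lock(&m);
        ready = 1;                       /* make the condition true */
        pthread_cond_signal(&cv);        /* wake the waiting thread */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, waiter, NULL);
        pthread_create(&t2, NULL, notifier, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }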
1. Producer-consumer problem (bounded buffer problem): 1 producer thread, 1 consumer thread.
   The producer produces items and puts them into a buffer of n slots (the critical
   section); the consumer picks items from the buffer. Synchronization is needed: the
   producer must not insert data when the buffer is full, and likewise the consumer must not
   pick from the buffer when it is empty.
   Semaphores used: 1. mutex (binary semaphore), used to acquire the lock on the buffer;
   2. empty (counting semaphore), initial value = n, tracks empty slots; 3. full (counting
   semaphore), initial value = 0, tracks filled slots.
   The mutex gives mutual exclusion, while full and empty make a thread wait when the buffer
   is full or empty. A sketch follows.
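A minimal sketch of the bounded buffer using POSIX semaphores (assumes Linux; buffer size
and item count are arbitrary):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5                          /* buffer slots */
    static int buf[N], in = 0, out = 0;
    static sem_t empty_s, full_s;        /* counting semaphores */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg) {
        for (int i = 0; i < 20; i++) {
            sem_wait(&empty_s);              /* wait if buffer full  */
            pthread_mutex_lock(&m);
            buf[in] = i; in = (in + 1) % N;  /* critical section     */
            pthread_mutex_unlock(&m);
            sem_post(&full_s);               /* one more filled slot */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        for (int i = 0; i < 20; i++) {
            sem_wait(&full_s);               /* wait if buffer empty */
            pthread_mutex_lock(&m);
            int v = buf[out]; out = (out + 1) % N;
            pthread_mutex_unlock(&m);
            sem_post(&empty_s);              /* one more empty slot  */
            printf("consumed %d\n", v);
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty_s, 0, N);   /* all N slots empty initially */
        sem_init(&full_s, 0, 0);    /* no slots filled initially   */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }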
Lec-20: The Dining Philosophers problem

(This is the 3rd classical synchronization problem; the 2nd, the readers-writers problem, is
written below, after this one.)
1. We have 5 philosophers.
2. They spend their lives in just two states:
   a. Thinking
   b. Eating
3. They sit at a circular table surrounded by 5 chairs (1 each); in the center of the table
   is a bowl of noodles, and the table is laid with 5 single forks.
4. Thinking state: when a philosopher thinks, he doesn't interact with others.
5. Eating state: when a philosopher gets hungry, he tries to pick up the 2 forks adjacent to
   him (left and right). He needs both forks to eat, and can pick up only one fork at a time.
6. One can't pick up a fork if it is already taken.
7. When a philosopher has both forks at the same time, he eats without releasing the forks.
8. A solution can be given using semaphores; we want a deadlock-free system:
   a. Each fork is a binary semaphore.
   b. A philosopher calls wait() to acquire a fork (fork[i] acquired by philosopher i).
   c. He releases a fork by calling signal(), after which someone else can take it.
   d. Semaphore fork[5]{1};
9. The semaphore solution makes sure that no two neighbours eat simultaneously (no one can
   pick up a fork that is already taken), but it can still create deadlock.
10. Suppose all 5 philosophers become hungry at the same time and each picks up his left
    fork; then all fork semaphores would be 0.
11. When each philosopher then tries to grab his right fork, he will be waiting forever
    (deadlock).
12. We must use some method to avoid deadlock and make the solution work:
    a. Allow at most 4 philosophers to be sitting simultaneously.
    b. Allow a philosopher to pick up his forks only if both forks are available, and to do
       this he must pick them up in a critical section (atomically — both picked up as one
       operation).
    c. Odd-even rule: an odd philosopher picks up first his left fork and then his right
       fork, whereas an even philosopher picks up his right fork first and then his left
       fork.
13. Hence, semaphores alone are not enough to solve this problem.
    We must add some enhancement rules to make a deadlock-free solution.
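A minimal POSIX sketch of the odd-even rule (solution 12c); the eating body is a placeholder:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5
    static sem_t fork_sem[N];        /* one binary semaphore per fork */

    static void *philosopher(void *arg) {
        int i = *(int *)arg;
        int left = i, right = (i + 1) % N;
        /* Odd philosophers take left first; even take right first.
         * Breaking the symmetry prevents the circular wait. */
        int first  = (i % 2) ? left  : right;
        int second = (i % 2) ? right : left;

        sem_wait(&fork_sem[first]);
        sem_wait(&fork_sem[second]);
        printf("philosopher %d eating\n", i);
        sem_post(&fork_sem[second]);
        sem_post(&fork_sem[first]);
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        int id[N];
        for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
        for (int i = 0; i < N; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, philosopher, &id[i]);
        }
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }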
--------------------------------------------------------------------------------------

Readers-writers problem

There is a database that many users can read from and many can write to.
- If more than 1 reader reads in parallel, there is no issue.
- If more than 1 writer runs, or 1 writer and some other reader/writer thread run in
  parallel, we get a race condition and inconsistent data.

Semaphore solution: 1. mutex (protects the readers' count); 2. wrt (binary semaphore, common
to both readers and writers).

Writer solution:
do {
    wait(wrt);
    // do write operation
    signal(wrt);
} while (true);

Reader solution: multiple reads may happen together, but while a writer is in the critical
section, readers must wait (and while anyone is reading, a writer waits). When the last
reader finishes, i.e., the reader count is back to 0, it signals the writer that reading is
done. A sketch follows.
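A sketch of the classic reader entry/exit protocol, written in the same pseudocode style as
the writer loop above (readcount tracks the number of active readers):

    int readcount = 0;       // readers currently reading
    // semaphore mutex = 1;  protects readcount
    // semaphore wrt   = 1;  shared with the writers

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)   // first reader locks out the writers
            wait(wrt);
        signal(mutex);

        // ... read the shared data (many readers may be here) ...

        wait(mutex);
        readcount--;
        if (readcount == 0)   // last reader lets the writers back in
            signal(wrt);
        signal(mutex);
    } while (true);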
LEC-21: Deadlock Part-1

(Mutual exclusion gives rise to a problem known as deadlock.)

1. In a multiprogramming environment, we have several processes competing for a finite
   number of resources.
2. A process requests a resource (R); if R is not available (taken by another process), the
   process enters a waiting state. Sometimes that waiting process is never able to change
   its state, because the resource it has requested stays busy forever. This is called
   DEADLOCK (DL).
3. Two or more processes are waiting on some resource's availability which will never become
   available, as it is also busy with some other process. The processes are said to be in
   deadlock.
4. DL is a bug present in the process/thread synchronization method.
5. In DL, processes never finish executing, and the system resources are tied up, preventing
   other jobs from starting.
6. Examples of resources: memory space, CPU cycles, files, locks, sockets, I/O devices, etc.
7. A single resource can have multiple instances. E.g., CPU is a resource, and a system can
   have 2 CPUs.
8. How does a process/thread utilize a resource?
   a. Request: request the resource R; if R is free, lock it, else wait till it is available.
   b. Use: lock it and use it.
   c. Release: release the resource instance and make it available for other processes.

**9. Deadlock necessary conditions: all 4 conditions must hold simultaneously.
   a. Mutual exclusion
      i. Only 1 process at a time can use the resource; if another process requests that
         resource, the requesting process must wait until the resource has been released.
   b. Hold & wait
      i. A process must be holding at least one resource & waiting to acquire additional
         resources that are currently being held by other processes.
   c. No preemption
      i. A resource must be voluntarily released by the process after completion of its
         execution (no resource preemption). Once P1 gets R1, it releases it of its own
         accord; the OS can't take it back, and P1 won't give up R1 until its execution
         completes.
   d. Circular wait
      i. A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a
         resource held by P1, P1 is waiting for a resource held by P2, and so on.

(Resource allocation graph (RAG): process vertices, resource vertices, and two kinds of
edges — assignment/allocation edges and request edges. Multiple instances of a resource are
shown as dots inside the resource vertex, e.g., 4 CPUs. If the RAG contains a cycle, a
deadlock MAY exist; if there is no cycle, there is no DL. "May", because some other process
on the side might also hold an instance of that resource, and if it releases it, the cycle
resolves and the work gets done.)

10. Methods for handling deadlocks:
   a. Use a protocol to prevent or avoid deadlocks, ensuring that the system never enters a
      deadlocked state.
   b. Allow the system to enter a deadlocked state, detect it, and recover.
   c. Ignore the problem altogether and pretend that deadlocks never occur in the system
      (the Ostrich algorithm), aka deadlock ignorance.
11. To ensure that deadlocks never occur, the system can use either a deadlock prevention or
    a deadlock avoidance scheme.
12. Deadlock prevention: ensure that at least one of the necessary conditions cannot hold.
   a. Mutual exclusion
      i. Use locks (mutual exclusion) only for non-sharable resources.
      ii. Sharable resources, like read-only files, can be accessed by multiple
          processes/threads without locks (if it only needs to be read, don't lock it).
      iii. However, we can't prevent DLs by denying the mutual exclusion condition in
           general, because some resources are intrinsically non-sharable.
   b. Hold & wait
      i. To ensure the H&W condition never occurs in the system, we must guarantee that
         whenever a process requests a resource, it doesn't hold any other resource.
      ii. Protocol (A): each process must request and be allocated all the resources it will
          need during its lifetime before its execution begins, not midway through.
      iii. Protocol (B): allow a process to request resources only when it holds none. It
           can request additional resources only after it has released all the resources it
           currently holds.
   c. No preemption
      i. If a process is holding some resources and requests another resource that cannot be
         immediately allocated to it, then all the resources the process is currently
         holding are preempted. The process will restart only when it can regain its old
         resources, as well as the new one it is requesting. (Livelock may occur: if two
         threads keep trying to grab the locks at the same moment they keep colliding; a
         randomized sleep between the lock attempts helps.)
      ii. If a process requests some resources, we first check whether they are available.
          If yes, we allocate them. If not, we check whether they are allocated to some
          other process that is itself waiting for additional resources. If so, preempt the
          desired resources from the waiting process and allocate them to the requesting
          process.
   d. Circular wait
      i. To ensure this condition never holds, impose a proper ordering on resource
         acquisition.
      ii. If P1 and P2 both require R1 and R2, locking on these resources should be ordered:
          both first try to lock R1, then R2. This way, whichever process first locks R1
          will also get R2, while the other waits on R1 itself and never requests R2 first.
          The fixed order removes the circular wait; a sketch follows.
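A minimal pthread sketch of rule (d): if every thread acquires r1 before r2, the circular
wait can't arise (locking r2 then r1 in one of the threads is exactly what would create the
deadlock):

    #include <pthread.h>

    static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        pthread_mutex_lock(&r1);   /* always R1 first...          */
        pthread_mutex_lock(&r2);   /* ...then R2, in every thread */
        /* ... use both resources ... */
        pthread_mutex_unlock(&r2); /* release in reverse order    */
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }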
LEC-22: Deadlock Part-2

(For avoidance, the current state must be known: 1. the number of processes; 2. each
process's resource needs; 3. the resources currently allocated to each process; 4. the
maximum amount of each resource. Scheduling is then done on the basis of this info so that
deadlock never occurs.)

1. Deadlock Avoidance: the idea is that the kernel is given, in advance, information about which resources a process will use in its lifetime. With this, the system can decide for each request whether the process should wait. To decide whether the current request can be satisfied or must be delayed, the system considers the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
a. Schedule processes and their resource allocation in such a way that deadlock never occurs.
b. Safe state: a state is safe if the system can allocate resources to each process (up to its maximum) in some order and still avoid deadlock. A system is in a safe state only if there exists a safe sequence.
c. In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs. Not all unsafe states are deadlocks; an unsafe state may merely lead to a deadlock.
d. The key to the deadlock-avoidance method: whenever a request is made for resources, approve it only if the resulting state is a safe state.
e. If the system is unable to fulfil the requests of all processes, the state of the system is called unsafe.
f. The scheduling algorithm by which deadlock can be avoided by finding a safe state is the Banker's Algorithm.
2. Banker's Algorithm
a. When a process requests a set of resources, the system must determine whether allocating these resources will leave the system in a safe state. If yes, the resources may be allocated to the process. If not, the process must wait till other processes release enough resources. (A safe state is one where deadlock cannot occur; if some process can never be scheduled to completion, the state is unsafe.)
3. Deadlock Detection: if a system implements neither a deadlock-prevention nor a deadlock-avoidance technique, it may employ a deadlock-detection and recovery technique. (An algorithm detects whether the system is in deadlock; if it is, we run recovery; if not, we simply run the detection algorithm again later.)
a. Single instance of each resource type (wait-for graph method)
i. A deadlock exists in the system if and only if there is a cycle in the wait-for graph. To detect deadlock, the system maintains the wait-for graph and periodically invokes an algorithm that searches for a cycle in it. (The wait-for graph contains only processes and their dependencies on one another; this cycle test is valid only for single-instance resources.)
b. Multiple instances of each resource type
i. Banker's Algorithm: if a safe sequence exists, there is no deadlock; if not, there is.
4. Recovery from Deadlock
a. Process termination (e.g., kill a low-priority process so that its resources are freed)
i. Abort all deadlocked processes.
ii. Abort one process at a time until the deadlock cycle is eliminated.
b. Resource preemption (instead of killing a process, preempt its resources)
i. To eliminate deadlock, we successively preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.
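The safety check at the heart of the Banker's Algorithm (which, per 3.b above, also serves as the detection test for multiple-instance resources) can be sketched as below. This is a minimal example with hypothetical, hard-coded Allocation and Max matrices: the algorithm repeatedly looks for a process whose remaining need fits within the available Work vector and pretends it runs to completion, reclaiming its allocation:

```c
#include <stdio.h>
#include <stdbool.h>

#define P 3 /* processes      */
#define R 2 /* resource types */

int main(void) {
    /* Hypothetical example state (not from the notes). */
    int alloc[P][R] = {{1, 0}, {0, 1}, {1, 1}}; /* currently held   */
    int max[P][R]   = {{2, 1}, {1, 2}, {1, 1}}; /* lifetime maximum */
    int avail[R]    = {1, 1};                   /* free instances   */

    int work[R];
    bool finish[P] = {false};
    for (int j = 0; j < R; j++) work[j] = avail[j];

    int done = 0;
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true; /* need[i] = max[i] - alloc[i], must be <= work */
            for (int j = 0; j < R; j++)
                if (max[i][j] - alloc[i][j] > work[j]) fits = false;
            if (fits) { /* pretend Pi runs to completion and frees its allocation */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = true;
                printf("P%d can finish\n", i);
                progress = true;
                done++;
            }
        }
    }
    printf(done == P ? "Safe state\n" : "Unsafe state (possible deadlock)\n");
    return 0;
}
```

The order in which processes are marked finished is a safe sequence; if the loop stalls before all processes finish, the state is unsafe (or, in detection mode, deadlocked).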
(To keep the CPU from sitting idle, we keep many processes in RAM; keeping and managing those processes in main memory is what memory management is about. Processes must also execute independently of one another: if P1, sitting at address 60, could simply compute 60 + 5 and reach P2's area, it could write there, and there would be no isolation.)

LEC-24: Memory Management Techniques | Contiguous Memory Allocation

1. In a multi-programming environment, we have multiple processes in the main memory (Ready Queue) to keep the CPU utilization high and to make the computer responsive to the users.
2. To realize this increase in performance, however, we must keep several processes in memory; that is, they must share the main memory. As a result, we must manage main memory for all the different processes.
(P1 never gets direct access to RAM: its address space runs virtually from 0 up to its size, and a layer between that logical space and RAM maps its 0 onto a real location. The OS stores a base, i.e., where the process starts in RAM, and an offset/limit, i.e., its size. An out-of-bounds access fails, because with these two values the OS can tell whether an address lies inside P1's area. The process does not know about this mapping; only the OS does.)
3. Logical versus Physical Address Space
a. Logical Address (each process has its own logical address space)
i. An address generated by the CPU.
ii. The logical address is basically the address of an instruction or data used by a process.
iii. The user can access the logical address of the process.
iv. The user has indirect access to the physical address through the logical address (via the base register).
v. A logical address does not exist physically; hence it is also known as a Virtual address. (P1's virtual space runs from 0 up to its size.)
vi. The set of all logical addresses generated by a program is referred to as the Logical Address Space.
vii. Range: 0 to max.
b. Physical Address
i. An address loaded into the memory-address register of the physical memory.
ii. The user can never access the physical address of the program directly.
iii. The physical address is in the memory unit: a location in the main memory, physically.
iv. A physical address can be accessed by a user indirectly, but not directly.
v. The set of all physical addresses corresponding to the logical addresses is commonly known as the Physical Address Space.
vi. It is computed by the Memory Management Unit (MMU).
vii. Range: (R + 0) to (R + max), for a base value R.
c. The runtime mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).
d. The user's program generates only logical addresses (0 to max), and the user thinks the program runs in this logical address space, but the program needs physical memory to complete its execution. (The underlying complexity stays hidden.)
e. [Figure: the MMU translating a logical address to a physical address]
4. How does the OS manage isolation and protection? (Memory Mapping and Protection)
a. The OS provides this through the Virtual Address Space (VAS) concept.
b. To separate memory spaces, we need the ability to determine the range of legal addresses that the process may access and to ensure that the process can access only these legal addresses.
c. The relocation register contains the value of the smallest physical address (base address [R]); the limit register contains the range of logical addresses (e.g., relocation = 100040 & limit = 74600).
d. Each logical address must be less than the value in the limit register.
e. The MMU maps the logical address dynamically by adding the value in the relocation register (base + offset).
f. When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Since every address generated by the CPU (logical address) is checked against these registers, we can protect both the OS and other users' programs and data from being modified by the running process.
g. Any attempt by a program executing in user mode to access OS memory or other users' memory results in a trap to the OS, which treats the attempt as a fatal error.
h. Address translation example: with base = 50 and limit (offset) = 10, the MMU accepts a logical address d only if d < 10, and maps it to physical address 50 + d. (This check is modelled in the C sketch at the end of this lecture.)
5. Allocation Methods on Physical Memory
a. Contiguous Allocation
b. Non-contiguous Allocation
(Logical memory is an abstraction.)
6. Contiguous Memory Allocation
a. In this scheme, each process is contained in a single contiguous block of memory.
b. Fixed Partitioning
i. The main memory is divided into partitions of equal or different sizes. (E.g., with 4 MB partitions, a 3 MB process leaves 1 MB of its partition unused, and the free memory gets divided into fragments. A process even 1 MB larger than the largest partition cannot be allocated, since two partitions cannot be combined.)
ii. [Figure: main memory divided into fixed-size partitions]
iii. Limitations:
1. Internal Fragmentation: if the size of the process is less than the total size of the partition, then some part of the partition is wasted and remains unused. This wastage of memory is called internal fragmentation.
2. External Fragmentation: the total unused space across partitions cannot be used to load a process, even though space is available, because it is not contiguous.
3. Limitation on process size: if the process size is larger than the size of the largest partition, then that process cannot be loaded into memory. Therefore, a limitation is imposed on process size: it cannot be larger than the largest partition.
4. Low degree of multi-programming: in fixed partitioning, the degree of multiprogramming is fixed and very low, because the size of a partition cannot be varied according to the size of processes.
c. Dynamic Partitioning
i. In this technique, the partition size is not declared initially; it is declared at the time of process loading. (Partitions are created according to the sizes of the processes as they arrive.)
ii. [Figure: partitions carved out dynamically as processes arrive]
iii. Advantages over fixed partitioning:
1. No internal fragmentation
2. No limit on the size of a process
3. Better degree of multi-programming
iv. Limitation:
1. External fragmentation
[Figure: external fragmentation under contiguous allocation; a process could not be loaded even though 8 MB of total free space existed, because the free space was not contiguous]
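The base/limit (relocation/limit register) check from point 4 above can be modelled in a few lines of C. This is a sketch of what the MMU does in hardware on every memory reference, with hypothetical register values (base = 50, limit = 10, matching the address-translation example):

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical relocation (base) and limit register values,
 * loaded by the dispatcher at context-switch time. */
static uint32_t reloc_reg = 50; /* smallest physical address of the process */
static uint32_t limit_reg = 10; /* size of the logical address space        */

/* Model of the MMU: every CPU-generated (logical) address is checked
 * against the limit register, then relocated by the base register. */
int translate(uint32_t logical, uint32_t *physical) {
    if (logical >= limit_reg)
        return -1;              /* out of bounds: trap to the OS */
    *physical = reloc_reg + logical;
    return 0;
}

int main(void) {
    uint32_t phys;
    for (uint32_t d = 8; d <= 11; d++) {
        if (translate(d, &phys) == 0)
            printf("logical %u -> physical %u\n", d, phys);
        else
            printf("logical %u -> TRAP (addressing error)\n", d);
    }
    return 0;
}
```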
LEC-25: Free Space Management

1. Defragmentation/Compaction
a. Dynamic partitioning suffers from external fragmentation.
b. Compaction minimizes the probability of external fragmentation. (Even with a free list, the free space is still scattered in separate holes, so the fragmentation issue remains; compaction addresses it.)
c. All the free partitions are made contiguous, and all the loaded partitions are brought together; the allocated blocks are simply shifted.
d. By applying this technique, we can store bigger processes in memory. The free partitions are merged and can then be allocated according to the needs of new processes. This technique is also called defragmentation.
e. The efficiency of the system decreases during compaction, since all the free spaces are moved from several places to a single place: the overhead grows, and the CPU stays busy doing the copying.
2. How is free space stored/represented in the OS?
a. Free holes in memory are represented by a free list (a linked-list data structure; each node stores the starting address of a hole).
3. How to satisfy a request of size n from a list of free holes?
a. Various algorithms are implemented by the operating system to find a suitable hole in the linked list and allocate it to the process. (A first-fit sketch follows this list.)
b. First Fit
i. Allocate the first hole that is big enough.
ii. Simple and easy to implement.
iii. Fast, with low time complexity.
c. Next Fit
i. An enhancement of First Fit, but the search always starts from the last allocated hole (first fit from where the previous allocation happened; no need to reset the search pointer to the start of the list).
ii. Same advantages as First Fit.
d. Best Fit
i. Allocate the smallest hole that is big enough.
ii. Less internal fragmentation.
iii. May create many small holes and cause major external fragmentation.
iv. Slow, as it must iterate over the whole free-holes list.
e. Worst Fit
i. Allocate the largest hole that is big enough.
ii. Slow, as it must iterate over the whole free-holes list.
iii. Leaves larger leftover holes, which may accommodate other processes (less external fragmentation).
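A minimal first-fit sketch over a singly linked free list; the node layout is hypothetical, and a hole is split when it is larger than the request:

```c
#include <stdio.h>
#include <stdlib.h>

/* One free hole: [start, start + size). Hypothetical layout. */
struct hole {
    size_t start, size;
    struct hole *next;
};

/* First fit: walk the free list and take the first hole that is
 * big enough; split it if it is larger than the request. Returns
 * the allocated start address, or (size_t)-1 on failure. */
size_t first_fit(struct hole **list, size_t n) {
    for (struct hole **pp = list; *pp; pp = &(*pp)->next) {
        struct hole *h = *pp;
        if (h->size < n) continue;
        size_t addr = h->start;
        if (h->size == n) {     /* exact fit: unlink the hole */
            *pp = h->next;
            free(h);
        } else {                /* split: shrink the hole     */
            h->start += n;
            h->size  -= n;
        }
        return addr;
    }
    return (size_t)-1;          /* no hole big enough         */
}

int main(void) {
    struct hole *h2 = malloc(sizeof *h2);
    struct hole *h1 = malloc(sizeof *h1);
    *h2 = (struct hole){ .start = 300, .size = 5, .next = NULL };
    *h1 = (struct hole){ .start = 100, .size = 3, .next = h2   };
    struct hole *list = h1;
    printf("alloc 4 -> %zu\n", first_fit(&list, 4)); /* skips the 3-unit hole, takes 300 */
    printf("alloc 3 -> %zu\n", first_fit(&list, 3)); /* exact fit at 100                 */
    return 0;
}
```

Next fit would keep the `pp` cursor between calls instead of restarting from the head; best/worst fit would scan the whole list for the smallest/largest adequate hole.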
(Non-contiguous allocation is paging; the logical address space divided into fixed-size blocks gives pages.)

LEC-26: Paging | Non-Contiguous Memory Allocation
1. The main disadvantage of dynamic partitioning is external fragmentation.
a. It can be removed by compaction, but with overhead.
b. We need a more dynamic/flexible/optimal mechanism to load processes into the partitions.
2. Idea behind Paging
a. Suppose we have only two small non-contiguous free holes in memory, say 1 KB each.
b. If the OS wants to allocate RAM to a process of 2 KB, in contiguous allocation this is not possible, as we must have 2 KB of contiguous memory available. (External fragmentation)
c. What if we divide the process into 1 KB blocks?
3. Paging
a. Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous. (The process is divided into pages, and page size is always equal to frame size.)
b. It avoids external fragmentation and the need for compaction.
c. The idea is to divide physical memory into fixed-sized blocks called Frames and to divide logical memory into blocks of the same size called Pages. (Page size = Frame size)
d. Page size is usually determined by the processor architecture. Traditionally, pages in a system had a uniform size, such as 4,096 bytes. However, processor designs often allow two or more, sometimes simultaneous, page sizes because of the benefits this brings.
e. Page Table
i. A data structure that stores which page is mapped to which frame. (The MMU uses the page table to map a logical page number to a physical frame number; every process has its own page table.)
ii. The page table contains the base address of each page in physical memory.
f. Every address generated by the CPU (logical address) is divided into two parts: a page number (p) and a page offset (d). The page number p is used as an index into the page table to get the base address of the corresponding frame in physical memory. (E.g., logical address 25 = 011001: the first two bits give the page number and the last four bits the offset. If there are 8 pages, 3 bits are needed for the page number. The offset bits stay the same; only the page number has to be translated into a frame number.)
[Figure: paging example; in the figure, the OS occupies the first 16 KB of physical memory]
g. The page table is stored in main memory at the time of process creation, and its base address is stored in the process control block (PCB).
h. A page table base register (PTBR) is present in the system and points to the current page table. Changing page tables requires updating only this one register at the time of context switching. (When another process is scheduled, the page table must change too, and this is done through the PTBR.)
4. How does paging avoid external fragmentation?
a. Non-contiguous allocation of the pages of a process is allowed in any of the random free frames of physical memory.
5. Why is paging slow, and how do we make it fast?
a. There are too many memory references needed to reach the desired location in physical memory: first the page table has to be consulted, then the offset added, and only then is the location found. This overhead can be reduced with caching.
6. Translation Look-aside Buffer (TLB)
a. Hardware support to speed up the paging process.
b. It is a hardware cache (high-speed memory).
c. The TLB holds key-value pairs.
d. The page table is stored in main memory, and because of this, translation is slow whenever a memory reference is made.
e. When we retrieve a physical address using the page table, after getting the frame address corresponding to the page number, we also put an entry into the TLB. The next time, we can get the value directly from the TLB without referencing the actual page table, making the paging process faster. (The first time an address arrives, the page table is used, but the page-to-frame mapping is cached in the TLB so the next lookup is direct.)
f. TLB hit: the TLB contains the mapping for the requested logical address.
(On a context switch, the old process's entries would have to be removed from the TLB; the TLB could simply be flushed, but context switches happen very frequently, and flushing and refilling every time is wasteful. Instead, an ASID is stored with each entry so the hardware knows which address space, i.e., which process, it belongs to.)
g. An address-space identifier (ASID) is stored in each entry of the TLB. An ASID uniquely identifies a process and is used to provide address-space protection, allowing the TLB to contain entries for several different processes. When the TLB attempts to resolve a virtual page number, it ensures that the ASID of the currently executing process matches the ASID associated with the virtual page; if they don't match, the attempt is treated as a TLB miss.
(The page table is different for every process, but the TLB, used to make paging fast, is shared by all processes, with entries differentiated by ASID. A sketch of the lookup follows.)
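A minimal sketch of the paging lookup under assumed parameters (16-byte pages, so a 4-bit offset; a tiny direct-indexed page table; a two-entry TLB). All sizes and table contents here are hypothetical, not from the notes:

```c
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 4                   /* 16-byte pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)
#define NPAGES 4

/* Hypothetical page table: page number -> frame number (p < NPAGES assumed). */
static uint32_t page_table[NPAGES] = {5, 2, 7, 0};

/* Tiny TLB: one {page, frame, valid} entry per slot. */
struct tlb_entry { uint32_t page, frame; int valid; };
static struct tlb_entry tlb[2];

uint32_t translate(uint32_t logical) {
    uint32_t p = logical >> OFFSET_BITS;  /* page number */
    uint32_t d = logical & OFFSET_MASK;   /* page offset */

    /* TLB lookup first: a hit avoids touching the page table. */
    for (int i = 0; i < 2; i++)
        if (tlb[i].valid && tlb[i].page == p) {
            printf("TLB hit  for page %u\n", p);
            return (tlb[i].frame << OFFSET_BITS) | d;
        }

    /* TLB miss: consult the page table (an extra memory reference),
     * then cache the mapping for next time. */
    printf("TLB miss for page %u\n", p);
    uint32_t f = page_table[p];
    tlb[p % 2] = (struct tlb_entry){p, f, 1};
    return (f << OFFSET_BITS) | d;
}

int main(void) {
    printf("physical = %u\n", translate(25)); /* page 1, offset 9: miss, -> 41 */
    printf("physical = %u\n", translate(27)); /* page 1, offset 11: hit, -> 43 */
    return 0;
}
```

A real TLB would also store an ASID per entry and require it to match the running process, as described in point g.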
C
2kb main division kr rhe toh 4 kb ka fn aya toh bhi os ne use 2 pages main tod diya aur non contiguos alloca ho skti hai even if contiguos available
ho; slow hogya (overhead agya) chota sa issue hai as paging usse bhi bda solve krti hai external fragm.

variable partitioning of logical adress


space
LEC-27: Segmentation | Non-Contiguous Memory Allocation
1. An important aspect of memory management that becomes unavoidable with paging is the separation of the user's view of memory from the actual physical memory.
2. Segmentation is a memory-management technique that supports the user view of memory.
3. A logical address space is a collection of segments; these segments are based on the user's view of logical memory.
4. Each logical address consists of a segment number and an offset, <segment-number, offset>, i.e., {s, d}. (E.g., for the address bits 01111 with a 2-bit segment field: s = 01, d = 111. If d < limit, the physical address is base + d.)
5. A process is divided into variable-sized segments based on the user's view. (A translation sketch appears at the end of this lecture.)
6. Paging is closer to the operating system than to the user. It divides all processes into pages, even though a process may have related parts, such as functions, that should be loaded in the same page.
7. The operating system doesn't care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.
8. It is better to have segmentation, which divides the process into segments. Each segment contains the same type of functions: for example, the main function can be included in one segment and the library functions in another.
9. [Figure: segmentation; the logical address space is divided into segments mapped through a segment table]
10. Advantages:
a. No internal fragmentation (exactly as much memory as the segment needs is allocated).
b. One segment has a contiguous allocation, hence efficient working within a segment.
c. The size of the segment table is generally less than the size of the page table.
d. It results in a more efficient system, because the compiler keeps the same type of functions in one segment.
11. Disadvantages:
a. External fragmentation.
b. Segments of different sizes are not good at the time of swapping. (Segments not currently needed can be sent to the swap area, but since they differ in size, a bigger one takes longer to swap than a smaller one, so timing estimates don't hold; this is somewhat inefficient.)
12. Modern system architecture provides both segmentation and paging, implemented in some hybrid approach.
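The <s, d> check from point 4 as a minimal C sketch, with a hypothetical hard-coded segment table of base/limit pairs:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical segment table: one base/limit pair per segment. */
struct seg { uint32_t base, limit; };
static struct seg seg_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300,  400},   /* segment 2 */
};

/* Translate <s, d>: valid only if d < limit; then physical = base + d. */
int translate(uint32_t s, uint32_t d, uint32_t *physical) {
    if (s >= sizeof seg_table / sizeof seg_table[0]) return -1;
    if (d >= seg_table[s].limit) return -1; /* trap: beyond segment end */
    *physical = seg_table[s].base + d;
    return 0;
}

int main(void) {
    uint32_t pa;
    if (translate(2, 53, &pa) == 0) printf("<2,53> -> %u\n", pa); /* 4353 */
    if (translate(1, 852, &pa) != 0) printf("<1,852> -> TRAP\n"); /* 852 >= 400 */
    return 0;
}
```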
LEC-28: What is Virtual Memory? || Demand Paging || Page Faults
1. Virtual memory is a technique that allows the execution of processes that are not completely in memory. It gives the user the illusion of having a very big main memory. This is done by treating a part of secondary memory as if it were main memory. (Swap space)
2. The advantage of this is that programs can be larger than physical memory.
3. Instructions must be in physical memory to be executed, but this limits the size of a program to the size of physical memory. In fact, in many cases the entire program is not needed at the same time. The ability to execute a program that is only partially in memory would bring many benefits:
a. A program would no longer be constrained by the amount of physical memory that is available.
b. Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
c. Running a program that is not entirely in memory would benefit both the system and the user.
4. The programmer is provided a very large virtual memory even when only a smaller physical memory is available.
5. Demand paging is a popular method of virtual-memory management.
6. In demand paging, the pages of a process which are least used get stored in the secondary memory.
7. A page is copied to main memory when its demand is made, i.e., when a page fault occurs. Various page replacement algorithms are used to determine which pages will be replaced.
8. Rather than swapping the entire process into memory, we use a lazy swapper. A lazy swapper never swaps a page into memory unless that page will be needed.
9. Since we are viewing a process as a sequence of pages rather than one large contiguous address space, using the term "swapper" is technically incorrect: a swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
10. How does demand paging work?
a. When a process is to be swapped in, the pager guesses which pages will be used.
b. Instead of swapping in a whole process, the pager brings only those pages into memory. Thus, it avoids reading into memory pages that will not be used anyway.
c. In this way, the OS decreases the swap time and the amount of physical memory needed.
d. The valid-invalid bit scheme in the page table is used to distinguish between pages that are in memory and pages that are on the disk.
i. Valid-invalid bit 1 means the associated page is both legal and in memory.
ii. Valid-invalid bit 0 means the page either is not valid (not in the logical address space of the process) or is valid but currently on the disk.
e. [Figure: page table with valid-invalid bits; invalid pages reside in swap space]
f. If a process never attempts to access a page marked invalid, it will execute successfully without even needing the pages present in the swap space.
g. If the process tries to access a page that was not brought into memory, the access to the page marked invalid causes a page fault: the paging hardware, noticing the invalid bit for the demanded page, causes a trap to the OS.
h. The procedure to handle the page fault (sketched in code after this lecture's list):
i. Check an internal table (in the PCB of the process) to determine whether the reference was a valid or an invalid memory access.
ii. If the reference was invalid, the process throws an exception. If the reference is valid, the pager will swap in the page.
iii. We find a free frame (from the free-frame list).
iv. Schedule a disk operation to read the desired page into the newly allocated frame.
v. When the disk read is complete, we modify the page table to record that the page is now in memory.
vi. Restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.
i. [Figure: steps in handling a page fault]
j. Pure Demand Paging
i. In the extreme case, we can start executing a process with no pages in memory. The OS sets the instruction pointer to the first instruction of the process, which is not in memory; the process immediately faults for the page, and the page is brought into memory.
ii. Never bring a page into memory until it is required.
k. We rely on locality of reference to get reasonable performance from demand paging.
11. Advantages of virtual memory
a. The degree of multi-programming is increased.
b. Users can run large apps with less real physical memory.
12. Disadvantages of virtual memory
a. The system can become slower, as swapping takes time.
b. Thrashing may occur.
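A minimal sketch of the page-fault path from point 10.h; the types and the helpers find_free_frame/read_page_from_disk are hypothetical stand-ins for kernel services, not a real API:

```c
#include <stdio.h>
#include <stdbool.h>

#define NPAGES 8

/* Hypothetical per-process page-table entry with a valid bit. */
struct pte { unsigned frame; bool valid; };
static struct pte page_table[NPAGES];

/* Stand-ins for kernel services (hypothetical, for illustration). */
static unsigned next_free = 0;
static unsigned find_free_frame(void) { return next_free++; }    /* iii. free-frame list */
static void read_page_from_disk(unsigned page, unsigned frame) { /* iv. schedule disk read */
    printf("disk: page %u -> frame %u\n", page, frame);
}

/* Called on a memory reference; returns the frame holding the page. */
static unsigned access_page(unsigned page) {
    if (page_table[page].valid)            /* hit: page already in memory   */
        return page_table[page].frame;
    /* Page fault: trap to the OS (steps from point 10.h). */
    unsigned frame = find_free_frame();
    read_page_from_disk(page, frame);
    page_table[page].frame = frame;        /* v. page table now records it  */
    page_table[page].valid = true;
    return frame;                          /* vi. restart the instruction   */
}

int main(void) {
    access_page(3);  /* faults, loads page 3 */
    access_page(3);  /* hit, no disk I/O     */
    return 0;
}
```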
LEC-29: Page Replacement Algorithms
1. A page fault occurs when a process tries to access a page which is not currently present in a frame; the OS must bring that page from swap space into a frame.
2. The OS must do page replacement to accommodate the new page in a free frame. But the system may be running at high utilization with all frames busy; in that case, the OS must replace one of the pages currently allocated to a frame with the new page.
3. The page replacement algorithm decides which memory page is to be replaced: some allocated page is swapped out of its frame, and the new page is swapped into the freed frame.
4. Types of page replacement algorithms (the aim is to have minimum page faults):
a. FIFO
i. Allocate a frame to each page as it comes into memory, replacing the oldest page.
ii. Easy to implement.
iii. Performance is not always good:
1. The page replaced may be an initialization module that was used a long time ago. (A good replacement candidate)
2. The page may contain a heavily used variable that was initialized early and is in constant use. (Will again cause a page fault)
iv. Belady's anomaly is present.
1. With LRU and optimal page replacement, the number of page faults is reduced when we increase the number of frames. However, Belady found that with the FIFO page replacement algorithm, the number of page faults can increase as the number of frames increases.
2. This is the strange behaviour shown by the FIFO algorithm in some cases.
b. Optimal page replacement
i. Find a page that is never referenced in the future. If such a page exists, replace it with the new page. If no such page exists, find the page that is referenced farthest in the future and replace that one.
ii. Lowest page-fault rate of any algorithm.
iii. Difficult to implement, as the OS requires future knowledge of the reference string, which is practically impossible. (Similar to SJF scheduling)
c. Least recently used (LRU)
i. Using the recent past as an approximation of the near future, we replace the page that has not been used for the longest period. (A small simulation follows this list.)
ii. Can be implemented in two ways:
1. Counters
a. Associate a time-of-use field with each page-table entry.
b. Replace the page with the smallest time value.
2. Stack
a. Keep a stack of page numbers.
b. Whenever a page is referenced, it is removed from the stack and put on top.
c. This way, the most recently used page is always on top, and the least recently used is always at the bottom.
d. As entries might be removed from the middle of the stack, a doubly linked list can be used.
d. Counting-based page replacement: keep a counter of the number of references that have been made to each page. (Reference counting)
i. Least frequently used (LFU)
1. Actively used pages should have a large reference count.
2. Replace the page with the smallest count.
ii. Most frequently used (MFU)
1. Based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
iii. Neither MFU nor LFU replacement is common.
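A minimal LRU simulation using the counter method over a hypothetical reference string, counting page faults (with 3 frames, this string causes 6 faults):

```c
#include <stdio.h>

#define FRAMES 3

int main(void) {
    /* Hypothetical reference string. */
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
    int n = sizeof refs / sizeof refs[0];
    int frame[FRAMES], last_use[FRAMES];
    for (int i = 0; i < FRAMES; i++) { frame[i] = -1; last_use[i] = 0; }

    int faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == refs[t]) hit = i;
        if (hit >= 0) {                     /* resident: just update its time */
            last_use[hit] = t;
            continue;
        }
        faults++;                           /* page fault                     */
        int victim = 0;
        for (int i = 0; i < FRAMES; i++) {  /* prefer a free frame, else LRU  */
            if (frame[i] == -1) { victim = i; break; }
            if (last_use[i] < last_use[victim]) victim = i;
        }
        frame[victim] = refs[t];            /* replace least recently used    */
        last_use[victim] = t;
    }
    printf("page faults: %d\n", faults);    /* prints 6 for this string       */
    return 0;
}
```

Swapping the victim-selection rule (oldest load time instead of oldest use time) turns the same harness into a FIFO simulator, which is a handy way to observe Belady's anomaly on suitable reference strings.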
LEC-30: Thrashing
1. Thrashing
a. If a process doesn't have the number of frames it needs to support its pages in active use, it will quickly page-fault. At that point, it must replace some page. However, since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back in immediately.
b. This high paging activity is called thrashing.
c. A system is thrashing when it spends more time servicing page faults than executing processes.
d. Techniques to handle thrashing:
i. Working-set model
1. This model is based on the concept of the locality model.
2. The basic principle: if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to some new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.
ii. Page-fault frequency (PFF)
1. Thrashing has a high page-fault rate.
2. We want to control the page-fault rate.
3. When it is too high, the process needs more frames. Conversely, if the page-fault rate is too low, the process may have too many frames.
4. We establish upper and lower bounds on the desired page-fault rate.
5. If the page-fault rate exceeds the upper limit, allocate the process another frame; if it falls below the lower limit, remove a frame from the process. (See the sketch below.)
6. By controlling the page-fault rate, thrashing can be prevented.
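The PFF bounds logic from point ii as a tiny C sketch; the limits and the rate units are hypothetical placeholders:

```c
#include <stdio.h>

/* Hypothetical PFF bounds (faults per 1000 references). */
#define PFF_UPPER 50.0
#define PFF_LOWER 5.0

/* Adjust a process's frame allocation from its measured fault rate. */
int adjust_frames(int frames, double pf_rate) {
    if (pf_rate > PFF_UPPER)                /* thrashing territory: give a frame */
        return frames + 1;
    if (pf_rate < PFF_LOWER && frames > 1)  /* over-allocated: reclaim a frame   */
        return frames - 1;
    return frames;                          /* within bounds: leave it alone     */
}

int main(void) {
    printf("%d\n", adjust_frames(4, 80.0)); /* 5: rate too high */
    printf("%d\n", adjust_frames(4,  2.0)); /* 3: rate too low  */
    printf("%d\n", adjust_frames(4, 20.0)); /* 4: in bounds     */
    return 0;
}
```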