
OPERATING SYSTEM

● If a privileged instruction is executed in user mode, the hardware does
not execute the instruction; it treats it as illegal and traps to the
operating system.
● When a system call is executed, it is treated by the hardware as a
software interrupt.
● A software interrupt is synchronous (it occurs at a fixed point whenever
you execute the program), but a hardware interrupt is asynchronous (it can
occur at any time during execution, e.g., when the user presses a key).
● Medium term scheduler : swaps suspended processes out of main
memory. It reduces the degree of multiprogramming.
● Long term scheduler : brings processes into main memory. It increases
the degree of multiprogramming.
● Short term scheduler : selects a process from the ready state to be
scheduled on the CPU.

● Preemption and context switching are two different things. Inside a
critical section a context switch can happen, but preemption cannot.
https://gateoverflow.in/210027/my-doubt

● EMAT = p × (page fault service time) + (1 − p) × (main memory access time)

● At compile time we cannot know at which address the program will be
loaded, so we cannot assign a real address to an instruction; we assign
relocatable code/addresses instead, i.e., relocation.

● The pointer overhead for indexed allocation is greater than for linked
allocation.
● fork() copies the exact same code to the child process, along with the
same stack pointer and program counter values.
● https://man7.org/linux/man-pages/man2/fork.2.html
● exec() never returns to the process that called it. Once a process calls
exec() to run another program, only that program continues; control never
returns to the caller (unless exec fails).
● Starvation means a long (unbounded) waiting time. Bounded waiting means
no process should wait for a resource for an infinite amount of time.
Deadlock means no progress. Deadlock is not related to bounded waiting: if
there is a deadlock, bounded waiting may still hold, but progress is not
possible.
● Deadlock and livelock are mutually exclusive: at any point in time only one
of them can occur in a system. But both imply no progress for the system, and
hence starvation for the processes involved.
● The OS runs only in kernel mode.
● Mode switching takes much less time than process switching.
● The PCB can be implemented using doubly linked lists.
● When the process is in the ready, running or the wait state, it is residing
in the main memory.
● When the process is in the suspended ready, then it is residing in the
Backing store i.e. secondary memory.
● Long term scheduler : Brings new process to memory. Controls the
degree of multiprogramming.
● Short term scheduler : Selects one of the processes in the ready state
for scheduling on the CPU.
● Mid term scheduler : Responsible for suspending and resuming the
processes (Swapping).
● Dispatcher : Responsible for loading the selected job on the CPU. It is
also responsible for context switching.
● Convoy Effect : If the first process is CPU bound and is followed by many
I/O bound processes, it will have a major effect on the average waiting
time of the processes.
● Preemptive scheduling needs hardware support to manage context
switch which includes saving the execution state of the current process
and then loading the next process.
● Thrashing is a condition or a situation when the system is spending a
major portion of its time in servicing the page faults, but the actual
processing done is very negligible.
● Two process reading from the same physical address will access the
same content.
● The OS may manipulate the contents of the MMU. The OS changes the
content of MMU on a context switch across the processes.
● A multilevel page table typically reduces the amount of memory needed
to store page tables, compared to linear page tables. With multiple
levels, the portions of the page table that correspond to invalid entries
may not need to be allocated at all.
● TLB reach = (number of TLB entries) × (size of a page)
● TLBs are more beneficial with multilevel paging than with linear page
tables, because the cost of walking a multilevel page table is higher than
walking a single level.
● Dirty bit : set to 1 when a page has been modified in memory; such a page
must be written back to disk before it is replaced. 0 means the page is
clean and can simply be discarded.
● With kernel level threads, multiple threads from the same process can
be scheduled on multiple CPUs simultaneously. This is the benefit of
kernel level threads; we cannot do this with user level threads.
● Locks do not prevent the operating system scheduler from performing a
context switch during a critical section. The OS scheduler can still
perform context switches whenever it wants; there is no coordination
between the scheduler and the lock implementation. Locks simply
ensure that if the scheduler schedules a second thread that also wants
to enter the same critical section as the first thread, the second
thread cannot enter that critical section until the first thread leaves it.
● A lock that performs spin-waiting can provide fairness across threads
(i.e., threads receive the lock in the order they requested the lock). True
– the ticket lock implementation we looked at in lecture is fair and can
use spin-waiting.
● A thread can hold/acquire multiple locks at a time.
● While interacting with a fast device, it can be better to spin wait than to
use interrupts. If the device responds very quickly, the response might
come back faster than the time required for context switch to another
process and back again.
● Progress does not imply bounded waiting.
● Bounded waiting does not imply progress.
● Bounded waiting does not imply starvation freedom. Bounded waiting
does not say whether a process can actually enter; it only says that there
is a bound.
● Progress + bounded waiting implies starvation freedom.
● Convoy effect
● Belady's anomaly
● Thrashing
● Elevator algorithm == SCAN scheduling algorithm
● Paging : Suffers from internal fragmentation but free from external
fragmentation.
● Segmentation and dynamic allocation : Free from internal fragmentation
but suffers from external fragmentation.
● Fixed partition : each partition can accommodate only one process.
● Variable partition : each partition can be subdivided into smaller
partitions.
http://www2.cs.uregina.ca/~hamilton/courses/330/notes/allocate/
allocate.html
Advantage of FAT over linked allocation : we can bring the whole FAT into
main memory, so disk accesses are reduced.
Number of entries in FAT = number of blocks in the system
Size of each entry = address size

READ THIS FOR OPERATING SYSTEM CONCEPTS :


https://pages.cs.wisc.edu/~dusseau/Classes/CS537/Fall2016/Exams/f16-e2-answers.pdf

Test series good questions:


https://gateoverflow.in/384388/go-classes-test-series-2023-operating-systems-test-question
https://gateoverflow.in/384387/go-classes-test-series-2023-operating-systems-test-question
https://gateoverflow.in/384368/go-classes-test-series-2023-operating-systems-test-question
https://gateoverflow.in/384518/go-classes-test-series-2023-operating-systems-test-question
https://gateoverflow.in/384501/go-classes-test-series-2023-operating-systems-test-question
https://gateoverflow.in/384499/go-classes-test-series-2023-operating-systems-test-question
https://gateoverflow.in/384497/go-classes-test-series-2023-operating-systems-test-question
I only counted the first fork() inside the if. When fork() executes, a nonzero
value is returned to the parent process. In an if condition with OR, if the
first operand is true the second operand is not evaluated. For the child
process that was created, I should have checked the second operand too, and
that in fact creates one more child process.

The correct answer is 8.


Parameters of system calls can be passed in registers.

First the Power-On Self Test (POST) verifies that all hardware is working
fine, then the BIOS is activated; it checks the settings for which device,
operating system, etc. to boot, and then loads the operating system into RAM.

Some differences between the child and parent process are:

● different pids
● in the parent, fork( ) returns the pid of the child process if a child process is created
● in the child, fork( ) always returns 0
● separate copies of all data, including variables with their current values and the stack
● separate program counter (PC) indicating where to execute next; originally both have
the same value but they are thereafter separate
● after fork, the two processes do not share variables
Child and parent processes run in parallel, so there is no specific order. If
the scheduler runs the child first we get the output 65; if it runs the parent
first we get 56. After fork, the child and parent have separate copies of all
data, including variables with their current values and the stack; that is why
66 is not possible.

Must Read deadlock question: GATE CSE 2021 Set 2 | Question: 43 - GATE Overflow for
GATE CSE
GATE IT 2004 | Question: 63 - GATE Overflow for GATE CSE

(C)
Scheduler Process is the correct answer.
The reason: a scheduler process only schedules, i.e., selects a process from the ready
queue to be run on the CPU, so no interrupt is generated in that case. When there is a
power failure, an interrupt ("power-off signal") will surely be generated.

Concept of Ready queue should be applied here :


GATE CSE 2012 | Question: 31 - GATE Overflow for GATE CSE

It is "beginning of 1st millisecond". Since the 1st millisecond is the
interval 0 - 1, the beginning should be 0.
https://gateoverflow.in/8330/gate-cse-2015-set-1-question-46

Good point about malloc :

GATE CSE 2021 Set 1 | Question: 14 - GATE Overflow for GATE CSE

malloc – This is a function defined in the standard C library, and it does not
always invoke a system call. When a process is created, a certain amount of heap
memory is already allocated to it; when that memory needs to expand or shrink,
malloc internally uses the sbrk/brk system calls on Unix/Linux. I.e., not every
malloc call needs a system call, but if the currently allocated size is not
enough, it will make a system call to get more memory.
In computing, transparency means functioning without the user being aware of it.

GATE CSE 2004 | Question: 11 - GATE Overflow for GATE CSE

Learn all thread concepts :


GATE CSE 2007 | Question: 17 - GATE Overflow for GATE CSE
GATE CSE 2011 | Question: 16, UGCNET-June2013-III: 65 - GATE Overflow for GATE CSE
GATE CSE 2014 Set 1 | Question: 20 - GATE Overflow for GATE CSE
GATE CSE 2017 Set 1 | Question: 18 - GATE Overflow for GATE CSE
GATE CSE 2017 Set 2 | Question: 07 - GATE Overflow for GATE CSE
GATE CSE 2021 Set 2 | Question: 42 - GATE Overflow for GATE CSE : global variables are
not shared across C files, but then one has to use extern for that.
Threads share:

● Address space
● Heap
● Static data
● Code segments
● File descriptors
● Global variables
● Child processes
● Pending alarms
● Signals and signal handlers
● Accounting information

Threads have their own:

● Program counter
● Registers
● Stack
● State
v and y are logical addresses that map to different physical locations (different
copies of a) after either the parent or the child writes. They are the same logical
address mapped to different physical pages, hence v = y.

Here &a yields the logical address, for security reasons.

Full clarity :

Let's say your process has a variable X with virtual address 100 and
physical address 200. The PTE holds a mapping from virtual 100 to physical 200.
After the fork, each process (parent and child) has its own private PTE; at
this point both PTEs map virtual 100 to physical 200.
As long as both processes just read, they both read from physical address 200.
When the first one writes, the data at that physical address is copied to a new
physical location, say 300, and its (and only its) PTE is updated so virtual
100 maps to physical 300. This is transparent to the process because it is
still using the same (virtual) address.
Redirection simply means capturing output from a file, command, program, script, or even
code block within a script and sending it as input to another file, command, program, or
script. Answer :B

Didn't understand why Interrupt processing is mapped to LIFO :\

(C) is the answer. Interrupt processing is LIFO because while we are processing
an interrupt, we disable interrupts originating from lower priority devices, so
lower priority interrupts cannot be raised. If an interrupt is detected, it must
have a higher priority than the currently executing interrupt, so the new
interrupt preempts the current one: LIFO. The other matches are easy.

Option C really seemed absolutely correct here, honestly:


Whether the interrupted process will complete execution or some other process
will execute is decided by the process scheduler. For instance, if the interrupt
signaled an I/O completion event that caused a high priority process to
transition from blocked to ready, the OS might preempt the interrupted process
and dispatch the high priority process.

Hence the answer is D.

Answer is (A).

Spooling (simultaneous peripheral operations online) is a technique in which an
intermediate device such as a disk is interposed between a process and a low
speed I/O device. For example, if a process attempts to print a document but the
printer is busy printing another document, the process, instead of waiting for
the printer to become available, writes its output to disk. When the printer
becomes available, the data on disk is printed. Spooling allows a process to
request an operation from a peripheral device without requiring that the device
be ready to service the request.

(C) is the correct answer. We can use one interrupt line for all the connected
devices and pass it through an OR gate. On receiving the interrupt, the CPU
executes the corresponding ISR, and after execution INTA is sent via one line.
Vectored interrupts are always possible if we implement a daisy chain mechanism.
GATE CSE 2005 | Question: 20 - GATE Overflow for GATE CSE

Confused here : does high CPU bandwidth mean that the CPU is least idle, i.e., very busy?

Also, does high I/O bandwidth mean that the I/O device can transfer a very large
amount of data in a fixed time?

Read the whole discussions : GATE IT 2006 | Question: 8 - GATE Overflow for GATE
CSE

In part B : do we allocate, or increase the demand? We don't increase the max
need; instead we allocate and then check. That means increasing the current
allocation and decreasing the available.

GATE CSE 1996 | Question: 22 - GATE Overflow for GATE CSE


Very good question :
GATE CSE 2010 | Question: 46 - GATE Overflow for GATE CSE

Notice in the ques : “There could be a deadlock not must be a deadlock.”

GATE CSE 2015 Set 2 | Question: 23 - GATE Overflow for GATE CSE

GATE CSE 2016 Set 1 | Question: 50 - GATE Overflow for GATE CSE

When there is a deadlock possibility, progress is definitely not satisfied, because
progress states that in a finite amount of time a decision must be taken about which
process will get to execute the CS next. When a deadlock is present, bounded waiting
may or may not be satisfied.

Blindly chosen option B :

GATE CSE 2008 | Question: 63 - GATE Overflow for GATE CSE

Must try to solve it once again before reading the answer.

Didn't understand anything in the question :

GATE IT 2006 | Question: 57 - GATE Overflow for GATE CSE

GATE CSE 1997 | Question: 3.9 - GATE Overflow for GATE CSE
GATE CSE 2004 | Question: 21, ISRO2007-44 - GATE Overflow for GATE CSE

In the optimal page replacement policy I made a mistake (the 20th page will be a hit at the end):

GATE CSE 2016 Set 1 | Question: 49 - GATE Overflow for GATE CSE

A page replacement algorithm suffers from Belady's anomaly when it is not a stack
algorithm. E.g., LRU and optimal page replacement are stack based.

A stack algorithm is one that satisfies the inclusion property.

GATE CSE 2017 Set 1 | Question: 40 - GATE Overflow for GATE CSE
Very good question : Didn't understand why 2GB is given in the question :

GATE CSE 2021 Set 2 | Question: 48 - GATE Overflow for GATE CSE

Good question :

GATE IT 2007 | Question: 58 - GATE Overflow for GATE CSE

Check the discussion once, didn't understand the point : GATE CSE 1990 | Question:
1-v - GATE Overflow for GATE CSE

Didn't understand the question properly :

Simply remember the formula of efficiency = useful time / total time

So the useful time here will be 3x10^8.

See the solution further ahead:

GATE CSE 1990 | Question: 7-b - GATE Overflow for GATE CSE
GATE CSE 1991 | Question: 03-xi - GATE Overflow for GATE CSE

GATE CSE 1994 | Question: 1.21 - GATE Overflow for GATE CSE

I assumed that there are only 3 physical frames available by looking at the figure,
but you have to calculate the number of frames first from the information given in
the question.

GATE CSE 1996 | Question: 7 - GATE Overflow for GATE CSE


Very good question on multilevel paging : MUST TRY

GATE CSE 1999 | Question: 19 - GATE Overflow for GATE CSE

Answer should be both (A) and (C)

Address translation is needed to provide memory protection so that a given process
does not interfere with another. Otherwise we must fix the number of processes to
some limit and divide the memory space among them -- which is not an "efficient"
mechanism.

We also need at least 2 modes of execution to ensure user processes share resources
properly and the OS maintains control. This is not required for a single user OS like the
early version of MS-DOS.

Demand paging and DMA enhances the performances- not a strict necessity.

GATE CSE 1999 | Question: 2.11 - GATE Overflow for GATE CSE

Whether to take the page tables cached or not is confusing !!!

VERY GOOD QUESTION

GATE CSE 2003 | Question: 78 - GATE Overflow for GATE CSE

GOOD QUESTION

Did a silly mistake :

GATE CSE 2003 | Question: 79 - GATE Overflow for GATE CSE


Good concept to be revised :

GATE CSE 2006 | Question: 62, ISRO2016-50 - GATE Overflow for GATE CSE

Please study what is the relation between Multiuser support and virtual memory :

GATE CSE 2006 | Question: 63, UGCNET-June2012-III: 45 - GATE Overflow for GATE
CSE

VERY GOOD QUESTION ON MULTILEVEL PAGING : try this on your own


GATE CSE 2013 | Question: 52 - GATE Overflow for GATE CSE

I blindly guessed that each entry of any page table contains the frame number, and
as the frame number is 24 bits, I chose option B.

But it is wrong!
We should not assume that page tables are page aligned (the page table size
need not be the same as the page size unless the question says so, and page
tables at different levels can have different sizes).
GATE CSE 2008 | Question: 67 - GATE Overflow for GATE CSE

R/W bit : if the bit is set, the page is read/write; otherwise, when it is not
set, the page is read-only.

GATE IT 2008 | Question: 56 - GATE Overflow for GATE CSE

The question says "single sequential user process", so all the requests to the
disk scheduler will arrive in sequence and each one will block execution; hence
there is no use for any disk scheduling algorithm. Every disk scheduling
algorithm sees the same input sequence, so the improvement of SSTF over FCFS
will be 0%. Correct Answer: D

VERY GOOD QUESTION :

GATE IT 2007 | Question: 83 - GATE Overflow for GATE CSE

Such a good question to let you make a silly mistake.

The question gives the formatted disk size first and then asks for the unformatted size :)

GATE CSE 1995 | Question: 14 - GATE Overflow for GATE CSE


Didn't understand the second part !

GATE CSE 1996 | Question: 23 - GATE Overflow for GATE CSE

TOUGH AND GOOD QUESTION including pipeline concepts and disk scheduling !

MUST TRY.

GATE CSE 1997 | Question: 74 - GATE Overflow for GATE CSE

Formatting does not mean erasing the data 😂


The formatted disk capacity is always less than the "raw"
unformatted capacity specified by the disk's manufacturer, because
some portion of each track is used for sector identification and for
gaps (empty spaces) between sectors and at the end of the track.

GATE CSE 1998 | Question: 2-9 - GATE Overflow for GATE CSE

RAID : RAID, short for redundant array of independent (originally inexpensive) disks is
a disk subsystem that stores your data across multiple disks to either increase the
performance or provide fault tolerance to your system (some levels provide both).

GATE CSE 1999 | Question: 2-18, ISRO2008-46 - GATE Overflow for GATE CSE
GATE CSE 2001 | Question: 1.22 - GATE Overflow for GATE CSE

Confusing : GATE CSE 2001 | Question: 20 - GATE Overflow for GATE CSE

Interrupt overhead counted in CPU utilisation time only not in transfer time !

GATE CSE 2001 | Question: 8 - GATE Overflow for GATE CSE

Confused !

Answer is (A). Larger block size means less number of blocks to fetch and hence
better throughput. But larger block size also means space is wasted when only small
size is required.
GATE CSE 2008 | Question: 32 - GATE Overflow for GATE CSE

Question me 64 sectors per cylinder diya hai aur solution me 64 sectors per track le
liya, ye kya baat hui :\

GATE CSE 2013 | Question: 29 - GATE Overflow for GATE CSE

Rotational latency seek time question :

GATE CSE 2015 Set 1 | Question: 48 - GATE Overflow for GATE CSE

VERY RARE AND GOOD CONCEPT OF LINEAR and ANGULAR VELOCITY in disk
scheduling :
CAV: here each track has equal capacity and the density varies, such that the
innermost track has the maximum density and the outermost track has the minimum
density. In CAV mode, the spindle motor turns at a constant speed, which makes the
medium pass by the read/write head faster when the head is positioned at the
outside of the disk.

CLV: here each track has equal density and the capacity varies, such that the
innermost track has the minimum capacity and the outermost track has the maximum
capacity. In CLV mode, the spindle motor speed varies so that the medium passes by
the head at the same speed regardless of where on the disk the head is positioned.

GATE IT 2005 | Question: 81-a - GATE Overflow for GATE CSE

MOST PROBABLE SILLY MISTAKE you will make:

The disk is constantly rotating, so while the head moves from the innermost track
to the outermost track, the total movement of the disk = (3.5/0.5) = 7 sectors.

While the head is seeking, the rotations continue, so account for the rotation
when calculating the number of sectors crossed. Not clear? See the solution!

GATE IT 2005 | Question: 81-b - GATE Overflow for GATE CSE

In questions like this, you should calculate the average seek time on your own.
GOOD QUESTION

Avg seek time = (0 + 1 + 2 + 3 + ... + 499)/500

GATE IT 2007 | Question: 44, ISRO2015-34 - GATE Overflow for GATE CSE

GATE CSE 2002 | Question: 2.22 - GATE Overflow for GATE CSE

GOOD QUESTION !
GATE CSE 2021 Set 1 | Question: 15 - GATE Overflow for GATE CSE

VERY GOOD QUESTION : solve it using a sliding window

GATE CSE 1992 | Question: 12-b - GATE Overflow for GATE CSE

VERY GOOD QUESTION. The reasoning is more of an intuition rather than any
formula. GATE CSE 1996 | Question: 2.18 - GATE Overflow for GATE CSE

Clear the concept of overlays :


GATE CSE 1998 | Question: 2.16 - GATE Overflow for GATE CSE

GOOD QUESTION :

GATE CSE 2014 Set 2 | Question: 55 - GATE Overflow for GATE CSE

VERY GOOD Question ! Will clear all doubts in paging, segmentation and
segmented paging.

GATE IT 2006 | Question: 56 - GATE Overflow for GATE CSE


New Concept :
https://gateoverflow.in/18743/scheduling-algorithm
GATE applied topic test 3: You had a doubt here about bounded waiting. The presence
of a while loop here does not mean that bounded waiting is necessarily satisfied,
because here each process sets the turn variable itself, so after running once it
can set it again and run again. See the difference between this question and the
one that follows it.
In the question below, when process P0 runs first, it sets turn = 1 while exiting,
to give the other process a chance; hence bounded waiting is satisfied there.

One more twist in this question is to check for progress. If we made a decision
based only on the turn variable, this would look like strict alternation, but when
you check the code properly: if the flag of the other process is false, then the
process does not check the turn variable at all. Whenever flag[other] = false, it
simply enters the critical section.

Therefore it satisfies mutual exclusion, progress and bounded waiting, and there
is no deadlock.
Didn't understand this!
http://www.csl.mtu.edu/cs3331.ck/common/05-Sync-Basics.pdf

Overlay concept :
See here: even if some partition has an extra 20KB of space, the 20KB process
does not go into that partition; it will have its own fixed partition.

Notice here only double indirect pointers are there :
