OS Mini Notes
Uploaded by siddharth prasad

OS

1. Explain the concept of a process in an operating system. What are the different states that a process can be in?

In an operating system, a process is a program that is being executed. During its execution, a process goes through different states. Understanding these states helps us see how the operating system manages processes, ensuring that the computer runs efficiently.
States of a Process
A process can exist in several different states during its lifetime. The most common states are:
 New: This state represents a newly created process that hasn't started running yet. It has not been loaded into main memory, but its process control block (PCB) has been created, which holds important information about the process.
 Ready: A process in this state is ready to run as soon as the CPU becomes available. It is waiting for the operating system to give it a chance to execute.
 Running: This state means the process is currently being executed by the CPU. Assuming there is only one CPU, only one process can be in this state at any time.
 Blocked/Waiting: This state means the process cannot continue executing right now. It is waiting for some event to happen, like the completion of an input/output operation (for example, reading data from a disk).
 Exit/Terminate: A process in this state has finished its execution or has been stopped by the user for some reason. At this point, it is released by the operating system and removed from memory.
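The transitions between these states can be sketched as a small state machine. The state names follow the list above; the transition table itself is an illustrative assumption, not the rule set of any particular operating system.

```python
# Minimal sketch of process states and their legal transitions
# (illustrative; a real kernel tracks far more in the PCB).
ALLOWED = {
    "New": {"Ready"},                         # admitted into main memory
    "Ready": {"Running"},                     # dispatched by the scheduler
    "Running": {"Ready", "Blocked", "Exit"},  # preempted, waits for I/O, or finishes
    "Blocked": {"Ready"},                     # awaited event (e.g. I/O) completes
    "Exit": set(),                            # terminal state
}

def transition(state, new_state):
    """Return new_state if the move is legal, else raise ValueError."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "New"
for nxt in ["Ready", "Running", "Blocked", "Ready", "Running", "Exit"]:
    state = transition(state, nxt)
print(state)  # Exit
```

Note that a Blocked process cannot go straight back to Running: it must first return to Ready and be dispatched again.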

2. Describe the process scheduling algorithms used in operating systems, such as FCFS, SJF, Round Robin, etc. Compare their characteristics and suitability for different scenarios.

Process scheduling is a critical component of an operating system. It ensures efficient CPU utilization by determining the order of execution for processes waiting in the ready queue. Proper scheduling not only increases system performance but also enhances user experience by minimizing waiting time and response delays. Here's a detailed look at some widely used scheduling algorithms, their characteristics, and their applicability in different scenarios.

1. First-Come, First-Served (FCFS)

The FCFS scheduling algorithm is the simplest. Processes are executed in the order they arrive in the ready queue.
 Characteristics:
o It is a non-preemptive algorithm, meaning a process runs to completion once started.
o Simple to implement: the ready queue operates like a standard FIFO (First-In, First-Out) queue.
o Prone to the convoy effect, where long processes can block shorter ones, leading to inefficiencies.
 Advantages:
o Easy to implement and understand.
o Works well in batch processing systems where process arrival times are predictable.
 Disadvantages:
o Poor responsiveness for interactive systems.
o Long average waiting times due to the convoy effect.
 Suitability:
o Best suited for batch systems or applications that don't require high responsiveness.
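The convoy effect can be made concrete with a short worked example. The sketch below computes FCFS waiting times for three processes that all arrive at time 0; the burst times are invented for illustration.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when all arrive at t=0 and run in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # each process waits for all earlier bursts
        elapsed += burst
    return waits

# A long job (24) ahead of two short ones (3, 3): the convoy effect.
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))  # [0, 24, 27] 17.0

# The same jobs with the short ones first: average waiting time drops sharply.
waits = fcfs_waiting_times([3, 3, 24])
print(waits, sum(waits) / len(waits))  # [0, 3, 6] 3.0
```

The same workload yields an average wait of 17 or 3 time units depending purely on arrival order, which FCFS cannot control.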


2. Shortest Job First (SJF)

SJF selects the process with the smallest CPU burst time for execution. It can operate in preemptive mode (Shortest Remaining Time First, SRTF) or non-preemptive mode.
 Characteristics:
o Minimizes average waiting time, making it optimal in theory.
o Requires knowledge of process burst times beforehand, which may not always be feasible.
o At risk of starvation, as longer processes may be delayed indefinitely when shorter ones keep arriving.
 Advantages:
o Reduces average waiting time.
o Efficient for batch processing with predictable workloads.
 Disadvantages:
o Difficult to implement in real-world scenarios due to the need for precise burst-time prediction.
o Starvation can occur without additional handling mechanisms.
 Suitability:
o Ideal for systems with batch jobs or predictable CPU burst times.
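A minimal non-preemptive SJF sketch, assuming all processes are ready at t=0 and all burst times are known in advance (the algorithm's key practical limitation). The process names and bursts are made up for illustration.

```python
def sjf_order(processes):
    """Non-preemptive SJF with everything ready at t=0:
    run jobs in ascending burst-time order and record waiting times."""
    schedule = sorted(processes, key=lambda p: p[1])  # sort (name, burst) by burst
    waits, elapsed = {}, 0
    for name, burst in schedule:
        waits[name] = elapsed   # waits for everything scheduled before it
        elapsed += burst
    return [name for name, _ in schedule], waits

order, waits = sjf_order([("A", 6), ("B", 8), ("C", 7), ("D", 3)])
print(order)                     # ['D', 'A', 'C', 'B']
print(sum(waits.values()) / 4)   # 7.0  -> (0 + 3 + 9 + 16) / 4
```

With this workload no other ordering achieves a lower average wait, which is why SJF is called optimal in theory.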


3. Round Robin (RR)

Round Robin ensures fairness by assigning each process a fixed time slice (quantum) and cycling through all processes in the ready queue.
 Characteristics:
o Preemptive: a process is interrupted after its time quantum expires and placed at the end of the queue.
o Performance is highly dependent on the size of the time quantum:
 A small quantum increases context-switching overhead.
 A large quantum behaves like FCFS.
 Advantages:
o Prevents starvation, ensuring all processes get CPU time.
o Ideal for interactive systems, providing a sense of responsiveness to users.
 Disadvantages:
o Overhead due to frequent context switching if the quantum is too small.
 Suitability:
o Perfect for time-sharing systems and interactive environments where responsiveness is key.
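The cycling behaviour can be simulated with an ordinary queue. A sketch, assuming all processes are ready at t=0 and ignoring context-switch cost; the names, bursts, and quantum of 4 are illustrative.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate RR; return the completion time of each process."""
    queue = deque(processes)  # entries are (name, remaining_burst)
    finish, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for at most one quantum
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of the queue
        else:
            finish[name] = clock        # finished within this slice
    return finish

print(round_robin([("P1", 10), ("P2", 5), ("P3", 8)], quantum=4))
# {'P2': 17, 'P3': 21, 'P1': 23}
```

Note that the short process P2 finishes first even though it was not first in the queue, which is exactly the responsiveness RR is chosen for.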


4. Priority Scheduling

In this algorithm, each process is assigned a priority. The CPU is allocated to the highest-priority process.
 Characteristics:
o Can be preemptive or non-preemptive.
o Risk of starvation for lower-priority processes.
 Advantages:
o Allows prioritization of critical or time-sensitive tasks.
 Disadvantages:
o Starvation issues can arise unless aging is implemented, where process priority increases over time.
 Suitability:
o Effective in real-time systems and scenarios where certain tasks are more urgent than others.
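Aging can be sketched as a small adjustment applied each time the scheduler makes a pick: every process passed over gains priority, so a starved low-priority job eventually wins. The numbers and the aging step of 1 are illustrative assumptions.

```python
def pick_with_aging(ready, aging_step=1):
    """Pick the highest-priority process (larger number = higher priority),
    then age every process that was passed over."""
    ready.sort(key=lambda p: p["priority"], reverse=True)
    chosen = ready.pop(0)
    for p in ready:                      # everyone still waiting gains priority
        p["priority"] += aging_step
    return chosen

ready = [{"name": "low", "priority": 1}, {"name": "high", "priority": 5}]
print(pick_with_aging(ready)["name"])   # high
print(ready)                            # "low" has been aged from priority 1 to 2
```

After enough picks, even a process that started far below its competitors reaches the top of the queue, preventing indefinite starvation.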


5. Multilevel Queue Scheduling

Processes are divided into multiple queues based on characteristics like priority or type (foreground vs. background). Each queue can use a different scheduling algorithm.
 Characteristics:
o A hybrid approach combining multiple scheduling techniques.
o Queues are managed separately, with priorities determining how they interact.
 Advantages:
o Flexible and tailored to different process types.
 Disadvantages:
o Complex implementation and management.
 Suitability:
o Useful in mixed environments with diverse workloads (e.g., interactive, batch, real-time tasks).

3. Explain the concept of virtual memory and its importance in modern operating systems.

Virtual memory is a critical mechanism in modern operating systems, enabling efficient use of system resources by abstracting physical memory into a larger virtual memory space. It is designed to overcome the limitations of physical memory (RAM) and allows processes to execute as though they have access to more memory than is physically available. This concept revolutionized computing by improving multitasking, system stability, and overall performance.

Importance of Virtual Memory


Virtual memory has several key benefits:

 Efficient Use of Memory:


o Allows the execution of processes larger than the
available physical RAM.
o Optimizes memory usage by allocating only the
memory that is actively required.
 Multitasking:
o Enables multiple applications to run concurrently by
isolating their memory spaces.
o Prevents applications from interfering with each
other's memory, enhancing system stability.
 Protection and Isolation:
o Each process operates in its own virtual memory
space, preventing unauthorized access to other
processes' data.
o Protects the operating system kernel from user
processes.
 Flexibility:
o Provides an abstraction layer that simplifies
memory management for programmers.
o Allows dynamic allocation and deallocation of
memory during program execution.
 Cost-Effectiveness:
o Reduces the need for excessive physical RAM by
leveraging disk storage.
4. Discuss the paging and segmentation memory management schemes. Compare their advantages and disadvantages.

Efficient memory management is crucial for modern


operating systems, ensuring optimal utilization of resources
and smooth multitasking. Paging and segmentation are two
distinct techniques for managing memory, each addressing
specific challenges and offering unique advantages. Here's a
detailed discussion of both schemes, their differences, and a
comparison of their pros and cons.
Paging
Paging is a memory management scheme that divides a
program's logical address space into fixed-size blocks called
pages and physical memory into equally sized blocks called
frames. Each page is mapped to a frame in physical memory
using a page table, which stores the mapping between virtual
and physical addresses.
How Paging Works:
 Logical memory is split into pages (e.g., 4 KB or 8 KB in
size).
 Physical memory is divided into frames of the same size
as pages.
 When a process is executed, its pages can be loaded into
any available frames.
 The page table handles address translation, converting
virtual addresses to physical addresses.
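The translation step above can be sketched numerically. With a 4 KB page size, a virtual address splits into a page number (the high bits) and an offset (the low 12 bits); the page table contents below are an invented example.

```python
PAGE_SIZE = 4096  # 4 KB pages, so the offset occupies the low 12 bits

# Hypothetical page table: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Translate a virtual address to a physical address via the page table."""
    page = virtual_addr // PAGE_SIZE   # which page the address falls in
    offset = virtual_addr % PAGE_SIZE  # position within that page
    frame = page_table[page]           # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

# Virtual address 4100 = page 1, offset 4 -> frame 2, offset 4.
print(translate(4100))  # 8196
```

Because the offset is untouched by translation, only the page-to-frame mapping needs to be stored, which is what keeps page tables compact.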
Advantages of Paging:
1. Efficient Memory Utilization:
o Pages can be loaded into any available frame, reducing memory wastage and fragmentation.
2. Simplified Allocation:
o Fixed-size pages simplify memory allocation and deallocation.
3. Supports Virtual Memory:
o Paging enables the use of disk storage as an extension of RAM, allowing large programs to run on limited physical memory.
4. Protection:
o Memory protection is easier to implement, since pages belonging to one process cannot access another process's pages.
Disadvantages of Paging:
1. Internal Fragmentation:
o Pages that do not completely fill a frame result in wasted space.
2. Translation Overhead:
o Address translation using page tables introduces additional overhead, potentially slowing memory access.
3. Complexity:
o The use of page tables requires additional memory and hardware support (like the Translation Lookaside Buffer, or TLB).
Segmentation
Segmentation divides a program’s address space into variable-
sized blocks called segments, which correspond to logical
divisions of the program, such as code, data, stack, or heap.
Each segment is mapped to a specific location in physical
memory using a segment table.
How Segmentation Works:
 A program's logical memory is divided into segments
based on functional components.
 Each segment has a base address (starting location in
physical memory) and a limit (size of the segment).
 The segment table maps segment numbers to physical
memory addresses.
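Segment translation uses a base/limit check rather than a fixed bit split. A sketch with an invented segment table; the limit check is what triggers a segmentation fault in real systems.

```python
# Hypothetical segment table: segment name -> (base address, limit).
segment_table = {
    "code": (1000, 400),
    "data": (2000, 300),
    "stack": (4000, 200),
}

def translate(segment, offset):
    """Map (segment, offset) to a physical address, enforcing the segment limit."""
    base, limit = segment_table[segment]
    if offset >= limit:                      # access past the end of the segment
        raise MemoryError(f"segmentation fault: {segment}+{offset}")
    return base + offset

print(translate("data", 50))   # 2050
```

Unlike a page number, the segment offset is checked against a per-segment size, which is why segments can be any length and carry their own protection.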
Advantages of Segmentation:

1. Logical Representation:
o Segments align with the logical structure of programs, making them easier to manage and debug.
2. No Internal Fragmentation:
o Segments can vary in size, ensuring efficient use of memory.
3. Protection and Sharing:
o Different segments can have different access permissions, enabling protection and controlled sharing of memory.
Disadvantages of Segmentation:
1. External Fragmentation:
o Variable-sized segments can create gaps in physical memory, making it harder to allocate new segments.
2. Complex Management:
o Managing variable-sized segments and handling fragmentation adds complexity.
3. Limited Virtual Memory Support:
o Segmentation alone does not efficiently support virtual memory without additional techniques like paging.
5. Define a file system. Describe the file allocation methods used in file systems, including contiguous allocation, linked allocation, and indexed allocation. Discuss the advantages and disadvantages of each method.

A file system is a method and data structure that an operating


system uses to manage files on a storage device like a hard
drive, SSD, or USB. It provides a way to organize, store,
retrieve, and manage data efficiently. The file system includes
metadata (like file names, sizes, and permissions) and the

10
mechanisms to allocate and access the actual data stored on
the disk. Examples include FAT32, NTFS, EXT4, and APFS.
A critical aspect of a file system is its file allocation method,
which determines how files are stored on the disk. Efficient
allocation is crucial for fast access, optimal disk space usage,
and minimizing fragmentation.
File Allocation Methods
The most common file allocation methods are contiguous
allocation, linked allocation, and indexed allocation. Each
has its own approach to organizing file data on storage, along
with associated benefits and drawbacks.
1. Contiguous Allocation
In contiguous allocation, a file is stored in consecutive
blocks on the disk. The file system needs to know the starting
block and the file's size for access.
Advantages:
 Fast Access: Since the blocks are stored sequentially,
reading a file is very efficient, as the disk head doesn’t
need to move much.
 Simplicity: Easy to implement as it requires minimal
metadata—just the starting block and file size.
Disadvantages:
 External Fragmentation: Over time, free blocks can
become scattered, making it difficult to find contiguous
space for new files.
 Dynamic File Sizes: If a file grows beyond its allocated
blocks, it may need to be relocated entirely, causing
overhead.

 Poor Space Utilization: Due to fragmentation, disk
space may be wasted despite having free blocks.
Use Case: Contiguous allocation is suitable for systems where
file sizes are predictable and do not change, like CD-ROMs or
multimedia storage.
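Why access is fast under contiguous allocation is visible in a one-line computation: block i of the file lives at start + i, with no table lookups. The start block and size below are invented.

```python
def contiguous_block(start, size, i):
    """Disk block holding block i of a file stored contiguously
    from block `start` with `size` blocks in total."""
    if not 0 <= i < size:
        raise IndexError("block index outside the file")
    return start + i   # pure arithmetic: no pointers or index blocks to consult

# File occupying blocks 100..104 (start=100, size=5).
print(contiguous_block(100, 5, 3))  # 103
```

The flip side is the growth problem described above: block 105 may already belong to another file, so extending this file can force a full relocation.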
2. Linked Allocation
In linked allocation, each file is a linked list of disk blocks.
Each block contains a pointer to the next block in the
sequence. The directory entry only stores the address of the
first block.
Advantages:
 No External Fragmentation: As files do not need
contiguous storage, any free block on the disk can be
used.
 Flexible File Sizes: Files can easily grow or shrink as
needed, with new blocks simply appended to the chain.
Disadvantages:
 Slow Random Access: To access a specific block, the
system must traverse the pointers sequentially, which is
inefficient for large files.
 Pointer Overhead: Each block requires space for a
pointer, reducing the amount of usable disk space.
 Reliability Issues: If a pointer becomes corrupted, the
rest of the file may become inaccessible.
Use Case: Linked allocation works well for sequential access
patterns, such as log files or backup systems.
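The sequential-access cost of linked allocation shows up directly in a sketch: reaching block i means following i pointers from the first block. The next-block map below stands in for the per-block pointers and is invented.

```python
# Hypothetical on-disk pointers: block number -> next block (None = end of file).
next_block = {9: 16, 16: 1, 1: 25, 25: None}

def nth_block(first, i):
    """Follow the chain i times to find the disk block holding block i of the file."""
    block = first
    for _ in range(i):        # O(i) pointer hops: this is why random access is slow
        block = next_block[block]
    return block

print(nth_block(9, 0))  # 9  (the first block is reached for free)
print(nth_block(9, 3))  # 25 (three pointer hops)
```

A corrupted entry anywhere in `next_block` would orphan every block after it, which is the reliability weakness noted above.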
3. Indexed Allocation

In indexed allocation, a separate block, called an index
block, stores pointers to all the blocks containing the file's
data. The directory entry stores the address of the index block.
Advantages:
 Direct Access: Allows for fast random access, as the
system can directly retrieve any block by looking up the
index.
 No External Fragmentation: Blocks do not need to be
contiguous, and any free block can be used.
 Efficient Growth: Files can grow dynamically without
relocation.
Disadvantages:
 Overhead of Index Blocks: Storing the index block for
every file consumes additional disk space.
 Limited File Size: The number of blocks a file can use is
limited by the size of the index block (though this can be
mitigated by multi-level indexing).
Use Case: Indexed allocation is ideal for systems requiring
random access and efficient file growth, like general-purpose
operating systems (e.g., Linux EXT4 or NTFS).
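Indexed allocation replaces the pointer chain with a single lookup table, so any block is reachable in one step at the cost of storing the index block itself. The index contents below are invented.

```python
# Hypothetical index block: position i in the list is the disk block
# that holds block i of the file.
index_block = [19, 7, 31, 2]

def nth_block(index, i):
    """Direct lookup: any block of the file is found in a single step."""
    return index[i]

print(nth_block(index_block, 2))  # 31
```

The file's maximum size is bounded by how many entries fit in one index block, which is why real file systems add multi-level indexing on top of this scheme.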
6. Difference between Program and Process.

1. A program contains a set of instructions designed to complete a specific task; a process is an instance of an executing program.
2. A program is a passive entity, as it resides in secondary memory; a process is an active entity, as it is created during execution and loaded into main memory.
3. A program exists at a single place and continues to exist until it is deleted; a process exists for a limited period, as it gets terminated after the completion of its task.
4. A program is a static entity; a process is a dynamic entity.
5. A program has no resource requirement beyond the memory space for storing its instructions; a process has high resource requirements, needing CPU time, memory addresses, and I/O during its lifetime.
6. A program has no control block; a process has its own control block, called the Process Control Block (PCB).
7. A program has two logical components: code and data; in addition to program data, a process also requires further information needed for its management and execution.
8. A program does not change itself; many processes may execute a single program, and while their program code may be the same, their program data may differ.
9. A program contains instructions; a process is a sequence of instruction execution.

7. Explain the working principle of the kernel. Show the layered structure implementation of an operating system.

Working Principle of a Kernel

The kernel is the core component of an operating system (OS) that acts as a bridge between software (applications) and hardware (CPU, memory, I/O devices). It manages system resources and enables communication between hardware and software efficiently and securely.

Key Functions of the Kernel
1. Process Management:
o Manages the creation, scheduling, and termination of processes.
o Ensures fair CPU time allocation using scheduling algorithms.
2. Memory Management:
o Allocates memory to processes and deallocates it after use.
o Implements virtual memory and paging mechanisms to optimize memory usage.
3. Device Management:
o Acts as an interface between the hardware (e.g., disk drives, printers) and software.
o Uses device drivers to ensure smooth I/O operations.
4. File System Management:
o Handles file operations like reading, writing, and directory creation.
o Organizes files in structured formats like FAT, NTFS, or EXT4.
5. System Security and Access Control:
o Ensures that unauthorized programs do not access critical system resources.
o Implements user authentication and permissions.
6. Interrupt Handling:
o Responds to hardware- or software-generated interrupts.
o Ensures the system remains responsive under high load or events like I/O completion.
Layered Structure of an Operating System
In the layered approach, the OS is organized as a hierarchy of layers, each built on top of the one below it and using only the services of the lower layers:
 Layer 0: Hardware (CPU, memory, I/O devices).
 Layer 1: Kernel (process, memory, and device management).
 Layer 2: System calls and device drivers.
 Layer 3: System utilities and libraries.
 Layer 4: User applications.
The lowest layer deals directly with the hardware, while the highest layer provides the user interface; each layer hides its implementation details from the layers above, which simplifies debugging and verification.

8. What are scheduling policies? Explain the FCFS and Round Robin policies.

Scheduling Policies
Scheduling policies are strategies used by operating systems
to manage the execution of processes by the CPU. These
policies are crucial in determining the order of process
execution, optimizing CPU utilization, and ensuring fairness,
responsiveness, and efficiency. A good scheduling policy
aims to minimize waiting time, turnaround time, and response
time while maximizing throughput.
Two commonly used scheduling policies are First-Come,
First-Served (FCFS) and Round Robin (RR). Both have
distinct characteristics, advantages, and drawbacks, making
them suitable for different scenarios.
First-Come, First-Served (FCFS) Scheduling Policy
Definition
FCFS is the simplest scheduling policy where processes are
executed in the order of their arrival in the ready queue. The

process that arrives first is executed first and runs to
completion before the next process begins.
Characteristics:
1. Non-preemptive: Once a process starts execution, it cannot be interrupted until it finishes.
2. Queue-Based: Operates like a standard FIFO (First-In, First-Out) queue.
Advantages:
1. Simplicity:
o Easy to implement, as it requires minimal system overhead.
2. Predictable Behavior:
o Processes are executed in the order they arrive, making it straightforward to understand.
Disadvantages:
1. Convoy Effect:
o Long processes can block shorter processes waiting in the queue, leading to inefficient CPU utilization.
2. High Average Waiting Time:
o Processes arriving later may experience long delays, particularly in systems with varying process lengths.
3. Poor Responsiveness:
o Not suitable for interactive systems, as it does not prioritize user-centric tasks.
Suitability:
FCFS is best suited for batch processing systems where process arrival times and lengths are predictable, and system responsiveness is not critical.
Round Robin (RR) Scheduling Policy
Definition
Round Robin is a preemptive scheduling policy where each
process is assigned a fixed time slice, known as the quantum.
The CPU cycles through all processes in the ready queue,
allocating each process its quantum. If a process doesn’t finish
within its quantum, it is preempted and placed at the end of
the queue.
Characteristics:
1. Preemptive by Design:
o Ensures all processes share CPU time fairly.
2. Cyclic Execution:
o Processes are executed in a cyclic manner, giving each one an equal opportunity to run.
3. Quantum Size:
o The size of the quantum is critical:
 A small quantum ensures responsiveness but increases overhead due to frequent context switching.
 A large quantum reduces overhead but may lead to behavior similar to FCFS.
Advantages:
1. Fairness:
o Prevents starvation, as all processes get CPU time regardless of their length or priority.
2. Responsive:
o Well-suited for time-sharing systems where user interaction and responsiveness are key.
3. Efficient for Short Tasks:
o Short processes are often completed in a single quantum or after a few cycles.
Disadvantages:
1. Context-Switching Overhead:
o Frequent preemptions can lead to excessive context switching, reducing CPU efficiency.
2. Performance Depends on Quantum:
o Selecting an inappropriate quantum size can result in either poor responsiveness or high overhead.
3. Not Optimal for Long Processes:
o Processes with significant execution times may experience delays as the CPU cycles through other processes.
Suitability:
Round Robin is ideal for interactive systems like operating
systems, time-sharing environments, and multitasking
systems, where maintaining responsiveness is critical.
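The quantum trade-off described above can be made concrete by counting preemptions (and therefore context switches) at different quantum sizes; the burst times are invented for illustration.

```python
from collections import deque

def rr_preemptions(bursts, quantum):
    """Count how many times any process is preempted before finishing under RR."""
    queue = deque(bursts)          # only remaining burst times matter here
    preemptions = 0
    while queue:
        remaining = queue.popleft()
        if remaining > quantum:
            preemptions += 1                   # time slice expired: context switch
            queue.append(remaining - quantum)  # back of the queue
    return preemptions

bursts = [10, 5, 8]
print(rr_preemptions(bursts, 1))    # 20 preemptions: heavy switching overhead
print(rr_preemptions(bursts, 100))  # 0 preemptions: degenerates into FCFS
```

The two extremes bracket the design space: the quantum must be small enough for responsiveness but large enough that switching cost stays negligible.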
