Os 2019
Uploaded by TUSHAR AHUJA


ANS. (A) An Operating System (OS) is essential software that manages hardware resources, facilitates user interaction, and ensures efficient system operation. Its key roles:
1. Resource Management: Allocates CPU, memory, and
storage to processes and applications.
2. Process Management: Handles multitasking, process
scheduling, and inter-process communication.
3. Security: Ensures data and system security through
access control and malware protection.
4. User Interface: Provides interfaces like CLI or GUI for
user interaction.
5. File and Storage Management: Organizes and controls
file access on storage devices.
6. Network Management: Manages device communication
and data exchange over networks.
7. Performance Monitoring: Tracks system performance
and optimizes resource utilization.
8. Error Handling: Detects and resolves system errors to
maintain stability.
9. Application Support: Provides a platform for running
software and utilities.

1. Batch Operating System:


Example: IBM OS/360
Used in early mainframes to process batches of jobs
sequentially without user interaction.
2. Time-Sharing Operating System:
Example: UNIX
Allows multiple users to use the system simultaneously by
sharing processor time.
3. Distributed Operating System:
Example: Apache Hadoop
Manages a group of independent computers to work as a
single system for processing distributed tasks.
4. Network Operating System:
Example: Microsoft Windows Server
Enables resource sharing and communication across
networked computers.
5. Real-Time Operating System (RTOS):
Example: VxWorks
Used in embedded systems like automotive, robotics, and
medical devices where timing is critical.
6. Embedded Operating System:
Example: FreeRTOS
Designed for devices like microwaves, smartwatches, and
IoT systems.
7. Mobile Operating System:
Example: Android and iOS
Optimized for mobile devices like smartphones and
tablets.
8. Desktop Operating System:
Example: Windows 10, macOS, Linux (Ubuntu)
Designed for personal computers to support general-
purpose computing tasks.

(B) Thrashing in an operating system (OS) occurs when a computer's virtual memory subsystem is overused, causing a significant slowdown in system performance. It happens when the system spends more time swapping data between main memory (RAM) and the disk (swap space or page file) than executing actual processes.

Causes of Thrashing:
1. Overcommitment of Memory: When too many processes
are running, and their combined memory demands exceed
the physical RAM, the system starts swapping pages in
and out frequently.
2. Insufficient RAM: Not enough physical memory to handle
the running processes.
3. High Degree of Multiprogramming: Running too many
processes simultaneously can lead to excessive context
switching and paging.

Symptoms of Thrashing:
• The system becomes extremely slow.
• High disk activity (e.g., the hard drive or SSD is constantly active).
• Low CPU utilization despite a heavy load of processes.

How to Prevent or Mitigate Thrashing:


1. Reduce the Number of Running Processes: Close
unnecessary applications.
2. Increase Physical Memory (RAM): More RAM reduces
reliance on virtual memory.

3. Adjust Virtual Memory Settings: Increase the size of the swap space.
4. Use Efficient Scheduling Algorithms: Algorithms like
LRU (Least Recently Used) help in better page
replacement.

Example:
Suppose a system has 4GB of RAM and is running multiple
applications. If these applications require 6GB of memory, the
OS will use 2GB of virtual memory from the disk. If the working
set of active processes keeps changing, pages are frequently
swapped in and out, leading to thrashing.
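The LRU policy mentioned above can be illustrated with a short simulation (a minimal sketch, not tied to any particular OS; the reference string and frame count below are made up for illustration):

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Simulate LRU page replacement and count page faults."""
    frames = OrderedDict()  # keys are pages, ordered least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # miss: page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

# 3 frames, a made-up page reference string
print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # 8 page faults
```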

(C) Process States in Operating System


The states of a process are as follows:
• New State: The process is about to be created but has not yet been created. It exists as a program in secondary memory that will be picked up by the OS to create the process.
• Ready State: New -> Ready to run. After creation, the process enters the ready state, i.e., it is loaded into main memory. The process is ready to run and is waiting for CPU time. Processes that are ready for execution by the CPU are maintained in a queue called the ready queue.
• Run State: The process is chosen from the ready queue by the OS for execution, and its instructions are executed by one of the available processors.

• Blocked or Wait State: Whenever a process requests I/O, needs input from the user, or needs access to a critical region (whose lock is already held), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes to the ready state.
• Terminated or Completed State: The process is killed and its PCB is deleted. The resources allocated to the process are released or deallocated.

(D) Access Matrix in System Protection


The Access Matrix is a model used to define and enforce
protection mechanisms in computer systems. It represents the
rights and permissions of subjects (users or processes) over
objects (resources like files, devices, etc.).

Use of Access Matrix:


1. Protection: Ensures that resources are accessed only by
authorized users with defined rights.
2. Access Control: Helps in implementing access control
mechanisms for files, devices, and other resources.
3. Flexibility: Supports dynamic adjustments by
adding/removing subjects, objects, or permissions.
4. Security: Prevents unauthorized access, ensuring data
integrity and confidentiality.
5. Audit: Provides a clear framework to review and verify
access permissions for security audits.
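A minimal way to picture the access matrix is a table indexed by subject and object; the subjects, objects, and rights below are made-up examples:

```python
# Rows are subjects (users/processes), columns are objects (resources),
# and each cell holds the set of rights that subject has on that object.
access_matrix = {
    "user1": {"file1": {"read", "write"}, "printer": {"print"}},
    "user2": {"file1": {"read"}},
}

def is_allowed(subject, obj, right):
    """Check whether a subject holds a given right on an object."""
    return right in access_matrix.get(subject, {}).get(obj, set())

print(is_allowed("user1", "file1", "write"))  # True
print(is_allowed("user2", "file1", "write"))  # False
```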

(E) Disk Reliability in Computing


• Disk reliability refers to the ability of a storage disk (e.g.,
HDD or SSD) to perform its intended functions without
failure over a specified period.
• It is a crucial factor in ensuring data integrity, system
stability, and uninterrupted access to stored data.
• Disk reliability refers to a disk system's ability to remain
available to users even if one or more disks fail. It's
important to ensure disk reliability because data stored on
disks can become corrupted over time, causing errors that
go undetected.
• Other problems that can occur include downtime, data
loss, and hardware failures.
• Disk reliability metrics and failure rates are important for
determining storage durations and data reliability levels.
• One way to measure a hardware product's reliability is by
using the mean time between failures (MTBF). For
example, a hard disk drive might have an MTBF of
300,000 hours.

(F) Processor Affinity in Multi-Processor Systems
Processor Affinity, also known as CPU pinning, refers to the
ability of a process or thread to execute on a specific processor
or set of processors in a multi-processor system. This
mechanism helps optimize performance by leveraging the
cache memory of a particular processor.

Example of Processor Affinity


• A video encoding application can be pinned to a specific
processor to ensure smooth performance without
interference from other processes.
• In Linux, the taskset command can be used to set
processor affinity.

ANS. (A) Continuous and Non-Continuous Memory Allocation
Memory allocation refers to the way in which memory is
assigned to programs and processes during execution. The two
main types of memory allocation are Continuous Memory
Allocation and Non-Continuous Memory Allocation.

1. Continuous Memory Allocation


In Continuous Memory Allocation, each process is allocated
a single contiguous block of memory. The entire memory space
required by a process is allocated at once, and the process can
access the memory locations in that block without interruption.

2. Non-Continuous Memory Allocation


In Non-Continuous Memory Allocation, memory is divided
into chunks, and processes can be assigned memory blocks
that are scattered throughout the available memory space. This
approach makes use of a more flexible allocation method,
where each process can have multiple segments or pages
scattered in memory.
Types of Non-Continuous Allocation:
• Paged Allocation: Memory is divided into fixed-size
blocks called "pages." A process is divided into pages, and
each page can be allocated anywhere in memory.
• Segmented Allocation: Memory is divided into segments
of varying sizes based on the needs of the process. Each
segment can be allocated anywhere in memory.

(B)

ANS. (A) Fragmentation in Memory Allocation
Fragmentation occurs when memory is allocated and
deallocated dynamically, leading to inefficient utilization of
memory. It is categorized into two types:

1. External Fragmentation
• Definition: Occurs when free memory is scattered
throughout the system, but none of the free blocks is large
enough to satisfy a request.
• Cause: Arises due to continuous memory allocation when
small gaps of unusable memory are left between allocated
blocks.
• Example: If 100 MB of memory is free but split into two
blocks of 60 MB and 40 MB, a process requiring 70 MB
cannot be allocated memory.
Solution:
• Compaction: Rearranges memory to consolidate free
spaces into a single large block.
• Use Non-Continuous Memory Allocation: Techniques
like paging or segmentation reduce external
fragmentation.
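The 60 MB / 40 MB example above can be reproduced with a tiny first-fit sketch (hole sizes in MB; purely illustrative):

```python
def first_fit(holes, request):
    """Return the index of the first free hole that fits, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

holes = [60, 40]             # 100 MB free in total, but split into two holes
print(first_fit(holes, 70))  # None: no single hole is large enough
print(first_fit(holes, 40))  # 0: the 60 MB hole can satisfy a 40 MB request
```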

2. Internal Fragmentation
• Definition: Occurs when the allocated memory block is
larger than the memory required by a process, leaving
unused space inside the block.
• Cause: Arises due to fixed-sized memory partitions or
page sizes, where the size of the memory allocated to a
process does not exactly match its requirement.
• Example: If a process requires 28 KB but is allocated a 32
KB block, the remaining 4 KB is wasted.
Solution:
• Dynamic Partitioning: Allocate exactly the amount of
memory needed for a process.
• Use Smaller Page Sizes: Reducing page size can help
minimize wasted space.

Memory Allocation Strategies


1. Fixed Partitioning
• Description: Memory is divided into fixed-size partitions,
and each partition can hold exactly one process.
• Advantage: Simple to implement.
• Disadvantage: Leads to internal fragmentation and
inefficient use of memory.

2. Dynamic Partitioning
• Description: Memory is divided into partitions of variable
sizes based on the process requirements.

• Advantage: Reduces internal fragmentation.


• Disadvantage: Causes external fragmentation.

3. Paging
• Description: Divides memory into fixed-size blocks called
pages. Processes are divided into pages and mapped to
frames in memory.
• Advantage: Eliminates external fragmentation.
• Disadvantage: May cause internal fragmentation if page
sizes are large.

4. Segmentation
• Description: Divides a process into segments based on
logical divisions (e.g., code, data, stack). Each segment is
stored in a different part of memory.
• Advantage: Provides logical division and more efficient
memory use.
• Disadvantage: Can lead to external fragmentation.

5. Paging with Segmentation


• Description: Combines paging and segmentation, where
segments are divided into pages.
• Advantage: Minimizes both internal and external
fragmentation.
• Disadvantage: More complex to implement.

Comparison Between Fragmentation Types

(B) Segmentation in Memory Management


Segmentation is a memory management scheme that divides
the memory into variable-sized segments based on logical
divisions of a program, such as code, data, stack, etc. Each
segment represents a specific type of functionality, making it
easier to manage and access memory in a way that reflects the
logical structure of the program.

Hardware Required for Segmentation


Hardware implementation of segmentation requires the following components:
1. Segment Table:
o Stores the base and limit of each segment.
o Base: Starting address of the segment in physical memory.

o Limit: Length of segment.


2. Segment Table Base Register (STBR):
o Holds starting address of segment table in memory.
3. Segment Table Length Register (STLR):
o Indicates the number of segments in table, ensuring
the program does not access undefined segments.
4. Address Translation Logic:
o Converts logical addresses (segment number +
offset) into physical addresses using segment table.
o Checks if the offset is within the segment limit;
otherwise, a segmentation fault occurs.
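The translation logic above can be sketched as follows (the segment bases and limits are made-up values, not from the text):

```python
# Segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400)}

def translate(segment, offset):
    """Translate a logical (segment, offset) address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                       # offset outside the segment
        raise MemoryError("segmentation fault")
    return base + offset                      # physical address

print(translate(0, 100))  # 1500
print(translate(1, 53))   # 6353
```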

ANS. (A) Reader-Writer Problem


The Reader-Writer Problem is a classical synchronization
problem where:
• Readers: Multiple reader processes can read the shared
resource simultaneously without causing conflicts.
• Writers: Writer processes require exclusive access to the
shared resource, meaning no other readers or writers can
access it while writing.

The problem arises when multiple readers and writers try to


access a shared resource concurrently. The solution ensures:
• Writers get exclusive access.
• Multiple readers can access the resource simultaneously,
but no writer can access it while readers are reading.

Reader-Writer Problem using Semaphores


Semaphores are used to solve this problem by controlling access to the shared resource. The classic solution uses two semaphores and a shared counter:
• mutex: Controls mutual exclusion for updating shared variables like the count of readers.
• write: Ensures exclusive access for writers. The first reader acquires it and the last reader releases it, so no writer can access the resource while any reader is reading.

Pseudo Code for Solving Reader-Writer Problem
Shared Variables:
• mutex: Semaphore initialized to 1, ensuring mutual
exclusion for the reader count variable.
• write: Semaphore initialized to 1, ensuring exclusive
access for writers.
• read_count: A shared integer initialized to 0, keeping track
of the number of readers currently accessing the resource.

Pseudo Code for Reader Process:



Explanation (Reader Process):


• Entry Section: The first reader process arriving blocks any
writer by acquiring the write semaphore. Every other reader
can enter as long as there is no writer active.
• Reading Section: Multiple readers can perform the reading
operation simultaneously since no writer is allowed when at
least one reader is active.
• Exit Section: The last reader to exit (i.e., when read_count
becomes 0) signals the write semaphore, allowing any
waiting writer to access the resource.

Pseudo Code for Writer Process

Explanation (Writer Process):


1. Entry Section: A writer tries to acquire the write semaphore, blocking any further readers or writers from accessing the resource.
2. Writing Section: Only one writer can access the shared
resource during the writing operation.

3. Exit Section: Once the writing is done, the writer releases the write semaphore, allowing other readers or writers to proceed.

Semaphore Operations Used:


• wait(semaphore): This operation decreases the
semaphore value. If the value is negative, the process is
blocked (waiting for the resource).
• signal(semaphore): This operation increases the
semaphore value and wakes up waiting processes, if any.

How This Solves the Problem:


• Readers: Multiple readers can access the resource
simultaneously, as long as no writer is writing. The first
reader blocks writers, and the last reader releases the lock
for writers.
• Writers: Writers have exclusive access to the resource.
They wait for readers to finish, and once they get access,
no other readers or writers can proceed until the writer is
done.
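The reader and writer routines described above can be sketched in Python with threading.Semaphore (a minimal sketch of the classic first readers-writers solution; the thread and iteration counts are arbitrary):

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
write = threading.Semaphore(1)   # exclusive access for writers
read_count = 0
shared = [0]                     # the shared resource

def reader():
    global read_count
    mutex.acquire()              # wait(mutex)
    read_count += 1
    if read_count == 1:
        write.acquire()          # first reader blocks writers
    mutex.release()              # signal(mutex)
    _ = shared[0]                # reading section
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        write.release()          # last reader lets writers in
    mutex.release()

def writer():
    for _ in range(100):
        write.acquire()          # wait(write): exclusive access
        shared[0] += 1           # writing section
        write.release()          # signal(write)

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared[0])  # 300: every writer increment applied, no lost update
```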

(B)

ANS. (A) What is a Scheduler?


• A scheduler is a system component in an operating
system (OS) responsible for determining which process or
thread to execute next.
• It plays a crucial role in process management by allocating
CPU time to different processes or tasks in an efficient
and fair manner, ensuring that each process gets an
appropriate share of the CPU.

Types of Process Schedulers


There are three types of process schedulers:

1. Long Term or Job Scheduler


• It brings the new process to the ‘Ready State’. It controls the degree of multiprogramming, i.e., the number of processes present in the ready state at any point in time.
• It is important that the long-term scheduler make a careful
selection of both I/O and CPU-bound processes.
• I/O-bound tasks are those that spend much of their time on input and output operations, while CPU-bound processes are those that spend most of their time on the CPU.
• The job scheduler increases efficiency by maintaining a
balance between the two.
• They operate at a high level and are typically used in
batch-processing systems.

2. Short-Term or CPU Scheduler


It is responsible for selecting one process from the ready state and scheduling it on the running state. Note: the short-term scheduler only selects the process to schedule; it does not load the process into the running state. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring no starvation due to processes with high burst times.

3. Medium-Term Scheduler
It is responsible for suspending and resuming processes. It mainly does swapping (moving processes from main memory to disk and vice versa). Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up. It helps maintain a balance between I/O-bound and CPU-bound processes. It reduces the degree of multiprogramming.

(B) Producer-Consumer Problem and Semaphore Solution
The Producer-Consumer problem is a classic synchronization
problem in operating systems. It involves two types of
processes:
1. Producer: Generates data and places it in a shared
buffer.
2. Consumer: Consumes data from the shared buffer.

The goal is to ensure proper synchronization such that:


• The producer does not produce into a full buffer.
• The consumer does not consume from an empty buffer.

Semaphores in the Solution


The solution uses three semaphores:
1. mutex: Ensures mutual exclusion while accessing the
shared buffer.
2. empty: Counts the number of empty slots in the buffer.
3. full: Counts the number of filled slots in the buffer.

Semaphore Initialization
• mutex = 1: Ensures only one process accesses the buffer
at a time.
• empty = n: Represents the number of empty slots in the
buffer (initially all slots are empty).
• full = 0: Represents the number of filled slots in the buffer
(initially none).

Pseudocode
Producer Process

Consumer Process
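A minimal Python sketch of both routines, using threading.Semaphore with the initialization described above (the buffer size and item count are arbitrary):

```python
import threading
from collections import deque

N = 5                                 # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)        # mutual exclusion on the buffer
empty = threading.Semaphore(N)        # counts empty slots (initially N)
full = threading.Semaphore(0)         # counts filled slots (initially 0)
consumed = []

def producer():
    for item in range(20):
        empty.acquire()               # wait(empty): block if buffer is full
        mutex.acquire()               # wait(mutex)
        buffer.append(item)           # place item in the shared buffer
        mutex.release()               # signal(mutex)
        full.release()                # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()                # wait(full): block if buffer is empty
        mutex.acquire()
        consumed.append(buffer.popleft())  # take item from the buffer
        mutex.release()
        empty.release()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))  # True: items arrive in order, none lost
```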

ANS. (A) What is a Deadlock?


A deadlock is a situation in a multi-processing system where
two or more processes are unable to proceed because each is
waiting for resources held by the other processes. In essence,
the processes are stuck in a circular wait, leading to a system
halt unless external intervention occurs.

Necessary Conditions for Deadlock


A deadlock occurs when a group of processes is blocked,
waiting for resources that are held by other processes in the
group. For a deadlock to occur, four necessary conditions must
hold simultaneously:
1. Mutual Exclusion
• At least one resource must be held in a non-shareable mode, meaning only one process can use the resource at a time.
• If another process requests the resource, it must wait until
the resource is released.

Example: A printer can be used by only one process at a time.

2. Hold and Wait


• A process holding at least one resource is waiting to
acquire additional resources that are currently held by
other processes.
Example:
• Process A holds Resource 1 and requests Resource 2.
• Process B holds Resource 2 and requests Resource 1.

3. No Preemption
• Resources cannot be forcibly taken away from a process.
They can only be released voluntarily by the process
holding them, after the process has completed its task.
Example:
• If a process is holding a resource, the OS cannot force the
process to release it; it must wait until the process is done.

4. Circular Wait
• A set of processes is waiting in a circular chain, where
each process is waiting for a resource that the next
process in the chain holds.
Example:
• Process A waits for Resource B.
• Process B waits for Resource C.
• Process C waits for Resource A.

Deadlock Prevention
This method ensures that at least one of the necessary
conditions for deadlock cannot occur. The four conditions for
deadlock are:
1. Mutual Exclusion
o Resources cannot be shared simultaneously.
2. Hold and Wait
o Processes hold allocated resources while waiting for
additional ones.
3. No Preemption
o Resources cannot be forcibly taken away from
processes.
4. Circular Wait
o A circular chain of processes exists, where each
process waits for a resource held by the next.
Approach:
• Mutual Exclusion: Allow sharing of some resources
wherever possible (e.g., read-only files).

• Hold and Wait: Require processes to request all resources at once, or release held resources before requesting new ones.
• No Preemption: Allow preemption; if a process holding
resources is blocked, the OS can preempt its resources
and allocate them to other processes.
• Circular Wait: Impose a strict ordering on resource
acquisition. Processes must request resources in a
predefined order.
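The circular-wait rule can be sketched by always acquiring locks in one fixed global order (a minimal sketch; the lock names and counts are arbitrary):

```python
import threading

lock_a = threading.Lock()  # resource with global order 1
lock_b = threading.Lock()  # resource with global order 2
counter = [0]

def worker():
    # Every thread takes lock_a before lock_b. If one thread instead took
    # lock_b first, a circular wait (and thus deadlock) could form.
    for _ in range(1000):
        with lock_a:
            with lock_b:
                counter[0] += 1

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start(); t1.join(); t2.join()
print(counter[0])  # 2000: both threads complete without deadlock
```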

(B)

ANS. (A) Swap-Space Management


Swap-space is a portion of the disk used by the operating
system as an extension of main memory (RAM). When the
system runs out of physical memory, inactive pages from
memory are moved to the swap-space, allowing active
processes to continue running without interruption.

Tasks of Swap-Space Management


1. Paging:
The operating system swaps pages between physical
memory and swap-space when needed, a process called
paging. This helps to run programs that require more
memory than is physically available.
2. Allocation:
Swap-space can be preallocated during system
initialization or dynamically allocated during runtime as
memory demands increase.
3. Deallocation:
When memory pages are no longer needed, they are
removed from the swap-space to free up space for other
processes.
4. Performance Optimization:
Swap-space management aims to minimize I/O overhead.
Efficient algorithms are used to decide which memory
pages should be swapped out.
5. Storage Management:
The operating system maintains metadata about the
location and status of swap-space pages, ensuring
consistent and efficient data access.
6. Support for Virtual Memory:
Swap-space is a key component of virtual memory, which
enables systems to use more memory than physically
available by combining RAM with disk storage.

Concept of Raw Partition


A raw partition is a dedicated section of a disk that is not
formatted with a file system. It is used for low-level operations,
often for swap-space or database management systems. In the
context of swap-space, a raw partition allows for direct disk
access without the overhead of a file system.
Advantages of Raw Partitions for Swap-Space:
1. Performance:
Direct disk access bypasses the file system layer, resulting
in faster read/write operations.
2. Efficiency:
Data management is simpler since the operating system
does not have to manage file system metadata.
3. Reliability:
Reduces fragmentation and ensures consistent
performance, as there is no file system overhead.

How Swap-Space Uses Raw Partitions


• Dedicated Swap Partition: The system reserves a raw
partition solely for swap-space. Since there is no file
system, it provides predictable performance for memory
paging.
• File-based Swap-Space: Swap-space can also exist as a
file within a file system, but this method introduces slight
overhead compared to a raw partition.

Comparison: Swap-Space on Raw Partition vs. File

(B) Given Data


• Number of cylinders: 300 (0 to 299)
• Current head position: 150
• Pending requests: 69, 12, 196, 202, 144, 218, 256, 123,
165, 81
• Algorithms: FCFS, SSTF, SCAN, and LOOK
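The total head movement for each algorithm can be computed with a short script (assumptions: SSTF ties are broken by list order; SCAN sweeps toward higher cylinders first, travelling to cylinder 299 before reversing; LOOK reverses at the last request instead):

```python
head, max_cyl = 150, 299
requests = [69, 12, 196, 202, 144, 218, 256, 123, 165, 81]

def fcfs(reqs, pos):
    total = 0
    for r in reqs:                    # service requests in arrival order
        total += abs(pos - r); pos = r
    return total

def sstf(reqs, pos):
    pending, total = list(reqs), 0
    while pending:                    # always pick the closest pending request
        nxt = min(pending, key=lambda r: abs(r - pos))
        total += abs(pos - nxt); pos = nxt
        pending.remove(nxt)
    return total

def scan(reqs, pos, end):
    # Sweep up to the disk end, then service the remaining requests downward.
    down = sorted((r for r in reqs if r < pos), reverse=True)
    if not down:
        return max(r for r in reqs if r >= pos) - pos
    return (end - pos) + (end - down[-1])

def look(reqs, pos):
    # Like SCAN, but reverse at the last request instead of the disk end.
    up = sorted(r for r in reqs if r >= pos)
    down = sorted((r for r in reqs if r < pos), reverse=True)
    if not down:
        return up[-1] - pos
    return (up[-1] - pos) + (up[-1] - down[-1])

print(fcfs(requests, head))           # 757
print(sstf(requests, head))           # 404
print(scan(requests, head, max_cyl))  # 436
print(look(requests, head))           # 350
```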

ANS. (A)
• Application Programs: This is
the topmost layer where users
interact with files through
applications. It provides the user
interface for file operations like
creating, deleting, reading,
writing, and modifying files.
Examples include text editors, file
browsers, & command-line
interfaces.
• Logical File System – It manages metadata, i.e., all details about a file except the file's actual contents. It maintains this information via file control blocks. A file control block (FCB) has information about a file: owner, size, permissions, and location of file contents.
• File Organization Module – It has information about files, the location of files, and their logical and physical blocks. Physical block numbers do not match the logical block numbers, which run from 0 to N. It also has a free-space manager that tracks unallocated blocks.
• Basic File System – It issues general commands to the device driver to read and write physical blocks on disk. It manages memory buffers and caches: a block in the buffer can hold the contents of a disk block, and the cache stores frequently used file system metadata.

• I/O Control Level – Device drivers act as an interface between devices and the OS; they help transfer data between disk and main memory. It takes a block number as input and, as output, gives low-level hardware-specific instructions.
• Devices Layer: The bottommost layer, consisting of the actual hardware devices. It performs the actual reading and writing of data to the physical storage medium. This includes hard drives, SSDs, optical disks, and other storage devices.

(B) Program Threats and System Threats in cybersecurity refer to different categories of vulnerabilities or risks that can harm software applications or the underlying systems they run on. Here's a breakdown of program threats and system threats along with examples for each:

Program Threats
Program threats are vulnerabilities or malicious activities
targeting software or applications. These often exploit
weaknesses in application design, code, or logic.
Examples of Program Threats
1. Trojan Horses
o Malicious programs disguised as legitimate software.
o Example: A fake antivirus program that actually
installs malware.
2. Logic Bombs
o Malicious code that triggers harmful actions when
specific conditions are met.

o Example: Deleting files on a certain date or when a particular user logs in.
3. Trapdoors (Backdoors)
o Hidden entry points in a program allowing
unauthorized access.
o Example: Developers leaving hidden administrative
access in software.
4. Viruses
o Malicious code that attaches to legitimate programs
and spreads by executing infected programs.
o Example: File-infecting viruses that corrupt files.
5. Worms
o Standalone malicious programs that spread through
networks without user intervention.
o Example: WannaCry ransomware worm.
6. Buffer Overflow Attacks
o Exploiting poorly managed memory buffers in
programs, causing execution of malicious code.
o Example: Injecting shellcode into a vulnerable buffer
to gain control.

System Threats
System threats target the hardware, operating system, or
underlying infrastructure, potentially compromising the entire
system. These often focus on resource manipulation,
unauthorized access, or system failure.

Examples of System Threats


1. Denial-of-Service (DoS) Attacks
o Overloading system resources to make them
unavailable to legitimate users.
o Example: Flooding a server with requests to cause
downtime.
2. Distributed Denial-of-Service (DDoS) Attacks
o A distributed attack where multiple systems
overwhelm a single system.
o Example: Botnet attacks on websites or online
services.
3. Man-in-the-Middle (MITM) Attacks
o Intercepting communication between two systems to
steal or alter information.
o Example: Hijacking a secure session between a user
and a bank.
4. Ransomware Attacks
o Encrypting system files and demanding ransom for
decryption.
o Example: LockBit ransomware targeting system data.
5. Privilege Escalation
o Gaining unauthorized elevated access to system
resources.
o Example: Exploiting system vulnerabilities to become
an admin.
6. Rootkits

o Malicious programs that hide their presence while allowing attackers to maintain control over the system.
o Example: Kernel-level rootkits that modify the OS
kernel.
7. Session Hijacking
o Taking control of an active session between a user
and a system.
o Example: Stealing session cookies to impersonate a
user.
8. Phishing-Based System Access
o Gaining access to the system by tricking users into
providing credentials.
o Example: Fake system login pages stealing admin
passwords.

ANS. (A) File Allocation Methods in Operating Systems
File allocation methods determine how files are stored on disk
and how their data blocks are managed. The primary goal is to
ensure efficient storage utilization and quick data access. Here
are the various methods:

1. Contiguous Allocation
In contiguous allocation, each file is stored in a contiguous
set of disk blocks.
How It Works:
• The file is allocated a sequence of consecutive blocks on
the disk.
• The starting block and the length of the file (number of
blocks) are stored in the directory.
Advantages:
• Fast Access: Sequential and random access are quick
due to contiguous storage.
• Simplicity: Easy to implement and manage.
Disadvantages:
• External Fragmentation: Free space is scattered, making
it hard to find contiguous blocks.
• Difficulty in File Growth: If a file grows beyond its
allocated space, relocation or defragmentation is needed.

2. Linked Allocation
In linked allocation, each file is a linked list of disk blocks,
which may be scattered anywhere on the disk.
How It Works:
• Each block contains a pointer to the next block of the file.
• The directory stores the starting block and size of the file.
Advantages:
• No External Fragmentation: Blocks can be scattered
across the disk.

• Dynamic File Size: Files can easily grow by linking additional blocks.
Disadvantages:
• Slow Access: Random access is slow as it requires
traversing the pointers.
• Overhead: Space is wasted in each block to store
pointers.
• Reliability Issues: If a pointer is lost or corrupted, the rest
of the file becomes inaccessible.

3. Indexed Allocation
In indexed allocation, an index block is used to keep pointers
to all the blocks of a file.
How It Works:
• Each file has an index block containing pointers to its data
blocks.
• The directory stores the address of the index block.
Advantages:
• No External Fragmentation: Blocks can be scattered.
• Fast Random Access: Direct access to any block via the
index block.
Disadvantages:
• Overhead of Index Block: Additional disk space is
required for the index.
• Size Limitation: The number of blocks a file can have is
limited by the size of the index block.
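Indexed allocation can be pictured with a tiny model of a disk (the block numbers and contents below are made up):

```python
# A toy disk of numbered physical blocks; the index block lists where the
# file's logical blocks actually live on disk.
disk = {2: b"world", 7: b"hello ", 9: b"!"}
index_block = [7, 2, 9]   # logical block i -> physical block index_block[i]

def read_logical(i):
    """Read logical block i of the file via its index block."""
    return disk[index_block[i]]

# Scattered physical blocks are reassembled in logical order:
print(b"".join(read_logical(i) for i in range(3)))  # b'hello world!'
```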

(B) File Access Methods in Operating Systems
File access methods define how data is read from and written
to files. Different methods are used depending on the file
structure and the system's requirements. Here are the primary
file access methods:

1. Sequential Access

Description:
• Data is accessed in a specific order, from the beginning of
the file to the end, one record at a time.
• It is the simplest and most common access method.
Operations:
• Read Next: Reads the next block of data.
• Write Next: Appends data at the end of the file.
Advantages:
• Simple to implement.

• Efficient for reading and writing large files in sequence.


Disadvantages:
• Not suitable for scenarios where random access is
needed.
• Time-consuming to access data in the middle of a large
file.
Use Cases:
• Text files, log files.

2. Direct Access (Random Access)

Description:
• Data blocks can be accessed directly without reading
through other blocks.
• Each block has a unique address, and access is based on
this address.
Operations:

• Read Block n: Directly accesses the nth block.


• Write Block n: Directly writes to the nth block.
Advantages:
• Fast access to data regardless of its position.
• Suitable for large files where specific data needs to be
accessed quickly.
Disadvantages:
• More complex to implement.
• May waste storage space due to block-level addressing.
Use Cases:
• Databases, large spreadsheets.
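Direct access over fixed-size records can be sketched with seek (the record size and file name are arbitrary):

```python
import os
import tempfile

RECORD = 8  # fixed record size in bytes

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"rec-{i}".encode().ljust(RECORD))  # write records 0..4

def read_block(n):
    """Read Block n: jump straight to record n without scanning earlier ones."""
    with open(path, "rb") as f:
        f.seek(n * RECORD)        # block address = record number * record size
        return f.read(RECORD).rstrip()

print(read_block(3))  # b'rec-3'
```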

3. Indexed Access
Description:
• Uses an index to keep track of where data blocks are
located. The index maps logical data positions to physical
storage locations.
How It Works:
• A separate index file is maintained.
• The index points to data blocks, enabling efficient access.
Advantages:
• Fast access to specific data.
• Efficient for files with a large number of records.
Disadvantages:
• Overhead of maintaining the index.
• Extra storage space is required for the index.

Use Cases:
• Large databases, file systems with frequent random
access needs.

4. Indexed Sequential Access Method (ISAM)


Description:
• Combines sequential and indexed access. Data is stored
sequentially, but an index is used for faster access.
How It Works:
• First, the index is searched to find the approximate
location.
• Then, sequential access is used from that point.
Advantages:
• Balances fast access with efficient storage.
• Suitable for applications needing both sequential and
random access.
Disadvantages:
• More complex than pure sequential or indexed access.
• Requires additional storage for the index.
Use Cases:
• Banking systems, inventory management.

5. Clustered Access
Description:
• Data that is frequently accessed together is stored in
clusters for efficiency.

Advantages:
• Reduces disk seek time.
• Improves access time for related data.
Disadvantages:
• Data clustering might require complex data management.
• Potential overhead in rearranging data as usage patterns
change.
Use Cases:
• Relational databases with joined tables.

Comparison of File Access Methods
