OS Answers StarkFile
5. What are the levels of operating System and describe a system call in brief?
Levels of Operating System
Operating systems are typically structured in various levels or layers, each with distinct responsibilities and
functionalities. Here is an overview of the typical levels of an operating system:
1. Hardware Level:
Description: This is the physical hardware, including the CPU, memory, I/O devices, and other peripherals.
Function: Provides the raw resources that the operating system and applications utilize.
2. Kernel Level:
Description: The core part of the operating system that has direct control over the hardware.
Function: Manages the CPU, memory, and device drivers. Provides essential services such as process management,
memory management, and interrupt handling.
3. Device Driver Level:
Description: Includes specific software modules that communicate with hardware devices.
Function: Translates high-level I/O operations into device-specific operations, enabling the kernel and applications
to interact with hardware without needing to know the details of the hardware.
4. System Call Interface Level:
Description: A programming interface that allows user-space applications to request services from the kernel.
Function: Provides a controlled means for applications to access hardware and kernel services, ensuring security and
stability.
5. User Application Level:
Description: The programs that users run directly.
Function: Executes various user tasks and applications, such as word processors, browsers, and games, leveraging
the services provided by the underlying levels.
System Call
A system call is a programmatic way in which a computer program requests a service from the kernel of the operating
system. It provides an essential interface between a process and the operating system. Here's a brief description of a
system call:
1. Purpose:
System calls provide the means by which a program can interact with the operating system to perform tasks such as
reading or writing to a file, allocating memory, or communicating with hardware devices.
2. Mechanism:
When a system call is made, the execution context switches from user mode to kernel mode, where the requested
operation is performed. Once completed, control is returned to the user mode process.
3. Examples:
Common system calls include open(), read(), write(), fork(), exec(), and exit().

Microkernel vs Monolithic Kernel
Basics | Micro Kernel | Monolithic Kernel
Failure | If a service crashes, there is no effect on the working of the microkernel. | If a process/service crashes, the whole system crashes, as both user services and the OS run in the same address space.
Security | More secure, because only essential services run in kernel mode. | Susceptible to security vulnerabilities due to the large amount of code running in kernel mode.
A virtual machine (VM) is defined as a computer resource that functions like a physical computer but uses software
resources only, instead of a dedicated physical computer, for functioning, running programs, and deploying apps.
When using a virtual machine, the end-user experience is the same as when using a physical device. Every virtual
machine has its own operating system and functions independently of other virtual machines, even if they all run on
the same host system. A virtual machine has its own CPU, storage, and memory, and can connect to the internet
whenever required. A virtual machine can be implemented through firmware, hardware, software, or a combination of
them. Virtual machines are used in cloud environments as well as in on-premise environments.
Types of Virtual Machine
There are two different types of virtual machines. They are:
• Process Virtual Machine
• System Virtual Machine
Examples
Below are the examples of most widely used virtual machine software:
1. Parallels Desktop
2. Citrix Hypervisor
3. Red Hat Virtualization
4. VMware Workstation Player
8. Compare between Batch processing system and Real Time Processing System?
SR.NO. | Batch Processing System | Real Time Processing System
1. In batch processing, the processor needs to be busy only when work is assigned to it. | In real-time processing, the processor needs to be responsive and active all the time.
3. Completion time is not critical in batch processing. | Time to complete the task is very critical in real-time processing.
5. Normal computer specifications are sufficient for batch processing. | Real-time processing needs high-end computer architecture and hardware specifications.
10. Examples of batch processing are credit card transactions, generation of bills, processing of input and output in the operating system, etc. | Examples of real-time processing are bank ATM transactions, customer services, radar systems, weather forecasts, temperature measurement, etc.
11. Processes large volumes of data in batches, typically overnight or on a schedule. | Processes data as it arrives, in real time or near real time.
12. Higher latency, as data is processed in batches after a delay. | Lower latency, as data is processed immediately or with minimal delay.
13. Lower cost per unit of data, as processing is done in batches. | Higher cost per unit of data, as processing must be done in real time or near real time.
14. Ideal for tasks such as nightly data backups, report generation, and large-scale data analysis. | Ideal for tasks such as fraud detection, sensor data analysis, and real-time monitoring.
9. What are the objectives of operating system? What are the advantages of peer-to-peer systems over client-server
systems?
Objectives of an Operating System
An operating system (OS) is crucial for the efficient functioning of computer systems. Its primary objectives include:
1. Resource Management: Efficiently manage the computer's hardware resources, such as the CPU, memory, disk
space, and I/O devices, to ensure optimal performance and utilization.
2. Process Management: Handle the creation, scheduling, and termination of processes. This involves managing the
execution of multiple processes, ensuring that they do not interfere with each other and that system resources are
allocated fairly.
3. Memory Management: Oversee and allocate memory space to processes as needed, managing both the physical and
virtual memory. This includes memory allocation, swapping, and paging.
4. File System Management: Provide a structured way to store, retrieve, and organize data on storage devices. The OS
handles file creation, deletion, reading, writing, and access control.
5. Security and Protection: Ensure that the system is protected against unauthorized access and that user data is kept
secure. This involves user authentication, access control, and protection against malware and other security threats.
6. User Interface: Provide a user interface, such as command-line interfaces (CLI) or graphical user interfaces (GUI),
that allows users to interact with the system easily.
7. Device Management: Manage device communication via drivers, ensuring efficient and correct operation of
hardware peripherals like printers, monitors, and network cards.
8. Networking: Enable and manage network communications, allowing computers to connect and share resources over
a network.
9. Error Detection and Handling: Detect errors in both hardware and software, and provide mechanisms to handle these
errors gracefully to maintain system stability.
Peer-to-peer (P2P) systems and client-server systems are two different network models. P2P systems have several
advantages over traditional client-server systems:
1. Scalability: P2P systems can easily scale, as each additional peer adds resources to the network, both in terms of
bandwidth and storage. In contrast, client-server systems may require significant investment in server infrastructure to
handle increased loads.
2. Cost Efficiency: P2P systems generally have lower costs since they do not require dedicated servers. Each peer
contributes to the network's resources, reducing the need for centralized infrastructure.
3. Robustness and Reliability: P2P networks are more resilient to failures. If one peer goes down, others can take over
its functions, whereas in a client-server model, the failure of a central server can disrupt the entire network.
4. Load Distribution: In P2P systems, the workload is distributed among many peers, preventing bottlenecks that are
common in client-server architectures, where the server can become a single point of congestion.
5. Decentralization: P2P networks operate without a central authority, promoting equal sharing of resources and
responsibilities. This decentralization can lead to more democratic and fair network usage.
6. Enhanced Privacy and Anonymity: P2P networks can offer better privacy and anonymity, since data is often
distributed across multiple peers, making it harder to track and monitor specific users' activities.
7. Resource Utilization: P2P systems make better use of the aggregate bandwidth and storage of all connected peers,
which can lead to more efficient utilization of available resources.
APIs (Application Programming Interfaces) and system calls both play crucial roles in software development, providing
mechanisms for interaction with underlying system resources and services. However, they serve different purposes and
are used in different contexts. Analyzing the necessity of APIs in place of system calls involves understanding their
distinctions, advantages, and specific use cases.
1. Abstraction and Ease of Use:
- APIs abstract the complexity of system calls, making it easier for developers to perform complex tasks without
needing deep knowledge of the underlying system architecture.
- This abstraction allows for quicker development and reduces the likelihood of errors.
2. Portability:
- APIs can be designed to be platform-independent. Using an API ensures that an application can run on different
operating systems with minimal modifications.
- System calls, in contrast, are often specific to an operating system, requiring changes to the code when porting
applications.
3. Security:
- APIs can enforce additional security checks and constraints, reducing the risk of improper use of system resources.
- Direct system calls can expose the system to vulnerabilities if not handled correctly by the application.
4. Maintainability:
- APIs provide a structured way to manage and organize code, making it easier to maintain and scale applications.
- System calls, being lower-level, can lead to more complex and harder-to-maintain codebases.
5. Standardization:
- APIs offer consistent interfaces for performing tasks, which promotes standardization across applications and
systems.
- System calls vary between different operating systems, leading to inconsistencies in application behavior.
6. Performance Considerations:
- While system calls are often faster due to their lower-level nature, modern APIs are designed to minimize overhead
and provide efficient access to system resources.
- In many cases, the performance difference is negligible compared to the benefits of using an API.
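As a concrete sketch of this layering (Python chosen for illustration; the same contrast holds in any language), `os.open`/`os.write` are thin wrappers over OS-specific system calls, while the built-in `open` is the portable, buffered API built on top of them:

```python
import os

# Low level: thin wrappers around the open/write/close system calls.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"hello")
os.close(fd)

# High level: the portable, buffered API layered on those same calls.
with open("demo.txt") as f:
    assert f.read() == "hello"

os.remove("demo.txt")
```

The high-level `open` works unchanged across operating systems, while the flag names and semantics of the raw calls differ between platforms.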
12. What is bootstrap program? Identify the difference between mainframe and desktop operating system.
Bootstrap Program
A bootstrap program, also known as the bootloader, is a small, specialized piece of software responsible for loading the
operating system into the computer's memory when the system is powered on or restarted. It performs the following key
functions:
1. Initialization:
- It initializes the hardware components of the computer, such as the CPU, memory, and peripheral devices.
2. Self-Test:
- It conducts a Power-On Self-Test (POST) to check whether the hardware components are working correctly.
3. Loading the OS:
- It locates the operating system kernel (usually stored on a hard drive, SSD, or other bootable media), loads it into
memory, and then transfers control to the OS.
4. Configuration:
- It may read configuration settings from firmware (such as BIOS or UEFI) and set up the system environment
accordingly.
13. Illustrate the use of fork and exec system calls.
The fork system call is used for creating a new process in Linux and Unix systems. The new process, called the child
process, runs concurrently with the process that makes the fork() call (the parent process). After the child process is
created, both processes execute the next instruction following the fork() system call.
The child process starts with the same program counter, the same CPU register values, and the same open files as the
parent process. fork() takes no parameters and returns an integer value.
Below are the different values returned by fork():
• Negative value: the creation of a child process was unsuccessful.
• Zero: returned to the newly created child process.
• Positive value: returned to the parent (the caller). The value is the process ID of the newly created child
process.
exec()
The exec() system call loads a binary file into memory (destroying the memory image of the program containing the
exec() call) and starts its execution.
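A minimal sketch of the fork/exec pattern on Linux/Unix, using Python's `os` module (which exposes these system calls directly); the program `echo` is just an example:

```python
import os

pid = os.fork()                 # create a child process
if pid == 0:
    # Child: fork() returned 0 here. Replace this process image
    # with a new program, as exec() does.
    os.execvp("echo", ["echo", "hello from child"])
else:
    # Parent: fork() returned the child's PID; wait for it to finish.
    _, status = os.waitpid(pid, 0)
    print("child", pid, "exited with code", os.waitstatus_to_exitcode(status))
```

Note how one `fork()` call returns twice, once in each process, which is exactly why its return value must be inspected.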
14. Describe the functionalities of system call?
Services Provided by System Calls
• Process creation and management
• Main memory management
• File Access, Directory, and File system management
• Device handling(I/O)
• Protection
• Networking, etc.
Process control: end, abort, create, terminate, allocate, and free memory.
File management: create, open, close, delete, read files, etc.
Device management
Information maintenance
Communication
Category | Windows | Unix
Process control | CreateProcess(), ExitProcess(), WaitForSingleObject() | fork(), exit(), wait()
File manipulation | CreateFile(), ReadFile(), WriteFile() | open(), read(), write(), close()
Device management | SetConsoleMode(), ReadConsole(), WriteConsole() | ioctl(), read(), write()
Information maintenance | GetCurrentProcessID(), SetTimer(), Sleep() | getpid(), alarm(), sleep()
Communication | CreatePipe(), CreateFileMapping(), MapViewOfFile() | pipe(), shmget(), mmap()
Protection | SetFileSecurity(), InitializeSecurityDescriptor(), SetSecurityDescriptorGroup() | chmod(), umask(), chown()
Parameter | Preemptive Scheduling | Non-Preemptive Scheduling
Starvation | If a process with high priority frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, a later-arriving process with a shorter CPU burst may starve.
Decision making | Decisions are made by the scheduler and are based on priority and time-slice allocation. | Decisions are made by the process itself; the OS just follows the process's instructions.
S.NO | Process | Thread
3. A process takes more time to create. | A thread takes less time to create.
6. Multiprogramming holds the concept of multi-process. | Multiple programs are not needed for multiple threads, because a single process consists of multiple threads.
8. A process is called a heavyweight process. | A thread is lightweight, as each thread in a process shares code, data, and resources.
9. Process switching uses an interface to the operating system. | Thread switching does not require calling the operating system or causing an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other processes. | If a user-level thread is blocked, all other user-level threads of that process are blocked.
11. A process has its own Process Control Block, stack, and address space. | A thread has its parent's PCB, its own Thread Control Block and stack, and shares the common address space.
12. Changes to the parent process do not affect child processes. | Since all threads of the same process share the address space and other resources, changes to the main thread may affect the behavior of the other threads of the process.
13. A system call is involved in process creation. | No system call is involved; threads are created using APIs.
If the kernel is time shared, then user-level threads are better than kernel-level threads, because in time shared systems
context switching takes place frequently. Context switching between kernel level threads has high overhead, almost the
same as a process whereas context switching between user-level threads has almost no overhead as compared to kernel
level threads.
23. Apply the concept of multiple instance RAG (Resource allocation Graph) with an example in case of a deadlock
situation.
A resource allocation graph (RAG) shows which resource is held by which process and which process is waiting for a
resource of a specific kind. It is a simple and straightforward tool to outline how interacting processes can deadlock.
The resource allocation graph therefore describes the condition of the system as far as processes and resources are
concerned: how many resources are allocated and what the request of each process is.
If there is a cycle in the resource allocation graph and each resource in the cycle has only one instance, then the
processes are in deadlock. For example, if process P1 holds resource R1, process P2 holds resource R2, process P1 is
waiting for R2, and process P2 is waiting for R1, then P1 and P2 are in deadlock.
When a resource type has multiple instances, a cycle is a necessary but not a sufficient condition for deadlock: if some
instance in the cycle can still satisfy a waiting request, the cycle can be broken and no deadlock occurs.
24. Describe the Dining Philosopher problem. Describe how the problem can be solved by using semaphore.
The Dining Philosopher Problem states that K philosophers are seated around a circular table with one chopstick
between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him.
Each chopstick may be picked up by either of its two adjacent philosophers, but not by both at once.
The steps for the Dining Philosopher Problem solution using semaphores are as follows
1. Initialize the semaphores for each fork to 1 (indicating that they are available).
2. Initialize a binary semaphore (mutex) to 1 to ensure that only one philosopher can attempt to pick up a fork at a time.
3. For each philosopher process, create a separate thread that executes the following code:
• While true:
• Think for a random amount of time.
• Acquire the mutex semaphore to ensure that only one philosopher can attempt to pick up a fork at a
time.
• Attempt to acquire the semaphore for the fork to the left.
• If successful, attempt to acquire the semaphore for the fork to the right.
• If both forks are acquired successfully, eat for a random amount of time and then release both semaphores.
• If not successful in acquiring both forks, release the semaphore for the fork to the left (if acquired) and then
release the mutex semaphore and go back to thinking.
4. Run the philosopher threads concurrently.
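The steps above can be sketched with Python `threading` semaphores. This is a simplified variant of the stated protocol: instead of releasing and retrying on failure, each philosopher blocks until both forks are free while holding the mutex, which is also deadlock-free (the fork and philosopher counts are arbitrary):

```python
import threading

N = 5                                                # number of philosophers (arbitrary)
forks = [threading.Semaphore(1) for _ in range(N)]   # one semaphore per fork, initially 1
mutex = threading.Semaphore(1)                       # one pick-up attempt at a time
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    for _ in range(rounds):
        # think ... (omitted)
        mutex.acquire()          # only one philosopher may pick up forks at a time
        forks[left].acquire()
        forks[right].acquire()
        mutex.release()
        meals[i] += 1            # eat
        forks[left].release()
        forks[right].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # → [3, 3, 3, 3, 3]
```

Because an eating philosopher releases forks without needing the mutex, the philosopher holding the mutex always makes progress eventually, so no circular wait can form.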
25. Describe the process synchronization producer consumer problem?
The producer-consumer problem is an example of a multi-process synchronization problem. The problem describes
two processes, the producer and the consumer that share a common fixed-size buffer and use it as a queue.
• The producer’s job is to generate data, put it into the buffer, and start again.
• At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
Solution of Producer-Consumer Problem
The producer should either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from
the buffer, it notifies the producer, who starts to fill the buffer again. In the same manner, the consumer can go to sleep if
it finds the buffer empty. The next time the producer transfers data into the buffer, it wakes up the sleeping consumer.
Producer code
Consumer code
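The original producer and consumer listings are not reproduced above; a minimal semaphore-based sketch (buffer size and item count chosen arbitrarily) looks like this:

```python
import threading
from collections import deque

n = 5                              # buffer capacity (arbitrary)
buffer = deque()
mutex = threading.Semaphore(1)     # mutual exclusion on the buffer
empty = threading.Semaphore(n)     # counts empty slots, initially n
full = threading.Semaphore(0)      # counts filled slots, initially 0
consumed = []

def producer(items):
    for item in items:
        empty.acquire()            # wait for a free slot (sleep if buffer is full)
        mutex.acquire()
        buffer.append(item)
        mutex.release()
        full.release()             # signal: one more filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()             # wait for a filled slot (sleep if buffer is empty)
        mutex.acquire()
        consumed.append(buffer.popleft())
        mutex.release()
        empty.release()            # signal: one more free slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The `empty`/`full` semaphores implement the sleep/wake-up described above, while `mutex` protects the buffer itself.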
1. semaphore mutex, wrt; // mutex ensures mutual exclusion when readcnt is updated, i.e. when any reader enters or
exits the critical section; wrt is used by both readers and writers
2. int readcnt; // readcnt counts the number of processes performing a read in the critical section, initially 0
Writer process:
1. Writer requests the entry to critical section.
2. If allowed, i.e. wait() succeeds, it enters and performs the write. If not allowed, it keeps waiting.
3. It exits the critical section.
Reader process:
1. Reader requests the entry to critical section.
2. If allowed:
• It increments the count of readers inside the critical section. If this reader is the first to enter, it
locks the wrt semaphore to restrict the entry of writers while any reader is inside.
• It then signals mutex, as any other reader is allowed to enter while others are already reading.
• After performing the read, it exits the critical section. When exiting, it checks whether any reader is
still inside; if not, it signals the semaphore "wrt", as a writer can now enter the critical section.
3. If not allowed, it keeps on waiting.
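The reader and writer steps above can be sketched with Python semaphores (the shared counter and the thread counts are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcnt
wrt = threading.Semaphore(1)     # exclusive access for writers
readcnt = 0
shared = 0                       # the data being read/written (illustrative)
reads = []

def writer():
    global shared
    wrt.acquire()                # request entry to the critical section
    shared += 1                  # perform the write
    wrt.release()

def reader():
    global readcnt
    mutex.acquire()
    readcnt += 1
    if readcnt == 1:             # first reader locks out writers
        wrt.acquire()
    mutex.release()
    reads.append(shared)         # perform the read
    mutex.acquire()
    readcnt -= 1
    if readcnt == 0:             # last reader lets writers back in
        wrt.release()
    mutex.release()

threads = [threading.Thread(target=writer) for _ in range(3)] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # → 3 (every writer ran exactly once, never concurrently)
```

Note that this is the classic first readers-writers solution, so a steady stream of readers can starve writers.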
29. Describe the readers writers problem using the concept of critical section with all possible cases?
Same as Q no. 25
30. Analyze the dining philosophy problem in the context of critical section?
Same as Q no.21
31. Compare between CPU bounded, I/O bounded processes?
CPU-Bound vs I/O-Bound Processes
A CPU-bound process requires more CPU time or spends more time in the running state.
An I/O-bound process requires more I/O time and less CPU time. An I/O-bound process spends more time in the
waiting state.
32. Illustrate why it is important to scale up system bus and device speed as CPU speed increases?
Consider a system which performs 50% I/O and 50% computation. If the two phases do not overlap, doubling the CPU
performance speeds up only the compute half, so total time falls from 1.0 to 0.75 and overall performance improves by
only about 33%. Doubling both aspects of the system halves the total time, improving performance by 100%.
Generally, it is important to identify and remove the current system bottleneck to increase overall system performance,
rather than blindly increasing the performance of individual system components.
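As a quick check, under the assumption that the I/O and compute phases are strictly sequential, Amdahl's law gives the overall speedup when a fraction f of the work is accelerated by a factor s:

```python
def speedup(f, s):
    # Amdahl's law: overall speedup when fraction f of the work
    # runs s times faster and the rest is unchanged.
    return 1.0 / ((1.0 - f) + f / s)

# Doubling only the CPU on a 50% I/O / 50% compute workload:
print(round(speedup(0.5, 2.0), 2))              # → 1.33 (about a 33% improvement)
# Doubling both the CPU and the I/O subsystem:
print(round(1.0 / (0.5 / 2.0 + 0.5 / 2.0), 2))  # → 2.0 (a 100% improvement)
```

This is why speeding up only the CPU yields diminishing returns while the I/O half remains the bottleneck.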
Initialization of semaphores –
mutex = 1
full = 0 // Initially, all slots are empty; thus filled slots are 0
empty = n // All slots are empty initially
Solution for Producer –
When the producer produces an item, the value of "empty" is reduced by 1, because one slot will now be filled. The
value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once the producer has placed the
item, the value of "full" is increased by 1, and the value of mutex is increased by 1, because the producer's task is
complete and the consumer can access the buffer.
Solution for Consumer –
As the consumer removes an item from the buffer, the value of "full" is reduced by 1, and the value of mutex is also
reduced so that the producer cannot access the buffer at this moment. Once the consumer has consumed the item, the
value of "empty" is increased by 1. The value of mutex is also increased so that the producer can access the buffer
again.
34. State the different types of process synchronization techniques.
Same as Q no. 23
35. Classify the different memory management technique and describe the variable Partitioning with an example?
In operating systems, memory management is the function responsible for allocating and managing a computer's main
memory. The memory management function keeps track of the status of each memory location, either allocated or free,
to ensure effective and efficient use of primary memory.
Below are Memory Management Techniques.
• Contiguous
• Non-Contiguous
In the Contiguous Technique, the executing process must be loaded entirely in the main memory. The contiguous
Technique can be divided into:
• Fixed (static) partitioning
• Variable (dynamic) partitioning
Variable Partitioning
It is a part of the Contiguous allocation technique. It is used to alleviate the problem faced by Fixed Partitioning. In
contrast with fixed partitioning, partitions are not made before the execution or during system configuration.
Various features associated with variable Partitioning-
• Initially, RAM is empty and partitions are made during the run-time according to the process’s need instead of
partitioning during system configuration.
• The size of a partition is equal to the size of the incoming process.
• The partition size varies according to the need of the process so that internal fragmentation can be avoided to
ensure efficient utilization of RAM.
• The number of partitions in RAM is not fixed and depends on the number of incoming processes and the Main
Memory’s size.
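A toy simulation of variable partitioning (the process names, sizes, and total RAM are invented for illustration): partitions are carved out of free memory at run time, each exactly the size of its process, so no internal fragmentation occurs:

```python
RAM = 100                      # total main memory, in illustrative units

def allocate(processes, ram=RAM):
    """Carve one partition per process, exactly its size, until RAM runs out."""
    partitions, free = [], ram
    for name, size in processes:
        if size <= free:
            partitions.append((name, size))   # partition size == process size
            free -= size
        # else: the process must wait; no fixed-size partition is wasted on it
    return partitions, free

parts, free = allocate([("P1", 30), ("P2", 50), ("P3", 40), ("P4", 15)])
print(parts)  # → [('P1', 30), ('P2', 50), ('P4', 15)]  (P3 does not fit)
print(free)   # → 5
```

The number of partitions here depends only on the incoming processes and the RAM size, exactly as described above; the leftover 5 units hint at the external fragmentation this scheme can suffer.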
36. What are the modes of operation in Hardware Protection and explain it briefly?
1. CPU Protection:
CPU protection ensures that a process does not monopolize the CPU indefinitely, as that would prevent other
processes from being executed. Each process should get a limited time slice, so that every process gets time to execute
its instructions. To enforce this, a timer limits the amount of time for which a process can occupy the CPU. When the
timer expires, a signal is sent to the process to relinquish the CPU. Hence one process cannot hold the CPU forever.
2. Memory Protection:
Memory protection addresses the situation where two or more processes are in memory and one process might access
another process's memory. To prevent this, two registers are used:
1. Base register
2. Limit register
The base register stores the starting address of the program, and the limit register stores the size of the process.
Whenever a process wants to access memory, the OS checks whether the memory area the process wants to access is
one that the process is privileged to access.
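The check performed on each memory access can be sketched as follows (the register values are invented for illustration):

```python
def legal_access(addr, base, limit):
    # An access is legal iff base <= addr < base + limit.
    return base <= addr < base + limit

base, limit = 1000, 800                 # illustrative register contents
print(legal_access(1500, base, limit))  # → True  (inside the process's region)
print(legal_access(2000, base, limit))  # → False (beyond base + limit = 1800)
```

In real hardware this comparison is done by the MMU on every access, and a violation raises a trap to the operating system.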
3. I/O Protection:
With I/O protection, an OS ensures that a process can never do the following:
1. Terminate the I/O of another process – one process should not be able to terminate the I/O operations of
other processes.
2. View the I/O of another process – one process should not be able to access the data being read from or written
to the disk(s) by other processes.
3. Give priority to a particular process's I/O – no process should be able to prioritize its own or another
process's I/O operations over those of other processes.
37. Define Spooling?
Spooling is an acronym for Simultaneous Peripheral Operations OnLine. Spooling is the process of temporarily storing
data for use and execution by a device, program, or system. Data is sent to and held in a buffer until it is requested for
execution by a program or device. Spooling uses the disk as a large buffer for sending data to printers and other
devices. It can also be used for input, but it is more commonly used for output. Its primary purpose is to prevent two
users from printing on the same page at the same time, which would leave their output completely mixed together. It
prevents this by using the FIFO (First In, First Out) strategy to retrieve the stored jobs from the spool, which
serializes the output so that it is not mixed together.
38. Compare between paging and segmentation with an example?
S.NO | Paging | Segmentation
1. In paging, the program is divided into fixed-size pages. | In segmentation, the program is divided into variable-size segments.
2. The operating system is accountable for paging. | The compiler is accountable for segmentation.
3. Page size is determined by the hardware. | Segment size is given by the user.
5. Paging can result in internal fragmentation. | Segmentation can result in external fragmentation.
6. In paging, the logical address is split into a page number and a page offset. | In segmentation, the logical address is split into a segment number and a segment offset.
7. Paging uses a page table that holds the base address of every page. | Segmentation uses a segment table that holds the base address and limit of every segment.
9. In paging, the operating system must maintain a free frame list. | In segmentation, the operating system maintains a list of holes in main memory.
11. In paging, the processor uses the page number and offset to calculate the absolute address. | In segmentation, the processor uses the segment number and offset to calculate the absolute address.
14. Protection is harder to apply in paging. | Protection is easy to apply in segmentation.
17. Paging results in a less efficient system. | Segmentation results in a more efficient system.
39. Compare between internal and external fragmentation?
S.NO | Internal fragmentation | External fragmentation
1. In internal fragmentation, fixed-size memory blocks are assigned to processes. | In external fragmentation, variable-size memory blocks are assigned to processes.
2. Internal fragmentation happens when the process is smaller than the memory block assigned to it. | External fragmentation happens when processes are removed from memory, leaving holes.
3. The solution to internal fragmentation is best-fit block allocation. | The solution to external fragmentation is compaction and paging.
5. The difference between the memory allocated and the space actually required is called internal fragmentation. | The unused spaces formed between non-contiguous memory fragments, too small to serve a new process, are called external fragmentation.
6. Internal fragmentation occurs with paging and fixed partitioning. | External fragmentation occurs with segmentation and dynamic partitioning.
7. It occurs when a process is allocated a partition larger than its requirement; the leftover space degrades system performance. | It occurs even when each process is allocated exactly the memory space it requires, because the freed spaces between allocations are scattered.
40. Illustrate the performance of demand paging and deduce the expression for Effective Memory access time
(EAT).
Demand Paging
Demand paging is a memory management scheme that loads pages into memory only when they are needed, rather than
loading the entire program into memory at once. This technique uses a page table to keep track of where pages are
stored in physical memory.
Effective Memory Access Time (EAT)
The Effective Memory Access Time is the average time it takes to access a memory location, accounting for both
successful memory accesses and page faults.
To derive the EAT, consider the following scenarios
1. No Page Fault (1 - p):
- The time taken is simply the memory access time, \( m \).
2. Page Fault (p):
- When a page fault occurs, additional time is needed to service the page fault. This includes:
- The time to access the page on disk.
- The time to transfer the page to memory.
- The time to update the page table and restart the instruction.
- This total page fault service time is denoted as \( s \).
Combining the two cases, with page-fault rate \( p \) and memory access time \( m \):
EAT = (1 - p) × m + p × s
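A numeric check of the expression EAT = (1 − p) × m + p × s, with illustrative values (m = 200 ns, s = 8 ms, p = 0.001):

```python
def eat(p, m, s):
    # Effective access time: the weighted average of the no-fault
    # case (cost m) and the page-fault case (cost s).
    return (1 - p) * m + p * s

m = 200            # memory access time in ns (illustrative)
s = 8_000_000      # page-fault service time: 8 ms expressed in ns (illustrative)
p = 0.001          # page-fault rate

print(eat(p, m, s))  # → 8199.8 ns
```

Even a fault rate of one in a thousand inflates the 200 ns access time more than forty-fold, which is why keeping p tiny is critical for demand paging performance.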
41. What is byte addressable? Explain the Little Endian and Big-Endian addressable memory organization?
Byte Addressable Memory
Byte addressable memory refers to a memory organization where each unique memory address identifies a single byte
(8 bits) of data. This allows fine-grained access and manipulation of data at the byte level, which is particularly useful
for dealing with characters, small data types, and specific bits within larger data structures.
Little Endian
In Little Endian byte ordering, the least significant byte (LSB) is stored at the lowest memory address, and the most
significant byte (MSB) is stored at the highest memory address. This is often described as storing the "little end" first.
For example, consider the 32-bit hexadecimal value `0x12345678`. The byte representation in Little Endian order would
be:
- Memory Address 0x00: 0x78 (least significant byte)
- Memory Address 0x01: 0x56
- Memory Address 0x02: 0x34
- Memory Address 0x03: 0x12 (most significant byte)
Big Endian
In Big Endian byte ordering, the most significant byte (MSB) is stored at the lowest memory address, and the least
significant byte (LSB) is stored at the highest memory address. This is often described as storing the "big end" first.
For the same 32-bit hexadecimal value `0x12345678`, the byte representation in Big Endian order would be:
- Memory Address 0x00: 0x12 (most significant byte)
- Memory Address 0x01: 0x34
- Memory Address 0x02: 0x56
- Memory Address 0x03: 0x78 (least significant byte)
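Python's `struct` module can confirm both layouts for the value `0x12345678` used above:

```python
import struct

value = 0x12345678
little = struct.pack("<I", value)   # '<' = little-endian, 'I' = 32-bit unsigned int
big = struct.pack(">I", value)      # '>' = big-endian

print(little.hex())  # → 78563412  (least significant byte at the lowest address)
print(big.hex())     # → 12345678  (most significant byte at the lowest address)
```

The same technique works for checking the byte order of any fixed-width integer type.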
42. Describe the concept of virtual memory in brief?
Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of
main memory. The addresses a program may use to reference memory are distinguished from the addresses the
memory system uses to identify physical storage sites, and program-generated addresses are translated automatically
to the corresponding machine addresses.
A memory hierarchy, consisting of a computer system’s memory and a disk, that enables a process to operate with
only some portions of its address space in memory. A virtual memory is what its name indicates- it is an illusion of a
memory that is larger than the real memory. We refer to the software component of virtual memory as a virtual
memory manager. The basis of virtual memory is the noncontiguous memory allocation model. The virtual memory
manager removes some components from memory to make room for other components.
The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary
memory available not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory addresses used by a
program, called virtual addresses, into physical addresses in computer memory.
43. Compare between logical address and physical address of a CPU?
| Parameter  | Logical Address                                              | Physical Address                                               |
|------------|--------------------------------------------------------------|----------------------------------------------------------------|
| Visibility | The user can view the logical address of a program.          | The user can never view the physical address of a program.     |
| Access     | The user uses the logical address to access the physical address. | The user can access the physical address only indirectly, never directly. |
| Editable   | A logical address can change.                                | A physical address does not change.                            |
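The relationship between the two addresses can be sketched with the classic base-and-limit translation the MMU performs; the base and limit values below are illustrative assumptions, not taken from the original text:

```python
def translate(logical_addr, base, limit):
    """Map a logical (virtual) address to a physical address.

    The MMU checks the logical address against the limit register,
    then adds the relocation (base) register to it. The program only
    ever sees the logical address; the sum is the physical address.
    """
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("trap: logical address out of bounds")
    return base + logical_addr

# A process loaded at physical address 14000 with a 3000-byte limit:
print(translate(346, base=14000, limit=3000))  # -> 14346
```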
Case-2: If the system has 4 frames, the same reference string under the FIFO page replacement algorithm yields a
total of 10 page faults. The diagram below illustrates the pattern of the page faults occurring in the example.
It can be seen from this example that, on increasing the number of frames while using the FIFO page replacement
algorithm, the number of page faults increased from 9 to 10. This counterintuitive behaviour is known as Belady's
anomaly.
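The effect can be reproduced with a short FIFO simulation. Since the reference string itself is not shown in this excerpt, the classic Belady's-anomaly string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 is assumed here; it yields exactly 9 faults with 3 frames and 10 with 4:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()                     # oldest page at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:           # page fault
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()         # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # -> 9
print(fifo_page_faults(refs, 4))  # -> 10 (more frames, more faults)
```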
47. Explain the concept of demand paging.
Demand paging can be described as a memory management technique that is used in operating systems to improve
memory usage and system performance. Demand paging is a technique used in virtual memory systems where pages
enter main memory only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages of a program into memory at runtime, instead
of loading the entire program into memory at the start. A page fault occurs when the program needs to access a
page that is not currently in memory. The operating system then loads the required page from disk into memory
and updates the page table accordingly. This process is transparent to the running program, which continues to run as
if the page had always been in memory.
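The behaviour can be sketched as a toy simulation: pages are copied from a "disk" dictionary into "memory" only when first touched, and a counter records each fault (an illustrative model only, not a real OS implementation):

```python
def access(page, memory, disk, stats):
    """Return the page's data, loading it on demand if it is absent."""
    if page not in memory:                 # page fault
        stats["faults"] += 1
        memory[page] = disk[page]          # load the page from disk
    else:
        stats["hits"] += 1
    return memory[page]

disk = {p: f"data-{p}" for p in range(8)}  # backing store: 8 pages
memory, stats = {}, {"faults": 0, "hits": 0}

for p in [0, 1, 0, 2, 1, 3, 0]:            # the program's accesses
    access(p, memory, disk, stats)

print(stats)  # {'faults': 4, 'hits': 3} -- only touched pages were loaded
```

Pages 4-7 are never loaded, mirroring the point above that the entire program need not be in memory.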
48. Illustrate the expression for effective memory access time during page fault.
Same as Q no. 37
49. Explain the C-SCAN Algorithm with an example?
Algorithm
Step 1: Let the Request array represent an array storing the indexes of the tracks that have been requested, in
ascending order of their arrival time. 'head' is the position of the disk head.
Step 2: The head services requests only while moving in the right direction, from 0 to the disk size.
Step 3: While moving in the left direction, the head does not service any tracks.
Step 4: On reaching the beginning (left end), the head reverses direction.
Step 5: While moving in the right direction, the head services all tracks one by one.
Step 6: While moving in the right direction, calculate the absolute distance of the current track from the head.
Step 7: Increment the total seek count by this distance.
Step 8: The currently serviced track position becomes the new head position.
Step 9: Repeat from Step 6 until the right end of the disk is reached.
Step 10: On reaching the right end of the disk, reverse the direction and go to Step 3; repeat until all tracks in the
request array have been serviced.
Example:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = right (we are moving from left to right)
Output:
Initial position of head: 50
Total number of seek operations = 389
Seek Sequence: 60, 79, 92, 114, 176, 199, 0, 11, 34, 41
The following chart shows the sequence in which the requested tracks are serviced using C-SCAN.
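The steps above can be turned into a short simulation; with the example input it reproduces the total of 389 seek operations. A disk of 200 tracks (0 to 199) is assumed, since track 199 appears in the seek sequence:

```python
def c_scan(requests, head, disk_size=200):
    """Total seek count and service order for C-SCAN, moving right."""
    right = sorted(r for r in requests if r >= head)
    left = sorted(r for r in requests if r < head)
    seek, pos, order = 0, head, []
    for track in right:                 # service tracks while moving right
        seek += track - pos
        pos = track
        order.append(track)
    seek += (disk_size - 1) - pos       # run on to the right end (199)
    seek += disk_size - 1               # sweep back to track 0, servicing nothing
    pos = 0
    for track in left:                  # service the remaining tracks
        seek += track - pos
        pos = track
        order.append(track)
    return seek, order

seek, order = c_scan([176, 79, 34, 60, 92, 11, 41, 114], head=50)
print(seek)   # -> 389
print(order)  # -> [60, 79, 92, 114, 176, 11, 34, 41]
```

Note that the full sweep from 199 back to 0 is counted in the seek total, matching the 389 in the worked example.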
Advantages:
• This is very flexible in terms of file size. File size can be increased easily since the system does not have to look
for a contiguous chunk of memory.
• This method does not suffer from external fragmentation. This makes it relatively better in terms of memory
utilization.
Disadvantages:
• Because the file blocks are distributed randomly on the disk, a large number of seeks are needed to access every
block individually. This makes linked allocation slower.
• It does not support random or direct access. We cannot directly access the blocks of a file. Block k of a file can
be accessed only by traversing k blocks sequentially (sequential access) from the starting block of the file via block
pointers.
• Pointers required in the linked allocation incur some extra overhead.
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks occupied by a file.
Each file has its own index block. The ith entry in the index block contains the disk address of the ith file block. The
directory entry contains the address of the index block as shown in the image:
Advantages:
• This supports direct access to the blocks occupied by the file and therefore provides fast access to the file blocks.
• It overcomes the problem of external fragmentation.
Disadvantages:
• The pointer overhead for indexed allocation is greater than linked allocation.
• For very small files, say files that span only 2-3 blocks, indexed allocation keeps one entire block
(the index block) just for the pointers, which is inefficient in terms of memory utilization. In linked allocation,
by contrast, we lose the space of only one pointer per block.
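The access-cost difference between linked and indexed allocation can be sketched as follows; the block layout is an invented toy example, not taken from the original text:

```python
# Each disk block holds (data, pointer to the next block of the file).
disk = {
    9:  ("A", 16),    # block 9 -> next block is 16
    16: ("B", 1),
    1:  ("C", 10),
    10: ("D", None),  # last block of the file
}

def linked_read(start, k):
    """Linked allocation: reading block k means following k pointers."""
    block = start
    for _ in range(k):
        block = disk[block][1]
    return disk[block][0]

# Indexed allocation: the index block lists every data block directly.
index_block = [9, 16, 1, 10]

def indexed_read(index, k):
    """Indexed allocation: one lookup in the index block, direct access."""
    return disk[index[k]][0]

print(linked_read(9, 3))             # -> 'D' (after traversing 3 pointers)
print(indexed_read(index_block, 3))  # -> 'D' (single index lookup)
```

Both reads return the same data, but the linked version must touch every preceding block, which is why linked allocation cannot support efficient direct access.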