OS Answers StarkFile
Uploaded by Rahul Prodhan

OPERATING SYSTEM ANSWERS

1. What are the functions of operating system?

2. Compare between multitasking and multiprocessing?


3. Describe a process in Operating System using State Diagram?

4. What is a kernel? Explain its importance in an operating system.


The kernel is the central component of an operating system: it manages the operations of the computer and its hardware, chiefly memory and CPU time. The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.
The kernel is the first part of the operating system to be loaded into memory, and it remains in memory until the operating system is shut down. It is responsible for tasks such as disk management, task management, and memory management.
Examples: monolithic kernel, microkernel, exokernel, nanokernel, etc.
Objectives of Kernel:
• To establish communication between user level application and hardware.
• To decide state of incoming processes.
• To control disk management.
• To control memory management.
• To control task management.

5. What are the levels of an operating system? Describe a system call in brief.
Levels of Operating System

Operating systems are typically structured in various levels or layers, each with distinct responsibilities and
functionalities. Here is an overview of the typical levels of an operating system:

1. Hardware Level:

Description: This is the physical hardware, including the CPU, memory, I/O devices, and other peripherals.

Function: Provides the raw resources that the operating system and applications utilize.

2. Kernel Level:

Description: The core part of the operating system that has direct control over the hardware.

Function: Manages CPU, memory, and device drivers. Provides essential services such as process management,
memory management, and interrupt handling.

3. Device Driver Level:

Description: Includes specific software modules that communicate with hardware devices.

Function: Translates high-level I/O operations into device-specific operations, enabling the kernel and applications
to interact with hardware without needing to know the details of the hardware.

4. System Call Interface Level:

Description: A programming interface that allows user-space applications to request services from the kernel.

Function: Provides a controlled means for applications to access hardware and kernel services, ensuring security and
stability.

5. System Libraries Level:

Description: Libraries that provide standard functions for applications.


Function: Facilitate common tasks such as file operations, network communications, and data manipulation, allowing
developers to avoid writing code from scratch.

6. User Application Level:

Description: The level at which user-space applications and programs run.

Function: Executes various user tasks and applications, such as word processors, browsers, and games, leveraging
the services provided by the underlying levels.

System Call

A system call is a programmatic way in which a computer program requests a service from the kernel of the operating
system. It provides an essential interface between a process and the operating system. Here's a brief description of a
system call:

1. Purpose:

System calls provide the means by which a program can interact with the operating system to perform tasks such as
reading or writing to a file, allocating memory, or communicating with hardware devices.

2. Mechanism:

When a system call is made, the execution context switches from user mode to kernel mode, where the requested
operation is performed. Once completed, control is returned to the user mode process.

3. Examples:

File Manipulation: `open()`, `read()`, `write()`, `close()`.

Process Control: `fork()`, `exec()`, `exit()`, `wait()`.

Memory Management: `mmap()`, `munmap()`.

Communication: `pipe()`, `socket()`, `connect()`.
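These calls can be exercised directly from a high-level language. As a sketch (using Python's `os` module, which is a thin wrapper over the underlying POSIX system calls), the following round-trips some bytes through the raw `open()`/`write()`/`read()`/`close()` interface rather than buffered library I/O; the temporary file path exists only for the demo:

```python
import os
import tempfile

def demo_file_syscalls(data: bytes) -> bytes:
    """Round-trip `data` through the file-manipulation system calls."""
    fd, path = tempfile.mkstemp()          # temporary file for the demo
    os.close(fd)

    fd = os.open(path, os.O_WRONLY)        # open() system call
    os.write(fd, data)                     # write() system call
    os.close(fd)                           # close() system call

    fd = os.open(path, os.O_RDONLY)        # open() again, read-only
    readback = os.read(fd, len(data))      # read() system call
    os.close(fd)
    os.remove(path)                        # remove the temporary file
    return readback
```

Calling `demo_file_syscalls(b"hello")` returns the same bytes, having crossed the user/kernel boundary once on the write path and once on the read path.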

6. Compare between monolithic kernel and micro kernel?

Basis | Micro Kernel | Monolithic Kernel
Size | Smaller. | Larger, as the OS and user services lie in the same address space.
Execution | Slower. | Faster.
Extendibility | Easily extendible. | Complex to extend.
Failure impact | If a service crashes, there is no effect on the working of the microkernel. | If a process/service crashes, the whole system crashes, since user services and the OS share the same address space.
Code | More code is required to write a microkernel. | Less code is required to write a monolithic kernel.
Examples | L4Linux, macOS | Windows, Linux, BSD
Security | More secure, because only essential services run in kernel mode. | More susceptible to security vulnerabilities, due to the amount of code running in kernel mode.
Platform independence | More portable, because most drivers and services run in user space. | Less portable, due to direct hardware access.
Communication | Message passing between user-space servers. | Direct function calls within the kernel.
Performance | Lower, due to message passing and more overhead. | Higher, due to direct function calls and less overhead.

7. Apply the concept of Virtual Machine with an Example?

A virtual machine (VM) is a computer resource that functions like a physical computer but uses software resources only, instead of a physical machine, for running programs and deploying applications. To the end user, the experience is the same as working on a physical device. Every virtual machine has its own operating system and functions independently of the other virtual machines, even when they all run on the same host system. A virtual machine has its own CPU, storage, and memory, and can connect to the internet whenever required. A virtual machine can be implemented through firmware, hardware, software, or a combination of these, and is used in cloud environments as well as on-premises.
Types of Virtual Machine
There are two types of virtual machines:
• Process Virtual Machine
• System Virtual Machine
Examples
Examples of widely used virtual machine software:
1. Parallels Desktop
2. Citrix Hypervisor
3. Red Hat Virtualization
4. VMware Workstation Player

8. Compare between Batch processing system and Real Time Processing System?
SR.NO. | Batch Processing System | Real Time Processing System
1 | The processor only needs to be busy when work is assigned to it. | The processor needs to be very responsive and active all the time.
2 | Jobs with similar requirements are batched together and run through the computer as a group. | Events, mostly external to the computer system, are accepted and processed within certain deadlines.
3 | Completion time is not critical. | Time to complete the task is very critical.
4 | Provides the most economical and simplest processing method for business applications. | Complex and costly; requires unique hardware and software to handle complex operating system programs.
5 | A normal computer specification is sufficient. | Needs a high-end computer architecture and hardware specification.
6 | There is no time limit on processing. | Each task must be handled within the specified time limit, otherwise the system fails.
7 | Measurement oriented. | Action or event oriented.
8 | Sorting is performed before processing. | No sorting is required.
9 | Data is collected for a defined period of time and processed in batches. | Supports random data input at random times.
10 | Examples: credit-card transactions, bill generation, processing of input and output in the operating system. | Examples: bank ATM transactions, customer services, radar systems, weather forecasts, temperature measurement.
11 | Processes large volumes of data in batches, typically overnight or on a schedule. | Processes data as it arrives, in real time or near-real time.
12 | Higher latency, as data is processed in batches after a delay. | Lower latency, as data is processed immediately or with minimal delay.
13 | Lower cost per unit of data, as processing is done in batches. | Higher cost per unit of data, as processing must be done in real time or near-real time.
14 | Ideal for tasks such as nightly data backups, report generation, and large-scale data analysis. | Ideal for tasks such as fraud detection, sensor data analysis, and real-time monitoring.

9. What are the objectives of operating system? What are the advantages of peer-to-peer systems over client-server
systems?
Objectives of an Operating System

An operating system (OS) is crucial for the efficient functioning of computer systems. Its primary objectives include:

1. Resource Management: Efficiently manage the computer's hardware resources, such as the CPU, memory, disk
space, and I/O devices, to ensure optimal performance and utilization.

2. Process Management: Handle the creation, scheduling, and termination of processes. This involves managing the
execution of multiple processes, ensuring that they do not interfere with each other and that system resources are
allocated fairly.

3. Memory Management: Oversee and allocate memory space to processes as needed, managing both the physical and
virtual memory. This includes memory allocation, swapping, and paging.

4. File System Management: Provide a structured way to store, retrieve, and organize data on storage devices. The OS
handles file creation, deletion, reading, writing, and access control.

5. Security and Protection: Ensure that the system is protected against unauthorized access and that user data is kept
secure. This involves user authentication, access control, and protection against malware and other security threats.

6. User Interface: Provide a user interface, such as command-line interfaces (CLI) or graphical user interfaces (GUI),
that allows users to interact with the system easily.

7. Device Management: Manage device communication via drivers, ensuring efficient and correct operation of
hardware peripherals like printers, monitors, and network cards.

8. Networking: Enable and manage network communications, allowing computers to connect and share resources over
a network.

9. Error Detection and Handling: Detect errors in both hardware and software, and provide mechanisms to handle these
errors gracefully to maintain system stability.

Advantages of Peer-to-Peer Systems over Client-Server Systems

Peer-to-peer (P2P) systems and client-server systems are two different network models. P2P systems have several
advantages over traditional client-server systems:

1. Scalability: P2P systems can easily scale, as each additional peer adds resources to the network, both in terms of bandwidth and storage. In contrast, client-server systems may require significant investment in server infrastructure to handle increased loads.

2. Cost Efficiency: P2P systems generally have lower costs since they do not require dedicated servers. Each peer contributes to the network's resources, reducing the need for centralized infrastructure.

3. Robustness and Reliability: P2P networks are more resilient to failures. If one peer goes down, others can take over its functions, whereas in a client-server model the failure of a central server can disrupt the entire network.

4. Load Distribution: In P2P systems, the workload is distributed among many peers, preventing the bottlenecks common in client-server architectures, where the server can become a single point of congestion.

5. Decentralization: P2P networks operate without a central authority, promoting equal sharing of resources and responsibilities. This decentralization can lead to more democratic and fair network usage.

6. Enhanced Privacy and Anonymity: P2P networks can offer better privacy and anonymity, since data is often distributed across multiple peers, making it harder to track and monitor specific users' activities.

7. Resource Utilization: P2P systems make better use of the aggregate bandwidth and storage of all connected peers, which can lead to more efficient utilization of available resources.

10. Analyse the necessity of APIs in place of system calls.

APIs (Application Programming Interfaces) and system calls both play crucial roles in software development, providing
mechanisms for interaction with underlying system resources and services. However, they serve different purposes and
are used in different contexts. Analyzing the necessity of APIs in place of system calls involves understanding their
distinctions, advantages, and specific use cases.

Necessity and Advantages of APIs over System Calls

1. Abstraction and Ease of Use:

- APIs abstract the complexity of system calls, making it easier for developers to perform complex tasks without needing deep knowledge of the underlying system architecture.

- This abstraction allows for quicker development and reduces the likelihood of errors.

2. Portability:

- APIs can be designed to be platform-independent. Using an API ensures that an application can run on different operating systems with minimal modifications.

- System calls, in contrast, are often specific to an operating system, requiring changes to the code when porting applications.

3. Security:

- APIs can enforce additional security checks and constraints, reducing the risk of improper use of system resources.

- Direct system calls can expose the system to vulnerabilities if not handled correctly by the application.

4. Maintainability and Scalability:

- APIs provide a structured way to manage and organize code, making it easier to maintain and scale applications.

- System calls, being lower-level, can lead to more complex and harder-to-maintain codebases.

5. Consistency and Standardization:

- APIs offer consistent interfaces for performing tasks, which promotes standardization across applications and systems.

- System calls vary between operating systems, leading to inconsistencies in application behavior.

6. Performance Considerations:

- While system calls are often faster due to their lower-level nature, modern APIs are designed to minimize overhead and provide efficient access to system resources.

- In many cases, the performance difference is negligible compared to the benefits of using an API.

11. Compare between batch systems and time-sharing systems.

12. What is a bootstrap program? Identify the difference between mainframe and desktop operating systems.
Bootstrap Program
A bootstrap program, also known as the bootloader, is a small, specialized piece of software responsible for loading the operating system into the computer's memory when the system is powered on or restarted. It performs the following key functions:
1. Initialization: it initializes the hardware components of the computer, such as the CPU, memory, and peripheral devices.
2. Self-Test: it conducts a Power-On Self-Test (POST) to check whether the hardware components are working correctly.
3. Loading the OS: it locates the operating system kernel (usually stored on a hard drive, SSD, or other bootable media), loads it into memory, and then transfers control to the OS.
4. Configuration: it may read configuration settings from firmware (such as BIOS or UEFI) and set up the system environment accordingly.
Mainframe vs Desktop Operating Systems
A mainframe operating system is designed to maximize throughput, reliability, and resource utilization for many simultaneous users and batch jobs, whereas a desktop operating system is optimized for a single user, emphasizing responsiveness and ease of use over maximum hardware utilization.
13. Illustrate the use of the fork and exec system calls.
The fork() system call is used to create a new process on Linux and Unix systems. The new process, called the child process, runs concurrently with the process that made the fork() call (the parent process). After the child is created, both processes execute the instruction following the fork() call.
The child process receives a copy of the parent's program counter, CPU registers, and open files. fork() takes no parameters and returns an integer value:
• Negative value: the creation of the child process was unsuccessful.
• Zero: returned to the newly created child process.
• Positive value: returned to the parent (caller); the value is the process ID of the newly created child process.
exec()
The exec() system call loads a binary file into memory (destroying the memory image of the program containing the exec() call) and starts its execution.
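The behaviour above can be sketched in Python, whose `os.fork`, `os.execvp`, and `os.waitpid` wrap the corresponding Unix system calls (a POSIX-only sketch; the `echo` program is assumed to be on the PATH):

```python
import os

def fork_exec_demo() -> int:
    """Fork a child that exec's echo; the parent waits for it.
    Returns the child's exit status (0 on success)."""
    pid = os.fork()                       # fork() system call
    if pid == 0:
        # Zero is returned to the newly created child process.
        os.execvp("echo", ["echo", "hello from the child"])  # exec() replaces this image
        os._exit(127)                     # reached only if exec() fails
    # A positive value (the child's PID) is returned to the parent.
    _, status = os.waitpid(pid, 0)        # wait() system call
    return os.WEXITSTATUS(status)         # extract the child's exit code
```

Both processes return from `os.fork()`; the `if pid == 0` branch is how the child and parent take different paths after the same call site.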
14. Describe the functionalities of system calls.
Services Provided by System Calls
• Process creation and management
• Main memory management
• File access, directory, and file system management
• Device handling (I/O)
• Protection
• Networking, etc.
Process control: end, abort, create, terminate, allocate and free memory.
File management: create, open, close, delete, read files, etc.
Device management
Information maintenance
Communication
Category | Windows | Unix
Process Control | CreateProcess(), ExitProcess(), WaitForSingleObject() | fork(), exit(), wait()
File Manipulation | CreateFile(), ReadFile(), WriteFile() | open(), read(), write(), close()
Device Management | SetConsoleMode(), ReadConsole(), WriteConsole() | ioctl(), read(), write()
Information Maintenance | GetCurrentProcessID(), SetTimer(), Sleep() | getpid(), alarm(), sleep()
Communication | CreatePipe(), CreateFileMapping(), MapViewOfFile() | pipe(), shmget(), mmap()
Protection | SetFileSecurity(), InitializeSecurityDescriptor(), SetSecurityDescriptorGroup() | chmod(), umask(), chown()

15. Compare between pre-emptive and non-pre-emptive scheduling with an example.

Parameter | Preemptive Scheduling | Non-Preemptive Scheduling
Basic | Resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
Interrupt | A process can be interrupted in between. | A process cannot be interrupted until it terminates itself or its time is up.
Starvation | If high-priority processes frequently arrive in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, later processes with shorter burst times may starve.
Overhead | Has the overhead of scheduling processes. | No scheduling overhead.
Flexibility | Flexible. | Rigid.
Cost | Has an associated cost. | No associated cost.
CPU Utilization | High. | Low.
Waiting Time | Less. | High.
Response Time | Less. | High.
Decision making | Decisions are made by the scheduler, based on priority and time-slice allocation. | Decisions are made by the process itself; the OS just follows the process's instructions.
Process control | The OS has greater control over the scheduling of processes. | The OS has less control over the scheduling of processes.
Context-switch overhead | Higher, due to frequent context switching. | Lower, since context switching is less frequent.
Examples | Round Robin and Shortest Remaining Time First. | First Come First Serve and Shortest Job First.
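The waiting-time difference can be made concrete with a small simulation. In the Python sketch below, every process is assumed to arrive at time 0; with bursts of 24, 3 and 3 ms and a 4 ms quantum, non-preemptive FCFS yields waits of 0, 24 and 27 ms, while preemptive Round Robin yields 6, 4 and 7 ms:

```python
def fcfs_waiting(bursts):
    """Non-preemptive FCFS: each job waits for every job ahead of it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # waits until all earlier jobs finish
        elapsed += burst
    return waits

def rr_waiting(bursts, quantum):
    """Preemptive Round Robin: each job runs at most `quantum` units,
    then is preempted and re-queued until its burst is used up."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    t = 0
    queue = list(range(n))
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)     # preempted: back of the ready queue
        else:
            finish[i] = t       # job complete
    # waiting time = finish - burst, since all jobs arrive at t = 0
    return [finish[i] - bursts[i] for i in range(n)]
```

FCFS suffers from the "convoy effect": the long 24 ms job makes the two short jobs wait, which is exactly what preemption avoids.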

16. Compare between process and thread in operating system?


Difference between Process and Thread:

S.NO | Process | Thread
1 | A process is a program in execution. | A thread is a segment of a process.
2 | A process takes more time to terminate. | A thread takes less time to terminate.
3 | Takes more time for creation. | Takes less time for creation.
4 | Takes more time for context switching. | Takes less time for context switching.
5 | Less efficient in terms of communication. | More efficient in terms of communication.
6 | Multiprogramming holds the concept of multiple processes. | Multiple programs are not needed for multiple threads, because a single process consists of multiple threads.
7 | Processes are isolated. | Threads share memory.
8 | A process is called a heavyweight process. | A thread is lightweight, as each thread in a process shares code, data, and resources.
9 | Process switching uses an interface in the operating system. | Thread switching does not require calling the operating system or causing an interrupt to the kernel.
10 | If one process is blocked, it does not affect the execution of other processes. | If one user-level thread is blocked, all other user-level threads are blocked.
11 | A process has its own Process Control Block, stack, and address space. | A thread has its parent's PCB, its own Thread Control Block and stack, and a common address space.
12 | Changes to the parent process do not affect child processes. | Since all threads of a process share the address space and other resources, changes to the main thread may affect the behaviour of the other threads of the process.
13 | A system call is involved in creation. | No system call is involved; threads are created using APIs.
14 | Processes do not share data with each other. | Threads share data with each other.

17. Classify the real time scheduling with an example


Real-time scheduling refers to the methods and algorithms used to ensure that tasks in a real-time system are completed
within their specified deadlines. In real-time systems, tasks are usually characterized by deadlines and timing
constraints. There are two main categories of real-time scheduling: hard real-time scheduling and soft real-time
scheduling.
Hard Real-Time Scheduling
In hard real-time systems, missing a deadline can lead to catastrophic consequences. Therefore, it is critical that all tasks
meet their deadlines. Examples include embedded systems in medical devices, automotive control systems, and
avionics.
Example:
Consider an automotive airbag system. In this system, the sensor detecting a collision must trigger the airbag
deployment within a strict deadline (e.g., within 20 milliseconds). The scheduling algorithm must ensure that this task is
given the highest priority and completes on time without fail.
Algorithm:
Rate Monotonic Scheduling (RMS), Earliest Deadline First (EDF)
Soft Real-Time Scheduling
In soft real-time systems, missing a deadline is undesirable but not catastrophic. These systems can tolerate occasional
deadline misses, although they may degrade the quality of service. Examples include multimedia applications, video
streaming, and online gaming.
Example:
Consider a video streaming application. The system aims to process and display video frames in real-time. While
occasional frame drops might reduce the video quality and user experience, they are not critical.
Algorithm:
Round-Robin Scheduling, Proportional Share Scheduling
18. Illustrate the different states of a process?
Same as Q no. 3.
19. Analyse a multiprocessing environment with an example.
In a multiprocessor operating system, more than one CPU is used within a single computer system to improve performance.
The CPUs are interconnected so that a job can be divided among them for faster execution. When a job finishes, results from all CPUs are collected and compiled to produce the final output. Jobs may need to share main memory, and they may also share other system resources among themselves. Multiple CPUs can also be used to run multiple jobs simultaneously.
For example, the UNIX operating system is one of the most widely used multiprocessing systems.

Types of multiprocessing systems

o Symmetrical multiprocessing operating system


o Asymmetric multiprocessing operating system

20. Apply the concept of context switching in process/tasks with an example?


Context switching is the process by which a computer's operating system switches the CPU from one process or task to
another. This is essential for multitasking, allowing a single CPU to handle multiple tasks by rapidly switching between
them. Context switching involves saving the state of the currently running process and loading the state of the next process
to be executed.
Example: Managing Multiple Applications
Imagine you are using a computer to run multiple applications simultaneously, such as a web browser, a word processor, and a music player. Here's how context switching works in this scenario:
1. Running the Web Browser: the CPU executes the browser's instructions; the registers and program counter hold the browser's state.
2. Switching to the Word Processor: the OS saves the browser's state into its Process Control Block and loads the word processor's saved state, so it resumes exactly where it left off.
3. Playing Music in the Background: the music player receives short CPU slices between the other two applications, so playback appears continuous.
Technical Steps in Context Switching
1. Save State:
- The operating system saves the state of the currently running process, including:
- CPU registers
- Program counter (indicating the next instruction to execute)
- Stack pointer (indicating the top of the current stack)
- Memory management information
2. Update Process Control Block (PCB):
- The PCB for the current process is updated with the saved state information.
3. Select New Process:
- The operating system scheduler selects the next process to run based on scheduling algorithms (e.g., round-robin,
priority-based).
4. Restore State:
- The state of the selected process is loaded from its PCB.
- The CPU registers, program counter, and stack pointer are restored.
5. Execute New Process:
- The CPU begins executing instructions from the new process.
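The save/restore cycle can be mimicked in user space. In the Python sketch below, each generator plays the role of a process: its suspended local state (the loop counter) is the "saved context", `next()` is the restore-and-run step, and the toy round-robin loop is the scheduler. This is only an analogy for the mechanism, not real kernel context switching:

```python
def process(name, steps):
    """A toy 'process': each yield marks a point where the scheduler
    switches it out; its local state survives across switches."""
    for i in range(1, steps + 1):
        yield f"{name} step {i}"

def round_robin(procs):
    """Toy scheduler: run each ready process for one step, then switch."""
    trace, ready = [], list(procs)
    while ready:
        p = ready.pop(0)                # select the next process
        try:
            trace.append(next(p))       # restore state, execute one step
            ready.append(p)             # save state, back of the queue
        except StopIteration:
            pass                        # the process has terminated
    return trace
```

For instance, `round_robin([process("browser", 2), process("editor", 2)])` interleaves the two: browser step 1, editor step 1, browser step 2, editor step 2.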
21. Under what circumstances user level threads are better than the kernel level threads?

Circumstances where user-level threads are better than kernel-level threads:

If the kernel is time shared, then user-level threads are better than kernel-level threads, because in time shared systems
context switching takes place frequently. Context switching between kernel level threads has high overhead, almost the
same as a process whereas context switching between user-level threads has almost no overhead as compared to kernel
level threads.

22. Illustrate the various method to handle deadlock?


Methods of handling deadlocks: There are four approaches to dealing with deadlocks.
1. Deadlock Prevention
2. Deadlock avoidance (Banker's Algorithm)
3. Deadlock detection & recovery
4. Deadlock Ignorance (Ostrich Method)

23. Apply the concept of multiple instance RAG (Resource allocation Graph) with an example in case of a deadlock
situation.
A resource allocation graph shows which resource is held by which process and which process is waiting for a resource of a specific kind. It is a straightforward tool to outline how interacting processes can deadlock: it describes the condition of the system in terms of processes and resources, i.e., how many resources are allocated and what each process has requested.
If there is a cycle in the resource allocation graph and each resource in the cycle has only one instance, then the processes are deadlocked. For example, if process P1 holds resource R1, process P2 holds resource R2, P1 is waiting for R2, and P2 is waiting for R1, then P1 and P2 are deadlocked. With multiple instances per resource, a cycle is a necessary but not a sufficient condition for deadlock: deadlock exists only if no process outside the cycle can release an instance that breaks the wait.
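The cycle test can be sketched as a depth-first search over the graph, with processes and resources as nodes, assignment edges from resource to process, and request edges from process to resource. The P1/P2/R1/R2 names in the usage note are just the ones from the example above:

```python
def has_cycle(edges):
    """Detect a cycle in a resource-allocation graph.

    edges: dict mapping each node to the list of nodes it points to
    (process -> resource = request edge, resource -> process =
    assignment edge). With single-instance resources, a cycle
    implies deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {node: WHITE for node in edges}

    def dfs(u):
        color[u] = GRAY
        for v in edges.get(u, []):
            if color.get(v, WHITE) == GRAY:
                return True               # back edge: cycle found
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[node] == WHITE and dfs(node) for node in edges)
```

For the deadlocked example, `has_cycle({"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]})` is True; dropping P2's request for R1 breaks the cycle and the function returns False.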

24. Describe the Dining Philosopher problem. Describe how the problem can be solved by using semaphore.
The Dining Philosopher Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its two adjacent philosophers, but not by both at once.

The steps for the Dining Philosopher Problem solution using semaphores are as follows
1. Initialize the semaphores for each fork to 1 (indicating that they are available).
2. Initialize a binary semaphore (mutex) to 1 to ensure that only one philosopher can attempt to pick up a fork at a time.
3. For each philosopher process, create a separate thread that executes the following code:
• While true:
• Think for a random amount of time.
• Acquire the mutex semaphore to ensure that only one philosopher can attempt to pick up a fork at a
time.
• Attempt to acquire the semaphore for the fork to the left.
• If successful, attempt to acquire the semaphore for the fork to the right.
• If both forks are acquired successfully, eat for a random amount of time and then release both semaphores.
• If not successful in acquiring both forks, release the semaphore for the fork to the left (if acquired) and then
release the mutex semaphore and go back to thinking.
4. Run the philosopher threads concurrently.
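The steps above can be sketched with Python's `threading.Semaphore`, one thread per philosopher. The random thinking/eating delays are omitted so the run is fast and deterministic; following the description, the pickup mutex is held while acquiring both forks, which is what prevents circular wait here (a philosopher blocked on a fork is only ever waiting for an eater who will put it back):

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # step 1: every fork available
mutex = threading.Semaphore(1)                      # step 2: serialize fork pickup
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    for _ in range(rounds):
        # (think for a while - omitted to keep the demo fast)
        mutex.acquire()            # only one philosopher picks up forks at a time
        forks[left].acquire()      # take the left fork
        forks[right].acquire()     # take the right fork
        mutex.release()
        meals[i] += 1              # eat
        forks[left].release()      # put both forks back
        forks[right].release()

def dine(rounds=3):
    threads = [threading.Thread(target=philosopher, args=(i, rounds))
               for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return meals
```

A single call to `dine(3)` returns `[3, 3, 3, 3, 3]`: every philosopher finishes all his rounds and no thread deadlocks.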
25. Describe the process synchronization producer consumer problem?
The producer-consumer problem is an example of a multi-process synchronization problem. The problem describes
two processes, the producer and the consumer that share a common fixed-size buffer and use it as a queue.
• The producer’s job is to generate data, put it into the buffer, and start again.
• At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
Solution of Producer-Consumer Problem
The producer should either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same manner, the consumer can go to sleep if it finds the buffer empty. The next time the producer transfers data into the buffer, it wakes the sleeping consumer.
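The sleep/wake-up logic above is exactly what counting semaphores provide: `empty` counts free slots (the producer sleeps on it when the buffer is full), `full` counts filled slots (the consumer sleeps on it when the buffer is empty), and a lock protects the buffer itself. A minimal Python sketch with one producer and one consumer; the buffer size of 4 is an arbitrary choice:

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()        # sleep if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()         # wake a sleeping consumer

def consumer(n, out):
    for _ in range(n):
        full.acquire()         # sleep if the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()        # wake a sleeping producer

def run(items):
    out = []
    p = threading.Thread(target=producer, args=(items,))
    c = threading.Thread(target=consumer, args=(len(items), out))
    p.start(); c.start(); p.join(); c.join()
    return out
```

With one producer and one consumer the FIFO buffer preserves order, so `run(list(range(10)))` returns the ten items in the order they were produced.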

26. What are the different types of process synchronization techniques?

Process synchronization is the coordination of the execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve race conditions and other synchronization issues in a concurrent system.
Any correct solution must satisfy three requirements: mutual exclusion, progress, and bounded waiting. Common synchronization techniques include:
1. Semaphores
2. Mutex locks
3. Monitors
4. Condition variables
5. Peterson's solution
6. Synchronization hardware (e.g., test-and-set)
27. Describe the algorithm for the dining philosophers problem.
Same as Q no. 24.
28. What is a binary semaphore? Explain the algorithm for the readers-writers problem.
Binary Semaphores
A binary semaphore provides mutual exclusion between processes in an operating system. It takes only the integer values 0 and 1.
A binary semaphore supports two operations, wait (P) and signal (V), both of which are atomic. The semaphore can be initialized to zero or one. Atomic here means that the read, modify, and update of the semaphore value happen as one indivisible step, with no pre-emption.
Reader’s-Writer’s Problem
There are four cases that can occur:

Case | Process 1 | Process 2 | Allowed/Not Allowed
Case 1 | Writing | Writing | Not Allowed
Case 2 | Writing | Reading | Not Allowed
Case 3 | Reading | Writing | Not Allowed
Case 4 | Reading | Reading | Allowed

Three variables are used: mutex, wrt, readcnt to implement solution

1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion when readcnt is updated i.e.
when any reader enters or exit from the critical section and semaphore wrt is used by both readers and writers
2. int readcnt; // readcnt tells the number of processes performing read in the critical section, initially 0
Writer process:
1. Writer requests the entry to critical section.
2. If allowed i.e. wait () gives a true value, it enters and performs the write. If not allowed, it keeps on waiting.
3. It exits the critical section.
Reader process:
1. Reader requests the entry to critical section.
2. If allowed:
• it increments the count of number of readers inside the critical section. If this reader is the first reader
entering, it locks the wrt semaphore to restrict the entry of writers if any reader is inside.
• It then signals mutex, as any other reader is allowed to enter while others are already reading.
• After performing reading, it exits the critical section. When exiting, it checks if no more reader is
inside, it signals the semaphore “wrt” as now, writer can enter the critical section.
3. If not allowed, it keeps on waiting.
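The reader and writer protocols above can be sketched with semaphores (a minimal Python sketch; the shared integer and the thread counts are illustrative assumptions):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcnt
wrt = threading.Semaphore(1)     # held by a writer, or by the group of readers
readcnt = 0
shared = 0                       # the data being read/written (illustrative)
reads = []

def reader():
    global readcnt
    mutex.acquire()
    readcnt += 1
    if readcnt == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    reads.append(shared)         # critical section: read
    mutex.acquire()
    readcnt -= 1
    if readcnt == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(value):
    global shared
    wrt.acquire()                # wait(wrt)
    shared = value               # critical section: write
    wrt.release()                # signal(wrt)

threads = [threading.Thread(target=writer, args=(v,)) for v in (1, 2)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

The interleaving is nondeterministic, but every value a reader observes is either the initial value or one written under the wrt lock; readers never see a half-finished write.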

29. Describe the readers writers problem using the concept of critical section with all possible cases?
Same as Q no. 28
30. Analyze the dining philosophy problem in the context of critical section?
Same as Q no.21
31. Compare between CPU bounded, I/O bounded processes?
CPU-Bound vs I/O-Bound Processes

A CPU-bound process requires more CPU time or spends more time in the running state.
An I/O-bound process requires more I/O time and less CPU time, and spends more time in the
waiting state.

32. Illustrate the importance of scaling up system bus and device speed as CPU speed increases?

Consider a system which performs 50% I/O and 50% computation. Doubling the CPU performance of this system would
increase total system performance by only 50%. Doubling both system aspects would increase performance by 100%.
Generally, it is more important to find and remove the current system bottleneck to increase overall system
performance than to blindly increase the performance of individual system components.

33. Describe the algorithm to solve producer-consumer problem using semaphore.

Initialization of semaphores –
mutex = 1
full = 0 // initially, all slots are empty, thus filled slots are 0
empty = n // all n slots are empty initially
Solution for Producer –

When the producer produces an item, the value of “empty” is reduced by 1 because one slot will now be filled. The
value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once the producer has placed the
item, the value of “full” is increased by 1, and the value of mutex is increased by 1 because the producer’s task is
complete and the consumer may access the buffer.
Solution for Consumer –

As the consumer removes an item from the buffer, the value of “full” is reduced by 1 and the value of mutex is also
reduced, so that the producer cannot access the buffer at this moment. Once the consumer has consumed the item, the
value of “empty” is increased by 1. The value of mutex is also increased, so that the producer can access the buffer
again.
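The steps above can be sketched with counting semaphores in Python (the buffer size n = 4 and the item values are illustrative; `threading.Semaphore` stands in for the abstract wait/signal operations):

```python
import threading
from collections import deque

n = 4                                  # illustrative number of buffer slots
mutex = threading.Semaphore(1)
empty = threading.Semaphore(n)         # counts empty slots, initially n
full = threading.Semaphore(0)          # counts filled slots, initially 0
buffer = deque()

def producer(items):
    for item in items:
        empty.acquire()                # wait(empty): one fewer empty slot
        mutex.acquire()                # wait(mutex): lock the buffer
        buffer.append(item)
        mutex.release()                # signal(mutex)
        full.release()                 # signal(full): one more filled slot

def consumer(count, out):
    for _ in range(count):
        full.acquire()                 # wait(full): one fewer filled slot
        mutex.acquire()
        out.append(buffer.popleft())
        mutex.release()
        empty.release()                # signal(empty): one more empty slot

items, out = list(range(20)), []
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items), out))
p.start(); c.start(); p.join(); c.join()
```

Acquiring empty before mutex (and full before mutex on the consumer side) is what prevents a sleeping process from holding the buffer lock.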
34. State the different types of process synchronization techniques.
Same as Q no. 26
35. Classify the different memory management technique and describe the variable Partitioning with an example?
In operating systems, Memory Management is the function responsible for allocating and managing a computer’s main
memory. The memory Management function keeps track of the status of each memory location, either allocated or free
to ensure effective and efficient use of Primary Memory.
Below are Memory Management Techniques.
• Contiguous
• Non-Contiguous
In the Contiguous Technique, the executing process must be loaded entirely in the main memory. The contiguous
Technique can be divided into:
• Fixed (static) partitioning
• Variable (dynamic) partitioning
Variable Partitioning
It is a part of the Contiguous allocation technique. It is used to alleviate the problem faced by Fixed Partitioning. In
contrast with fixed partitioning, partitions are not made before the execution or during system configuration.
Various features associated with variable Partitioning-
• Initially, RAM is empty and partitions are made during the run-time according to the process’s need instead of
partitioning during system configuration.
• The size of the partition will be equal to the incoming process.
• The partition size varies according to the need of the process so that internal fragmentation can be avoided to
ensure efficient utilization of RAM.
• The number of partitions in RAM is not fixed and depends on the number of incoming processes and the Main
Memory’s size.
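A first-come sketch of variable partitioning (the RAM size and process sizes below are hypothetical):

```python
def variable_partitioning(ram_size, processes):
    """Create partitions at run time, each exactly the size of its process."""
    partitions, free = [], ram_size
    for pid, size in processes:
        if size <= free:               # carve out a partition equal to the request
            partitions.append((pid, size))
            free -= size
    return partitions, free

# Hypothetical 100 KB RAM and incoming processes:
parts, free = variable_partitioning(100, [("P1", 20), ("P2", 35), ("P3", 50)])
# P3 (50 KB) does not fit in the remaining 45 KB, so only P1 and P2 are loaded.
```

Because every partition matches its process exactly, no space is wasted inside a partition (no internal fragmentation); the leftover 45 KB is simply unallocated.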

36. What are the modes of operation in Hardware Protection and explain it briefly?
1. CPU Protection:
CPU protection ensures that a process does not monopolize the CPU indefinitely, as that would prevent other
processes from being executed. Each process should get a limited amount of time, so that every process gets time to
execute its instructions. To enforce this, a timer is used to limit the amount of time a process can occupy the CPU.
When the timer expires, a signal is sent to the process to relinquish the CPU. Hence one process cannot hold the
CPU forever.
2. Memory Protection:
Memory protection addresses the situation where two or more processes are in memory and one process
might access another process’s memory. To prevent this, two registers are used:
1. Base register
2. Limit register
The base register stores the starting address of the program and the limit register stores the size of the process.
Whenever a process wants to access memory, the OS checks whether the memory area the process wants to access
is one that the process is privileged to access.
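The base/limit check can be sketched in a few lines (the register values below are hypothetical):

```python
def access_allowed(address, base, limit):
    """A memory access is legal only inside [base, base + limit)."""
    return base <= address < base + limit

# Hypothetical process: base register = 1000, limit register = 500
assert access_allowed(1200, 1000, 500)      # inside the process's region
assert not access_allowed(1600, 1000, 500)  # outside the region (>= 1500), so the OS traps it
```

In real hardware this comparison happens on every memory reference, and a violation raises a trap to the operating system.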
3. I/O Protection:
With I/O protection, an OS ensures that a process can never do the following:
1. Terminate the I/O of another process – one process should not be able to terminate the I/O operations of
other processes.
2. View the I/O of another process – one process should not be able to access the data being read/written by other
processes from/to the disk(s).
3. Prioritize its own I/O – no process should be able to prioritize itself, or other processes performing I/O
operations, over other processes.
37. Define Spooling?
Spooling is an acronym for Simultaneous Peripheral Operations Online. Spooling is the process of temporarily storing
data for use and execution by a device, program, or system. Data is sent to and stored in main memory or other
volatile storage until it is requested for execution by a program or computer. Spooling makes use of the disk as a large
buffer to send data to printers and other devices. It can also be used for input, but it is more commonly used for
output. Its primary function is to prevent two users from printing on the same page at the same time, which would
leave their output completely intermixed. It prevents this by using the FIFO (First In First Out) strategy to retrieve
the stored jobs from the spool, which synchronizes the jobs so that their output is not mixed together.
38. Compare between paging and segmentation with an example?
1. In paging, the program is divided into fixed-size pages. In segmentation, the program is divided into variable-size
sections (segments).
2. The operating system is accountable for paging. The compiler is accountable for segmentation.
3. Page size is determined by the hardware. Segment size is given by the user.
4. Paging is faster in comparison to segmentation. Segmentation is slower.
5. Paging can result in internal fragmentation. Segmentation can result in external fragmentation.
6. In paging, the logical address is split into a page number and a page offset. In segmentation, the logical address is
split into a segment number and a segment offset.
7. Paging uses a page table that holds the base address of every page. Segmentation uses a segment table that holds
the base address and length (limit) of every segment.
8. The page table is used to maintain the page data. The segment table maintains the segment data.
9. In paging, the operating system must maintain a free-frame list. In segmentation, the operating system maintains a
list of holes in main memory.
10. Paging is invisible to the user. Segmentation is visible to the user.
11. In paging, the processor uses the page number and offset to calculate the absolute address. In segmentation, the
processor uses the segment number and offset to calculate the full address.
12. Paging makes it hard to share procedures between processes. Segmentation facilitates sharing of procedures
between processes.
13. In paging, a programmer cannot efficiently handle data structures. Segmentation can handle data structures
efficiently.
14. Protection is hard to apply in paging. Protection is easy to apply in segmentation.
15. The size of a page must always equal the size of a frame. There is no constraint on the size of segments.
16. A page is referred to as a physical unit of information. A segment is referred to as a logical unit of information.
17. Paging results in a less efficient system. Segmentation results in a more efficient system.
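The paging address split (page number and page offset) can be illustrated directly (the 4 KB page size below is a common but assumed value):

```python
PAGE_SIZE = 4096   # 4 KB pages: a common but assumed value

def split_logical_address(addr):
    """Split a logical address into (page number, page offset)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

page, offset = split_logical_address(10000)   # 10000 = 2 * 4096 + 1808
```

With a power-of-two page size, the hardware performs the same split with a shift and a mask rather than a division.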
39. Compare between internal and external fragmentation?
1. In internal fragmentation, fixed-size memory blocks are assigned to processes. In external fragmentation,
variable-size memory blocks are assigned to processes.
2. Internal fragmentation happens when a process is smaller than the memory block assigned to it. External
fragmentation happens when processes are removed from memory, leaving scattered holes.
3. The solution to internal fragmentation is best-fit allocation. The solutions to external fragmentation are
compaction and paging.
4. Internal fragmentation occurs when memory is divided into fixed-size partitions. External fragmentation occurs
when memory is divided into variable-size partitions based on the size of processes.
5. The difference between the memory allocated and the memory required is called internal fragmentation. The
unused spaces formed between non-contiguous memory fragments, too small to serve a new process, are called
external fragmentation.
6. Internal fragmentation occurs with paging and fixed partitioning. External fragmentation occurs with segmentation
and dynamic partitioning.
7. Internal fragmentation occurs when a process is allocated a partition larger than it requires; the leftover space
degrades system performance. External fragmentation occurs even when each process is allocated exactly the space
it requires, because the remaining free space is scattered and non-contiguous.
8. Internal fragmentation occurs with the best-fit and first-fit memory allocation methods. External fragmentation
occurs with the worst-fit memory allocation method.
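The two kinds of waste can be illustrated with toy numbers (all sizes below are hypothetical, in KB):

```python
# Internal fragmentation: a fixed 100 KB partition holding a 73 KB process
partition, process = 100, 73
internal_waste = partition - process        # 27 KB wasted inside the partition

# External fragmentation: enough total free memory, but no single hole fits
holes = [30, 25, 40]                        # scattered free blocks between processes
request = 60
enough_in_total = sum(holes) >= request     # 95 KB free overall...
fits_somewhere = any(h >= request for h in holes)  # ...but no hole holds 60 KB
```

Compaction would merge the three holes into one 95 KB block, after which the 60 KB request succeeds.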

40. Illustrate the performance of demand paging and deduce the expression for Effective Memory access time
(EAT).
Demand Paging
Demand paging is a memory management scheme that loads pages into memory only when they are needed, rather than
loading the entire program into memory at once. This technique uses a page table to keep track of where pages are
stored in physical memory.
Effective Memory Access Time (EAT)
The Effective Memory Access Time is the average time it takes to access a memory location, accounting for both
successful memory accesses and page faults.
To derive the EAT, consider the two possible outcomes of a memory access:
1. No Page Fault (probability 1 - p):
- The time taken is simply the memory access time, m.
2. Page Fault (probability p):
- When a page fault occurs, additional time is needed to service the fault. This includes:
- The time to access the page on disk.
- The time to transfer the page to memory.
- The time to update the page table and restart the instruction.
- This total page-fault service time is denoted as s.
Weighting the two outcomes by their probabilities gives:
EAT = (1 - p) × m + p × s
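As a quick numeric check of EAT = (1 - p) × m + p × s (the 200 ns access time, 8 ms service time, and fault rate p = 0.001 below are illustrative assumptions, not figures from the text):

```python
def effective_access_time(p, m, s):
    """EAT = (1 - p) * m + p * s, with all times in nanoseconds."""
    return (1 - p) * m + p * s

m = 200              # memory access time: 200 ns (assumed)
s = 8_000_000        # page-fault service time: 8 ms (assumed)
eat = effective_access_time(0.001, m, s)
# 0.999 * 200 + 0.001 * 8_000_000 = 199.8 + 8000.0 = 8199.8 ns
```

Even one fault per thousand accesses slows effective memory access by roughly a factor of 40, which is why page-fault rates must be kept extremely low.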

41. What is byte addressable? Explain the Little Endian and Big-Endian addressable memory organization?
Byte Addressable Memory
Byte addressable memory refers to a memory organization where each unique memory address identifies a single byte
(8 bits) of data. This allows fine-grained access and manipulation of data at the byte level, which is particularly useful
for dealing with characters, small data types, and specific bits within larger data structures.
Little Endian
In Little Endian byte ordering, the least significant byte (LSB) is stored at the lowest memory address, and the most
significant byte (MSB) is stored at the highest memory address. This is often described as storing the "little end" first.
For example, consider the 32-bit hexadecimal value `0x12345678`. The byte representation in Little Endian order would
be:
- Memory Address 0x00: 0x78 (least significant byte)
- Memory Address 0x01: 0x56
- Memory Address 0x02: 0x34
- Memory Address 0x03: 0x12 (most significant byte)
Big Endian
In Big Endian byte ordering, the most significant byte (MSB) is stored at the lowest memory address, and the least
significant byte (LSB) is stored at the highest memory address. This is often described as storing the "big end" first.
For the same 32-bit hexadecimal value `0x12345678`, the byte representation in Big Endian order would be:
- Memory Address 0x00: 0x12 (most significant byte)
- Memory Address 0x01: 0x34
- Memory Address 0x02: 0x56
- Memory Address 0x03: 0x78 (least significant byte)
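Python's struct module can demonstrate both byte orderings for the same value 0x12345678:

```python
import struct

value = 0x12345678
little = struct.pack("<I", value)   # little-endian 32-bit
big = struct.pack(">I", value)      # big-endian 32-bit

assert little == b"\x78\x56\x34\x12"   # LSB stored at the lowest address
assert big == b"\x12\x34\x56\x78"      # MSB stored at the lowest address
```

Unpacking with the matching format character recovers the original value, which is why byte order must be agreed upon when exchanging binary data between machines.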
42. Describe the concept of virtual memory in brief?
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of
the main memory. The addresses a program may use to reference memory are distinguished from the addresses the
memory system uses to identify physical storage sites and program-generated addresses are translated automatically to
the corresponding machine addresses.
Virtual memory is implemented as a memory hierarchy, consisting of the computer system’s memory and a disk, that
enables a process to operate with only some portions of its address space in memory. A virtual memory is what its
name indicates: it is an illusion of a memory that is larger than the real memory. We refer to the software component
of virtual memory as the virtual memory manager. The basis of virtual memory is the non-contiguous memory
allocation model. The virtual memory manager removes some components from memory to make room for other
components.
The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary
memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory addresses used by a
program, called virtual addresses, into physical addresses in computer memory.
43. Compare between logical address and physical address of a CPU?
• Basic: A logical address is generated by the CPU; a physical address is a location in a memory unit.
• Address space: The logical address space is the set of all logical addresses generated by the CPU in reference to a
program; the physical address space is the set of all physical addresses mapped to those logical addresses.
• Visibility: The user can view the logical address of a program but can never view its physical address.
• Generation: The logical address is generated by the CPU; the physical address is computed by the MMU.
• Access: The user uses the logical address to access the physical address; the physical address can only be accessed
indirectly, never directly.
• Editable: A logical address can change; the physical address does not change.
• Also called: A logical address is also called a virtual address; a physical address is also called a real address.

44. Classify the different page replacement algorithm?


Page replacement algorithms are critical for efficient memory management in operating systems. When a page fault
occurs and there are no free frames available, the system must decide which page to replace. Different algorithms have
been developed to optimize this decision-making process, and they can be classified as follows:
1. Optimal Page Replacement (OPT)
2. Least Recently Used (LRU)
3. First-In-First-Out (FIFO)
4. Least Frequently Used (LFU)
5. Most Frequently Used (MFU)
6. Not Recently Used (NRU)
45. What is Thrashing? What are the main causes of page fault?
Thrashing
Thrashing is a condition in a virtual memory system where the system spends a significant amount of time servicing
page faults rather than executing actual processes. This happens when there is insufficient physical memory to handle
the active working sets of the processes, leading to excessive paging activity.
Causes of Page Faults
A page fault occurs when a program tries to access a page that is not currently in physical memory. The main causes of
page faults include:
1. First Access: The first time a program accesses a page, it will not be in memory. This is known as a compulsory
miss.
2. Page Eviction: The page was previously in memory but has been evicted (swapped out) to make room for other
pages. This is known as a capacity miss.
3. Invalid Page Access: The program tries to access a page that does not exist in its logical address space, causing an
invalid page fault.
4. Access to Unallocated Memory: The program tries to access memory that has not been allocated to it.
5. Stack Growth: Accessing a new page when the stack grows, such as during deep recursive calls.
6. Heap Growth: Accessing a new page when the heap grows, such as during dynamic memory allocation.

46. Describe the Belady's anomaly with an example.


Bélády’s anomaly is the name given to the phenomenon where increasing the number of page frames results in an
increase in the number of page faults for a given memory access pattern.
This phenomenon is commonly experienced in the following page replacement algorithms:
1. First in first out (FIFO)
2. Second chance algorithm
3. Random page replacement algorithm
Belady’s Anomaly in FIFO –
Assuming a system that has no pages loaded in the memory and uses the FIFO Page replacement algorithm. Consider
the following reference string:
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Case-1: If the system has 3 frames, the given reference string using the FIFO page replacement algorithm yields a total
of 9 page faults. The diagram below illustrates the pattern of the page faults occurring in the example.

Case-2: If the system has 4 frames, the given reference string using the FIFO page replacement algorithm yields a
total of 10 page faults. The diagram below illustrates the pattern of the page faults occurring in the example.

It can be seen from the above example that on increasing the number of frames while using the FIFO page
replacement algorithm, the number of page faults increased from 9 to 10.
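This behaviour is easy to reproduce with a short FIFO simulation (a minimal sketch; the function name is our own):

```python
from collections import deque

def fifo_page_faults(reference, frames):
    """Count page faults for a reference string under FIFO replacement."""
    memory, faults = deque(), 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()       # evict the oldest resident page
            memory.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# Belady's anomaly: 3 frames -> 9 faults, but 4 frames -> 10 faults
```

Running the function on the reference string above confirms that adding a fourth frame increases the fault count from 9 to 10.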
47. Explain the concept of demand paging.
Demand paging can be described as a memory management technique that is used in operating systems to improve
memory usage and system performance. Demand paging is a technique used in virtual memory systems where pages
enter main memory only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages of a program into memory at runtime, instead
of loading the entire program into memory at the start. A page fault occurs when the program needs to access a
page that is not currently in memory. The operating system then loads the required pages from the disk into memory
and updates the page tables accordingly. This process is transparent to the running program and it continues to run as
if the page had always been in memory.
48. Illustrate the expression for effective memory access time during page fault.
Same as Q no. 40
49. Explain the C-SCAN Algorithm with an example?
Algorithm
Step 1: Let the Request array represent the indexes of the tracks that have been requested, in ascending order of
arrival time. ‘head’ is the current position of the disk head.
Step 2: The head services requests only while moving in one direction, say from left to right (from 0 towards the
end of the disk).
Step 3: While moving right, service the pending tracks one by one: calculate the absolute distance between each
track and the head, add it to the total seek count, and make the serviced track the new head position.
Step 4: On reaching the right end of the disk, jump back to the beginning (track 0) without servicing any tracks on
the way; add this movement to the seek count.
Step 5: Resume moving right from track 0, repeating Step 3 until all tracks in the request array have been serviced.
Example:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = right(We are moving from left to right)
Output:
Initial position of head: 50
Total number of seek operations = 389
Seek Sequence: 60, 79, 92, 114, 176, 199, 0, 11, 34, 41
The following chart shows the sequence in which requested tracks are serviced using C-SCAN.

Therefore, the total seek count is calculated as:


= (60-50) + (79-60) + (92-79) + (114-92) + (176-114) + (199-176) + (199-0) + (11-0) + (34-11) + (41-34)
= 389
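The same total can be reproduced with a small seek-count calculation (a sketch assuming a 200-cylinder disk, tracks 0-199, and an initial rightward sweep, matching the example above):

```python
def cscan_total_seek(requests, head, disk_size=200):
    """Total head movement for C-SCAN, servicing towards the right end."""
    left = sorted(r for r in requests if r < head)
    # sweep right from the head to the last cylinder (disk_size - 1)
    seek = (disk_size - 1) - head
    # wrap around from the right end back to cylinder 0
    seek += disk_size - 1
    # sweep right again, up to the largest pending request left of the start
    if left:
        seek += left[-1]
    return seek

requests = [176, 79, 34, 60, 92, 11, 41, 114]
# head at 50: (199 - 50) + 199 + 41 = 389
```

The three terms mirror the worked sum above: the rightward sweep to 199, the wrap-around to 0, and the final run out to track 41.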
50. Describe the different file allocation methods and explain their pros and cons of the method.
The allocation methods define how the files are stored in the disk blocks. There are three main disk space or file
allocation methods.
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
The main idea behind these methods is to provide:
• Efficient disk space utilization.
• Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as discussed below:
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and
is given a block b as the starting location, then the blocks assigned to the file will be: b, b+1, b+2,……b+n-1. This
means that given the starting block address and the length of the file (in terms of blocks required), we can determine
the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains
• Address of starting block
• Length of the allocated portion.
The file ‘mail’ in the following figure starts from the block 19 with length = 6 blocks. Therefore, it occupies 19, 20,
21, 22, 23, 24 blocks.
Advantages:
• Both the Sequential and Direct Accesses are supported by this. For direct access, the address of the kth block of
the file which starts at block b can easily be obtained as (b+k).
• This is extremely fast since the number of seeks required is minimal because of the contiguous allocation of file blocks.
Disadvantages:
• This method suffers from both internal and external fragmentation. This makes it inefficient in terms of memory
utilization.
• Increasing file size is difficult because it depends on the availability of contiguous memory at a particular
instance.
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk blocks can be
scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains a pointer to the
next block occupied by the file.
The file ‘jeep’ in following image shows how the blocks are randomly distributed. The last block (25) contains -1
indicating a null pointer and does not point to any other block.

Advantages:
• This is very flexible in terms of file size. File size can be increased easily since the system does not have to look
for a contiguous chunk of memory.
• This method does not suffer from external fragmentation. This makes it relatively better in terms of memory
utilization.
Disadvantages:
• Because the file blocks are distributed randomly on the disk, a large number of seeks are needed to access every
block individually. This makes linked allocation slower.
• It does not support random or direct access. We can not directly access the blocks of a file. A block k of a file can
be accessed by traversing k blocks sequentially (sequential access ) from the starting block of the file via block
pointers.
• Pointers required in the linked allocation incur some extra overhead.
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks occupied by a file.
Each file has its own index block. The ith entry in the index block contains the disk address of the ith file block. The
directory entry contains the address of the index block as shown in the image:
Advantages:
• This supports direct access to the blocks occupied by the file and therefore provides fast access to the file blocks.
• It overcomes the problem of external fragmentation.
Disadvantages:
• The pointer overhead for indexed allocation is greater than linked allocation.
• For very small files, say files that span only 2-3 blocks, indexed allocation keeps one entire block (the index
block) just for pointers, which is inefficient in terms of memory utilization. In linked allocation, by contrast,
we lose the space of only 1 pointer per block.
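The access cost of the three schemes can be contrasted in a few lines (the helper names are our own; the block numbers echo the 'mail' file above, plus a hypothetical pointer chain for a linked file):

```python
# Contiguous: block k of a file starting at block b is simply b + k
def contiguous_block(b, k):
    return b + k

# Linked: reaching block k requires following k pointers from the start
def linked_block(next_ptr, start, k):
    block = start
    for _ in range(k):
        block = next_ptr[block]    # one disk read per hop
    return block

# Indexed: the index block maps i -> disk address of the i-th file block
def indexed_block(index_block, k):
    return index_block[k]

# 'mail' occupies blocks 19..24, so its 4th block (k = 3) is block 22
assert contiguous_block(19, 3) == 22
# hypothetical linked file scattered as 9 -> 16 -> 1 -> 10 -> 25
chain = {9: 16, 16: 1, 1: 10, 10: 25}
assert linked_block(chain, 9, 4) == 25
assert indexed_block([9, 16, 1, 10, 25], 2) == 1
```

Contiguous and indexed allocation reach block k in constant time, while linked allocation pays k pointer-following disk reads, which is exactly the direct-access weakness noted above.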

51. What is a directory and list the different file attribute?


Directory
A directory in a file system is a special type of file that contains a list of files and other directories. Directories organize
files into a hierarchical structure, allowing users to manage, access, and navigate the file system efficiently. Directories
can contain:
Files: Regular files containing data.
Subdirectories: Other directories, which can themselves contain files and more subdirectories.
Directories help manage the file system by providing a way to group related files, making it easier to find and manage
files.

Different File Attributes


Files in a file system are associated with various attributes that provide information about the file and control how the
file can be accessed and manipulated. The common file attributes include:
1. Name: The human-readable identifier for the file. It typically includes a file name and an extension (e.g.,
`document.txt`).
2. Identifier: A unique identifier, often called an inode number in Unix-based systems, which is used by the file system
to refer to the file.
3. Type: The kind of file, such as a regular file, directory, symbolic link, or special file (e.g., device file).
4. Location: The address or pointer to the location of the file's data on the storage device. This could be a list of block
addresses or a more complex structure.
5. Size: The size of the file in bytes. This attribute indicates how much data the file contains.
6. Protection/Permissions: Information about who can read, write, or execute the file. In Unix-based systems, this is
represented by read (r), write (w), and execute (x) permissions for the owner, group, and others.
7. Timestamps: Various times associated with the file, including:
- Creation Time: The time when the file was created.
- Last Access Time: The time when the file was last read.
- Last Modification Time: The time when the file was last modified.
- Last Status Change Time: The time when the file's metadata (such as permissions or ownership) was last changed.
8. Ownership: Information about the owner of the file, typically including the user ID (UID) and group ID (GID) of the
file owner.
9. Attributes/Flags: Additional metadata that can affect how the file is used. For example:
- Read-Only Flag: Indicates whether the file is read-only.
- Hidden Flag: Indicates whether the file is hidden from normal directory listings.
- System Flag: Indicates whether the file is a system file.
- Archive Flag: Indicates whether the file needs to be archived.
10. File System: The type of file system on which the file resides (e.g., NTFS, ext4, FAT32).

52. Illustrate the degree of multiprogramming?


Degree of Multiprogramming
The degree of multiprogramming refers to the number of processes that are loaded into memory and are ready to be
executed simultaneously by the CPU. This concept is crucial in determining the efficiency and performance of a
multitasking operating system.
Illustration of Degree of Multiprogramming
1. Low Degree of Multiprogramming
Description: In systems with a low degree of multiprogramming, only a few processes are loaded into memory at a time.
Characteristics:
- Low CPU Utilization: The CPU may often be idle if the running process waits for I/O operations.
- Lower Throughput: Fewer processes are completed in a given time period.
- Simpler Memory Management: Easier to manage due to fewer processes.
- Reduced Context Switching: Less overhead from switching between processes.
Example:
- Only 2-3 processes are in memory.
- When one process waits for I/O, the CPU may have limited choices for another process to execute.
2. High Degree of Multiprogramming
Description: In systems with a high degree of multiprogramming, many processes are loaded into memory
simultaneously.
Characteristics:
High CPU Utilization: The CPU has multiple processes to execute, reducing idle time.
Higher Throughput: More processes are completed in a given time period.
Complex Memory Management: Requires sophisticated techniques to manage memory efficiently.
Increased Context Switching: More overhead from frequent switching between processes.
Risk of Thrashing: If too many processes compete for memory, thrashing can occur.
Example:
- 10-20 or more processes are in memory.
- When one process waits for I/O, the CPU can quickly switch to another ready process, maximizing CPU usage.
