OS Question Bank Solution

The document discusses the Process Control Block (PCB) in operating systems, detailing its structure, functions, and significance in managing processes. It explains the five-state process model, round robin scheduling, and the critical section problem, emphasizing the importance of mutual exclusion and synchronization in concurrent programming. Additionally, it outlines the advantages and disadvantages of the PCB, as well as various synchronization techniques to ensure data consistency and prevent issues like deadlocks.

OS question bank

1. Explain the structure of PCB in OS

A Process Control Block (PCB) is used by the operating system to manage information about a
process. It keeps track of the crucial data needed to manage processes efficiently, such as
registers, time quantum, and priority. The process table is an array of PCBs, which logically
contains one PCB for each current process in the system.

What is a Process Control Block (PCB)?
A Process Control Block (PCB) is a data structure that is used by an Operating System to
manage and regulate how processes are carried out. In operating systems, managing the
process and scheduling them properly play the most significant role in the efficient usage of
memory and other system resources. In the process control block, all the details regarding the
process corresponding to it like its current status, its program counter, its memory use, its open
files, and details about CPU scheduling are stored.

With the creation of a process, a PCB is created which controls how that process is being
carried out. The PCB is created with the aim of helping the OS to manage the enormous
amounts of tasks that are being carried out in the system. PCB is helpful in doing that as it
helps the OS to actively monitor the process and redirect system resources to each process
accordingly. The OS creates a PCB for every process which is created, and it contains all the
important information about the process. All this information is afterward used by the OS to
manage processes and run them efficiently.

Primary Terminologies Related to Process Control Block

Process State: The state of the process is stored in the PCB which helps to manage the
processes and schedule them. There are different states for a process which are “running,”
“waiting,” “ready,” or “terminated.”

Process ID: The OS assigns a unique identifier to every process as soon as it is created
which is known as Process ID, this helps to distinguish between processes.

Program Counter: While running processes when the context switch occurs the last
instruction to be executed is stored in the program counter which helps in resuming the
execution of the process from where it left off.

CPU Registers: The CPU registers of the process helps to restore the state of the process
so the PCB stores a copy of them.

Memory Information: The information like the base address or total memory allocated to a
process is stored in PCB which helps in efficient memory allocation to the processes.

Process Scheduling Information: The priority of the process and the scheduling algorithm are stored in the PCB to help the OS make scheduling decisions.

Accounting Information: The information such as CPU time, memory usage, etc helps the
OS to monitor the performance of the process.
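The fields listed above can be sketched as a data structure. Below is a minimal illustrative Python sketch, not any real kernel's layout (a real kernel, e.g. Linux's task_struct, stores far more, and the field names here are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                       # Process ID (unique)
    state: str = "new"                             # running / waiting / ready / terminated
    program_counter: int = 0                       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU register copies
    base_address: int = 0                          # memory information
    priority: int = 0                              # process scheduling information
    cpu_time_used: float = 0.0                     # accounting information
    open_files: list = field(default_factory=list)

# The process table is logically an array of PCBs, here indexed by PID.
process_table = {pcb.pid: pcb for pcb in [PCB(pid=1), PCB(pid=2, priority=5)]}
print(process_table[2].priority)  # 5
```

When the scheduler or a context switch needs per-process data, it looks the process up in this table and reads or updates its PCB.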

Operations Carried out through PCB

Process Scheduling: The different information like Process priority, process state, and
resources used can be used by the OS to schedule the process on the execution stack. The
scheduler checks the priority and other information to set when the process will be
executed.

Multitasking: Resource allocation, process scheduling, and process synchronization together help the OS multitask and run different processes simultaneously.

Context Switching: When a context switch happens, the OS saves the process state (the CPU register contents) into the PCB. When the CPU later switches back to that process, the OS restores the saved values from the PCB and resumes the process in its previous state.

Resource Sharing: The PCB stores information about the resources a process is using, such as open files and allocated memory. This information helps the OS coordinate access when processes share resources.

Location of The Process Control Block


The Process Control Block (PCB) is stored in a special part of memory that normal users can’t
access. This is because it holds important information about the process. Some operating
systems place the PCB at the start of the kernel stack for the process, as this is a safe and
secure spot.

Advantages of Process Table


Keeps Track of Processes: It helps the operating system know which processes are
running, waiting, or completed.

Helps in Scheduling: The process table provides information needed to decide which
process should run next.

Easy Process Management: It organizes all the details about processes in one place,
making it simple for the OS to manage them.

Advantages of PCB:
Efficient process management

Helps in scheduling

Manages resources

Enables process synchronization

Aids in debugging

Disadvantages of PCB:
Consumes memory

Increases system complexity

Slows down due to context switching

Can pose security risks

Becomes inefficient with too many processes

2. Explain the five-state process model in OS

Five-State Process Model in OS


The Five-State Process Model is used by modern operating systems to efficiently manage
processes. It helps in scheduling, resource allocation, and process execution. A process
transitions between different states based on execution needs and resource availability.

Process States and Their Explanation

1. New State
A process is created but not yet ready for execution.

The OS initializes its Process Control Block (PCB) and allocates necessary resources.

Example: A user launches a program, and the OS loads it into memory.

2. Ready State
The process is loaded into RAM and waiting for the CPU.

It is in the ready queue, scheduled for execution.

Example: Multiple programs are waiting for CPU time in a multitasking OS.

3. Running State
The process is executing on the CPU.

Only one process can be in the running state at a time in a single-core system.

The process keeps running until:

It completes execution.

It is interrupted (preempted).

It requests I/O.

4. Blocked (Waiting) State


The process pauses because it needs a resource (e.g., waiting for I/O).

It cannot execute until the resource is available.

Example: A program waiting for user input from a keyboard.

5. Terminated State
The process completes execution or is forcibly stopped.

The OS frees allocated resources and removes the PCB.

Example: A program finishes execution or crashes.

State Transitions
1. New → Ready (Process is created and ready for execution)

2. Ready → Running (CPU scheduler picks a process)

3. Running → Blocked (Process requests I/O and waits)

4. Blocked → Ready (I/O completes, process goes back to ready queue)

5. Running → Ready (Process is preempted by OS)

6. Running → Terminated (Process completes or crashes)

This model ensures efficient CPU utilization and smooth process management in an OS.
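The six transitions above can be sketched as a small state machine. This is an illustrative Python sketch; the state and transition names come from the model above, not from any particular kernel:

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"
    TERMINATED = "terminated"

# The six legal transitions of the five-state model.
TRANSITIONS = {
    State.NEW: {State.READY},                                      # admitted
    State.READY: {State.RUNNING},                                  # dispatched
    State.RUNNING: {State.READY, State.BLOCKED, State.TERMINATED}, # preempt / I/O wait / exit
    State.BLOCKED: {State.READY},                                  # I/O complete
    State.TERMINATED: set(),
}

def transition(current, target):
    """Move to `target`, rejecting transitions the model doesn't allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target

# A typical lifecycle: created, scheduled, waits for I/O, resumes, finishes.
s = State.NEW
for nxt in (State.READY, State.RUNNING, State.BLOCKED,
            State.READY, State.RUNNING, State.TERMINATED):
    s = transition(s, nxt)
print(s.name)  # TERMINATED
```

Note that a process can never jump, for example, from NEW straight to RUNNING; it must pass through READY, which is exactly what the transition table enforces.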

3. Round Robin Problem

Round Robin Scheduling – Detailed Explanation


The Round Robin (RR) scheduling algorithm is a preemptive CPU scheduling technique mainly
used in time-sharing systems. It assigns a fixed time quantum (or time slice) to each process
in a cyclic manner.

1. Criteria for Round Robin


Type: Preemptive Scheduling

Fairness: Each process gets equal CPU time in a cyclic order.

Response Time: Fast response time as no process waits too long.

Starvation: Not possible, as every process gets CPU time fairly.

Overhead: Frequent context switching can reduce efficiency.

2. Mode of Execution

1. Processes are placed in a queue in the order of their arrival.

2. CPU executes a process for a fixed time slice (quantum).

3. If a process doesn’t complete within the quantum, it is preempted and moved to the end
of the queue.

4. If the process completes execution, it terminates and is removed from the queue.

5. This continues until all processes are completed.

3. Table Format for RR Scheduling


| Process | Arrival Time (AT) | Burst Time (BT) | Completion Time (CT) | Turnaround Time (TAT) | Waiting Time (WT) |
| --- | --- | --- | --- | --- | --- |
| P1 | X | Y | ? | ? | ? |

Note: Values are filled in after solving the problem using a Gantt chart.

4. Important Formulas
Completion Time (CT): Time at which a process finishes execution.

Turnaround Time (TAT): TAT = CT − AT

Waiting Time (WT): WT = TAT − BT
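The execution steps and formulas above can be turned into a short simulation. This is an illustrative sketch; the workload (P1 to P3 with made-up arrival and burst times, quantum 2) is a hypothetical example, not from the question bank:

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate Round Robin scheduling.

    processes: list of (name, arrival_time, burst_time) tuples.
    Returns {name: (CT, TAT, WT)} using TAT = CT - AT and WT = TAT - BT.
    """
    procs = sorted(processes, key=lambda p: p[1])
    arrival = {name: at for name, at, _ in procs}
    burst = {name: bt for name, _, bt in procs}
    remaining = dict(burst)
    ready, result = deque(), {}
    time, i = 0, 0
    while i < len(procs) or ready:
        # Admit every process that has arrived by the current time.
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])
            i += 1
        if not ready:               # CPU idle: jump to the next arrival
            time = procs[i][1]
            continue
        name = ready.popleft()
        slice_ = min(quantum, remaining[name])
        time += slice_
        remaining[name] -= slice_
        # Arrivals during this slice queue up before the preempted process.
        while i < len(procs) and procs[i][1] <= time:
            ready.append(procs[i][0])
            i += 1
        if remaining[name] > 0:
            ready.append(name)      # preempted: moved to the end of the queue
        else:
            ct = time
            tat = ct - arrival[name]
            result[name] = (ct, tat, tat - burst[name])
    return result

# Hypothetical workload: (name, AT, BT), quantum = 2.
table = round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)], quantum=2)
print(table["P1"])  # (9, 9, 4)
```

Tracing the queue by hand for this workload gives the Gantt order P1, P2, P3, P1, P2, P1, which is exactly what the simulation reproduces.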

Q2(a): Explain the critical section problem.

Answer:
The critical section problem occurs in concurrent programming when multiple processes or
threads need access to shared resources. A critical section is a part of the code where shared
resources, such as memory or files, are accessed. To avoid data inconsistencies and ensure
proper execution, we must control access to the critical section.

The problem arises due to:


1. Race Conditions: If multiple processes access the critical section simultaneously,
unexpected results may occur.

2. Data Corruption: Shared data can become inconsistent.

3. Deadlocks: Processes waiting indefinitely for resource release.

Solution to the Critical Section Problem
The Critical Section Problem requires an efficient solution to manage synchronization among
multiple processes. The solution must ensure that processes can safely execute their critical
sections without conflicts or data inconsistencies.

To achieve this, any solution must satisfy the following three essential conditions:

1. Mutual Exclusion
Definition: Only one process should be allowed inside the critical section at any given
time.

Reason: If multiple processes enter the critical section simultaneously, shared data may
become corrupted or inconsistent due to race conditions.

Implementation:

Locks (Mutex) – A process acquires a lock before entering and releases it after exiting.

Semaphores – A binary semaphore (value 0 or 1) can be used to control access.

Test-and-Set or Compare-and-Swap Instructions in hardware.

2. Progress
Definition: If the critical section is free, a process must not be unnecessarily delayed from
entering.

Reason: If processes that are not interested in entering the critical section block others, it
leads to wasted CPU cycles and inefficient execution.

Implementation:

Proper process scheduling – The OS must allow a process to enter as soon as the
critical section becomes free.

Fair lock mechanisms – Avoid processes being unfairly prioritized over others.

3. Bounded Waiting
Definition: Every process should get a fair chance to execute within a finite amount of
time. No process should be indefinitely delayed (starvation).

Reason: If a process is forced to wait indefinitely due to repeated access by other processes, it leads to process starvation and unfair resource allocation.

Implementation:

FIFO-based locking mechanisms – Ensure processes get access in the order they
requested.

Priority Scheduling – Assign fair priority levels to prevent starvation.

Ticket Locks or Fair Semaphores – Prevent long-term waiting by keeping track of requests.
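The mutual exclusion condition above can be demonstrated with a lock. This is a minimal sketch using Python's threading.Lock as the mutex; the shared variable, thread count, and iteration count are made-up values for illustration:

```python
import threading

lock = threading.Lock()   # the mutex guarding the critical section
balance = 0               # shared data accessed by all threads

def deposit(times):
    global balance
    for _ in range(times):
        with lock:         # acquire before entering, release on exit
            balance += 1   # critical section: only one thread at a time

threads = [threading.Thread(target=deposit, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 200000: no updates are lost
```

Without the lock, two threads could read the same old value of balance and both write back value + 1, losing one update, which is precisely the race condition the critical section conditions are designed to prevent.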

Q. 2b) What is mutual exclusion? Explain its significance.

Mutual Exclusion: Definition & Significance

What is Mutual Exclusion?


Mutual Exclusion (Mutex) is a fundamental concept in concurrent programming that ensures
only one process or thread can access a shared resource (such as a critical section) at a
time. It prevents race conditions and ensures data consistency in multi-processing and multi-
threading environments.

In simple terms, mutual exclusion acts as a lock mechanism that prevents multiple processes
from interfering with each other when accessing shared data.

Significance of Mutual Exclusion


Mutual exclusion is crucial for system stability and data integrity. Its significance can be
understood through the following points:

1. Prevents Race Conditions


A race condition occurs when multiple processes access and modify shared data
simultaneously, leading to unpredictable outcomes. Mutual exclusion prevents this by allowing
only one process to enter the critical section at a time.

2. Ensures Data Consistency and Integrity


Without mutual exclusion, concurrent access to shared variables can lead to inconsistent or
corrupt data. Ensuring exclusive access maintains data reliability.

3. Avoids Deadlocks and Starvation
Deadlocks occur when two or more processes wait indefinitely for a resource held by each
other. Proper mutual exclusion mechanisms help in deadlock prevention.

Starvation happens when a process is denied access to resources for a long time due to
unfair scheduling. Well-designed mutual exclusion algorithms ensure fairness.

4. Improves Process Synchronization


Operating systems and applications rely on mutual exclusion to synchronize processes
efficiently. It helps in structured execution, avoiding unnecessary delays and conflicts.

5. Enhances System Performance and Stability


By controlling access to shared resources, mutual exclusion ensures that the system operates
smoothly without frequent crashes or inconsistencies.

Methods to Achieve Mutual Exclusion


Several techniques are used to implement mutual exclusion:

1. Hardware-Based Approaches

Disabling Interrupts

Test-and-Set Lock (TSL) Instruction

2. Software-Based Solutions

Peterson’s Algorithm

Bakery Algorithm

3. Synchronization Primitives

Mutex (Locking Mechanism) – Used in multi-threaded applications.

Semaphores – Counting and binary semaphores help manage resource access.

Monitors – High-level abstraction for controlling access to critical sections.
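Of the software-based solutions listed above, Peterson's algorithm is simple enough to sketch for two threads. This is an illustrative Python version; it works in CPython because the GIL gives sequentially consistent memory, whereas a C version would need atomic operations or memory barriers. The iteration count and switch interval are made-up values chosen so the demo finishes quickly:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the demo runs quickly

N = 5_000
flag = [False, False]  # flag[i]: thread i wants to enter its critical section
turn = 0               # which thread yields when both are interested
counter = 0            # shared resource updated inside the critical section

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(N):
        flag[me] = True                        # 1. announce intent
        turn = other                           # 2. politely let the other go first
        while flag[other] and turn == other:
            pass                               # 3. busy-wait while the other is inside
        counter += 1                           # critical section
        flag[me] = False                       # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 10000 when mutual exclusion holds
```

The algorithm satisfies all three conditions from Q2(a): mutual exclusion (only one thread passes the spin loop), progress (if the other thread is not interested, flag[other] is False and entry is immediate), and bounded waiting (the turn variable guarantees alternation when both contend).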

Q. 2c) Explain Synchronization


Process Synchronization is used in a computer system to ensure that multiple processes or
threads can run concurrently without interfering with each other.

The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other and to prevent the possibility of
inconsistent data due to concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems. Process
synchronization is an important aspect of modern operating systems, and it plays a crucial role
in ensuring the correct and efficient functioning of multi-process systems.

On the basis of synchronization, processes are categorized as one of the following two types:

Independent Process: The execution of one process does not affect the execution of other
processes.

Cooperative Process: A process that can affect or be affected by other processes executing in the system.

The process synchronization problem arises in the case of cooperative processes, because cooperative processes share resources.

Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner. It aims to resolve the problem of race conditions and other synchronization issues in a
concurrent system.

Lack of synchronization in an Inter-Process Communication environment leads to the following problems:

1. Inconsistency: When two or more processes access shared data at the same time without
proper synchronization. This can lead to conflicting changes, where one process’s update
is overwritten by another, causing the data to become unreliable and incorrect.

2. Loss of Data: Loss of data occurs when multiple processes try to write or modify the same
shared resource without coordination. If one process overwrites the data before another
process finishes, important information can be lost, leading to incomplete or corrupted
data.

3. Deadlock: Lack of Synchronization leads to Deadlock which means that two or more
processes get stuck, each waiting for the other to release a resource. Because none of the

processes can continue, the system becomes unresponsive and none of the processes can
complete their tasks.

Types of Process Synchronization


The two primary types of process synchronization in an operating system are:

1. Competitive: Two or more processes are in competitive synchronization if and only if they compete for access to a shared resource. Lack of synchronization among competing processes may lead to either inconsistency or data loss.

2. Cooperative: Two or more processes are in cooperative synchronization if and only if they affect each other, i.e. the execution of one process affects the other. Lack of synchronization among cooperating processes may lead to deadlock.

Example:

Consider the following Linux pipeline:

ps | grep "chrome" | wc

The ps command produces a list of the processes running in Linux.

The grep command filters the lines containing "chrome" from the output of ps.

The wc command counts the lines, words, and characters in the output of grep.

Therefore, three processes are created: ps, grep, and wc. grep takes its input from ps, and wc takes its input from grep.

From this example, we can understand the concept of cooperative processes, where some
processes produce and others consume, and thus work together. This type of problem must be
handled by the operating system, as it is the manager.
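The produce/consume cooperation described above can be sketched with a counting semaphore, one of the synchronization techniques named earlier. This is an illustrative Python sketch; the buffer, item count, and variable names are made up for the example:

```python
import threading
from collections import deque

buffer = deque()                 # shared buffer, like the pipe between ps and grep
items = threading.Semaphore(0)   # counts items available to consume
buffer_lock = threading.Lock()   # protects the buffer itself
consumed = []

def producer():
    for i in range(5):
        with buffer_lock:
            buffer.append(i)     # produce an item
        items.release()          # signal: one more item is available

def consumer():
    for _ in range(5):
        items.acquire()          # block until the producer signals
        with buffer_lock:
            consumed.append(buffer.popleft())

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

The semaphore makes the consumer wait whenever the buffer is empty, so the two cooperating processes never lose data or read an item that hasn't been produced yet.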

Conditions That Require Process Synchronization


Critical Section:

A part of a program where shared resources are used.

Only one process should access it at a time to avoid conflicts.

If no shared resources exist, synchronization isn’t needed.

Race Condition:

Happens when multiple processes try to modify shared data.

The final result depends on execution order, leading to inconsistencies.

Synchronization ensures operations happen in the correct sequence.

Preemption:

When the OS pauses a process to let another run.

If a process is preempted while using shared data, another may read incomplete or
incorrect values.

Synchronization prevents such errors by managing access properly.

Chmod question

The chmod (Change Mode) command in Linux is used to change the permissions of a file or directory. It controls who can read (r), write (w), or execute (x) a file. The command is essential for file security and access control: using the symbolic or numeric method, you can efficiently set permissions for different users and groups.

File Permissions in Linux

Each file has three types of permissions:

1. Read ( r ) → Allows viewing the file's content.

2. Write ( w ) → Allows modifying the file's content.

3. Execute ( x ) → Allows running the file as a program/script.

Permission Representation
Permissions are assigned to three user groups:

User Groups:
1. Owner (u) – The user who created the file.

2. Group (g) – Other users in the same group.

3. Others (o) – Everyone else (public users).

Permission Types:
| Symbol | Permission | Numeric Value | Meaning |
| --- | --- | --- | --- |
| r | Read | 4 | Allows reading the file content |
| w | Write | 2 | Allows modifying the file |
| x | Execute | 1 | Allows running the file as a program/script |
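In numeric (octal) mode, each digit is simply the sum of the values in the table, for the owner, group, and others in that order. A tiny helper (the function name is made up for illustration) shows the arithmetic:

```python
def octal_mode(owner, group, others):
    """Each octal digit is the sum of read=4, write=2, execute=1."""
    values = {"r": 4, "w": 2, "x": 1}
    digit = lambda perms: sum(values[p] for p in perms)
    return f"{digit(owner)}{digit(group)}{digit(others)}"

print(octal_mode("rwx", "rx", "rx"))  # 755
print(octal_mode("rw", "r", "r"))     # 644
print(octal_mode("rwx", "rwx", "rwx"))  # 777
```

So chmod 755 means rwx (4+2+1) for the owner and r-x (4+1) for the group and others, matching the examples below.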

Usage of chmod:

1. Using Symbolic Mode

chmod u+r file.txt # Add read permission for the owner
chmod g-w file.txt # Remove write permission from the group
chmod o+x file.txt # Add execute permission for others
chmod u+x,g+r file.sh # Add execute for user, read for group

2. Using Numeric (Octal) Mode

chmod 755 file.txt # Owner (rwx), Group (r-x), Others (r-x)
chmod 644 file.txt # Owner (rw-), Group (r--), Others (r--)
chmod 777 file.txt # Everyone gets full access (rwx)

Examples:
1. Giving full permission to the owner, read/execute to others:

chmod 755 script.sh

2. Removing execute permission from all users:

chmod a-x myfile

3. Giving write permission to the group:

chmod g+w document.txt

Difference Between User-Level Thread and Kernel-Level Thread

| Parameters | User-Level Thread | Kernel-Level Thread |
| --- | --- | --- |
| Implemented by | User threads are implemented by user-level libraries. | Kernel threads are implemented by the Operating System (OS). |
| Recognition | The operating system doesn't recognize user-level threads directly. | Kernel threads are recognized by the operating system. |
| Implementation | Implementation of user threads is easy. | Implementation of kernel-level threads is complicated. |
| Context switch time | Context switch time is less. | Context switch time is more. |
| Hardware support | No hardware support is required for context switching. | Hardware support is needed. |
| Blocking operation | If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution. |
| Multithreading | Multithreaded applications cannot take full advantage of multiprocessing. | Kernels can be multithreaded. |
| Creation and management | User-level threads can be created and managed more quickly. | Kernel-level threads take more time to create and manage. |
| Operating system | Any operating system can support user-level threads. | Kernel-level threads are operating-system-specific. |
| Thread management | Managed by a thread library at the user level. | The application code doesn't contain thread management code; it uses an API to the kernel. |
| Example | POSIX threads, Mach C-Threads. | Java threads, POSIX threads on Linux. |
| Advantages | Simple and quick to create, more portable, does not require kernel-mode privileges for context switching. | Allows true parallelism and multithreading in kernel routines; execution can continue if one thread is blocked. |
| Disadvantages | Cannot fully utilize multiprocessing; the entire process is blocked if one thread blocks. | Requires more time to create and manage; involves mode switching to kernel mode. |
| Memory management | Each thread has its own stack, but all threads share the same address space. | Each kernel thread has its own kernel stack, so threads are better isolated from each other. |
| Fault tolerance | Less fault-tolerant: if a user-level thread crashes, it can bring down the entire process. | Kernel-level threads are managed independently, so one thread crashing doesn't necessarily affect the others. |
| Resource utilization | Limited access to system resources; cannot directly perform I/O operations. | Can access system-level features like I/O operations. |
| Portability | More portable than kernel-level threads. | Less portable due to dependence on OS-specific kernel implementations. |

Functions of an Operating System (OS)


An Operating System (OS) is system software that manages computer hardware and software
resources while providing essential services for applications. Below are its key functions:

1. Process Management
Handles the creation, scheduling, and termination of processes.

Manages CPU scheduling to ensure fair execution.

Provides process synchronization and inter-process communication (IPC).

🔹 Example: The OS ensures multiple applications (e.g., browser, media player) run smoothly
without conflicts.

2. Memory Management
Allocates and deallocates memory to processes.

Uses techniques like paging, segmentation, and virtual memory to optimize memory
usage.

Prevents memory leaks and unauthorized access.

🔹 Example: Running multiple apps without exhausting RAM, thanks to virtual memory.
3. File System Management
Organizes and stores data efficiently.

Provides file operations (create, read, write, delete, rename).

Manages file permissions for security.

🔹 Example: Windows uses NTFS, while Linux uses EXT4 for file management.
4. Device Management
Controls hardware devices (printers, USBs, hard drives).

Uses device drivers to communicate between hardware and software.

Ensures proper input/output operations.

🔹 Example: Plugging in a USB drive automatically detects and allows file access.
5. Security and Protection
Prevents unauthorized access using authentication (passwords, biometrics).

Provides firewalls, encryption, and access control.

Protects system resources from viruses and malware.

🔹 Example: User login and permission settings in Linux and Windows.


6. User Interface (UI) & Command Execution
Provides Graphical User Interface (GUI) and Command-Line Interface (CLI).

Enables users to interact with the system via icons, windows, or commands.

🔹 Example: Windows provides GUI, while Linux has both GUI and CLI (Terminal).
7. Deadlock Prevention & Handling
Manages resources to avoid deadlocks where processes wait indefinitely.

Uses techniques like resource allocation graphs and priority scheduling.

🔹 Example: Preventing two applications from locking the printer simultaneously.


8. Networking & Communication
Enables data transfer between systems via networks (Internet, LAN, WAN).

Supports protocols like TCP/IP for secure communication.

🔹 Example: Browsing the internet, downloading files, or cloud computing.

