
AKB - 2023

1(d) What is the message passing approach for inter-process communication? Discuss the various design issues of the message passing approach.
Ans:
Message Passing Approach for IPC
The message passing approach is used in inter-process communication
to exchange information between processes without sharing memory.
Processes communicate by sending and receiving messages, either
directly or indirectly (via mailboxes/queues). This method is widely used in
distributed systems.

Design Issues in Message Passing

1. Synchronization:
○ Communication can be blocking (synchronous) or non-blocking (asynchronous). A blocking send or receive waits until the operation completes, which simplifies coordination but may cause delays, while a non-blocking call returns immediately and is faster but more complex to manage (see the sketch after this list).
2. Reliability:
○ Ensuring messages are not lost due to system failures.
Mechanisms like acknowledgments, retransmissions, and
timeouts are used.
3. Message Ordering:
○ Messages should be received in the same order they were
sent, which can be challenging in distributed systems.
4. Performance:
○ Managing overhead caused by message creation,
transmission, and processing, especially with large or
frequent messages.
5. Security:
○ Protecting messages from unauthorized access and
tampering using encryption and authentication.
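
As an illustration (not part of the original answer), here is a minimal C sketch of message passing on a POSIX system: a parent process sends a string to a child through a pipe, and the blocking read() shows the synchronous style discussed above. The message text and buffer size are arbitrary.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                                    /* child: receiver */
        close(fd[1]);                                  /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1); /* blocks until a message arrives */
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                                           /* parent: sender */
        close(fd[0]);                                  /* close unused read end */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                                    /* wait for the child to exit */
    }
    return 0;
}

A non-blocking variant could set O_NONBLOCK on the read end with fcntl(), at the cost of having to retry when no message is available.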

2(a) What is an Operating System? Explain how the operating system acts as a resource manager.

Ans:
An Operating System (OS) is system software that acts as an intermediary between the user and the computer hardware. It manages computer hardware resources and provides services for application software. The OS enables users to interact with the computer without needing to understand the hardware details.

The operating system acts as a resource manager by efficiently managing and allocating system resources (CPU, memory, storage, I/O devices) among competing processes. This ensures that hardware and software resources are used effectively and fairly.

● Process Management: Handles the creation, scheduling, and termination of processes.
● Memory Management: Allocates and manages memory for programs and data.
● File System Management: Manages files and directories on storage devices.
● Device Management: Controls and coordinates the use of hardware devices.
● Security and Access Control: Protects the system from unauthorized access.
● User Interface: Provides a command-line or graphical interface for interaction.

2(b) What is inter-process communication? Illustrate any one classical inter-process communication problem.

Ans:

Inter-process Communication (IPC) refers to the mechanisms and methods used by different processes (programs or tasks) to communicate with each other and exchange data. In modern operating systems, multiple processes often need to work together, share data, or synchronize their activities. IPC allows for this communication, ensuring processes can interact and coordinate efficiently.

IPC methods include shared memory, message passing, semaphores, pipes, and sockets, among others.

The Producer-Consumer Problem is a classic example of an inter-process communication problem, where two types of processes, producers and consumers, need to synchronize their actions while sharing a common resource.

● Producers are processes that generate data or resources (e.g., produce items in a factory) and store them in a buffer (also known as a shared resource or queue).
● Consumers are processes that take data or resources from the buffer and use them (e.g., consume the produced items).
● The buffer has limited size, so the producer should not add more items than the buffer can hold, and the consumer should not attempt to consume an item if the buffer is empty.

The challenge is to manage the synchronization between the producer and the consumer to avoid race conditions and ensure that no process exceeds the buffer's capacity or operates on an empty buffer.

This problem can be solved using semaphores to control access to the buffer and ensure synchronization, as sketched below:

● Semaphore empty: Keeps track of the number of empty slots in the buffer. It is initialized to the size of the buffer.
● Semaphore full: Keeps track of the number of filled slots in the buffer. It is initialized to 0.
● Semaphore mutex: A mutual exclusion semaphore used to protect access to the critical section of the buffer, preventing race conditions.
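
A minimal sketch of this semaphore solution in C, assuming a POSIX system with pthreads; here a pthread mutex plays the role of the mutex semaphore, and the buffer size and item count are arbitrary. Compile with gcc -pthread.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 5
#define ITEMS    10

int buffer[BUF_SIZE];
int in = 0, out = 0;

sem_t empty;            /* counts empty slots, initialized to BUF_SIZE */
sem_t full;             /* counts filled slots, initialized to 0 */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;  /* protects the buffer */

void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);                 /* wait for an empty slot */
        pthread_mutex_lock(&mutex);
        buffer[in] = i;                   /* produce item i */
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&full);                  /* signal a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);                  /* wait for a filled slot */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);                 /* signal an empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, BUF_SIZE);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&empty);
    sem_destroy(&full);
    return 0;
}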

3(b) Explain process hierarchies in the light of GNU/Linux operating systems.
Ans:
In the context of the GNU/Linux operating system, process
hierarchies refer to the organization and relationship between
processes that are running on the system. Every process in Linux,
like in most Unix-like operating systems, is part of a hierarchy,
where processes are organized in a tree structure. This
hierarchical structure allows processes to be related to each other
in terms of their parent-child relationships.

The process hierarchy in Linux is a tree structure:

1. init/systemd: The root process, which starts when the system boots. It manages system processes and services.
2. Children of init/systemd: System services and daemons that manage parts of the system (e.g., networking, logging).
3. User Processes: Processes initiated by users, such as running applications from a shell (e.g., bash).
4. Grandchild Processes: Processes created by user processes (e.g., a text editor spawning subprocesses).

This hierarchy ensures organized process management and system stability.
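
A small illustrative C program (not from the original answer) that creates one parent-child link of such a hierarchy with fork(); getppid() in the child shows the same relationship that pstree -p displays for the whole system.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    printf("my parent (the shell, itself a descendant of init/systemd) is %d\n", getppid());

    pid_t pid = fork();                 /* create a child one level below us */
    if (pid == 0) {
        printf("child  %d: my parent is %d\n", getpid(), getppid());
    } else {
        printf("parent %d: created child %d\n", getpid(), pid);
        wait(NULL);                     /* reap the child so it does not become a zombie */
    }
    return 0;
}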

4(a) Compare and contrast the following resource allocation policies:
i. All resources requested together
ii. Allocation using global numbering
iii. Allocation using Banker's algorithm
Ans:

All Resources Requested Together is the simplest policy: a process must request every resource it needs at once, before execution begins. Because a process never holds some resources while waiting for others, hold-and-wait (and hence deadlock) is prevented, but resources sit idle for long periods, so the policy suits only simple, static environments.
Allocation Using Global Numbering assigns every resource type a global number and requires processes to request resources in increasing order. This prevents circular waits and therefore deadlocks, but processes may have to acquire resources earlier than needed and hold them longer than necessary.
Banker's Algorithm provides the most sophisticated deadlock avoidance and efficient resource utilization, but it is computationally expensive and requires knowing each process's maximum resource needs in advance.
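
For illustration, a compact C sketch of the safety check at the heart of the Banker's algorithm, using a small hand-made example (3 processes, 2 resource types); the allocation, maximum, and available values are invented for the demo.

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes (hypothetical example) */
#define R 2   /* resource types */

int main(void) {
    int alloc[P][R] = {{1,0},{1,1},{0,1}};   /* currently allocated */
    int max[P][R]   = {{3,1},{2,2},{1,2}};   /* maximum claimed */
    int avail[R]    = {1,1};                 /* currently free */

    int need[P][R];
    bool finished[P] = {false};

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];

    /* The state is safe if every process can eventually finish using the
       available resources plus those released by processes that finish first. */
    int done = 0;
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {                        /* pretend Pi runs to completion */
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];      /* it releases what it held */
                finished[i] = true;
                printf("P%d can finish\n", i);
                done++;
                progress = true;
            }
        }
    }
    printf(done == P ? "state is SAFE\n" : "state is UNSAFE\n");
    return 0;
}

With these numbers the safe sequence found is P1, P2, P0, so a request would only be granted if the resulting state still passes this check.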
4(b) What is a deadlock? Write the necessary conditions
that cause deadlock situations to occur.

Ans:

Deadlock is a situation in a multi-tasking environment, typically in operating systems, where a set of processes is blocked because each process is holding a resource and waiting for a resource that another process holds. As a result, the processes cannot proceed, and the system enters a state where no process can make progress.

Necessary Conditions for Deadlock:

1. Mutual Exclusion:
○ At least one resource must be held in a non-shareable
mode, meaning only one process can use it at a time.
○ Example: A printer can only be used by one process at
a time.
2. Hold and Wait:
○ A process must hold at least one resource and wait for
others that are held by other processes.
○ Example: A process holding a printer waits for a
scanner.
3. No Preemption:
○ Resources cannot be forcibly taken from processes.
They must be released voluntarily.
○ Example: The operating system cannot take away a
printer from a process holding it.
4. Circular Wait:
○ A cycle must exist where each process is waiting for a
resource held by the next process.
○ Example: Process A waits for a resource held by B, B
waits for C, and C waits for A.

Deadlock occurs when all four conditions (Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait) are met. In such a situation, processes cannot proceed, and the system is stuck in a state where no process can be completed.
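
A deliberately broken C sketch (illustrative only) in which two threads meet all four conditions: each holds one mutex and waits for the other, producing a circular wait, so the program hangs. Locking both mutexes in the same order in every thread removes the circular wait and prevents the deadlock.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;   /* stands in for the printer */
pthread_mutex_t scanner = PTHREAD_MUTEX_INITIALIZER;   /* stands in for the scanner */

void *thread_a(void *arg) {
    pthread_mutex_lock(&printer);          /* holds the printer ...            */
    sleep(1);                              /* give B time to grab the scanner  */
    pthread_mutex_lock(&scanner);          /* ... and waits for the scanner    */
    printf("A got both\n");
    pthread_mutex_unlock(&scanner);
    pthread_mutex_unlock(&printer);
    return NULL;
}

void *thread_b(void *arg) {
    pthread_mutex_lock(&scanner);          /* holds the scanner ...            */
    sleep(1);
    pthread_mutex_lock(&printer);          /* ... and waits for the printer    */
    printf("B got both\n");
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&scanner);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);                 /* never returns: the threads are deadlocked */
    pthread_join(b, NULL);
    return 0;
}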

5(b) Ans:

A system call is a programmatic way for a process to request a service from the operating system's kernel. System calls provide the interface between a running process and the operating system, allowing processes to perform tasks such as interacting with hardware, file management, process control, and communication.
System calls are essential because they provide a controlled and secure way for user-level applications to access kernel-level services.

Five Common System Calls with Functions and Syntax:

1. fork() - Create a new process. Syntax: pid_t fork(void);
2. exec() - Replace the current process image with a new program. Syntax (one variant of the family): int execvp(const char *file, char *const argv[]);
3. read() - Read from a file descriptor. Syntax: ssize_t read(int fd, void *buf, size_t count);
4. write() - Write to a file descriptor. Syntax: ssize_t write(int fd, const void *buf, size_t count);
5. close() - Close a file descriptor. Syntax: int close(int fd);
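
A short hedged example tying the five calls together on a POSIX system; the file names input.txt and copy.txt are placeholders, and execlp() is used here as one member of the exec() family.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                                /* 1. create a new process */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);        /* 2. child runs a new program */
        perror("execlp");                              /* reached only if exec fails */
        return 1;
    }

    char buf[256];
    int in  = open("input.txt", O_RDONLY);             /* placeholder input file */
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0)       /* 3. read from a descriptor */
        write(out, buf, n);                            /* 4. write to a descriptor */
    close(in);                                         /* 5. release the descriptors */
    close(out);
    wait(NULL);                                        /* reap the child */
    return 0;
}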

6(a) Ans:

i. Process vs. Thread

● Process:
○ A process is an independent program in execution, with its own memory space, code, and resources.
○ It is the basic unit of resource allocation in an operating system.
○ Processes do not share memory, making them more isolated and resource-intensive.
○ Example: Running a word processor or a web browser.

● Thread:
○ A thread is a smaller unit of a process, which can run concurrently with other threads within the same process.
○ Threads share the same memory space, making them lightweight and easier to manage in terms of communication.
○ Threads are more efficient for tasks that need concurrent execution but share data.
○ Example: A web browser process might have separate threads for rendering pages, handling user input, and downloading content.
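
A brief C/pthreads sketch (illustrative, not from the original answer) showing that threads share the process's memory: two threads increment the same global counter, something two fork()-ed processes could not do without explicit IPC.

#include <stdio.h>
#include <pthread.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* shared data still needs synchronization */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000: both threads updated the same variable */
    return 0;
}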

ii. Thread Scheduling

● Thread Scheduling is the process of determining the order in which threads will be executed by the CPU.
● Threads within a process share the same resources, so the operating system must manage their execution to optimize CPU usage.
● Thread scheduling algorithms, such as Round Robin, Priority Scheduling, and Multilevel Queue Scheduling, define the strategy for allocating CPU time to different threads.
● Thread scheduling ensures efficient CPU utilization, responsiveness, and fairness in a multi-threaded environment.
iii. Priority Scheduling

● Priority Scheduling is a CPU scheduling algorithm where each process (or thread) is assigned a priority, and the CPU is allocated to the process with the highest priority.
● Preemptive version: If a new process arrives with a higher priority than the currently running process, it preempts the running process.
● Non-preemptive version: Once a process starts execution, it runs until completion or voluntarily yields control.
● The priority can be static or dynamic. Static priority remains unchanged, while dynamic priority may change based on process behavior.
● Issue: Low-priority processes may suffer from starvation, where they are never executed because high-priority processes continuously preempt them (a small non-preemptive example is sketched after this list).
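
As a worked illustration with invented processes, a small C simulation of non-preemptive priority scheduling (lower number = higher priority, all processes arriving at time 0): the scheduler repeatedly picks the highest-priority remaining process and runs it to completion, so low-priority processes wait longest.

#include <stdio.h>

struct proc { const char *name; int burst; int priority; };

int main(void) {
    struct proc p[] = { {"P1", 10, 3}, {"P2", 1, 1}, {"P3", 2, 4}, {"P4", 5, 2} };
    int n = 4, done[4] = {0}, time = 0;

    for (int count = 0; count < n; count++) {
        int best = -1;
        for (int i = 0; i < n; i++)                /* choose the highest-priority ready process */
            if (!done[i] && (best < 0 || p[i].priority < p[best].priority))
                best = i;
        printf("%s: waits %2d, runs %2d..%2d\n",
               p[best].name, time, time, time + p[best].burst);
        time += p[best].burst;                     /* non-preemptive: runs to completion */
        done[best] = 1;
    }
    return 0;
}

The output order is P2, P4, P1, P3; P3, the lowest-priority process, waits the longest, which is the starvation risk mentioned above.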

iv. Readers and Writers Problem

● The Readers and Writers Problem is a classic synchronization problem in which multiple processes (readers and writers) need access to a shared resource (such as a database or file).
○ Readers: Multiple readers can read the shared resource simultaneously without conflict.
○ Writers: A writer requires exclusive access to the resource, meaning no other writers or readers can access the resource during writing.
● Goal: To allow readers to access the shared resource simultaneously, but ensure writers have exclusive access when writing. The challenge is to ensure mutual exclusion and avoid conflicts (such as allowing a writer to access the resource while a reader is reading).

● Solutions:
○ First Readers-Writers Problem: Gives priority to readers; no reader is kept waiting unless a writer already holds the resource, so writers may starve.
○ Second Readers-Writers Problem: Gives priority to writers; once a writer is waiting, no new readers may start, preventing indefinite blocking of writers by incoming readers. (A reader-priority sketch follows below.)
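
A minimal C sketch of the first (reader-priority) solution using POSIX semaphores, as an illustration; shared_data stands in for the shared file or database, and the thread counts are arbitrary.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

int shared_data = 0;
int read_count = 0;
sem_t rw_mutex;          /* exclusive access for writers (taken by the first reader) */
sem_t mutex;             /* protects read_count */

void *reader(void *arg) {
    sem_wait(&mutex);
    if (++read_count == 1) sem_wait(&rw_mutex);   /* first reader locks out writers */
    sem_post(&mutex);

    printf("reader sees %d\n", shared_data);      /* many readers may be here at once */

    sem_wait(&mutex);
    if (--read_count == 0) sem_post(&rw_mutex);   /* last reader lets writers back in */
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg) {
    sem_wait(&rw_mutex);                          /* exclusive access */
    shared_data++;
    printf("writer wrote %d\n", shared_data);
    sem_post(&rw_mutex);
    return NULL;
}

int main(void) {
    pthread_t r1, r2, w;
    sem_init(&rw_mutex, 0, 1);
    sem_init(&mutex, 0, 1);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL); pthread_join(r2, NULL); pthread_join(w, NULL);
    return 0;
}

Because readers keep rw_mutex held as long as any reader is active, a steady stream of readers can starve the writer, which is exactly the weakness the second (writer-priority) variant addresses.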

v. Process Control Block (PCB)

A Process Control Block (PCB) is a data structure in the operating system that stores all the information about a process.

It contains the essential details required for process management and execution. Key components of a PCB include:

○ Process ID (PID): Unique identifier for the process.
○ Process State: Current state of the process (e.g.,
running, waiting, ready).
○ Program Counter (PC): The address of the next
instruction to be executed.
○ CPU Registers: Store the process's register values
when it is not running.
○ Memory Management Information: Includes pointers
to the memory allocated to the process, such as base
and limit registers.
○ I/O Information: List of devices or files the process is
using.
○ Scheduling Information: Priority and other scheduling
details.

The PCB is crucial for process context switching, allowing the operating system to save and restore the state of a process during scheduling.
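
As a rough illustration only, a simplified PCB could be modelled in C as a struct like the one below; real kernels keep far more state (Linux's task_struct, for example), and all field names and sizes here are invented for the sketch.

#include <sys/types.h>
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    pid_t            pid;              /* Process ID */
    enum proc_state  state;            /* current process state */
    uint64_t         program_counter;  /* address of the next instruction */
    uint64_t         registers[16];    /* saved CPU registers while not running */
    void            *page_table;       /* memory-management information */
    int              open_fds[16];     /* I/O information: open file descriptors */
    int              priority;         /* scheduling information */
    struct pcb      *parent;           /* link into the process hierarchy */
};

On a context switch the operating system saves the running process's registers and program counter into its PCB and reloads them from the PCB of the next process to run.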
