
Operating System

Chapter 1

Q1. Operating System Definitions?


Ans. An operating system is a set of components that manages the hardware and software resources of a computer and connects them all. Once a computer is switched on, the operating system must be loaded into the computer system. The OS is a resource allocator: it manages all resources and decides between conflicting requests for efficient and fair resource use. The OS is also a control program: it controls the execution of programs to prevent errors and improper use of the computer.

Q2. Operations and Functions of OS?


Ans. 1. Process Management: The CPU executes a large number of programs. While its main concern is the execution of user programs, the CPU is also needed for other system activities. These activities are called processes. A process is a program in execution. Typically, a batch job is a process, and a time-shared user program is a process. For now, a process may be considered a job or a time-shared program, but the concept is actually more general.
The operating system is responsible for the following activities in connection with process management:
a) The creation and deletion of both user and system processes
b) The suspension and resumption of processes.
c) The provision of mechanisms for process synchronization
d) The provision of mechanisms for deadlock handling.

2. Memory Management: Memory is the most expensive part of the computer system. Memory is a large array of words or bytes, each with its own address. Interaction is achieved through a sequence of reads or writes of specific memory addresses: the CPU fetches from and stores in memory. Various algorithms, chosen according to the particular situation, are used to manage memory.

The operating system is responsible for the following activities in connection with memory
management.
a) Keep track of which parts of memory are currently being used and by whom.
b) Decide which processes are to be loaded into memory when memory space becomes
available.
c) Allocate and deallocate memory space as needed.

3.Secondary Storage Management


The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory during execution. Since main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the primary on-line storage for both programs and data.
The operating system is responsible for the following activities in connection with disk
management:
a) Free space management
b) Storage allocation
c) Disk scheduling.
4. I/O Management: One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. For example, in UNIX, the peculiarities of I/O devices are hidden from the bulk of the operating system itself by the I/O system.
The operating system is responsible for the following activities in connection to I/O management:
a) A buffer caching system
b) To activate a general device driver code
c) To run the driver software for specific hardware devices as and when required.

5.File Management: File management is one of the most visible services of an operating system.
Computers can store information in several different physical forms: magnetic tape, disk, and drum are
the most common forms. Each of these devices has its own characteristics and physical organization. For
convenient use of the computer system, the operating system provides a uniform logical view of
information storage. The operating system abstracts from the physical properties of its storage devices to
define a logical storage unit, the file. Files are mapped, by the operating system, onto physical devices. A
file is a collection of related information defined by its creator. Commonly, files represent programs (both
source and object forms) and data. Data files may be numeric, alphabetic, or alphanumeric. Files may be
free form, such as text files, or may be rigidly formatted. In general, a file is a sequence of bits, bytes,
lines or records whose meaning is defined by its creator and user. It is a very general concept.
The operating system is responsible for
the following activities in connection to the file management:
a) The creation and deletion of files.
b) The creation and deletion of directories.
c) The support of primitives for manipulating files and directories.
d) The mapping of files onto disk storage.

6. Command Interpretation: One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system. Many commands are given to the operating system by control statements. When a new job is started in a batch system, or when a user logs in to a time-shared system, a program that reads and interprets control statements is automatically executed. This program is variously called
(1) the control card interpreter,
(2) the command-line interpreter,
(3) the shell (in UNIX), and so on. Its function is quite simple: get the next command statement and execute it.

Q3. Types of Operating Systems?


An Operating System (OS) is a critical software layer that acts as an interface between computer
hardware and the user. It manages hardware resources and provides services to applications. Over the
years, various types of operating systems have evolved, each suited for specific purposes and
environments. Below are the main types of operating systems, their features, and examples.

1. Batch Operating System: Batch operating systems were among the first operating systems, developed for mainframe computers in the 1950s and 1960s. They execute a series of jobs (tasks) without user interaction during processing. Users prepare jobs, submit them to an operator, and receive results after processing. Example: early IBM systems like the IBM 1401.

Features:
a. Jobs are grouped into batches for processing.
b. No direct user interaction.
c. Efficient for repetitive tasks.

2. Time-Sharing Operating System: A time-sharing OS allows multiple users to access a system simultaneously by allocating CPU time slices to each user or process. It creates a multitasking environment. Examples: UNIX, Multics.

Features:
● Supports multiple users and processes.
● Provides quick response time.
● Prevents resource conflicts.

3. Real-Time Operating System (RTOS): Real-time operating systems are designed for applications requiring immediate and predictable responses. They are used in systems where time constraints are critical, such as embedded systems and industrial control. Examples: VxWorks, FreeRTOS.

Types:
● Hard RTOS: Guarantees task completion within strict deadlines.
● Soft RTOS: Prioritizes tasks but allows minor delays.

4. Distributed Operating System: A distributed OS manages a group of independent computers and makes them appear as a single system to users. It ensures efficient resource sharing across networks. Examples: Amoeba, Apache Hadoop.

Features:
● Provides transparency in resource access.
● Supports remote data processing.

5. Network Operating System (NOS): A NOS provides features for managing network resources like file sharing, printers, and communication between devices. Examples: Windows Server, Linux-based servers.

Features:
● Centralized control over network resources.
● Supports security and multi-user environments.

6. Mobile Operating System: A mobile OS powers smartphones, tablets, and other handheld devices, focusing on touch interfaces and application ecosystems. Examples: Android, iOS.

Features:
● Optimized for touchscreens and portability.
● Supports app stores and cloud integration.

Q4.Components of Operating System:


Ans: Components of an Operating System: An Operating System (OS) is the backbone of a computer
system, managing hardware resources and enabling interaction between the user and the machine. Its
architecture consists of several key components, each responsible for specific functions.
1. Kernel: The kernel is the core component of an operating system and is responsible for managing hardware resources. It acts as a bridge between software applications and hardware, ensuring efficient and secure communication.
Types:
● Monolithic Kernel: All OS functions run in a single address space (e.g., Linux).
● Microkernel: Core functions are minimal, with other services running in user space (e.g., Minix).
Functions:
● Process Management: Handles the creation, execution, and termination of processes.
● Memory Management: Allocates and deallocates memory to processes.
● Device Management: Coordinates communication between devices and the system.

2.Process Management: Processes are programs in execution. The OS manages these processes to
ensure efficient CPU utilization and smooth multitasking.
Functions:
● Scheduling processes using algorithms like Round Robin, FIFO, or Priority Scheduling.
● Managing process states (Ready, Running, Waiting, etc.).

3.Memory Management: Memory management involves the allocation, deallocation, and organization of
system memory (RAM) for processes and applications.
Functions:
● Partitioning: Dividing memory into fixed or dynamic blocks.
● Virtual Memory: Extending physical memory using disk storage.

4.File System Management: The file system manages how data is stored, organized, and retrieved from
storage devices like hard drives and SSDs.Examples include NTFS (Windows), ext4 (Linux), and APFS
(macOS).
Functions:
● Providing a hierarchical structure for organizing files and directories.
● Controlling file access permissions (Read, Write, Execute).

5.Device Management: Device management involves coordinating and controlling input/output (I/O)
devices such as printers, keyboards, and storage devices.
Functions:
● I/O Controllers: Interface between devices and the OS.
● Drivers: Software that allows the OS to communicate with specific hardware.

Q5. Viewpoints of OS?


Ans. An Operating System (OS) is a multi-faceted software, and its significance can be viewed from
various perspectives, depending on its role and functionality in different environments. Below are the key
viewpoints of an OS, shedding light on its versatility and importance:
User's Viewpoint
● User Interface: The OS offers graphical interfaces (GUIs) or command-line interfaces (CLIs) for
user interaction.
● Performance: It ensures quick response times and efficient execution of tasks.
● Customizability: Modern OSs allow users to personalize settings, themes, and workflows.

System Administrator's Viewpoint


● Resource Management: Allocates CPU, memory, and I/O devices optimally.
● Security: Implements user authentication, access control, and data encryption.
● Monitoring Tools: Provides utilities to track system performance and resolve issues.

Q6. Evolution of OS

Ans


Chapter 2.
Q1. Explain system calls and their types.
Ans. System calls are the mechanisms through which user-level applications interact with the operating system. They provide an interface between a process and the operating system, allowing programs to request services such as file operations, process management, and communication. System calls are low-level, privileged functions executed by the operating system kernel.
Lifecycle of a System Call
● Request: A user application invokes a system call using a predefined library function (e.g., open()
in C).
● Transition: The call switches to kernel mode via an interrupt or trap.
● Execution: The kernel performs the requested operation.
● Response: The result is returned to the application, and control is switched back to user mode.
Types of System Calls
1.Process Control System Calls: These calls manage processes, including creation, termination, and
synchronization.

Examples:
● fork(): Creates a new process by duplicating the parent.
● exec(): Replaces the current process image with a new program.
● exit(): Terminates a process.
● wait(): Pauses the execution of a parent process until a child finishes.

2.File Management System Calls: These calls allow processes to perform operations on files, such as
reading, writing, and closing.

Examples:
● open(): Opens a file for reading or writing.
● read(): Reads data from a file.
● write(): Writes data to a file.
● close(): Closes an opened file.
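To make these calls concrete, here is a minimal illustrative C sketch (not from the original notes; the file name data.txt is a placeholder) that writes a few bytes, rewinds with lseek(), reads them back, and closes the file:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[8];

    int fd = open("data.txt", O_CREAT | O_RDWR, 0644); /* create/open for read-write */
    write(fd, "hello", 5);        /* write data to the file */
    lseek(fd, 0, SEEK_SET);       /* move the offset back to the start */
    read(fd, buf, 5);             /* read the same bytes back */
    buf[5] = '\0';
    printf("read back: %s\n", buf);
    close(fd);                    /* release the file descriptor */
    return 0;
}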

3.Device Management System Calls:These calls manage I/O devices, including communication,
allocation, and release.

Examples:
● ioctl(): Configures device settings.
● read(): Reads data from an input device.
● write(): Writes data to an output device.

4.Information Maintenance System Calls:These calls retrieve or set system information, including
process and system status.

Examples:
● getpid(): Gets the process ID of the current process.
● gettimeofday(): Retrieves the current time.
● uname(): Provides system information (e.g., OS version).

5.Communication System Calls:These calls facilitate data transfer between processes, either within the
same system or across networks.
Examples:
● pipe(): Creates a unidirectional communication channel.
● shmget(): Allocates shared memory.
● send()/recv(): Sends/receives data over a network socket.

6.Protection and Security System Calls:These calls manage access permissions and enforce security
policies.

Examples:
● chmod(): Changes the permissions of a file.
● setuid(): Sets the user ID for a process.
● umask(): Sets the file mode creation mask.
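As a tiny illustration of a protection call, the sketch below (file name hypothetical) uses chmod() to give the owner read-write access and everyone else read-only access:

#include <sys/stat.h>

int main(void) {
    chmod("notes.txt", 0644);   /* rw- for owner, r-- for group and others */
    return 0;
}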

Q2.Operating System Structure?


Ans: The operating system structure is a container for a collection of structures for interacting with the operating system's file system, directory paths, processes, and I/O subsystem. The types and functions provided by the operating system substructures are meant to present a model for handling these resources that is largely independent of the operating system.

UNIX Operating System

Operating System Structure Concepts


1. Hardware: The hardware is, obviously, the physical hardware, and it is not particularly interesting to us in this module.
2. Kernel: The kernel of an operating system is the bottom-most layer of software present on a machine and the only one with direct access to the hardware. The code in the kernel is the most 'trusted' in the system, and all requests to do anything significant must go via the kernel. It provides the most key facilities and functions of the system.
3. Outer OS: Surrounding the kernel are other parts of the operating system. These perform critical functions - for example, the graphics system, which is ultimately responsible for what you see on the screen.
4. Interface: The interface provides a mechanism for you to interact with the computer.
5. Applications: These are what do the actual work. They can be complex (for example, Office) or simple (for example, the ls command commonly found on UNIX and Linux systems, which lists the files in a directory (or folder)).

Q3. Monolithic Systems


Ans. Monolithic Systems: This approach is well known as "The Big Mess". The operating system is written as a collection of procedures, each of which can call any of the others whenever it needs to. When this technique is used, each procedure in the system has a well-defined interface in terms of parameters and results, and each one is free to call any other one if the latter provides some useful computation that the former needs. To construct the actual object program of the operating system when this approach is used, one compiles all the individual procedures, or files containing the procedures, and then binds them all together into a single object file with the linker. In terms of information hiding, there is essentially none: every procedure is visible to every other one, as opposed to a structure containing modules or packages, in which much of the information is local to a module and only officially designated entry points can be called from outside the module.

Q4 Client-server Model
Ans. In the client-server model, all the kernel does is handle the communication between clients and servers. By splitting the operating system up into parts, each of which only handles one facet of the system, such as file service, process service, terminal service, or memory service, each part becomes small and manageable. Furthermore, because all the servers run as user-mode processes, and not in kernel mode, they do not have direct access to the hardware. As a consequence, if a bug in the file server is triggered, the file service may crash, but this will not usually bring the whole machine down.
Another advantage of the client-server model is its adaptability for use in distributed systems. If a client communicates with a server by sending it messages, the client need not know whether the message is handled locally on its own machine or sent across a network to a server on a remote machine. As far as the client is concerned, the same thing happens in both cases: a request was sent and a reply came back.

Q5.Exokernel?
Ans. Exokernel is a highly efficient and minimalist operating system architecture that aims to provide
applications with as much direct access to hardware resources as possible while ensuring security and
isolation. Developed as an alternative to traditional OS designs, Exokernel focuses on removing
abstractions imposed by the operating system, giving developers more control over resource
management.

How Exokernel Works

1. Resource Allocation: The Exokernel allocates hardware resources like memory, CPU, and disk space
directly to applications, ensuring security through access control mechanisms.

2. Secure Binding:Secure binding allows applications to securely access and control resources. It uses
techniques like tagging or access control lists to track ownership.

3. Library Operating Systems:Instead of having a monolithic kernel, Exokernel relies on Library


Operating Systems (LibOS) to provide traditional OS functionalities. These libraries are loaded into user
space and tailored to the application's requirements.

Examples of Exokernel Systems

MIT Exokernel: Developed at MIT, it is a proof-of-concept that demonstrates the feasibility of the
Exokernel design.
Xok: An extension of the MIT Exokernel, paired with the ExOS library operating system.

Q6. Explain layered structure of os.


Ans. The layered structure of an operating system is a design approach in which the operating system is
divided into a series of layers, each built upon the lower layer. Each layer has specific responsibilities,
interacts only with adjacent layers, and provides services to the layer above while receiving services from
the layer below.
This modular structure simplifies the design, implementation, and maintenance of operating systems by
organizing complex functionality into manageable components.

Key Features of a Layered Structure

● Modularity:The system is divided into distinct layers, each with a specific function.
● Abstraction:Higher layers are abstracted from the details of lower layers, reducing complexity for
developers working on higher layers.
● Isolation: Changes in one layer typically do not affect other layers, ensuring better fault isolation
and system stability.
● Controlled Communication: Layers communicate only with their immediate neighbors, adhering to
strict interfaces and reducing dependencies.

Structure of Layers

1. Hardware Layer (Layer 0): This is the bottom-most layer consisting of physical hardware such as the
CPU, memory, storage, and I/O devices.The layer provides raw computing resources that the OS
manages. It does not perform any management tasks.

2.Kernel Layer (Layer 1): The kernel is the core of the operating system and directly interacts with the
hardware layer.It serves as a foundation for higher layers.

3. Device Drivers Layer (Layer 2): Device drivers interface with hardware devices (e.g., printers, storage
devices) and provide a unified interface for the OS to access these devices. Translate OS commands into
device-specific operations. Handle interrupts and errors during device communication.

4.System Utilities Layer (Layer 3): Provides essential system utilities and libraries that support the
functioning of the operating system.

5.User Interface Layer (Layer 4): Provides an interface for user interaction with the system.

Types of interfaces:
● Command-Line Interface (CLI): Accepts text-based commands (e.g., Linux Terminal).
● Graphical User Interface (GUI): Offers a visual interface with menus, windows, and icons (e.g.,
Windows, macOS).

6.Application Layer (Layer 5): This is the topmost layer where user applications like word processors,
web browsers, and games run.
Applications rely on OS-provided APIs and libraries to perform tasks such as file operations and memory
allocation.

Q7.Explain Process Concepts


Processes Creation
Process State Transitions
Process Termination in os in detail ?
Ans. 1. Process Concepts : A process is a program in execution. It is a fundamental concept in
operating systems, as it allows the OS to manage multiple tasks simultaneously by allocating system
resources like CPU time, memory, and I/O devices to processes.

Key Process Concepts


a. Program vs. Process: A program is a passive entity (a set of instructions stored on disk). A process is an active entity representing a running instance of a program.
b. Attributes of a Process:
● Process ID (PID): A unique identifier for each process.
● Program Counter: Points to the next instruction to execute.
● Process State: Current status (e.g., running, waiting).
● CPU Registers: Store temporary data during execution.
● Memory Management Information: Includes the address space allocated to the process.
c. Process Control Block (PCB):A data structure maintained by the OS to store information about a
process. Includes PID, process state, CPU registers, scheduling information, etc.

2.Process Creation: Processes can create other processes, leading to a hierarchy of parent and child
processes. The process creation mechanism is fundamental to multitasking.

Steps in Process Creation:

● Parent Process Initiates Creation: A parent process creates a child process using system calls
like fork() (in Unix/Linux).
● Resource Allocation:The OS allocates resources (memory, CPU time, I/O) to the child process.
● Execution Context Setup: The new process inherits some attributes from the parent (e.g., open
files, environment variables).
● Child Process Execution: The child process may execute the same program as the parent or
load a new program using system calls like exec().

Examples of Process Creation:

Unix/Linux:
● fork() creates a child process that is a duplicate of the parent.
● exec() replaces the process's memory with a new program.

Windows:
● CreateProcess() creates a new process.

3.Process State Transitions: A process can exist in one of several states, depending on its current
activity and the availability of resources. These states are managed by the OS to enable multitasking.

Common Process States:

a. New: The process is being created but has not yet started execution.

b. Ready: The process is prepared to execute but is waiting for CPU availability.

c. Running: The process is actively executing instructions on the CPU.

d. Waiting (or Blocked): The process is waiting for an event to occur (e.g., I/O completion).

e. Terminated: The process has finished execution and is being removed from the system.

4.Process Termination: A process terminates when it completes its execution or is forcibly terminated by
the OS or user. After termination, the process's resources are reclaimed by the OS.

Reasons for Process Termination:

● Normal Completion: The process successfully executes its instructions and exits.
● Error Conditions: Runtime errors like division by zero, invalid memory access, or file not found.
● Manual Termination: A user or administrator kills the process using commands like kill
(Unix/Linux) or End Task (Windows).
● Parent Termination: If a parent process terminates, some systems also terminate its child
processes.
● Resource Shortages: The OS forcibly terminates processes during low-memory or high-CPU
usage conditions.

Steps in Process Termination:

● Exit System Call: The process makes a system call (exit() in Unix/Linux) to signal completion.
● Resource Deallocation: The OS reclaims resources like memory, open files, and CPU time.
● Process Removal: The OS removes the process's entry from the PCB and scheduling queues.

Q8. Explain Inter-Process Communication in detail?


Ans: Inter-Process Communication (IPC) is a mechanism that enables processes to communicate and
synchronize with each other. It is essential in multitasking environments where processes often need to
share information, coordinate actions, or notify each other about events.

Why IPC Is Necessary

1. Data Sharing:Processes may need to share data for tasks like database access or computations.

2. Synchronization: Ensures that processes work in coordination, especially when accessing shared
resources.

3. Modularity: Dividing tasks into smaller processes simplifies development and maintenance.

4. Resource Management: Efficiently manages resources shared among processes.

Types of IPC Mechanisms: IPC can be broadly categorized into two types: message passing and shared
memory.
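As a concrete sketch of message passing on a Unix-like system (an illustrative example, not part of the original notes), the program below creates a pipe and forks; the child writes a message that the parent then reads:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[16];

    pipe(fd);                       /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {
        close(fd[0]);               /* child: write a message and exit */
        write(fd[1], "hello", 6);
        return 0;
    }
    close(fd[1]);                   /* parent: read what the child sent */
    read(fd[0], buf, sizeof(buf));
    printf("Parent received: %s\n", buf);
    return 0;
}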
Q9. Explain the concept of threads. Explain user-level and kernel-level threads. Explain multi-threading, thread libraries, threading issues, and benefits of threads in detail.
Ans. Threads in Operating Systems: A thread is the smallest unit of a program that can be executed
independently. Threads are often referred to as "lightweight processes" because they share the same
process resources, such as memory and file handles, while operating independently within a process.

Key Concepts of Threads

1. Thread vs. Process: A process is a heavy-weight entity with its own memory space and resources. A
thread is a light-weight entity that operates within the process’s memory space.

2. Components Shared Among Threads:


● Code
● Data section
● Open files
● Global variables

3. Components Unique to Each Thread:


● Program counter
● Registers
● Stack

4. Thread Context:The context of a thread includes its register set, stack, and program counter.
Switching between threads involves saving and restoring this context.

Types of Threads

User-Level Threads (ULT)

Definition: User-level threads are managed entirely by the user-level library, and the kernel is unaware of
their existence.
Characteristics: Created and managed by user libraries. No kernel intervention is required for thread management (e.g., creation, switching). All threads of a process share a single kernel thread.

Advantages:

● Efficiency:Thread creation, switching, and synchronization are faster as they are done in user
space.
● Custom Scheduling: Libraries can implement their own scheduling algorithms.
● Portability: Works across different operating systems without kernel modification.

Kernel-Level Threads (KLT)

Definition: Kernel-level threads are managed directly by the operating system’s kernel.

Characteristics: The kernel handles thread creation, scheduling, and management. Each thread is
represented by a kernel thread.

Advantages:
● True Parallelism: Threads can run in parallel on multiprocessor systems.
● Better Performance: Non-blocking system calls allow other threads to continue executing.
● Integration with OS: Thread management and scheduling are tightly integrated with the OS.

Multithreading

Definition:
Multithreading refers to the ability of a CPU or an operating system to execute multiple threads
concurrently. A multithreaded process contains multiple threads running in the same memory space.

Multithreading Models

● Many-to-One Model:Multiple user threads are mapped to a single kernel thread. Example:
Green threads in early Java versions.
● One-to-One Model:Each user thread is mapped to a kernel thread.Example: Windows and Linux
threading.
● Many-to-Many Model: Multiple user threads are mapped to an equal or smaller number of kernel
threads.Example: Solaris OS.

Thread Libraries
Thread libraries provide APIs for creating and managing threads. Examples include:

1. POSIX Threads (Pthreads):Standardized thread library for Unix-like systems.Provides functions for
thread creation, synchronization, and management.

2. Windows Threads:Native thread API provided by the Windows OS.

3. Java Threads: Part of the Java API, offering higher-level abstractions for thread management.
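To give a feel for the Pthreads API mentioned above, here is a minimal sketch (assuming a POSIX system; compile with -pthread) that creates one thread and waits for it to finish:

#include <stdio.h>
#include <pthread.h>

void *worker(void *arg) {            /* function executed by the new thread */
    printf("Worker thread running\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);  /* create the thread */
    pthread_join(tid, NULL);                   /* wait for it to terminate */
    return 0;
}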
Threading Issues

1. Race Conditions: Occurs when multiple threads access shared resources simultaneously without
proper synchronization, leading to unpredictable results.

2. Deadlocks: A situation where two or more threads are waiting for each other’s resources, causing a
cycle of dependencies and halting execution.

3. Starvation: Some threads may be denied resources indefinitely due to other threads holding priority.

4. Context Switching Overhead: Switching between threads requires saving and restoring thread
contexts, which can slow down execution.

5. Resource Sharing: Managing access to shared resources like memory and files requires
synchronization mechanisms (e.g., mutexes, semaphores).
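To illustrate how a mutex guards a critical section against the race condition described above, here is a small Pthreads sketch (illustrative only; compile with -pthread). Without the lock, the two threads could interleave their increments and the final count would be unpredictable:

#include <stdio.h>
#include <pthread.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* only one thread may enter at a time */
        counter++;                    /* the critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the mutex */
    return 0;
}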

Benefits of Threads

1. Responsiveness: Threads allow applications to remain responsive. For example, a GUI thread can
continue updating the interface while a background thread processes data.

2. Resource Sharing: Threads within a process share the same memory space, allowing faster
communication compared to inter-process communication.

3. Efficiency:Creating and managing threads is faster than processes due to shared resources.

4. Scalability:Threads can take advantage of multiprocessor systems to achieve parallelism.

5. Modularity:Tasks can be divided into smaller, independent threads, simplifying development.

Chapter 4

Q1 Explain the operations on processes


Ans. A process is an executing instance of a program, which includes program code, data, and the
resources needed to execute it. Modern operating systems are designed to handle multiple processes
efficiently and allow for various operations on processes to ensure multitasking, synchronization, and
resource sharing.

The major operations on processes include:

1.Process Creation: Process creation is a fundamental operation that occurs when a new process is
initialized. A process can create other processes, which are known as child processes. The process that
creates these child processes is called the parent process. This hierarchy of parent and child processes
forms a process tree.

Steps in Process Creation:


● System Call Invocation: A parent process uses a system call like fork() (in Unix/Linux) or
CreateProcess() (in Windows) to create a child process.
● Resource Allocation: The operating system allocates resources like memory, CPU time, and I/O
handles for the child process.
● Inheritance: The child process may inherit certain attributes from the parent, such as open files,
environment variables, and security credentials.
● Execution: The child process begins execution. It can either execute the same program as the
parent or load a different program using system calls like exec().

Example of Process Creation:


Unix/Linux:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();   /* create a child by duplicating the parent */
    if (pid == 0) {
        printf("Child process, PID %d\n", getpid());   /* child runs here */
    } else {
        wait(NULL);       /* parent waits for the child to finish */
    }
    return 0;
}

2.Process Scheduling: Process scheduling determines the order in which processes execute. The goal
is to optimize CPU utilization, ensure fairness, and reduce waiting times.

Types of Schedulers:
Long-Term Scheduler:
● Decides which processes are admitted into the system for processing.
● Controls the degree of multiprogramming.

Short-Term Scheduler:
● Selects which process will execute next from the ready queue.
● Executes frequently and has a significant impact on system performance.

Medium-Term Scheduler:
● Temporarily removes processes from memory (swapping) to reduce load.

Scheduling Criteria:
● CPU Utilization: Maximize CPU usage.
● Throughput: Number of processes completed per unit time.
● Turnaround Time: Time taken for a process to complete execution.
● Waiting Time: Time spent in the ready queue.
● Response Time: Time between request submission and the first response.

Examples of Scheduling Algorithms:


● FCFS (First-Come, First-Served):Executes processes in the order they arrive. Simple but can
lead to long waiting times (convoy effect).
● Round Robin (RR): Each process gets a fixed time slice (quantum).

3. Process Termination: Process termination occurs when a process finishes its execution or is explicitly stopped. Once a process is terminated, its resources are released and made available for other processes.

Reasons for Process Termination:

● Normal Termination: The process completes its execution successfully. Example: Returning
from main() or calling exit().
● Error Termination: The process encounters a fatal error (e.g., segmentation fault, illegal
instruction).

Termination Process:
● The process executes a system call like exit() to signal completion.
● The OS deallocates memory, file handles, and other resources.
● The process is removed from the process table.
● The process’s termination status is communicated to its parent.
4.Process Synchronization: When multiple processes access shared resources, synchronization
ensures that the resources are used consistently and without conflict.

Why Synchronization Is Necessary:

1. Critical Sections: Sections of code where shared resources are accessed need protection to prevent
race conditions.

2. Race Conditions: Occur when multiple processes access and manipulate shared data concurrently,
leading to unpredictable results.

Synchronization Mechanisms:
● Mutexes (Mutual Exclusion): A locking mechanism that allows only one process to access a
resource at a time.
● Semaphores: A signaling mechanism that controls access to shared resources.
● Monitors: High-level constructs that encapsulate shared resources and synchronization
mechanisms.
● Spinlocks: Processes continuously check for resource availability, suitable for short waiting
times.

5.Inter-Process Communication (IPC)- IPC allows processes to exchange data and coordinate actions.
It is essential in multitasking systems where processes need to collaborate.

IPC Mechanisms:

1. Pipes: Allow unidirectional communication between parent and child processes.

2. Message Queues: A queue maintained by the OS for sending and receiving messages.

3. Shared Memory: A memory region accessible by multiple processes for fast data sharing.

4. Sockets:Enable communication between processes on the same or different machines.

5. Signals: Notify processes about events like interrupts or errors.

Q2. PCB?
Ans. The Process Control Block (PCB) is a data structure used by operating systems to store all
information about a specific process. It acts as a repository for information that the OS needs to manage
the execution of the process and maintain its state.

Structure of a PCB: The PCB consists of various fields that hold specific information about a process.
The fields may vary depending on the operating system, but the general components include the
following:

1. Process Identification Information

● Process ID (PID): A unique identifier assigned to each process.


● Parent Process ID (PPID): The PID of the process that created this process.
● User ID (UID): Identifies the user who owns the process.
● Group ID (GID): Identifies the group to which the process belongs.

2. Process State Information

● Process State: Indicates the current state of the process, such as:
● New: Process is being created.
● Ready: Process is ready to run.
● Running: Process is currently executing.
● Waiting: Process is waiting for an event or I/O.
● Terminated: Process has completed execution.
● Program Counter (PC): Stores the address of the next instruction to be executed.

3. CPU Registers: The current values of the CPU registers (e.g., accumulator, base register, stack
pointer) are stored in the PCB when the process is not executing. This ensures the process can resume
correctly during context switching.

4. Memory Management Information

● Base and Limit Registers: Define the process's address space.


● Page Table: Stores the mapping between virtual and physical memory.
● Segment Table: Used for memory segmentation.
● Heap and Stack Pointers: Track dynamic memory allocation.
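A simplified PCB can be pictured as a C struct like the one below; the field names are illustrative and not taken from any real kernel:

/* illustrative, simplified PCB layout */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process identifier */
    int            ppid;             /* parent's PID */
    proc_state_t   state;            /* current process state */
    unsigned long  program_counter;  /* next instruction to execute */
    unsigned long  registers[16];    /* saved CPU registers */
    int            priority;         /* scheduling priority */
    void          *page_table;       /* memory-management information */
    struct pcb    *next;             /* link in the ready/waiting queue */
} pcb_t;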

Role of PCB in Process Management

1. Process Tracking:The OS uses the PCB to keep track of all active processes. PCBs are stored in a
process table, an array or linked list maintained by the OS.

2. Context Switching: During context switching, the current process's state is saved in its PCB, and the
state of the next process is loaded from its PCB. This allows the OS to resume processes exactly where
they left off.

3. Scheduling: Scheduling algorithms use information in the PCB (e.g., priority, process state) to decide
which process to execute next.

4. Resource Allocation: The OS uses PCB data to allocate and deallocate resources such as CPU time,
memory, and I/O devices.

Lifecycle of a PCB

1. Creation: When a process is created, a PCB is allocated and initialized with default values. The PCB is
added to the process table.
2. Ready : The PCB is updated to indicate that the process is in the ready queue. Scheduling information
is adjusted based on priority or other criteria.

3. Running: The PCB's state changes to "Running." The CPU uses the program counter and register values stored in the PCB to execute the process.

4. Waiting: If the process needs to wait for I/O or an event, the PCB's state changes to "Waiting." Information about the pending I/O or event is recorded.

5. Termination: When the process completes execution, the PCB is marked as "Terminated."
The OS deallocates the PCB and releases resources.

Chapter 5
Inter process communication

Q1.Cooperating Processes?
Ans. Cooperating Processes: Concurrent processes executing in the operating system may cooperate with one another (whether constructively or destructively). Processes are cooperating if they can affect each other. The simplest example of how this can happen is where two processes are using the same file: one process may be writing to a file while another process is reading from it, so what is being read may be affected by what is being written. Processes cooperate by sharing data.
Cooperation is important for several reasons:

1. Information Sharing: Several processes may need to access the same data (such as data stored in a file).

2. Computation Speedup: A task can often be run faster if it is broken into subtasks and distributed among different processes. For example, the matrix multiplication code you saw in class; this depends upon the processes sharing data. (Of course, real speedup also requires having multiple CPUs that can be shared as well.) For another example, consider a web server which may be serving many clients. Each client can have its own process or thread helping it. This allows the server to use the operating system to distribute the computer's resources, including CPU time, among the many clients.
3. Modularity: It may be easier to organize a complex task into separate subtasks and then have different processes or threads running each subtask. Example: a single server process dedicated to a single client may have multiple threads running, each performing a different task for the client.
4. Convenience: An individual user can run several programs at the same time to perform some task. Example: a network browser is open while the user has a remote terminal program running (such as telnet) and a word processing program editing data.
Cooperation between processes requires mechanisms that allow processes to communicate data with each other and synchronize their actions so they do not harmfully interfere with each other. The purpose of this note is to consider ways that processes can communicate data with each other, called Inter-Process Communication (IPC).
Q2. Do you think a single user system requires process communication? Support your answer
with logic.
Ans.Yes, a single-user system can require Inter-Process Communication (IPC), depending on the nature
of tasks and system design. Even in a system designed for a single user, there can be multiple processes
running concurrently, and these processes may need to exchange information or coordinate their
activities.

Supporting Logic

1. Multitasking in Single-User Systems:


● A single-user system can support multitasking, where:
● The user may run multiple applications simultaneously (e.g., a text editor, a web browser, and a
music player).
● Processes belonging to these applications may need to communicate or share resources. For
example:
● A browser process may interact with a system process to fetch network data.
● A printing process may interact with a text editor to receive the document to be printed.

2. Modular Design of Applications


● Applications in single-user systems often use modular designs, where:
● Different components or services run as separate processes.
● These processes need to communicate to perform their functions efficiently. For instance:
● A media player may use separate processes for user interface, decoding, and playback.
● These processes coordinate through IPC mechanisms like shared memory or message queues.

3. System Services and Daemons: Even in single-user systems, background services or daemons (e.g., file indexing, update managers) may run as separate processes. Communication is needed between user-facing processes and these system services to request actions or report results.

4. Shared Resources: Processes in a single-user system might need to share resources, such as:
● Accessing the same file or database.
● Coordinating access to hardware resources (e.g., printers, disk drives) to avoid conflicts.

Chapter- 6
CPU Scheduling

Q1.CPU Scheduling
Ans.CPU Scheduling is the process by which the operating system determines which process in the
ready queue should be allocated to the CPU for execution. It is a fundamental function of multitasking
operating systems to ensure efficient CPU utilization and fair resource sharing among processes.

Types of CPU Scheduling

1. Preemptive Scheduling: The CPU can be taken away from a running process before it finishes. Used in time-sharing systems.
Examples: Round Robin, Shortest Remaining Time First.
2. Non-Preemptive Scheduling: Once a process starts executing, it cannot be preempted until it finishes.
Examples: First Come First Serve, Priority Scheduling.

CPU Scheduling Criteria

When choosing a scheduling algorithm, the following criteria are considered:


● CPU Utilization: Keeping the CPU as busy as possible.
● Throughput: Number of processes completed per unit time.
● Turnaround Time: Total time taken from process submission to completion.
● Waiting Time: Time spent waiting in the ready queue.
● Response Time: Time from process submission to the first response.

Common CPU Scheduling Algorithms-

A.The First Come First Serve (FCFS) scheduling algorithm is the simplest and most straightforward
CPU scheduling technique. In this method, processes are executed in the exact order in which they arrive
in the ready queue, similar to a queue in real life, such as a ticket counter.

Key features of FCFS Scheduling

1. Type: Non-preemptive: Once a process starts execution, it runs to completion without being
interrupted.

2. Mechanism: The CPU is assigned to the process at the front of the ready queue. Processes wait in a
queue based on their arrival times.

3. Scheduling: It follows the FIFO (First In, First Out) principle.

Example of FCFS Scheduling

Problem Statement
Consider the following set of processes with their arrival times and burst times:

| Process | Arrival Time (ms) | Burst Time (ms) |
|---------|-------------------|-----------------|
| P1      | 0                 | 5               |
| P2      | 1                 | 3               |
| P3      | 2                 | 8               |

Execution Steps

1. Order of Execution:
● Since P1 arrives first, it will be executed first.
● P2 arrives next and will execute after P1.
● P3 will execute last.
2. Gantt Chart:
● A graphical representation of the CPU's execution order.
● | P1 | P2 | P3 |
0 5 8 16

3. Completion Time:
● P1 finishes at time 5.
● P2 finishes at time 8 (5 + 3).
● P3 finishes at time 16 (8 + 8).

4. Turnaround Time (TAT):


Formula: Turnaround Time = Completion Time - Arrival Time

● P1: 5-0=5
● P2: 8-1=7
● P3: 16-2=14

5. Waiting Time (WT):


Formula: Waiting Time = Turnaround Time - Burst Time
● P1: 5-5=0
● P2: 7-3=4
● P3: 14-8=6

6. Summary table:

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---------|--------------|------------|-----------------|-----------------|--------------|
| P1      | 0            | 5          | 5               | 5               | 0            |
| P2      | 1            | 3          | 8               | 7               | 4            |
| P3      | 2            | 8          | 16              | 14              | 6            |

7. Average Waiting Time:

AWT = Total Waiting Time / Number of Processes = (0 + 4 + 6) / 3 = 3.33 ms

8. Average Turnaround Time:

ATAT = Total Turnaround Time / Number of Processes = (5 + 7 + 14) / 3 = 8.67 ms
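The short C program below (an illustrative sketch, not part of the original notes) reproduces these FCFS figures; it assumes the processes are already sorted by arrival time:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};   /* data from the example above */
    int burst[]   = {5, 3, 8};
    int n = 3, time = 0;
    float total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i]) time = arrival[i];  /* CPU idles until arrival */
        time += burst[i];                          /* completion time */
        int tat = time - arrival[i];               /* turnaround = completion - arrival */
        int wt  = tat - burst[i];                  /* waiting = turnaround - burst */
        total_tat += tat;
        total_wt  += wt;
    }
    printf("AWT = %.2f ms, ATAT = %.2f ms\n", total_wt / n, total_tat / n);
    return 0;
}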

Advantages of FCFS

1. Simple and Easy to Implement: FCFS is straightforward as it requires minimal overhead in scheduling
logic.

2. Fair for Sequential Processes: Each process is treated equally, based on its arrival time.

3. Good for Batch Systems: Works well in environments where process completion time is not critical.
Disadvantages of FCFS

1. Convoy Effect: If a long process arrives first, shorter processes must wait, leading to inefficient CPU
utilization and longer average waiting times.

2. Poor Performance for Interactive Systems: High response times for processes, making it unsuitable for
real-time or interactive environments.

3. No Preemption: The CPU cannot be reassigned to higher-priority tasks during execution.

B. Shortest Job Next (SJN)


Definition: Shortest Job Next (SJN), also known as Shortest Job First (SJF), is a non-preemptive
scheduling algorithm where the process with the smallest burst time is executed first. Once a process
starts execution, it runs to completion before the CPU is assigned to another process.

Key Characteristics
● Type: Non-preemptive.
● Selection Criterion: The process with the smallest burst time is chosen.
● Efficiency: Reduces average waiting time compared to First Come First Serve (FCFS).
● Drawback: Requires knowledge of burst times in advance, which is not always feasible.

| Process | Arrival Time (ms) | Burst Time (ms) |
|---------|-------------------|-----------------|
| P1      | 0                 | 6               |
| P2      | 1                 | 8               |
| P3      | 2                 | 7               |
| P4      | 3                 | 3               |

Execution Steps:

● At time 0, P1 is the only process available, so it starts execution.


● When P1 finishes, the process with the shortest burst time among the remaining is selected.
● The order of execution is determined by burst times: P4 → P3 → P2.

Gantt Chart:

| P1 | P4 | P3 | P2 |
0 6 9 16 24

Calculation:

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
|---------|--------------|------------|-----------------|-----------------------|-------------------|
| P1      | 0            | 6          | 6               | 6                     | 0                 |
| P2      | 1            | 8          | 24              | 23                    | 15                |
| P3      | 2            | 7          | 16              | 14                    | 7                 |
| P4      | 3            | 3          | 9               | 6                     | 3                 |
Average Waiting Time (AWT):
AWT = Total Waiting Time / Number of Processes = (0 + 15 + 7 + 3) / 4 = 6.25 ms

Average Turnaround Time (ATAT):
ATAT = Total Turnaround Time / Number of Processes = (6 + 23 + 14 + 6) / 4 = 12.25 ms
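A minimal C sketch of the SJN selection loop follows (illustrative; it assumes, as in this example, that at least one process has arrived whenever the CPU becomes free):

#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};   /* data from the example above */
    int burst[N]   = {6, 8, 7, 3};
    int done[N] = {0}, time = 0;

    for (int k = 0; k < N; k++) {
        int best = -1;
        for (int i = 0; i < N; i++)              /* shortest arrived, unfinished job */
            if (!done[i] && arrival[i] <= time &&
                (best < 0 || burst[i] < burst[best]))
                best = i;
        time += burst[best];                     /* run to completion (non-preemptive) */
        done[best] = 1;
        printf("P%d finishes at %d ms\n", best + 1, time);
    }
    return 0;
}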

Advantages of SJN

1. Optimal Waiting Time: Minimizes average waiting time for all processes.

2. Efficient for Batch Systems: Well-suited for environments where burst times are predictable.

Disadvantages of SJN

1. Starvation: Longer processes may starve if shorter processes keep arriving.

2. Inaccurate Burst Time: Relies on accurate prediction of burst times, which is not always feasible.

C.Shortest Remaining Time First (SRTF)


Definition:Shortest Remaining Time First (SRTF) is the preemptive version of SJN. In SRTF, the
currently executing process is preempted if a new process with a shorter remaining burst time arrives.
Example
Problem Statement:
Consider the same processes as in the SJN example:

| Process | Arrival Time (ms) | Burst Time (ms) |
|---------|-------------------|-----------------|
| P1      | 0                 | 6               |
| P2      | 1                 | 8               |
| P3      | 2                 | 7               |
| P4      | 3                 | 3               |

Execution Steps:

1. At time 0, P1 starts execution since it is the only process.
2. At time 3, P4 arrives with a burst time (3 ms) no longer than P1's remaining time (3 ms), so P4 preempts P1 (the tie is broken in favor of the new arrival here).
3. After P4 finishes at time 6, P1 has the shortest remaining time and runs to completion at time 9.
4. The remaining processes then execute in order of remaining burst time: P3, followed by P2.

Gantt Chart:

| P1 | P4 | P1 | P3 | P2 |
0    3    6    9    16   24

Calculation:

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
|---------|--------------|------------|-----------------|-----------------------|-------------------|
| P1      | 0            | 6          | 9               | 9                     | 3                 |
| P2      | 1            | 8          | 24              | 23                    | 15                |
| P3      | 2            | 7          | 16              | 14                    | 7                 |
| P4      | 3            | 3          | 6               | 3                     | 0                 |

Average Waiting Time (AWT):
AWT = (3 + 15 + 7 + 0) / 4 = 6.25 ms

Average Turnaround Time (ATAT):
ATAT = (9 + 23 + 14 + 3) / 4 = 12.25 ms

Advantages of SRTF
● Reduced Waiting and Turnaround Time: Offers better performance compared to SJN.
● Dynamic Adaptation: Can handle processes arriving dynamically.

Disadvantages of SRTF
● High Overhead: Frequent context switching can degrade performance.
● Starvation: Longer processes may suffer if shorter processes keep arriving.
● Complexity: More challenging to implement compared to SJN.

D. Round Robin (RR) CPU Scheduling : Round Robin (RR) is one of the simplest and most widely used
preemptive CPU scheduling algorithms. It is designed especially for time-sharing systems, where each
process gets a fixed time slot (quantum) for execution in a cyclic manner. If a process doesn’t complete its
execution within its time slice, it is moved to the end of the ready queue, and the CPU is allocated to the
next process in the queue.

Key features of Round Robin Scheduling

1. Preemptive: RR is preemptive because processes are interrupted after their allocated time slice,
ensuring fairness among processes.

2. Time Quantum: A fixed time slice or quantum is set (e.g., 2ms, 5ms). Determines how long a process
can execute before being preempted.

3. Fairness: Each process gets equal time for execution, making it fair for all processes.

4. Cyclic Nature :Processes are executed in the order they arrive in the ready queue and are placed at
the end of the queue after their time slice expires.
5. Suitable for Interactive Systems: Ensures better response times for processes, making it ideal for
multitasking and time-sharing systems.

Working of Round Robin Scheduling

Let’s consider an example to illustrate Round Robin scheduling.

Example

Problem Statement:
Given the following processes, their arrival times, and burst times, schedule them using Round Robin with
a time quantum of 4ms.

| Process | Arrival Time (ms) | Burst Time (ms) |
|---------|-------------------|-----------------|
| P1      | 0                 | 8               |
| P2      | 1                 | 4               |
| P3      | 2                 | 9               |
| P4      | 3                 | 5               |

Execution Steps

1. At time 0, P1 starts execution since it is the first process in the queue.


2. Each process gets a time quantum of 4ms. If a process doesn’t finish within this time, it is moved to the
end of the queue.
3. The processes continue executing in a cyclic order until all are completed.

Gantt Chart

The Gantt chart shows the sequence of process execution.

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26

Completion Details: Let’s compute the completion time (CT), turnaround time (TAT), and waiting time (WT).
Formulas:

Turnaround Time (TAT) = Completion Time - Arrival Time

Waiting Time (WT) = Turnaround Time - Burst Time

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
|---------|--------------|------------|-----------------|-----------------------|-------------------|
| P1      | 0            | 8          | 20              | 20                    | 12                |
| P2      | 1            | 4          | 8               | 7                     | 3                 |
| P3      | 2            | 9          | 26              | 24                    | 15                |
| P4      | 3            | 5          | 25              | 22                    | 17                |

Averages:

Average Waiting Time (AWT) = (12 + 3 + 15 + 17) / 4 = 11.75 ms
Average Turnaround Time (ATAT) = (20 + 7 + 24 + 22) / 4 = 18.25 ms


Process Queue Dynamics:


● At time 0, the queue contains: [P1]
● After P1’s first time slice, the queue becomes: [P2, P3, P4, P1]
● After P2’s first time slice, the queue becomes: [P3, P4, P1]
● This cyclic execution continues until all processes are completed.
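The C sketch below simulates this schedule (illustrative only). Because every process here arrives during the first time slice, simply cycling through the processes in index order reproduces the Gantt chart above:

#include <stdio.h>

#define N 4
#define QUANTUM 4

int main(void) {
    int burst[N] = {8, 4, 9, 5};     /* data from the example above */
    int rem[N], done = 0, time = 0;
    for (int i = 0; i < N; i++) rem[i] = burst[i];

    while (done < N) {
        for (int i = 0; i < N; i++) {            /* one pass over the ready queue */
            if (rem[i] == 0) continue;           /* skip finished processes */
            int slice = rem[i] < QUANTUM ? rem[i] : QUANTUM;
            printf("t=%2d: P%d runs for %d ms\n", time, i + 1, slice);
            time += slice;
            rem[i] -= slice;
            if (rem[i] == 0) done++;             /* process completed */
        }
    }
    return 0;
}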

Impact of Time Quantum

1. Small Quantum: Increases context switching overhead. Improves response time but may degrade
throughput.
2. Large Quantum: Reduces context switching. If too large, RR behaves like FCFS, defeating its
purpose.

Advantages of Round Robin

1. Fair Allocation:Each process is treated equally, reducing the chances of starvation.

2. Improved Response Time:Processes are executed periodically, ensuring quick responses for
interactive systems.

3. Efficient for Time-Sharing:Well-suited for environments where tasks are of equal priority.

4. Dynamic Adaptation: The performance can be tuned by adjusting the time quantum.

Disadvantages of Round Robin

1. Context Switching Overhead: Frequent switching between processes increases overhead.

2. Impact of Time Quantum: If the quantum is too small, overhead increases; if it is too large, it behaves
like First Come First Serve (FCFS).

3. Not Ideal for Varying Burst Times: Longer processes may still take a long time to complete due to
cyclic execution.

E.Multilevel Queue Scheduling in Operating Systems

Multilevel Queue Scheduling is a CPU scheduling algorithm that divides the ready queue into multiple
separate queues based on the type, priority, or characteristics of processes. Each queue has its own
scheduling policy, and processes are permanently assigned to a queue depending on specific criteria,
such as priority, process type, or memory size.

Key Features of Multilevel Queue Scheduling

1. Multiple Queues: The ready queue is divided into several smaller queues, each handling a different
type of process.
Example queues: System processes, Interactive processes, Batch jobs.

2. Permanent Assignment: Once a process is assigned to a queue, it remains there throughout its
lifetime.

3. Separate Scheduling Policies: Each queue has its own scheduling algorithm, such as: Round Robin for
interactive processes.First Come First Serve (FCFS) for batch jobs.

4. Inter-Queue Scheduling:A predefined priority governs which queue’s processes are selected for
execution. Higher-priority queues are serviced before lower-priority ones.

Structure of Multilevel Queue Scheduling


A typical multilevel queue system might look like this:

+--------------------+
| System Queue       |  <- Highest Priority (scheduled using FCFS)
+--------------------+
| Interactive Queue  |  <- Medium Priority (scheduled using RR)
+--------------------+
| Batch Queue        |  <- Lowest Priority (scheduled using FCFS)
+--------------------+

Types of Multilevel Queue Scheduling

1. Fixed Priority Scheduling: Higher-priority queues are always serviced first. Processes in
lower-priority queues may face starvation.

2. Time-Slice Scheduling: Time slices are allocated to each queue, ensuring no queue is ignored.
Example: System queue gets 70% of CPU time, interactive queue gets 20%, and batch queue gets 10%.

How Multilevel Queue Scheduling Works

Example Problem

Suppose there are 3 queues:

● System Queue (Highest Priority): Scheduled using FCFS.


● Interactive Queue (Medium Priority): Scheduled using Round Robin with a time quantum of 4ms.
● Batch Queue (Lowest Priority): Scheduled using FCFS.

Processes:

| Process | Queue       | Arrival Time (ms) | Burst Time (ms) |
|---------|-------------|-------------------|-----------------|
| P1      | System      | 0                 | 5               |
| P2      | Interactive | 1                 | 7               |
| P3      | Batch       | 2                 | 4               |
| P4      | Interactive | 3                 | 5               |
| P5      | Batch       | 4                 | 6               |
Execution Steps
● At time 0, P1 (System Queue) is executed because it belongs to the highest-priority queue.
● Once P1 finishes at time 5, the Interactive Queue is serviced using Round Robin: P2 runs for its 4 ms quantum, then P4 runs for 4 ms, then P2 finishes its remaining 3 ms, and P4 finishes its remaining 1 ms.
● Finally, processes from the Batch Queue (P3 and P5) are executed using FCFS.

Gantt Chart

| P1 | P2 | P4 | P2 | P4 | P3 | P5 |
0    5    9    13   16   17   21   27

Completion Details

| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---------|--------------|------------|-----------------|-----------------|--------------|
| P1      | 0            | 5          | 5               | 5               | 0            |
| P2      | 1            | 7          | 16              | 15              | 8            |
| P3      | 2            | 4          | 21              | 19              | 15           |
| P4      | 3            | 5          | 17              | 14              | 9            |
| P5      | 4            | 6          | 27              | 23              | 17           |

Advantages of Multilevel Queue Scheduling

1. Specialization: Processes are grouped based on type, allowing the system to use tailored scheduling
policies.

2. Efficient Resource Utilization: System-critical processes get priority, ensuring faster response times for
important tasks.

3. Flexibility: Each queue can implement a different scheduling algorithm.

Disadvantages of Multilevel Queue Scheduling


1. Starvation: Lower-priority queues may be ignored if higher-priority queues are always full.

2. Rigid Structure: Processes are permanently assigned to a queue, which may not adapt well to
changing system dynamics.

3. Complexity: Managing multiple queues and their scheduling policies can be complex.


Q2. Explain the types of scheduling based on level.


Ans: Scheduling in operating systems involves deciding which process or thread will execute next. This
ensures efficient CPU utilization, system responsiveness, and fairness among processes. Scheduling can
be categorized based on levels within the operating system hierarchy. These levels address different
aspects of resource allocation and process management.

1. Long-Term Scheduling (Job Scheduling)


Definition: Long-term scheduling determines which processes are admitted into the system for
processing. It controls the degree of multiprogramming, i.e., the number of processes in memory
simultaneously.

Key Characteristics

● Frequency: Infrequent; occurs when a new process is created.


● Goal: To select a balanced mix of CPU-bound and I/O-bound processes for better performance.
● Process States Involved: Transitions from New to Ready state.

Example
In a batch processing system, jobs waiting in a queue are selected based on their priority or resource
requirements.

Advantages
● Improves system throughput by balancing workloads.
● Controls system performance by limiting the number of processes.

2. Medium-Term Scheduling

Definition: Medium-term scheduling temporarily removes processes from the main memory to reduce the
load on the CPU and other resources. These processes are placed in a suspended state and can be
reintroduced later.

Key Characteristics
● Frequency: Occurs more frequently than long-term scheduling but less than short-term
scheduling.
● Goal: To optimize system performance by swapping processes in and out of memory.
● Process States Involved: Transitions from Ready/Running to Suspended and back.

Example: A process that is idle or waiting for I/O may be swapped out to disk to make room for other
active processes.

Advantages
● Improves memory management by allowing more processes to be active.
● Balances the load on the CPU by suspending and resuming processes.

3. Short-Term Scheduling (CPU Scheduling)

Definition: Short-term scheduling, also known as CPU scheduling, selects a process from the ready
queue to execute on the CPU.

Key Characteristics
● Frequency: Occurs most frequently; decisions are made every time the CPU is idle or a process
terminates.
● Goal: Maximize CPU utilization, minimize response time, and ensure fairness.
● Process States Involved:
● Transitions from Ready to Running state.

Common Algorithms
● First Come First Serve (FCFS): Executes processes in the order they arrive.
● Shortest Job Next (SJN): Selects the process with the shortest burst time.
● Round Robin (RR): Allocates fixed time slices to each process.
● Priority Scheduling: Prioritizes processes based on assigned priority values.
● Multilevel Queue Scheduling: Divides processes into multiple priority queues.
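To see how the choice of algorithm affects waiting time, here is a tiny illustrative Python sketch
(hypothetical burst values, all arriving at time 0): it computes the average waiting time under FCFS,
and sorting the bursts first reproduces the advantage of Shortest Job Next.

def avg_waiting_time(bursts):
    # Each process waits for the total burst time of everything before it.
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)
        elapsed += burst
    return sum(waiting) / len(waiting)

print(avg_waiting_time([24, 3, 3]))          # FCFS arrival order -> 17.0 ms
print(avg_waiting_time(sorted([24, 3, 3])))  # SJN order          -> 3.0 ms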

Advantages
● Ensures efficient CPU utilization.
● Enhances system responsiveness.


Chapter 10-11
Memory Management

Q1. What is Memory Management in an Operating System?
Ans. Memory management is a critical function of an operating system (OS) that ensures efficient use of
a computer's memory resources. It involves allocating, deallocating, and managing memory to optimize
system performance and allow multiple programs to run simultaneously. Here's a comprehensive
breakdown:

1. Objectives of Memory Management

● Efficient Utilization: Ensure memory is used effectively by allocating the right amount of space to
each process.
● Process Isolation: Prevent one process from interfering with another’s memory.
● Maximizing Multitasking: Allow multiple programs to run concurrently by sharing memory
resources.
● Memory Protection: Safeguard data from unauthorized access or corruption.
● Dynamic Allocation: Adjust memory allocation based on process needs during execution.

2. Memory Hierarchy: Memory management operates across various memory levels, each differing in
speed, size, and cost:

● Registers: Fastest but smallest.


● Cache: High speed, intermediate size.
● Main Memory (RAM): Larger, slower than cache.
● Secondary Storage: Largest and slowest (e.g., HDDs, SSDs).

3. Functions of Memory Management

a. Memory Allocation:
● Static Allocation: Memory is allocated at compile time and remains fixed.
● Dynamic Allocation: Memory is allocated at runtime, allowing flexibility.

b. Address Binding: Translates memory references between two views:
● Logical Address: Generated by the CPU (virtual).
● Physical Address: Actual address in RAM.

c. Swapping: When RAM is full, the OS moves inactive processes to secondary storage (swap space)
and retrieves them when needed.

d. Paging: Divides memory into fixed-size blocks (pages). Avoids external fragmentation and allows
non-contiguous memory allocation (a small sketch follows this list).

e. Segmentation: Divides memory into variable-sized segments based on logical divisions like functions or
arrays.

f. Virtual Memory: Allows the system to use more memory than physically available by utilizing disk space
as an extension of RAM.
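To make paging concrete, the sketch below (illustrative page size and page-table values, not a real
MMU) splits a logical address into a page number and an offset, looks the page number up in a toy
page table, and assembles the physical address:

PAGE_SIZE = 4096                  # 4 KB pages (illustrative)
page_table = {0: 5, 1: 9, 2: 3}   # page number -> frame number (toy values)

def translate(logical_addr):
    # Split the logical address into (page number, offset within page).
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]      # a missing entry here is akin to a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))     # page 1, offset 0x234 -> frame 9 -> 0x9234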

4. Memory Management Techniques


a. Contiguous Allocation: Assigns a single contiguous block of memory to a process.

● Advantages: Simple implementation.


● Disadvantages: Suffers from fragmentation.
b. Non-Contiguous Allocation: Allows a process to occupy non-adjacent memory blocks.
Managed using:

● Paging: Divides both memory and processes into fixed-sized blocks.


● Segmentation: Divides memory based on logical units.

5. Challenges in Memory Management

● Fragmentation: Memory is wasted due to inefficient allocation (a worked example follows this list).
   ● Internal Fragmentation: Unused space within allocated memory blocks.
   ● External Fragmentation: Unused space between allocated blocks.
● Thrashing: Excessive swapping between RAM and disk, degrading performance.
● Deadlocks: Competing processes waiting for memory can cause a standstill.
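As a quick worked example of internal fragmentation under paging (illustrative numbers): a process
that needs 7 KB with 4 KB pages is given two whole pages, so 1 KB inside the last page is wasted.

import math

PAGE_SIZE = 4 * 1024        # 4 KB pages (illustrative)
process_size = 7 * 1024     # process needs 7 KB

pages = math.ceil(process_size / PAGE_SIZE)        # whole pages allocated
internal_frag = pages * PAGE_SIZE - process_size   # unused bytes in last page
print(pages, internal_frag)                        # -> 2 pages, 1024 bytes wasted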

Q2. Multistep Processing of a User Program?


Ans. Multistep Processing of a User Program: Multistep processing of a user program refers to the
stages a program goes through from writing code to its execution. These steps involve translation, linking,
loading, and execution, ensuring that the user's program is transformed from human-readable source
code to machine-executable binary instructions.

1. Stages in Multistep Processing

a. Writing the Program: The user writes the program in a high-level language (e.g., C++, Java, Python).
This is done using an editor or Integrated Development Environment (IDE).
Output: A source code file (e.g., program.cpp).

b. Compilation: The source code is converted into machine-readable instructions through a compiler.
Compilation involves:
● Lexical Analysis: Tokenizing the source code.
● Syntax Analysis: Ensuring correct syntax based on grammar rules.
● Semantic Analysis: Checking for logical errors.
● Optimization: Improving code efficiency.
● Code Generation: Translating the code into machine language (object code).
Output: Object code file (e.g., program.o).

c. Linking: Links the object code with additional required libraries or other modules.

Combines:
● User-defined functions.
● Standard libraries (e.g., math, I/O libraries).
● External libraries.
The linker resolves symbols and addresses used in the program.

Output: An executable file (e.g., program.exe)

d. Loading: The executable file is loaded into the main memory by the loader.
The OS assigns necessary memory and sets up the program for execution.
Steps involved:
● Loading the code segment (instructions).
● Loading the data segment (variables, constants).
● Allocating a stack for execution flow.

Output: Program is in memory, ready for execution.


e. Execution: The CPU executes the program's instructions.

During execution, the program may:

● Access memory and I/O devices.


● Perform calculations and logic operations.
● Interact with the OS for system calls.
The program runs until it completes or encounters an error.

2. Components Involved in Multistep Processing

a. Compiler: Translates high-level code into machine code.


Types of compilers: Single-pass, multi-pass, just-in-time (JIT) compilers.

b. Assembler: Converts assembly code (low-level language) into machine code.

c. Linker: Resolves external references and combines object files into a single executable.

d. Loader: Places the executable file into memory, ready for execution.

e. Operating System: Manages memory, CPU scheduling, and system resources during execution.

3. Example Workflow
For a C++ program:

● Source Code: Write the code in a file like example.cpp.


● Compilation: The compiler generates example.o (object code).
● Linking: The linker creates example.exe (executable file).
● Loading: The OS loads example.exe into memory.
● Execution: The program runs and produces the desired output.
Q3. Explain Logical and Physical Address Space in detail?
Ans. In computer systems, logical address space and physical address space are two different views of
memory that are central to memory management. Understanding them helps in grasping how operating
systems handle memory allocation and process isolation.

1. Logical Address Space: A logical address is the address generated by the CPU during program
execution. It represents a virtual address that does not directly correspond to a physical location in
memory. These addresses are used by programs and mapped to physical addresses by the operating
system and hardware (Memory Management Unit - MMU).

Characteristics of Logical Address Space

● Virtual in Nature: Exists only during program execution.


● Process-Specific: Each process has its own logical address space.
● Dynamic Allocation: Memory is allocated dynamically, allowing processes to use non-contiguous
memory segments.
● Range: Logical address space is defined by the CPU architecture and can exceed physical
memory limits, thanks to virtual memory.

Example
Suppose a program references a variable at address 0x100 in its code. This is a logical address and must
be translated to a physical address before accessing memory.

2. Physical Address Space: A physical address refers to the actual location in the computer's main
memory (RAM). These addresses are visible to the hardware and used to fetch or store data.

Characteristics of Physical Address Space

● Real Address: Directly corresponds to a memory location in RAM.


● Global Scope: Shared across all processes.
● Fixed Range: Limited by the size of the physical memory (e.g., a system with 16GB RAM has
physical addresses from 0 to 16GB).
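The translation of the logical address 0x100 mentioned above can be illustrated with the classic
relocation (base) and limit register scheme, a simplified stand-in for a full MMU; the register values
below are made up for illustration.

BASE = 0x4000    # relocation register: where the process begins in RAM (illustrative)
LIMIT = 0x2000   # limit register: size of the process's logical space (illustrative)

def mmu(logical_addr):
    if logical_addr >= LIMIT:            # protection check: out-of-range access
        raise MemoryError("trap: address out of range")
    return BASE + logical_addr           # relocation: logical -> physical

print(hex(mmu(0x100)))   # logical 0x100 -> physical 0x4100

The same logical address would map to a different physical address in another process, since each
process has its own base value; the limit check is what enforces process isolation.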

Q4. Explain Overlays and Swapping?


Ans. 1. Overlays: Overlays are a programming technique that allows a program to be larger than the
physical memory by dividing it into smaller sections and loading only the required sections during
execution.

Key Concepts of Overlays

Purpose: Used to manage memory constraints by breaking down a large program into smaller,
manageable parts.

Implementation:
The program is divided into overlays.

● A control module manages which overlay is loaded at a given time.


● Unused overlays are stored on disk and swapped in when needed.

How Overlays Work

● The program is divided into logical parts that do not need to be in memory simultaneously (e.g.,
initialization, computation, and cleanup).
● Each part is loaded into the same memory region (overlapping one another) as required.
● The overlay manager ensures only the necessary section is loaded at any point.

Example

● Consider a program with three sections: A, B, and C. If only one section can fit into memory at a
time:
● First, section A is loaded and executed.
● When section B is needed, A is removed, and B is loaded.
● Similarly, section C replaces B when required.
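The A/B/C example above can be sketched as a toy overlay manager (all names are illustrative): a
single memory region holds one section at a time, and the manager loads a section from "disk" just
before it is needed.

# Toy overlay manager: one region, one resident section at a time.
on_disk = {                     # pretend these sections live in the executable on disk
    "A": "initialization code",
    "B": "computation code",
    "C": "cleanup code",
}
region = {"name": None, "code": None}   # the single shared memory region

def call(section):
    if region["name"] != section:       # wrong section resident: load the overlay
        print(f"loading overlay {section} (evicting {region['name']})")
        region["name"], region["code"] = section, on_disk[section]
    print(f"running {region['code']}")

for section in ["A", "B", "C"]:         # A, then B replaces A, then C replaces B
    call(section)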
2. Swapping: Swapping is a memory management technique where entire processes are moved
between main memory (RAM) and secondary storage (disk) to free up memory for other processes.

Implementation:

● The OS selects a process to swap out based on predefined criteria (e.g., inactivity).
● The process is written to disk (swap area).
● When the process is needed again, it is swapped back into memory.

How Swapping Works

● A process is loaded into memory and starts executing.


● When memory is full and another process needs to execute, the OS selects an inactive process
to swap out.
● The swapped-out process is saved on disk.
● The new process is loaded into the freed-up memory.
● When the swapped-out process needs CPU time, it is swapped back into memory.

Types of Swapping

1. Normal Swapping: Entire processes are swapped in and out.

2. Demand Swapping: Only necessary parts of the process are swapped, similar to paging.
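A minimal sketch of the swap-out decision follows (the slot limit, victim policy, and names are
illustrative assumptions): when RAM is full, the OS picks the longest-idle resident process, moves it
to the swap area, and loads the new process.

# Toy swapper: RAM holds at most 2 processes (illustrative limit).
RAM_SLOTS = 2
ram = {}           # resident processes: name -> last active time
swap_area = set()  # processes swapped out to disk

def load(name, now):
    if len(ram) >= RAM_SLOTS:
        victim = min(ram, key=ram.get)  # longest-idle resident process
        del ram[victim]
        swap_area.add(victim)
        print(f"swapped out {victim}")
    swap_area.discard(name)             # swap back in if it was on disk
    ram[name] = now
    print(f"loaded {name}, RAM = {sorted(ram)}")

load("P1", 0); load("P2", 1); load("P3", 2)  # P1 is evicted to make room for P3
load("P1", 3)                                # P1 swapped back in, P2 evicted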
