Operating System
Chapter 1
2.Memory Management: Memory is one of the most expensive parts of a computer system. Memory is a large
array of words or bytes, each with its own address. Interaction is achieved through a sequence of reads
and writes to specific memory addresses: the CPU fetches instructions from, and stores data in, memory.
Various algorithms, chosen to suit the particular situation, are used to manage memory.
The operating system is responsible for the following activities in connection with memory
management.
a) Keep track of which parts of memory are currently being used and by whom.
b) Decide which processes are to be loaded into memory when memory space becomes
available.
c) Allocate and deallocate memory space as needed.
5.File Management: File management is one of the most visible services of an operating system.
Computers can store information in several different physical forms: magnetic tape, disk, and drum are
the most common forms. Each of these devices has its own characteristics and physical organization. For
convenient use of the computer system, the operating system provides a uniform logical view of
information storage. The operating system abstracts from the physical properties of its storage devices to
define a logical storage unit, the file. Files are mapped, by the operating system, onto physical devices. A
file is a collection of related information defined by its creator. Commonly, files represent programs (both
source and object forms) and data. Data files may be numeric, alphabetic, or alphanumeric. Files may be
free form, such as text files, or may be rigidly formatted. In general, a file is a sequence of bits, bytes,
lines or records whose meaning is defined by its creator and user. It is a very general concept.
The operating system is responsible for
the following activities in connection to the file management:
a) The creation and deletion of files.
b) The creation and deletion of directories.
c) The support of primitives for manipulating files and directories.
d) The mapping of files onto disk storage.
6.Command Interpretation: One of the most important components of an operating system is its
command interpreter. The command interpreter is the primary interface between the user and the rest of
the system. Many commands are given to the operating system by control statements. When a new job is
started in a batch system, or when a user logs in to a time-shared system, a program that reads and
interprets control statements is automatically executed. This program is variously called
(1) the control-card interpreter,
(2) the command-line interpreter, or
(3) the shell (in Unix). Its function is quite simple: get the next command statement and
execute it.
1. Batch Operating System: Batch Operating Systems were among the first operating systems
developed for mainframe computers in the 1950s and 1960s. They execute a series of jobs (tasks)
without user interaction during processing. Users prepare jobs, submit them to an operator, and receive
results after processing. Example: early IBM systems such as the IBM 1401.
Features:
a. Jobs are grouped into batches for processing.
b. No direct user interaction.
c. Efficient for repetitive tasks.
2. Time-Sharing Operating System: Time-sharing systems allow many users and processes to use the
computer simultaneously by rapidly switching the CPU among tasks.
Features:
● Supports multiple users and processes.
● Provides quick response time.
● Prevents resource conflicts.
3. Real-Time Operating System (RTOS): Real-Time Operating Systems are designed for applications
requiring immediate and predictable responses. They are used in systems where time constraints are
critical, such as embedded systems and industrial control. Examples: VxWorks, FreeRTOS.
Types:
● Hard RTOS: Guarantees task completion within strict deadlines.
● Soft RTOS: Prioritizes tasks but allows minor delays.
4. Distributed Operating System: A distributed OS manages a group of networked computers and presents
them to users as a single system.
Features:
● Provides transparency in resource access.
● Supports remote data processing.
5. Network Operating System (NOS): NOS provides features for managing network resources like file
sharing, printers, and communication between devices.
Examples: Windows Server, Linux-based servers.
6. Mobile Operating System: Mobile OS powers smartphones, tablets, and other handheld devices,
focusing on touch interfaces and application ecosystems. Examples: Android, iOS.
Features:
● Optimized for touchscreens and portability.
● Supports app stores and cloud integration.
2.Process Management: Processes are programs in execution. The OS manages these processes to
ensure efficient CPU utilization and smooth multitasking.
Functions:
● Scheduling processes using algorithms like Round Robin, FIFO, or Priority Scheduling.
● Managing process states (Ready, Running, Waiting, etc.).
3.Memory Management: Memory management involves the allocation, deallocation, and organization of
system memory (RAM) for processes and applications.
Functions:
● Partitioning: Dividing memory into fixed or dynamic blocks.
● Virtual Memory: Extending physical memory using disk storage.
4.File System Management: The file system manages how data is stored, organized, and retrieved from
storage devices like hard drives and SSDs. Examples include NTFS (Windows), ext4 (Linux), and APFS
(macOS).
Functions:
● Providing a hierarchical structure for organizing files and directories.
● Controlling file access permissions (Read, Write, Execute).
5.Device Management: Device management involves coordinating and controlling input/output (I/O)
devices such as printers, keyboards, and storage devices.
Functions:
I/O Controllers: Interface between devices and the OS.
Drivers: Software that allows the OS to communicate with specific hardware.
Q6. Evolution of OS
Ans
Notes
Chapter 2.
Q1. Explain system calls and its types
Ans. System calls are the mechanisms through which user-level applications interact with the operating
system. They provide an interface between a process and the operating system, allowing programs to
request services such as file operations, process management, and communication. System calls are
low-level, privileged functions executed by the operating system kernel.
Lifecycle of a System Call
● Request: A user application invokes a system call using a predefined library function (e.g., open()
in C).
● Transition: The call switches to kernel mode via an interrupt or trap.
● Execution: The kernel performs the requested operation.
● Response: The result is returned to the application, and control is switched back to user mode.
Types of System Calls
1.Process Control System Calls: These calls manage processes, including creation, termination, and
synchronization.
Examples:
● fork(): Creates a new process by duplicating the parent.
● exec(): Replaces the current process image with a new program.
● exit(): Terminates a process.
● wait(): Pauses the execution of a parent process until a child finishes.
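Since the question names the C interfaces, here is a minimal Unix-only sketch of the fork()/wait()/exit() pattern using Python's os module, whose functions are thin wrappers over the same system calls (run_child is a name of my own):

```python
import os

def run_child(exit_code):
    """Fork a child that exits with the given code; the parent
    blocks in waitpid() until the child terminates."""
    pid = os.fork()              # child sees 0, parent sees child's PID
    if pid == 0:
        # Child: in a real program this is where exec() would
        # replace the process image with a new program.
        os._exit(exit_code)      # terminate immediately, no cleanup
    # Parent: waitpid() pauses until the child finishes.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(run_child(7))          # parent observes the child's exit code: 7
```

The parent recovering the child's exit status is exactly the wait()-reaps-child behavior described above.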
2.File Management System Calls: These calls allow processes to perform operations on files, such as
reading, writing, and closing.
Examples:
● open(): Opens a file for reading or writing.
● read(): Reads data from a file.
● write(): Writes data to a file.
● close(): Closes an opened file.
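These four calls can be exercised through Python's os module, whose os.open/os.read/os.write/os.close wrap the same kernel interfaces (a sketch; copy_via_syscalls is my own name, and the file is a temporary one created only for the demonstration):

```python
import os
import tempfile

def copy_via_syscalls(data: bytes) -> bytes:
    """Write data to a temp file with write(), then read it back
    with read(), using raw file descriptors rather than Python's
    buffered file objects."""
    fd, path = tempfile.mkstemp()      # creates and opens a temp file
    os.write(fd, data)                 # write() system call
    os.close(fd)                       # close() the descriptor

    fd = os.open(path, os.O_RDONLY)    # open() for reading
    result = os.read(fd, len(data))    # read() system call
    os.close(fd)
    os.remove(path)                    # clean up the temp file
    return result

if __name__ == "__main__":
    print(copy_via_syscalls(b"hello"))   # b'hello'
```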
3.Device Management System Calls: These calls manage I/O devices, including communication,
allocation, and release.
Examples:
● ioctl(): Configures device settings.
● read(): Reads data from an input device.
● write(): Writes data to an output device.
4.Information Maintenance System Calls: These calls retrieve or set system information, including
process and system status.
Examples:
● getpid(): Gets the process ID of the current process.
● gettimeofday(): Retrieves the current time.
● uname(): Provides system information (e.g., OS version).
5.Communication System Calls: These calls facilitate data transfer between processes, either within the
same system or across networks.
Examples:
● pipe(): Creates a unidirectional communication channel.
● shmget(): Allocates shared memory.
● send()/recv(): Sends/receives data over a network socket.
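A pipe() round trip can be sketched on Unix with Python's os module (send_through_pipe is my own name; the forked child writes into the pipe and the parent reads from it):

```python
import os

def send_through_pipe(message: bytes) -> bytes:
    """Create a unidirectional pipe, fork a child that writes the
    message into it, and read the message back in the parent."""
    read_fd, write_fd = os.pipe()        # pipe() system call
    pid = os.fork()
    if pid == 0:                         # child: the writer end
        os.close(read_fd)
        os.write(write_fd, message)
        os.close(write_fd)
        os._exit(0)
    os.close(write_fd)                   # parent: the reader end
    data = os.read(read_fd, len(message))
    os.close(read_fd)
    os.waitpid(pid, 0)                   # reap the child
    return data

if __name__ == "__main__":
    print(send_through_pipe(b"ping"))    # b'ping'
```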
6.Protection and Security System Calls: These calls manage access permissions and enforce security
policies.
Examples:
● chmod(): Changes the permissions of a file.
● setuid(): Sets the user ID for a process.
● umask(): Sets the file mode creation mask.
Q4 Client-server Model
Ans. In the client-server model, all the kernel does is handle the communication between clients and
servers. By splitting the operating system into parts, each of which handles only one facet of the
system, such as file service, process service, terminal service, or memory service, each part becomes
small and manageable. Furthermore, because all the servers run as user-mode processes, and not in
kernel mode, they do not have direct access to the hardware. As a consequence, if a bug in the file
server is triggered, the file service may crash, but this will not usually bring the whole machine down.
Another advantage of the client-server model is its adaptability to use in distributed systems. If
a client communicates with a server by sending it messages, the client need not know whether
the message is handled locally on its own machine, or whether it was sent across a network to a
server on a remote machine. As far as the client is concerned, the same thing happens in both cases:
a request was sent and a reply came back.
Q5.Exokernel?
Ans. Exokernel is a highly efficient and minimalist operating system architecture that aims to provide
applications with as much direct access to hardware resources as possible while ensuring security and
isolation. Developed as an alternative to traditional OS designs, Exokernel focuses on removing
abstractions imposed by the operating system, giving developers more control over resource
management.
1. Resource Allocation: The Exokernel allocates hardware resources like memory, CPU, and disk space
directly to applications, ensuring security through access control mechanisms.
2. Secure Binding: Secure binding allows applications to securely access and control resources. It uses
techniques like tagging or access control lists to track ownership.
Examples:
MIT Exokernel: Developed at MIT, it is a proof-of-concept that demonstrates the feasibility of the
Exokernel design.
Xok: An extension of the MIT Exokernel, paired with the ExOS library operating system.
Key characteristics of the layered operating system structure:
● Modularity: The system is divided into distinct layers, each with a specific function.
● Abstraction: Higher layers are abstracted from the details of lower layers, reducing complexity for
developers working on higher layers.
● Isolation: Changes in one layer typically do not affect other layers, ensuring better fault isolation
and system stability.
● Controlled Communication: Layers communicate only with their immediate neighbors, adhering to
strict interfaces and reducing dependencies.
Structure of Layers
1. Hardware Layer (Layer 0): This is the bottom-most layer consisting of physical hardware such as the
CPU, memory, storage, and I/O devices.The layer provides raw computing resources that the OS
manages. It does not perform any management tasks.
2.Kernel Layer (Layer 1): The kernel is the core of the operating system and directly interacts with the
hardware layer. It serves as a foundation for higher layers.
3. Device Drivers Layer (Layer 2): Device drivers interface with hardware devices (e.g., printers, storage
devices) and provide a unified interface for the OS to access these devices. Translate OS commands into
device-specific operations. Handle interrupts and errors during device communication.
4.System Utilities Layer (Layer 3): Provides essential system utilities and libraries that support the
functioning of the operating system.
5.User Interface Layer (Layer 4): Provides an interface for user interaction with the system.
Types of interfaces:
● Command-Line Interface (CLI): Accepts text-based commands (e.g., Linux Terminal).
● Graphical User Interface (GUI): Offers a visual interface with menus, windows, and icons (e.g.,
Windows, macOS).
6.Application Layer (Layer 5): This is the topmost layer where user applications like word processors,
web browsers, and games run.
Applications rely on OS-provided APIs and libraries to perform tasks such as file operations and memory
allocation.
2.Process Creation: Processes can create other processes, leading to a hierarchy of parent and child
processes. The process creation mechanism is fundamental to multitasking.
● Parent Process Initiates Creation: A parent process creates a child process using system calls
like fork() (in Unix/Linux).
● Resource Allocation:The OS allocates resources (memory, CPU time, I/O) to the child process.
● Execution Context Setup: The new process inherits some attributes from the parent (e.g., open
files, environment variables).
● Child Process Execution: The child process may execute the same program as the parent or
load a new program using system calls like exec().
Unix/Linux:
● fork() creates a child process that is a duplicate of the parent.
● exec() replaces the process's memory with a new program.
Windows:
● CreateProcess() creates a new process.
3.Process State Transitions: A process can exist in one of several states, depending on its current
activity and the availability of resources. These states are managed by the OS to enable multitasking.
a. New: The process is being created but has not yet started execution.
b. Ready: The process is prepared to execute but is waiting for CPU availability.
c. Running: The process is currently executing on the CPU.
d. Waiting: The process is waiting for an event or I/O operation to complete.
e. Terminated: The process has finished execution and is being removed from the system.
4.Process Termination: A process terminates when it completes its execution or is forcibly terminated by
the OS or user. After termination, the process's resources are reclaimed by the OS.
● Normal Completion: The process successfully executes its instructions and exits.
● Error Conditions: Runtime errors like division by zero, invalid memory access, or file not found.
● Manual Termination: A user or administrator kills the process using commands like kill
(Unix/Linux) or End Task (Windows).
● Parent Termination: If a parent process terminates, some systems also terminate its child
processes.
● Resource Shortages: The OS forcibly terminates processes during low-memory or high-CPU
usage conditions.
● Exit System Call: The process makes a system call (exit() in Unix/Linux) to signal completion.
● Resource Deallocation: The OS reclaims resources like memory, open files, and CPU time.
● Process Removal: The OS removes the process's entry from the PCB and scheduling queues.
1. Data Sharing:Processes may need to share data for tasks like database access or computations.
2. Synchronization: Ensures that processes work in coordination, especially when accessing shared
resources.
3. Modularity: Dividing tasks into smaller processes simplifies development and maintenance.
Types of IPC Mechanisms: IPC can be broadly categorized into two types: message passing and shared
memory.
Q9 Explain the concept of threads. Explain user-level and kernel-level threads. Explain multithreading,
thread libraries, threading issues and benefits of threads in detail.
Ans. Threads in Operating Systems: A thread is the smallest unit of a program that can be executed
independently. Threads are often referred to as "lightweight processes" because they share the same
process resources, such as memory and file handles, while operating independently within a process.
1. Thread vs. Process: A process is a heavy-weight entity with its own memory space and resources. A
thread is a light-weight entity that operates within the process’s memory space.
4. Thread Context:The context of a thread includes its register set, stack, and program counter.
Switching between threads involves saving and restoring this context.
Types of Threads
1. User-Level Threads (ULT)
Definition: User-level threads are managed entirely by a user-level library, and the kernel is unaware of
their existence.
Characteristics: Created and managed by user libraries. No kernel intervention is required for thread
management (e.g., creation, switching). All threads of a process share a single kernel thread.
Advantages:
● Efficiency: Thread creation, switching, and synchronization are faster as they are done in user
space.
● Custom Scheduling: Libraries can implement their own scheduling algorithms.
● Portability: Works across different operating systems without kernel modification.
2. Kernel-Level Threads (KLT)
Definition: Kernel-level threads are managed directly by the operating system’s kernel.
Characteristics: The kernel handles thread creation, scheduling, and management. Each thread is
represented by a kernel thread.
Advantages:
● True Parallelism: Threads can run in parallel on multiprocessor systems.
● Better Performance: Non-blocking system calls allow other threads to continue executing.
● Integration with OS: Thread management and scheduling are tightly integrated with the OS.
Comparison: User-level threads are cheaper to create and switch, but a blocking system call by one
thread can block the whole process; kernel-level threads carry more overhead, but can run in parallel
on multiple processors and block independently.
Multithreading
Definition:
Multithreading refers to the ability of a CPU or an operating system to execute multiple threads
concurrently. A multithreaded process contains multiple threads running in the same memory space.
Multithreading Models
● Many-to-One Model: Multiple user threads are mapped to a single kernel thread. Example:
Green threads in early Java versions.
● One-to-One Model: Each user thread is mapped to a kernel thread. Example: Windows and Linux
threading.
● Many-to-Many Model: Multiple user threads are mapped to an equal or smaller number of kernel
threads. Example: Solaris OS.
Thread Libraries
Thread libraries provide APIs for creating and managing threads. Examples include:
1. POSIX Threads (Pthreads): Standardized thread library for Unix-like systems. Provides functions for
thread creation, synchronization, and management.
2. Windows Threads: The native threading API on Windows systems.
3. Java Threads: Part of the Java API, offering higher-level abstractions for thread management.
Threading Issues
1. Race Conditions: Occurs when multiple threads access shared resources simultaneously without
proper synchronization, leading to unpredictable results.
2. Deadlocks: A situation where two or more threads are waiting for each other’s resources, causing a
cycle of dependencies and halting execution.
3. Starvation: Some threads may be denied resources indefinitely due to other threads holding priority.
4. Context Switching Overhead: Switching between threads requires saving and restoring thread
contexts, which can slow down execution.
5. Resource Sharing: Managing access to shared resources like memory and files requires
synchronization mechanisms (e.g., mutexes, semaphores).
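The race-condition and mutex points above can be illustrated with Python's threading module, whose Lock plays the role of a Pthreads mutex (a minimal sketch; increment_counter is a name of my own, and each thread performs a read-modify-write that the lock serializes):

```python
import threading

def increment_counter(n_threads=4, n_increments=25_000):
    """Each thread increments a shared counter; a Lock (mutex)
    serializes the read-modify-write so no updates are lost."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(n_increments):
            with lock:           # enter the critical section
                counter += 1     # shared-resource access
            # lock is released automatically on leaving the 'with' block

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

if __name__ == "__main__":
    print(increment_counter())   # 100000: no lost updates
```

Without the lock, the unsynchronized `counter += 1` (a load, an add, and a store) can interleave between threads and lose updates, which is exactly the race condition described above.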
Benefits of Threads
1. Responsiveness: Threads allow applications to remain responsive. For example, a GUI thread can
continue updating the interface while a background thread processes data.
2. Resource Sharing: Threads within a process share the same memory space, allowing faster
communication compared to inter-process communication.
3. Efficiency: Creating and managing threads is faster than processes due to shared resources.
Notes
Chapter 4
1.Process Creation: Process creation is a fundamental operation that occurs when a new process is
initialized. A process can create other processes, which are known as child processes. The process that
creates these child processes is called the parent process. This hierarchy of parent and child processes
forms a process tree.
2.Process Scheduling: Process scheduling determines the order in which processes execute. The goal
is to optimize CPU utilization, ensure fairness, and reduce waiting times.
Types of Schedulers:
Long-Term Scheduler:
● Decides which processes are admitted into the system for processing.
● Controls the degree of multiprogramming.
Short-Term Scheduler:
● Selects which process will execute next from the ready queue.
● Executes frequently and has a significant impact on system performance.
Medium-Term Scheduler:
● Temporarily removes processes from memory (swapping) to reduce load.
Scheduling Criteria:
● CPU Utilization: Maximize CPU usage.
● Throughput: Number of processes completed per unit time.
● Turnaround Time: Time taken for a process to complete execution.
● Waiting Time: Time spent in the ready queue.
● Response Time: Time between request submission and the first response.
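For a single process, the last two criteria follow directly from the definitions: turnaround time = completion time - arrival time, and waiting time = turnaround time - burst time. A minimal helper illustrating the arithmetic (process_metrics is a name of my own):

```python
def process_metrics(arrival, burst, completion):
    """Compute turnaround time (total time in the system) and
    waiting time (time spent in the ready queue, i.e. turnaround
    minus actual CPU burst time) for one process."""
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

if __name__ == "__main__":
    # A process arriving at t=1 with a 3-unit burst that finishes at t=8
    print(process_metrics(1, 3, 8))   # (7, 4)
```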
3.Process Termination: Process termination occurs when a process finishes its execution or is explicitly
stopped. Once a process is terminated, its resources are released and made available for other
processes.
● Normal Termination: The process completes its execution successfully. Example: Returning
from main() or calling exit().
● Error Termination: The process encounters a fatal error (e.g., segmentation fault, illegal
instruction).
Termination Process:
● The process executes a system call like exit() to signal completion.
● The OS deallocates memory, file handles, and other resources.
● The process is removed from the process table.
● The process’s termination status is communicated to its parent.
4.Process Synchronization: When multiple processes access shared resources, synchronization
ensures that the resources are used consistently and without conflict.
1. Critical Sections: Sections of code where shared resources are accessed need protection to prevent
race conditions.
2. Race Conditions: Occur when multiple processes access and manipulate shared data concurrently,
leading to unpredictable results.
Synchronization Mechanisms:
● Mutexes (Mutual Exclusion): A locking mechanism that allows only one process to access a
resource at a time.
● Semaphores: A signaling mechanism that controls access to shared resources.
● Monitors: High-level constructs that encapsulate shared resources and synchronization
mechanisms.
● Spinlocks: Processes continuously check for resource availability, suitable for short waiting
times.
5.Inter-Process Communication (IPC): IPC allows processes to exchange data and coordinate actions.
It is essential in multitasking systems where processes need to collaborate.
IPC Mechanisms:
1. Pipes: Unidirectional channels that carry data between related processes.
2. Message Queues: A queue maintained by the OS for sending and receiving messages.
3. Shared Memory: A memory region accessible by multiple processes for fast data sharing.
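A message-queue style exchange can be sketched with Python's multiprocessing.Queue, an OS-backed queue between processes (producer and collect_messages are my own names; System V C code would use msgget()/msgsnd()/msgrcv() instead):

```python
from multiprocessing import Process, Queue

def producer(queue):
    """Child process: places messages into the queue."""
    for msg in ("job-1", "job-2", "job-3"):
        queue.put(msg)

def collect_messages():
    """Parent process: starts the producer and receives its messages."""
    queue = Queue()
    child = Process(target=producer, args=(queue,))
    child.start()
    messages = [queue.get() for _ in range(3)]  # get() blocks until data arrives
    child.join()
    return messages

if __name__ == "__main__":
    print(collect_messages())   # ['job-1', 'job-2', 'job-3']
```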
Q2. PCB?
Ans. The Process Control Block (PCB) is a data structure used by operating systems to store all
information about a specific process. It acts as a repository for information that the OS needs to manage
the execution of the process and maintain its state.
Structure of a PCB: The PCB consists of various fields that hold specific information about a process.
The fields may vary depending on the operating system, but the general components include the
following:
1. Process State: Indicates the current state of the process, such as:
● New: Process is being created.
● Ready: Process is ready to run.
● Running: Process is currently executing.
● Waiting: Process is waiting for an event or I/O.
● Terminated: Process has completed execution.
2. Program Counter (PC): Stores the address of the next instruction to be executed.
3. CPU Registers: The current values of the CPU registers (e.g., accumulator, base register, stack
pointer) are stored in the PCB when the process is not executing. This ensures the process can resume
correctly during context switching.
1. Process Tracking: The OS uses the PCB to keep track of all active processes. PCBs are stored in a
process table, an array or linked list maintained by the OS.
2. Context Switching: During context switching, the current process's state is saved in its PCB, and the
state of the next process is loaded from its PCB. This allows the OS to resume processes exactly where
they left off.
3. Scheduling: Scheduling algorithms use information in the PCB (e.g., priority, process state) to decide
which process to execute next.
4. Resource Allocation: The OS uses PCB data to allocate and deallocate resources such as CPU time,
memory, and I/O devices.
Lifecycle of a PCB
1. Creation: When a process is created, a PCB is allocated and initialized with default values. The PCB is
added to the process table.
2. Ready: The PCB is updated to indicate that the process is in the ready queue. Scheduling information
is adjusted based on priority or other criteria.
3. Running: The PCB’s state changes to "Running." The CPU uses the program counter and register
values stored in the PCB to execute the process.
4. Waiting: If the process needs to wait for I/O or an event, the PCB’s state changes to "Waiting."
Information about the pending I/O or event is recorded.
5. Termination: When the process completes execution, the PCB is marked as "Terminated."
The OS deallocates the PCB and releases resources.
Chapter 5
Inter process communication
Q1.Cooperating Processes?
Ans. Cooperating Processes: Concurrent processes executing in the operating system may cooperate
(either constructively or destructively) with other processes. Processes are cooperating if they can affect
each other. The simplest example of how this can happen is where two processes are using the same file:
one process may be writing to a file while another process is reading from it, so what is being read
may be affected by what is being written. Processes cooperate by sharing data.
Cooperation is important for several reasons: 1.Information Sharing: Several processes may need
to access the same data (such as data stored in a file).
2.Computation Speedup: A task can often be run faster if it is broken into subtasks and distributed
among different processes. For example, the matrix multiplication code you saw in class. This depends
upon the processes sharing data. (Of course, real speedup also required having multiple CPUs that can
be shared as well.) For another example, consider a web server which may be serving many clients. Each
client can have their own process or thread helping them. This allows the
server to use the operating system to distribute the computer’s resources, including CPU time,
among the many clients.
3.Modularity: It may be easier to organize a complex task into separate subtasks, and then have
different processes or threads running each subtask. Example: A single server process dedicated to a
single client may have multiple threads running – each performing a different task for the client.
4.Convenience: An individual user can run several programs at the same time, to perform some task.
Example:
A network browser is open, while the user has a remote terminal program running (such as
telnet), and a word processing program editing data. Cooperation between processes requires
mechanisms that allow processes to communicate data between each other and synchronize
their actions so they do not harmfully interfere with each other. The purpose of this note is to
consider ways that processes can communicate data with each other, called Inter-process
Communication (IPC).
Q2. Do you think a single user system requires process communication? Support your answer
with logic.
Ans. Yes, a single-user system can require Inter-Process Communication (IPC), depending on the nature
of tasks and system design. Even in a system designed for a single user, there can be multiple processes
running concurrently, and these processes may need to exchange information or coordinate their
activities.
Supporting Logic
4. Shared Resources: Processes in a single-user system might need to share resources, such as:
● Accessing the same file or database.
● Coordinating access to hardware resources (e.g., printers, disk drives) to avoid conflicts.
Chapter- 6
CPU Scheduling
Q1.CPU Scheduling
Ans. CPU Scheduling is the process by which the operating system determines which process in the
ready queue should be allocated to the CPU for execution. It is a fundamental function of multitasking
operating systems to ensure efficient CPU utilization and fair resource sharing among processes.
1. Preemptive Scheduling: The CPU can be taken away from a running process before it finishes. Used
in time-sharing systems.
Examples: Round Robin, Shortest Remaining Time First.
2. Non-Preemptive Scheduling: Once a process starts executing, it cannot be preempted until it finishes.
Examples: First Come First Serve, Priority Scheduling.
A.The First Come First Serve (FCFS) scheduling algorithm is the simplest and most straightforward
CPU scheduling technique. In this method, processes are executed in the exact order in which they arrive
in the ready queue, similar to a queue in real life, such as a ticket counter.
1. Type: Non-preemptive: Once a process starts execution, it runs to completion without being
interrupted.
2. Mechanism: The CPU is assigned to the process at the front of the ready queue. Processes wait in a
queue based on their arrival times.
Problem Statement
Consider the following set of processes with their arrival times and burst times:
| Process | Arrival Time | Burst Time |
| P1 | 0 | 5 |
| P2 | 1 | 3 |
| P3 | 2 | 8 |
Execution Steps
1. Order of Execution:
● Since P1 arrives first, it will be executed first.
● P2 arrives next and will execute after P1.
● P3 will execute last.
2. Gantt Chart:
● A graphical representation of the CPU's execution order.
● | P1 | P2 | P3 |
0 5 8 16
3. Completion Time:
● P1 finishes at time 5.
● P2 finishes at time 8 (5 + 3).
● P3 finishes at time 16 (8 + 8).
4. Turnaround Time (TAT = Completion Time - Arrival Time):
● P1: 5 - 0 = 5
● P2: 8 - 1 = 7
● P3: 16 - 2 = 14
5. Waiting Time (WT = TAT - Burst Time):
● P1: 5 - 5 = 0
● P2: 7 - 3 = 4
● P3: 14 - 8 = 6
6. Summary table:
| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time | Waiting Time |
| P1 | 0 | 5 | 5 | 5 | 0 |
| P2 | 1 | 3 | 8 | 7 | 4 |
| P3 | 2 | 8 | 16 | 14 | 6 |
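The completion, turnaround, and waiting times above can be reproduced with a short simulation (a sketch in Python; fcfs and the (name, arrival, burst) tuple layout are my own choices):

```python
def fcfs(processes):
    """Simulate First Come First Serve scheduling.
    processes: list of (name, arrival, burst), sorted by arrival time.
    Returns {name: (completion, turnaround, waiting)}."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        time = max(time, arrival)     # CPU may sit idle until arrival
        time += burst                 # run to completion, no preemption
        turnaround = time - arrival
        results[name] = (time, turnaround, turnaround - burst)
    return results

if __name__ == "__main__":
    table = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])
    print(table)  # {'P1': (5, 5, 0), 'P2': (8, 7, 4), 'P3': (16, 14, 6)}
```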
Advantages of FCFS
1. Simple and Easy to Implement: FCFS is straightforward as it requires minimal overhead in scheduling
logic.
2. Fair for Sequential Processes: Each process is treated equally, based on its arrival time.
3. Good for Batch Systems: Works well in environments where process completion time is not critical.
Disadvantages of FCFS
1. Convoy Effect: If a long process arrives first, shorter processes must wait, leading to inefficient CPU
utilization and longer average waiting times.
2. Poor Performance for Interactive Systems: High response times for processes, making it unsuitable for
real-time or interactive environments.
B. Shortest Job Next (SJN), also called Shortest Job First (SJF): The CPU is assigned to the waiting
process with the smallest burst time.
Key Characteristics
● Type: Non-preemptive.
● Selection Criterion: The process with the smallest burst time is chosen.
● Efficiency: Reduces average waiting time compared to First Come First Serve (FCFS).
● Drawback: Requires knowledge of burst times in advance, which is not always feasible.
| Process | Arrival Time | Burst Time |
| P1 | 0 | 6 |
| P2 | 1 | 8 |
| P3 | 2 | 7 |
| P4 | 3 | 3 |
Execution Steps:
Gantt Chart:
| P1 | P4 | P3 | P2 |
0 6 9 16 24
Calculation:
| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
| P1 | 0 | 6 | 6 | 6 | 0 |
| P2 | 1 | 8 | 24 | 23 | 15 |
| P3 | 2 | 7 | 16 | 14 | 7 |
| P4 | 3 | 3 | 9 | 6 | 3 |
Average Waiting Time (AWT):
AWT = total waiting time / number of processes = (0 + 15 + 7 + 3) / 4 = 6.25 ms
Advantages of SJN
1. Optimal Waiting Time: Minimizes average waiting time for all processes.
2. Efficient for Batch Systems: Well-suited for environments where burst times are predictable.
Disadvantages of SJN
1. Starvation: Long processes may be postponed indefinitely if shorter jobs keep arriving.
2. Inaccurate Burst Time: Relies on accurate prediction of burst times, which is not always feasible.
C. Shortest Remaining Time First (SRTF): The preemptive version of SJN; whenever a new process
arrives, the CPU is given to the process with the smallest remaining burst time.
| Process | Arrival Time | Burst Time |
| P1 | 0 | 6 |
| P2 | 1 | 8 |
| P3 | 2 | 7 |
| P4 | 3 | 3 |
Execution Steps (at t = 3, P1 and the newly arrived P4 both have 3 units remaining; this example
breaks the tie in favor of the newly arrived P4):
| P1 | P4 | P1 | P3 | P2 |
0 3 6 9 16 24
Calculation:
| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
| P1 | 0 | 6 | 9 | 9 | 3 |
| P2 | 1 | 8 | 24 | 23 | 15 |
| P3 | 2 | 7 | 16 | 14 | 7 |
| P4 | 3 | 3 | 6 | 3 | 0 |
Advantages of SRTF
● Reduced Waiting and Turnaround Time: Offers better performance compared to SJN.
● Dynamic Adaptation: Can handle processes arriving dynamically.
Disadvantages of SRTF
● High Overhead: Frequent context switching can degrade performance.
● Starvation: Longer processes may suffer if shorter processes keep arriving.
● Complexity: More challenging to implement compared to SJN.
D. Round Robin (RR) CPU Scheduling : Round Robin (RR) is one of the simplest and most widely used
preemptive CPU scheduling algorithms. It is designed especially for time-sharing systems, where each
process gets a fixed time slot (quantum) for execution in a cyclic manner. If a process doesn’t complete its
execution within its time slice, it is moved to the end of the ready queue, and the CPU is allocated to the
next process in the queue.
1. Preemptive: RR is preemptive because processes are interrupted after their allocated time slice,
ensuring fairness among processes.
2. Time Quantum: A fixed time slice or quantum is set (e.g., 2 ms or 5 ms); it determines how long a
process can execute before being preempted.
3. Fairness: Each process gets equal time for execution, making it fair for all processes.
4. Cyclic Nature: Processes are executed in the order they arrive in the ready queue and are placed at
the end of the queue after their time slice expires.
5. Suitable for Interactive Systems: Ensures better response times for processes, making it ideal for
multitasking and time-sharing systems.
Example
Problem Statement:
Given the following processes, their arrival times, and burst times, schedule them using Round Robin with
a time quantum of 4ms.
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
Execution Steps
● P1 runs 0–4 (4 ms left); P2 runs 4–8 and completes; P3 runs 8–12 (5 ms left); P4 runs 12–16 (1 ms left).
● P1 runs 16–20 and completes; P3 runs 20–24 (1 ms left); P4 runs 24–25 and completes; P3 runs 25–26 and completes.
Gantt Chart
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26
Completion Details: Let’s compute the completion time (CT), turnaround time (TAT), and waiting time
(WT).
Formulas: TAT = CT − Arrival Time; WT = TAT − Burst Time.
| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
|---------|--------------|------------|-----------------|-----------------------|-------------------|
| P1      | 0            | 8          | 20              | 20                    | 12                |
| P2      | 1            | 4          | 8               | 7                     | 3                 |
| P3      | 2            | 9          | 26              | 24                    | 15                |
| P4      | 3            | 5          | 25              | 22                    | 17                |
Average Waiting Time (AWT) = (12 + 3 + 15 + 17) / 4 = 11.75 ms
Diagrammatic Representation
Below is a visual depiction of Round Robin scheduling, showing how processes are executed in time
slices.
Gantt Chart:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26
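The cyclic ready queue described above maps naturally onto a FIFO queue. The sketch below is illustrative (the `round_robin` helper is not from the text); it replays the example with a 4 ms quantum:

```python
from collections import deque

# Round Robin with time quantum 4 ms, using the example processes.
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
QUANTUM = 4

def round_robin(processes, quantum):
    arrivals = sorted(processes, key=lambda p: p[1])
    remaining = {n: b for n, a, b in processes}
    queue, finish, time, i = deque(), {}, 0, 0
    while queue or i < len(arrivals):
        if not queue:                          # CPU idle until next arrival
            time = max(time, arrivals[i][1])
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0]); i += 1
        cur = queue.popleft()
        run = min(quantum, remaining[cur])     # one time slice (or less)
        time += run
        remaining[cur] -= run
        # admit processes that arrived during this slice, then requeue cur
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0]); i += 1
        if remaining[cur]:
            queue.append(cur)                  # back of the ready queue
        else:
            finish[cur] = time
    return finish

print(round_robin(procs, QUANTUM))   # {'P2': 8, 'P1': 20, 'P4': 25, 'P3': 26}
```

Note the ordering detail: processes that arrive during a slice are enqueued before the preempted process is requeued, which is what puts P1 behind P2, P3, and P4 after its first slice.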
Choosing the Time Quantum
1. Small Quantum: Increases context-switching overhead. Improves response time but may degrade
throughput.
2. Large Quantum: Reduces context switching. If too large, RR behaves like FCFS, defeating its
purpose.
Advantages of Round Robin
1. Fairness: Every process receives an equal share of CPU time.
2. Improved Response Time: Processes are executed periodically, ensuring quick responses for
interactive systems.
3. Efficient for Time-Sharing:Well-suited for environments where tasks are of equal priority.
4. Dynamic Adaptation: The performance can be tuned by adjusting the time quantum.
Disadvantages of Round Robin
1. Context-Switching Overhead: Frequent preemption adds scheduling overhead.
2. Impact of Time Quantum: If the quantum is too small, overhead increases; if it is too large, it behaves
like First Come First Serve (FCFS).
3. Not Ideal for Varying Burst Times: Longer processes may still take a long time to complete due to
cyclic execution.
E. Multilevel Queue Scheduling: Multilevel Queue Scheduling is a CPU scheduling algorithm that divides
the ready queue into multiple separate queues based on the type, priority, or characteristics of processes.
Each queue has its own scheduling policy, and processes are permanently assigned to a queue
depending on specific criteria, such as priority, process type, or memory size.
1. Multiple Queues: The ready queue is divided into several smaller queues, each handling a different
type of process.
Example queues: System processes, Interactive processes, Batch jobs.
2. Permanent Assignment: Once a process is assigned to a queue, it remains there throughout its
lifetime.
3. Separate Scheduling Policies: Each queue has its own scheduling algorithm, such as Round Robin for
interactive processes or First Come First Serve (FCFS) for batch jobs.
4. Inter-Queue Scheduling: A predefined priority governs which queue’s processes are selected for
execution. Higher-priority queues are serviced before lower-priority ones.
+------------------+
| System Queue | <- Highest Priority
(Scheduled using FCFS)
+------------------+
| Interactive Queue| <- Medium Priority
(Scheduled using RR)
+------------------+
| Batch Queue | <- Lowest Priority
(Scheduled using FCFS)
+------------------+
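The fixed-priority dispatch in the diagram can be sketched in a few lines. This is an illustrative toy (queue names follow the diagram; process names are placeholders): queues are checked from highest to lowest priority, and the first non-empty one supplies the next process.

```python
from collections import deque

# One FIFO queue per level, ordered as in the diagram above.
queues = {
    "system":      deque(),   # highest priority (FCFS)
    "interactive": deque(),   # medium priority (RR)
    "batch":       deque(),   # lowest priority (FCFS)
}

def next_process(qs):
    """Return the next process under fixed-priority inter-queue scheduling."""
    for level in ("system", "interactive", "batch"):
        if qs[level]:
            return qs[level].popleft()
    return None              # all queues empty -> CPU idle

queues["batch"].append("P3")
queues["interactive"].append("P2")
queues["system"].append("P1")
print(next_process(queues))   # P1 (system queue is always served first)
print(next_process(queues))   # P2
print(next_process(queues))   # P3
```

This also makes the starvation risk visible: as long as the system queue is non-empty, the batch queue is never consulted.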
1. Fixed Priority Scheduling: Higher-priority queues are always serviced first. Processes in
lower-priority queues may face starvation.
2. Time-Slice Scheduling: Time slices are allocated to each queue, ensuring no queue is ignored.
Example: System queue gets 70% of CPU time, interactive queue gets 20%, and batch queue gets 10%.
Example Problem
Processes:
Process  Queue        Arrival Time  Burst Time
P1       System       0             5
P2       Interactive  1             7
P3       Batch        2             4
P4       Interactive  3             5
P5       Batch        4             6
Execution Steps
● At t = 0, P1 (System Queue) is executed first because it belongs to the highest-priority queue, running 0–5.
● P2 (Interactive Queue) then runs for its 4 ms time quantum (5–9), followed by P4 for 4 ms (9–13).
● P2 resumes and finishes its remaining 3 ms (13–16), then P4 finishes its remaining 1 ms (16–17).
● Finally, processes from the Batch Queue (P3 and P5) are executed using FCFS (17–21 and 21–27).
Gantt Chart
| P1 | P2 | P4 | P2 | P4 | P3 | P5 |
0    5    9    13   16   17   21   27
Completion Details
| Process | Arrival Time | Burst Time | Completion Time | Turnaround Time (TAT) | Waiting Time (WT) |
|---------|--------------|------------|-----------------|-----------------------|-------------------|
| P1      | 0            | 5          | 5               | 5                     | 0                 |
| P2      | 1            | 7          | 16              | 15                    | 8                 |
| P3      | 2            | 4          | 21              | 19                    | 15                |
| P4      | 3            | 5          | 17              | 14                    | 9                 |
| P5      | 4            | 6          | 27              | 23                    | 17                |
Advantages
1. Specialization: Processes are grouped based on type, allowing the system to use tailored scheduling
policies.
2. Efficient Resource Utilization: System-critical processes get priority, ensuring faster response times for
important tasks.
Disadvantages
1. Starvation: Processes in lower-priority queues may wait indefinitely while higher-priority queues are served.
2. Rigid Structure: Processes are permanently assigned to a queue, which may not adapt well to
changing system dynamics.
3. Complexity: Managing multiple queues and their scheduling policies can be complex.
1. Long-Term Scheduling
Definition: Long-term scheduling (also called job scheduling) selects processes from the job pool on disk
and loads them into main memory for execution.
Key Characteristics
● Frequency: Occurs least frequently of the three scheduling levels.
● Goal: Control the degree of multiprogramming and maintain a good mix of CPU-bound and I/O-bound processes.
● Process States Involved: Transitions from New to Ready.
Example
In a batch processing system, jobs waiting in a queue are selected based on their priority or resource
requirements.
Advantages
● Improves system throughput by balancing workloads.
● Controls system performance by limiting the number of processes.
2. Medium-Term Scheduling
Definition: Medium-term scheduling temporarily removes processes from the main memory to reduce the
load on the CPU and other resources. These processes are placed in a suspended state and can be
reintroduced later.
Key Characteristics
● Frequency: Occurs more frequently than long-term scheduling but less than short-term
scheduling.
● Goal: To optimize system performance by swapping processes in and out of memory.
● Process States Involved: Transitions from Ready/Running to Suspended and back.
Example: A process that is idle or waiting for I/O may be swapped out to disk to make room for other
active processes.
Advantages
● Improves memory management by allowing more processes to be active.
● Balances the load on the CPU by suspending and resuming processes.
3. Short-Term Scheduling
Definition: Short-term scheduling, also known as CPU scheduling, selects a process from the ready
queue to execute on the CPU.
Key Characteristics
● Frequency: Occurs most frequently; decisions are made every time the CPU is idle or a process
terminates.
● Goal: Maximize CPU utilization, minimize response time, and ensure fairness.
● Process States Involved: Transitions from Ready to Running state.
Common Algorithms
● First Come First Serve (FCFS): Executes processes in the order they arrive.
● Shortest Job Next (SJN): Selects the process with the shortest burst time.
● Round Robin (RR): Allocates fixed time slices to each process.
● Priority Scheduling: Prioritizes processes based on assigned priority values.
● Multilevel Queue Scheduling: Divides processes into multiple priority queues.
Advantages
● Ensures efficient CPU utilization.
● Enhances system responsiveness.
Chapter 10-11
Memory management
Q1. What is memory management in an operating system?
Ans. Memory management is a critical function of an operating system (OS) that ensures efficient use of
a computer's memory resources. It involves allocating, deallocating, and managing memory to optimize
system performance and allow multiple programs to run simultaneously. Here's a comprehensive
breakdown:
1. Objectives of Memory Management
● Efficient Utilization: Ensure memory is used effectively by allocating the right amount of space to
each process.
● Process Isolation: Prevent one process from interfering with another’s memory.
● Maximizing Multitasking: Allow multiple programs to run concurrently by sharing memory
resources.
● Memory Protection: Safeguard data from unauthorized access or corruption.
● Dynamic Allocation: Adjust memory allocation based on process needs during execution.
2. Memory Hierarchy: Memory management operates across various memory levels, each differing in
speed, size, and cost:
● Registers: Fastest and smallest, located inside the CPU.
● Cache: Small, fast memory between the CPU and main memory.
● Main Memory (RAM): Primary working storage for running programs.
● Secondary Storage (disk): Large and slow; holds swap space and files.
3. Memory Management Techniques
a. Memory Allocation: Static Allocation: Memory is allocated at compile time and remains fixed. Dynamic
Allocation: Memory is allocated at runtime, allowing flexibility.
c. Swapping: When RAM is full, the OS moves inactive processes to secondary storage (swap space)
and retrieves them when needed.
d. Paging: Divides memory into fixed-size blocks (pages). Avoids fragmentation and allows
non-contiguous memory allocation.
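Paging translation splits a logical address into a page number and an offset, then looks the page up in a page table. The sketch below is illustrative: the page size and page-table entries are assumed values, not from the text.

```python
# Paging address translation sketch with a 4 KB page size.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}          # page -> frame (hypothetical entries)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split the logical address
    frame = page_table[page]                    # KeyError models a page fault
    return frame * PAGE_SIZE + offset           # reassemble physical address

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 8196
```

Because each page maps to a frame independently, the frames need not be contiguous, which is exactly why paging avoids external fragmentation.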
e. Segmentation: Divides memory into variable-sized segments based on logical divisions like functions or
arrays.
f. Virtual Memory: Allows the system to use more memory than physically available by utilizing disk space
as an extension of RAM.
Steps in the Execution of a Program
a. Writing the Program: The user writes the program in a high-level language (e.g., C++, Java, Python).
This is done using an editor or Integrated Development Environment (IDE).
Output: A source code file (e.g., program.cpp).
b. Compilation: The source code is converted into machine-readable instructions through a compiler.
Compilation involves:
● Lexical Analysis: Tokenizing the source code.
● Syntax Analysis: Ensuring correct syntax based on grammar rules.
● Semantic Analysis: Checking for logical errors.
● Optimization: Improving code efficiency.
● Code Generation: Translating the code into machine language (object code).
● Output: Object code file (e.g., program.o).
c. Linking: Links the object code with additional required libraries or other modules.
Combines:
User-defined functions.
Standard libraries (e.g., math, I/O libraries).
External libraries.
The linker resolves symbols and addresses used in the program.
d. Loading: The executable file is loaded into the main memory by the loader.
The OS assigns necessary memory and sets up the program for execution.
Steps involved:
● Loading the code segment (instructions).
● Loading the data segment (variables, constants).
● Allocating a stack for execution flow.
c. Linker: Resolves external references and combines object files into a single executable.
d. Loader: Places the executable file into memory, ready for execution.
e. Operating System: Manages memory, CPU scheduling, and system resources during execution.
3. Example Workflow
For a C++ program: program.cpp (source code) → compiler → program.o (object code) → linker →
executable → loader → running process in main memory under OS control.
1. Logical Address Space: A logical address is the address generated by the CPU during program
execution. It represents a virtual address that does not directly correspond to a physical location in
memory. These addresses are used by programs and mapped to physical addresses by the operating
system and hardware (Memory Management Unit - MMU).
Example
Suppose a program references a variable at address 0x100 in its code. This is a logical address and must
be translated to a physical address before accessing memory.
2. Physical Address Space: A physical address refers to the actual location in the computer's main
memory (RAM). These addresses are visible to the hardware and used to fetch or store data.
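A simple form of logical-to-physical translation uses a base and a limit register in the MMU. The sketch below is illustrative, with assumed register contents; it shows how the logical address 0x100 from the example above would be relocated.

```python
# MMU relocation sketch: logical address = offset checked against LIMIT,
# then added to BASE. Register values are hypothetical.
BASE, LIMIT = 0x4000, 0x1000

def mmu_translate(logical):
    if logical >= LIMIT:                 # protection check first
        raise MemoryError("address out of range for this process")
    return BASE + logical                # relocate into physical memory

print(hex(mmu_translate(0x100)))   # 0x4100
```

The same check is what enforces process isolation: a logical address can never reach memory outside the [BASE, BASE + LIMIT) window.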
1. Overlays: Overlaying allows a program larger than the available memory to run by keeping only the
currently needed parts in memory at a time.
Purpose: Used to manage memory constraints by breaking down a large program into smaller,
manageable parts.
Implementation:
● The program is divided into logical parts that do not need to be in memory simultaneously (e.g.,
initialization, computation, and cleanup).
● Each part is loaded into the same memory region (overlapping one another) as required.
● The overlay manager ensures only the necessary section is loaded at any point.
Example
● Consider a program with three sections: A, B, and C. If only one section can fit into memory at a
time:
● First, section A is loaded and executed.
● When section B is needed, A is removed, and B is loaded.
● Similarly, section C replaces B when required.
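The A/B/C example above can be mimicked with a toy overlay manager. This is an illustrative sketch (section bodies and names are placeholders): one shared region holds whichever section is currently needed.

```python
# Toy overlay manager: only one section occupies the shared region at a time.
sections = {"A": "initialization code", "B": "computation code", "C": "cleanup code"}
region = {"loaded": None}             # the single shared memory region

def run_section(name):
    if region["loaded"] != name:      # evict the resident section, load this one
        region["loaded"] = name
    return f"running {name}: {sections[name]}"

for step in ("A", "B", "C"):          # A, then B replaces A, then C replaces B
    print(run_section(step))
print(region["loaded"])               # C (only the last section is resident)
```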
2. Swapping: Swapping is a memory management technique where entire processes are moved
between main memory (RAM) and secondary storage (disk) to free up memory for other processes.
Implementation:
● The OS selects a process to swap out based on predefined criteria (e.g., inactivity).
● The process is written to disk (swap area).
● When the process is needed again, it is swapped back into memory.
Types of Swapping
1. Standard Swapping: The entire process image is moved between main memory and the swap area.
2. Demand Swapping: Only necessary parts of the process are swapped, similar to paging.
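The swap-out/swap-in cycle described above can be sketched with an inactivity-based victim choice. This is illustrative only (process names and timestamps are assumed): the process idle the longest is written to the swap area and later brought back.

```python
# Swapping sketch: pick the most-idle process as the swap-out victim.
processes = {"P1": {"last_active": 100.0},
             "P2": {"last_active": 42.0},      # idle the longest
             "P3": {"last_active": 250.0}}
swap_area = {}                                 # models the disk swap space

def swap_out(procs, swap):
    victim = min(procs, key=lambda p: procs[p]["last_active"])
    swap[victim] = procs.pop(victim)           # write process image to disk
    return victim

def swap_in(name, procs, swap):
    procs[name] = swap.pop(name)               # bring the image back into RAM

print(swap_out(processes, swap_area))          # P2 (smallest last_active)
swap_in("P2", processes, swap_area)
print(sorted(processes))                       # ['P1', 'P2', 'P3']
```

Real systems use richer criteria (priority, memory footprint, wait state), but the select/evict/reload structure is the same.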