OPERATING SYSTEM
Concepts
+ Operating System & its need : Operating Systems are essential software programs
that manage computer hardware and software resources and provide a user interface
to interact with the computer. The need for an Operating System arises because of the
following reasons :
1. Resource Management : Operating Systems manage hardware and software
resources like CPU, memory, disk, and input/output devices efficiently.
2. User Interface : Operating Systems provide a user interface to interact with the
computer, which makes it easy for users to use computers without knowledge of low-
level details.
3. Convenience : Operating Systems make it possible to run multiple programs
simultaneously, and they take care of scheduling, memory allocation, and other low-
level details automatically.
+ Functions of OS: The primary functions of an Operating System are:
1. Process Management : It involves the creation, scheduling, and management of
processes.
2. Memory Management : It involves the allocation and deallocation of memory to
processes and managing memory to prevent conflicts.
3. File Management : It involves managing and organizing files on disk storage.
4. Device Management : It involves managing and controlling input/output devices.
5. Security : It involves protecting the computer system from unauthorized access,
malware, and other security threats.
6. User Interface : It involves providing a user interface to interact with the computer.
+ Types of OS:
1. Simple Batch Systems : In this type of Operating System, the user submits a batch of
jobs, and the Operating System executes them one by one without user intervention.
2. Multiprogrammed Batched Systems: In this type of Operating System, the Operating
System loads multiple programs in the memory simultaneously and executes them
concurrently, which increases the CPU utilization.
3. Time-Sharing Systems : In this type of Operating System, multiple users can interact
with the computer simultaneously by sharing the CPU time. The Operating System
schedules tasks to ensure that each user gets a fair share of the CPU time.
4. Parallel Systems : In this type of Operating System, multiple CPUs work together to
solve a problem faster than a single CPU.
5. Distributed Systems : In this type of Operating System, multiple computers work
together to solve a problem.
6. Real-Time Systems : In this type of Operating System, the system must respond to
events within a specified time frame. These systems are used in applications like
aviation, defense, and robotics, where timely response is critical.
Operating-System Structures
* System Components : The major system components of an Operating System are :
1. Kernel: The kernel is the central component of the Operating System that manages
hardware and software resources.
2. Device Drivers: Device drivers are programs that interact with hardware devices to
control and manage them.
3. System Libraries: System libraries are collections of pre-written code that provide
services to applications.
4. Utility Programs: Utility programs are software programs that perform system-related
tasks like disk formatting, file compression, and backup.
+ Operating System Services : The primary services provided by an Operating System
are:
1. Program Execution : The Operating System provides services to load, execute and
terminate programs.
2. I/O Operations : The Operating System provides services to manage input/output
operations.
3. File System Manipulation : The Operating System provides services to manage files
and directories.
4. Communications : The Operating System provides services to enable communication
between processes.
5. Error Detection : The Operating System provides services to detect and recover from
errors in the system.
+ System Calls : System calls are the interface between a user program and the
Operating System. User programs request Operating System services through system
calls. System calls provide a standard interface to access Operating System services.
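As an illustration, system calls can be observed through Python's os module, whose functions are thin wrappers over the underlying POSIX calls. This is a minimal sketch assuming a POSIX system; the file path is hypothetical.

```python
import os

# Minimal sketch: these os functions wrap the corresponding POSIX
# system calls (open, write, read, close). The path is hypothetical.
path = "/tmp/syscall_demo.txt"
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open() syscall
os.write(fd, b"hello")                                     # write() syscall
os.close(fd)                                               # close() syscall

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 5)                                      # read() syscall
os.close(fd)
print(data)   # b'hello'
```

Each call crosses from user space into the kernel and back, which is exactly the standard interface described above.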
+ System Structure : The structure of an Operating System can be divided into two
parts: the kernel and the user space. The kernel is the core component of the
Operating System, which provides low-level services like process management,
memory management, and device management. The user space provides high-level
services like user interfaces and application services.
* Virtual Machines : Virtual Machines are software programs that emulate the hardware
and software of a computer system. They allow multiple Operating Systems to run on
the same physical hardware simultaneously. Virtual Machines provide an isolated
environment for applications to run, which makes them more secure and portable.
Process Management
+ Process Concept : A process is a program in execution. A process has its own memory
space, CPU registers, and other resources. Processes can communicate with each
other using inter-process communication (IPC) mechanisms. The process concept is
essential for managing the execution of multiple programs simultaneously.
+ Process Scheduling : Process Scheduling is the process of deciding which process to
run next on the CPU. The goal of process scheduling is to maximize the CPU utilization
and minimize the response time. The Operating System uses various scheduling
algorithms like First-Come-First-Serve (FCFS), Shortest-Job-First (SJF), Round Robin
(RR), and Priority Scheduling to schedule processes.
+ Operation on Processes : The major operations that can be performed on processes
are:
1. Creation : The Operating System creates a new process using the fork() system call.
2. Termination : The Operating System terminates a process using the exit() system call.
3. Suspend : The Operating System can suspend a process temporarily to free up system
resources.
4. Resume : The Operating System can resume a suspended process when system
resources become available.
5. Blocking : A process can be blocked when it waits for some event like input/output
operation.
6. Wakeup : The Operating System can wake up a blocked process when the event it
was waiting for occurs.
7. Context Switch : The Operating System performs a context switch to save and restore
the CPU registers of a process when it is suspended or resumed.
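The creation and termination operations above can be sketched with the fork() and exit() system calls. This is a minimal POSIX-only illustration (Linux/macOS), not a full process manager.

```python
import os

# Minimal POSIX sketch of process creation, termination, and waiting.
pid = os.fork()                      # creation: duplicate this process
if pid == 0:
    os._exit(7)                      # termination: child exits with status 7
else:
    _, status = os.waitpid(pid, 0)   # parent blocks until the child exits
    print(os.WEXITSTATUS(status))    # prints the child's exit status, 7
```

The parent's waitpid() call is what lets the kernel report the child's termination status back to user space.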
CPU Scheduling Algorithms
+ Basic Concepts : CPU Scheduling is the process of selecting the next process to run on
the CPU. The goal of CPU scheduling is to minimize the average waiting time and
maximize the CPU utilization. CPU scheduling algorithms are used to determine the
order in which processes are executed on the CPU.
+ Scheduling Criteria : The major scheduling criteria are:
1. CPU Utilization : The percentage of time the CPU is busy.
2. Throughput : The number of processes completed per unit time.
3. Turnaround Time : The time taken from the submission of a process to its completion.
4. Waiting Time : The time a process waits in the ready queue before it gets the CPU.
5. Response Time : The time taken from the submission of a request until the first
response is produced.
+ FCFS : First-Come-First-Serve (FCFS) scheduling algorithm is the simplest scheduling
algorithm. In FCFS, the process that arrives first is allocated the CPU first. The major
disadvantage of FCFS is that it has a long average waiting time.
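As a sketch, FCFS waiting times can be computed directly: each process waits for the total burst time of everything ahead of it. The burst times below are hypothetical, with all processes arriving at time 0.

```python
# Sketch of FCFS: each process waits for the sum of all earlier bursts.
# Burst times are hypothetical; all processes arrive at time 0.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # waits for everything queued before it
        elapsed += b
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] 17.0
```

Note how one long burst at the front of the queue inflates everyone else's wait, which is the "convoy effect" behind FCFS's long average waiting time.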
+ SJF : Shortest Job First (SJF) scheduling algorithm selects the process with the shortest
CPU burst time. SJF provides the minimum average waiting time for a given set of
processes. The major disadvantage of SJF is that it is difficult to predict the CPU burst
time for a process.
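A sketch of non-preemptive SJF on the same hypothetical bursts: serving the shortest burst first drops the average waiting time sharply compared with FCFS order.

```python
# Sketch of non-preemptive SJF: run the shortest available burst first.
# All processes are assumed available at time 0; bursts are hypothetical.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed      # process i starts after all shorter jobs
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([24, 3, 3])
print(sum(waits) / len(waits))   # 3.0, versus 17.0 in FCFS order
```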
* Priority : Priority scheduling algorithm selects the process with the highest priority for
execution. The priority can be defined based on factors like CPU burst time, memory
requirements, and input/output needs. The major disadvantage of priority scheduling
is that it can lead to starvation of low-priority processes.
+ Round-Robin : Round-Robin (RR) scheduling algorithm allocates a fixed time slice to
each process in a cyclic order. When the time slice expires, the process is preempted
and added to the end of the ready queue. RR provides a fair allocation of the CPU
among all processes. The major disadvantage of RR is that it has a long average
waiting time.
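The Round-Robin behaviour described above can be sketched as a queue simulation; the quantum and burst times here are hypothetical.

```python
from collections import deque

# Sketch of Round-Robin: each process runs for at most one quantum,
# then is preempted and re-queued. Bursts and quantum are hypothetical.
def round_robin_completion(bursts, quantum):
    queue = deque(enumerate(bursts))
    finish, clock = [0] * len(bursts), 0
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))  # preempt, go to the back
        else:
            finish[i] = clock                   # process is done
    return finish

print(round_robin_completion([24, 3, 3], 4))   # completion times [30, 7, 10]
```

The short jobs finish early even though a long job arrived first, which is the fairness RR provides at the cost of extra context switches.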
+ Multilevel Queue : Multilevel Queue scheduling algorithm divides the ready queue
into multiple priority levels, where each level has a different scheduling algorithm. The
processes are assigned to the appropriate queue based on their priority. This approach
provides better performance for different types of processes.
+ Multilevel Feedback Queue : Multilevel Feedback Queue scheduling algorithm is a
combination of the Multilevel Queue and Round Robin scheduling algorithms. In this
approach, the processes are assigned to different queues based on their priority, and
each queue has a different time slice. This approach provides better performance for
long-running processes and interactive processes.
+ Multiple-Processor Scheduling : Multiple-Processor Scheduling is the process of
scheduling processes on multiple CPUs in a multiprocessor system. This approach
provides better performance and scalability than single-processor scheduling.
+ Process Synchronization : Process Synchronization is the process of coordinating the
execution of multiple processes to avoid race conditions and ensure consistency. The
major synchronization mechanisms are:
1. Mutual Exclusion : Ensures that only one process can access a shared resource at a
time.
2. Deadlock Prevention : Ensures that deadlock does not occur in the system.
3. Deadlock Avoidance : Ensures that deadlock is avoided in the system.
* Critical-Section Problem : Critical-Section Problem is a synchronization problem that
arises when multiple processes access a shared resource simultaneously. The major
solution to the Critical-Section Problem is the implementation of mutual exclusion.
* Introduction to Semaphores : Semaphore is a synchronization mechanism that
provides mutual exclusion and coordination among processes. Semaphore has two
operations :
1. The wait() operation decrements the semaphore value.
2. The signal() operation increments the semaphore value.
Deadlocks
+ System Model : A system model consists of a set of resources and a set of processes
that compete for these resources. The resources can be of two types: reusable
resources and consumable resources. The reusable resources can be shared among
multiple processes, while the consumable resources can be used by only one process
ata time.
* Deadlock Characterization : Deadlock is a situation in which two or more processes
are blocked and cannot proceed because they are waiting for each other to release
resources. The necessary conditions for deadlock are:
1. Mutual Exclusion : At least one resource is non-sharable, and only one process can
access it at a time.
2. Hold and Wait : A process holds at least one resource and is waiting for another
resource.
3. No Preemption : Resources cannot be preempted from a process.
4. Circular Wait : A set of processes is waiting for each other in a circular chain.
+ Methods for Handling Deadlocks : The three methods for handling deadlocks are:
1. Deadlock Prevention : Ensures that the necessary conditions for deadlock do not
occur in the system.
2. Deadlock Avoidance : Ensures that the necessary conditions for deadlock do not
occur in the future by dynamically allocating resources to processes.
3. Deadlock Detection and Recovery : Detects the occurrence of deadlock and takes
appropriate action to recover from it.
+ Deadlock Prevention : Deadlock Prevention ensures that at least one of the necessary
conditions for deadlock does not occur in the system. The major techniques for
deadlock prevention are:
1. Mutual Exclusion : Convert at least one reusable resource to a shareable resource.
2. Hold and Wait : A process must request all its required resources at once.
3. No Preemption : Allow resources to be preempted from a waiting process, so the
no-preemption condition cannot hold.
4. Circular Wait : Assign a unique number to each resource and require that
resources be requested in increasing numerical order.
* Deadlock Avoidance : Deadlock Avoidance ensures that the necessary conditions for
deadlock do not occur in the future by dynamically allocating resources to processes.
The major technique for deadlock avoidance is the Banker's Algorithm, which uses a
safety algorithm to determine if a request for a resource will result in a deadlock or
not.
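The safety check at the heart of the Banker's Algorithm can be sketched as follows: a state is safe if every process can eventually finish with the resources currently available. The allocation and need matrices below are hypothetical, textbook-style values.

```python
# Sketch of the Banker's safety algorithm. A state is safe if some order
# exists in which every process can obtain its remaining need and finish.
# Matrices are hypothetical (5 processes, 3 resource types).
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion, then releases its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))   # True: a safe sequence exists
```

A resource request is granted only if the state that would result still passes this check; otherwise the process waits.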
+ Deadlock Detection : Deadlock Detection detects the occurrence of deadlock by
periodically checking the system for the necessary conditions. The major algorithm for
deadlock detection is the Resource-Allocation Graph algorithm, which uses a graph to
represent the allocation of resources to processes.
+ Recovery from Deadlock : Recovery from Deadlock involves taking appropriate action
to recover from deadlock. The two approaches for recovery from deadlock are:
1. Process Termination : Abort one or more processes to break the deadlock.
2. Resource Preemption : Preempt resources from one or more processes to break the
deadlock.
Memory Management
+ Background : Memory management is the process of managing the memory of a
computer system. The memory is divided into different segments, and each segment is
allocated to a particular process. The primary goal of memory management is to
allocate the memory to processes efficiently to achieve maximum performance.
+ Logical versus Physical Address space : The logical address space refers to the
address space that a process uses. The physical address space refers to the actual
physical memory locations where the data is stored.
+ Swapping : Swapping is a memory-management technique in which a process is
temporarily moved from main memory to secondary storage or vice versa to free up
memory space.
+ Contiguous allocation (fragmentation) : Contiguous memory allocation is a memory
management technique in which each process is allocated contiguous memory space.
Fragmentation occurs when there is no available contiguous memory space to allocate
to a process.
+ Paging : Paging is a memory management technique in which the memory is divided
into fixed-size blocks called pages, and the process is divided into pages of the same
size. The pages of the process are mapped to the available pages in memory.
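The mapping described above can be sketched as a simple address translation, assuming a hypothetical 1 KiB page size and page table: the logical address splits into a page number and an offset, and the page table supplies the frame.

```python
# Sketch of paging address translation. Page size and the
# page-to-frame mapping below are hypothetical.
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split the logical address
    frame = page_table[page]                   # KeyError models a bad page
    return frame * PAGE_SIZE + offset          # physical address

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7220
```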
+ Segmentation : Segmentation is a memory management technique in which the
process is divided into logical segments of different sizes. Each segment is allocated a
contiguous memory space.
* Virtual Memory : Virtual memory is a memory management technique that lets a
process use a logical address space larger than physical memory. The process is
divided into fixed-size pages, which are mapped either to frames in memory or to
pages on secondary storage and brought in as needed.
+ Demand Paging : Demand paging is a memory management technique in which the
pages of a process are loaded into memory only when they are needed, rather than
loading the entire process into memory at once.
+ Page-replacement Algorithms : Page-replacement algorithms are used by the
operating system to determine which page should be replaced when there is no
available memory space for a new page. The commonly used page-replacement
algorithms are:
1. FIFO (First In, First Out) : The page that was first loaded into memory is the first one
to be replaced.
2. Optimal : The page that will not be used for the longest time in the future is replaced.
3. LRU (Least Recently Used) : The page that has not been used for the longest time is
replaced.
4. Counting : Each page has a counter, and the page with the lowest count is replaced.
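FIFO and LRU can be compared with a short simulation; the reference string and frame count below are hypothetical.

```python
from collections import OrderedDict

# Sketch comparing FIFO and LRU page-fault counts on a hypothetical
# reference string with 3 frames.
def fifo_faults(refs, frames):
    memory, faults = [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)               # evict the oldest-loaded page
            memory.append(p)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))
```

On this string LRU faults slightly less often than FIFO, because it exploits the recent-use pattern that FIFO ignores.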
File Management
+ File Concepts : In operating systems, a file is a named collection of related information
that is recorded on secondary storage such as a hard disk. A file system is the part of
the operating system that manages files and directories. File operations refer to the
various tasks that can be performed on files, such as creating, opening, reading,
writing, and closing files. File attributes are properties associated with files, such as the
file name, size, type, ownership, permissions, and creation/modification/access
timestamps.
* Access Methods : Access methods are used to retrieve data from files. The commonly
used access methods are sequential, direct, and indexed.
* Directory : A directory is a structure used to organize and manage files. It contains
information about the files, such as the file name, location, size, and attributes.
* Structure : The file structure determines how the data is stored in a file. The
commonly used file structures are unstructured, structured, and semi-structured.
+ File System Structure : The file system structure determines how files are organized
and managed on a storage device. The commonly used file system structures are flat
file system, hierarchical file system, network file system, and distributed file system.
* Allocation Methods : The allocation method is the way in which disk space is allocated
for files. There are three main allocation methods:
1. Contiguous Allocation : In contiguous allocation, each file occupies a contiguous block
of disk space. The starting location and size of the file are stored in the file allocation
table (FAT). This method is simple and efficient in terms of disk access but suffers from
external fragmentation, where the free space becomes fragmented into smaller pieces
that are too small to allocate to a new file.
2. Linked Allocation : In linked allocation, each file is a linked list of disk blocks that are
not necessarily contiguous. Each block contains a pointer to the next block in the file.
The first block's address is stored in the file's directory entry. This method avoids
external fragmentation but suffers from internal fragmentation, where the last block of
a file may not be completely filled.
3. Indexed Allocation : In indexed allocation, a separate index block is used to store
pointers to all the blocks of a file; the data blocks themselves hold only data. This
method avoids external fragmentation and provides fast access to arbitrary blocks of a
file but requires additional disk space for the index block.
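Linked allocation in particular can be sketched with a FAT-style table: each entry maps a block to the next block of the file, with a sentinel marking the end. The block numbers below are hypothetical.

```python
# Sketch of linked (FAT-style) allocation: the table maps each disk
# block to the next block of the file; -1 marks end-of-file.
# Block numbers are hypothetical.
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def file_blocks(start):
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = fat[b]          # follow the pointer to the next block
    return blocks

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```

Reading block N of the file requires following N pointers from the start, which is why linked allocation is poor for direct access.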
Device Management
+ General Device Characteristics : The general characteristics of devices include the
type of device, the data transfer rate, the access method, the error rate, and the
response time.
+ Device Controllers : A device controller is a hardware component that manages a
device. It communicates with the device through a device driver and performs device-
specific operations such as sending commands and receiving status information.
+ Device Drivers : A device driver is a software component that enables the operating
system to communicate with a device. It translates the operating system's generic
commands into device-specific commands that the device controller can understand.
+ Interrupt-Driven I/O : In interrupt-driven I/O, the device controller interrupts the
processor when it has data ready to be transferred. The processor then services the
interrupt and transfers the data between the device and memory.
+ Memory Mapped I/O : In memory-mapped I/O, the device controller is mapped into
the address space of the processor. The processor can then access the device registers
as if they were memory locations.
* Direct Memory Access (DMA) : DMA is a technique used to transfer data directly
between a device and memory without involving the processor. The device controller
uses DMA to transfer data to or from memory while the processor is free to perform
other tasks. This improves the efficiency of data transfer and reduces the load on the
processor.
Introduction of different Operating systems (Linux)
+ History : Linux was developed in 1991 by Linus Torvalds as a free and open-source
operating system. It was initially developed as a hobby project but gained popularity
among developers and users due to its flexibility, stability, and security. Linux is now
one of the most widely used operating systems in the world.
+ Design Principles : Linux follows the Unix design principles of modularity, simplicity,
and flexibility. It is composed of small, independent programs that can be combined to
perform complex tasks. It also has a robust command-line interface and supports a
wide range of programming languages.
* Kernel Modules : Linux kernel modules are pieces of code that can be dynamically
loaded and unloaded from the kernel. They allow the kernel to be extended with new
features without the need to recompile the entire kernel. Examples of kernel modules
include device drivers, file systems, and network protocols.
+ Process Management : Linux uses a hierarchical process model where each process
has a parent process and can spawn child processes. The kernel scheduler manages
the execution of processes on the CPU based on their priority and scheduling
algorithm.
+ Scheduling : Linux supports multiple CPU scheduling policies, such as the
Completely Fair Scheduler (CFS) for normal tasks and the real-time round-robin and
FIFO policies. The scheduler assigns CPU time to processes based on their priority and
other criteria.
+ Memory Management : Linux uses virtual memory to manage memory allocation. It
employs techniques such as paging and demand paging to efficiently use memory
resources. It also supports memory allocation policies such as the buddy system and
the slab allocator.
+ File Systems : Linux supports a wide range of file systems, including ext4, Btrfs, XFS,
and NTFS. The file system provides a hierarchical structure for organizing and storing
files and supports features such as permissions, ownership, and file attributes.
+ Input and Output : Linux provides a unified interface for managing input and output
devices, such as keyboards, mice, and printers. It uses device drivers to manage the
interaction between the kernel and devices.
+ Inter-Process Communication : Linux supports various mechanisms for inter-process
communication, such as pipes, sockets, and message queues. These mechanisms allow
processes to communicate and share data with each other.
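Of these mechanisms, a pipe is the simplest to sketch: the kernel provides a unidirectional channel with a read end and a write end. For brevity this example stays within one process; normally the two ends are split across a fork().

```python
import os

# Sketch of pipe-based IPC (POSIX). One end writes, the other reads;
# normally the ends are held by different processes after fork().
r, w = os.pipe()          # kernel-managed unidirectional channel
os.write(w, b"ping")      # producer writes to the write end
os.close(w)
data = os.read(r, 4)      # consumer reads from the read end
os.close(r)
print(data)               # b'ping'
```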
+ Network Structure : Linux supports various network protocols and services, including
TCP/IP, HTTP, FTP, and SSH. It also provides tools and utilities for managing network
settings and diagnosing network issues.
+ Security : Linux is known for its robust security features. It employs various
mechanisms such as file permissions, access control lists (ACLs), and secure shell (SSH)
for secure remote access. It also provides tools for managing user accounts, firewalls,
and intrusion detection.