Operating System Notes
Introduction to OS
What is an Operating System?
An Operating System acts like an interface between you and your
computer's hardware. It handles tasks like running programs, managing
resources, taking care of the CPU, and organizing files. The main goal of an
operating system is to create a user-friendly and efficient environment for
running programs.
Types of OS
● Batch OS: It works by storing similar jobs in the computer's memory.
The CPU handles one job at a time, waiting for the current one to
finish before moving on to the next.
● Time Sharing OS: These systems involve direct interaction with the
user. Users give instructions to the OS through input devices like
keyboards, and the OS responds with output. It's like taking turns to
use the computer.
Popular Operating Systems
● Windows:
Developed by Microsoft, Windows is a widely employed operating system
for personal computers, known for its user-friendly interface, extensive
software application support, and compatibility with diverse hardware
configurations.
● macOS:
Crafted by Apple, macOS functions as the operating system for Apple Mac
computers, featuring an elegant and intuitive user interface, smooth
integration with other Apple devices, and a robust ecosystem of software
applications.
● Linux: An open-source operating system built around the Linux kernel, valued for its stability, security, and flexibility, and widely used on servers, desktops, and embedded systems.
● Unix: A family of multiuser, multitasking operating systems originating at Bell Labs that strongly influenced the design of both Linux and macOS.
● Android: Developed by Google on top of the Linux kernel, Android is the most widely used operating system for smartphones and tablets.
● iOS:
Another creation of Apple, iOS operates as the operating system for
iPhones, iPads, and iPods. Offering a seamless and secure user experience,
iOS prioritizes performance and integrates smoothly with other Apple
devices.
Program
A program is a set of instructions written in a programming language to
perform a specific task. It exists as a static entity, typically stored on
non-volatile storage like a disk, until loaded into memory for execution by
the operating system.
Process
A process is a program under execution, that is, one that is currently running. The
program counter (PC) shows the address of the next instruction in the
executing process. Each process is identified and managed by a Process
Control Block (PCB).
Process Scheduling:
1. Arrival Time (AT): Time at which the process arrives in the ready queue.
2. Completion Time (CT): Time at which the process finishes its execution.
3. Burst Time (BT): Time the process requires on the CPU.
4. Turnaround Time (TAT): Difference between completion time and arrival time (TAT = CT - AT).
5. Waiting Time (WT): Difference between turnaround time and burst time (WT = TAT - BT).
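The definitions above can be combined in a short sketch. This assumes FCFS order (processes served as they arrive); the process names and times are made-up example data, not from the notes.

```python
# Compute CT, TAT, and WT for processes served in FCFS order.
def fcfs_metrics(procs):
    """procs: list of (name, arrival_time, burst_time), sorted by arrival."""
    time, rows = 0, []
    for name, at, bt in procs:
        start = max(time, at)   # CPU may sit idle until the process arrives
        ct = start + bt         # Completion Time
        tat = ct - at           # Turnaround Time = CT - AT
        wt = tat - bt           # Waiting Time = TAT - BT
        rows.append((name, ct, tat, wt))
        time = ct
    return rows

print(fcfs_metrics([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]))
```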
Thread
A thread is a lightweight process and is the basic unit for CPU usage. A
process can handle multiple tasks simultaneously by incorporating multiple
threads.
● A thread has its own program counter, register set, and stack.
● Threads within the same process share resources like the code
section, data section, files, and signals.
● There are two types of threads:
○ User threads (Implemented by users)
○ Kernel threads (Implemented by the operating system)
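A minimal Python sketch of the point above: threads in one process share the data section, while each keeps its own stack (local variables). The names are illustrative.

```python
import threading

shared = []          # data section: visible to every thread in the process

def worker(name, count):
    for i in range(count):       # i lives on this thread's private stack
        shared.append((name, i)) # but both threads write to the same list

t1 = threading.Thread(target=worker, args=("T1", 3))
t2 = threading.Thread(target=worker, args=("T2", 3))
t1.start()
t2.start()
t1.join()
t2.join()
print(len(shared))  # 6 items, contributed by both threads
```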
Process vs Thread
● A process has its own memory space; threads within a process share the same memory space.
● Process creation and context switching are relatively heavy; threads are cheaper to create and switch between.
● Processes are isolated from one another; an error in one thread can bring down its entire process.
● Communication between processes requires OS-provided IPC mechanisms; threads communicate through shared memory, which simplifies the programming model.
Multiprocessing
Multiprocessing involves the simultaneous execution of multiple processes
on a system equipped with multiple CPUs or CPU cores. Each process
represents a running program and runs independently with its own memory
space and resources. The goal of multiprocessing is to enhance system
throughput and speed up execution by distributing the workload across
several processors.
Multithreading
Multithreading, on the other hand, revolves around the execution of
multiple threads within a single process. Threads are lightweight units of
execution that share the same memory space and resources, allowing
parallel execution within a process.
Multiprogramming
Multiprogramming is a strategy where multiple programs are loaded into
memory simultaneously, and the CPU switches between them to execute
instructions. The objective is to maximize CPU utilization by swiftly
switching between different programs, especially when one is waiting for
I/O or other operations. Each program operates within its distinct memory
space.
Multitasking
Multitasking is a technique that enables the concurrent execution of
multiple tasks or processes on a single CPU. CPU time is divided among
tasks, creating the illusion of parallel execution. Modern operating systems
commonly use multitasking to provide responsiveness and the capability to
run multiple applications simultaneously.
Interprocess Communication (IPC)
Interprocess Communication (IPC) serves as a communication bridge in a
computer system, enabling different processes or threads to exchange
information through shared resources like memory. It acts as an approved
channel facilitated by the operating system, allowing processes to
communicate and share data.
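One common IPC mechanism is an OS-managed message queue. A minimal sketch using Python's `multiprocessing.Queue` (the message text is made up for illustration):

```python
from multiprocessing import Process, Queue

def child(q):
    # The child runs in its own address space and sends a message
    # through the queue, an IPC channel managed by the OS.
    q.put("hello from child")

def demo():
    q = Queue()
    p = Process(target=child, args=(q,))
    p.start()
    msg = q.get()   # blocks until the child's message arrives
    p.join()
    return msg

if __name__ == "__main__":
    print(demo())
```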
Process States in OS
In an operating system, a process moves through several states during its lifecycle: New (the process is being created), Ready (waiting to be assigned the CPU), Running (instructions are being executed), Waiting (blocked on an event such as I/O completion), and Terminated (execution has finished).
CPU Scheduling Algorithms
CPU scheduling algorithms are essential components of operating systems
that determine the order in which processes are executed by the CPU.
These algorithms aim to maximize CPU utilization, throughput, and fairness
among processes. Common CPU scheduling algorithms include:
● First-Come, First-Served (FCFS): Processes are executed in the order they arrive in the ready queue. Simple, but short processes can be delayed behind long ones (the convoy effect).
● Shortest Job First (SJF): The process with the smallest burst time runs next; its preemptive variant is Shortest Remaining Time First (SRTF).
● Round Robin (RR): Each process receives a fixed time quantum in turn, giving good responsiveness in time-sharing systems.
● Priority-Based Scheduling (Non-Preemptive): Processes are
scheduled according to their priorities, with the highest-priority
process scheduled first. If two processes have equal priorities,
scheduling follows their arrival times.
● Highest Response Ratio Next (HRRN): Schedules processes based
on the highest response ratio, which considers both waiting time and
burst time. This algorithm helps prevent starvation.
○ Response Ratio = (Waiting Time + Burst Time) / Burst Time
● Multilevel Queue Scheduling (MLQ): Processes are sorted into
different queues based on priority. High-priority processes are placed
in the top-level queue, and lower-priority processes are scheduled
only after completion of top-level processes.
● Multilevel Feedback Queue (MLFQ) Scheduling: Allows processes to
move between queues based on their CPU burst characteristics. If a
process consumes excessive CPU time, it is moved to a lower-priority
queue.
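The HRRN selection rule can be sketched as follows; the process names, times, and the current clock value are made-up example data.

```python
# Pick the ready process with the highest response ratio:
# (waiting_time + burst_time) / burst_time.
def hrrn_pick(ready, now):
    """ready: list of (name, arrival_time, burst_time); now: current time."""
    def ratio(proc):
        _, at, bt = proc
        wait = now - at                 # time spent in the ready queue
        return (wait + bt) / bt
    return max(ready, key=ratio)

ready = [("P1", 0, 8), ("P2", 2, 4), ("P3", 3, 1)]
print(hrrn_pick(ready, now=10))  # P3: ratio (7 + 1) / 1 = 8.0 is highest
```

Note how the ratio grows as a process waits, which is exactly why HRRN prevents starvation: even a long job's ratio eventually exceeds everyone else's.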
● Aging: To prevent starvation, aging gradually raises the priority of processes that have waited a long time for CPU or resource allocation, allowing low-priority processes to eventually access the resources they need for execution.
Process Synchronization
● Critical Section: This is the part of the code where shared variables are accessed and/or updated.
● Remainder Section: The rest of the program, excluding the critical section.
● Mutual Exclusion: The synchronization mechanism must enforce
mutual exclusion, allowing only one process or thread to access a
shared resource or enter a critical section at any given time. This
prevents conflicts and ensures consistency during concurrent access.
● Progress: The synchronization mechanism should allow processes or
threads to make progress by ensuring that at least one process/thread
can enter the critical section when it desires to do so. It avoids
situations where all processes/threads are blocked indefinitely,
leading to a deadlock.
● Bounded Waiting: The synchronization mechanism needs to
guarantee that a process or thread waiting to enter a critical section
will eventually be granted access. This prevents starvation, ensuring
that a process or thread doesn't wait indefinitely to access a shared
resource.
Here are some ways to make sure that different processes or threads work
well together:
● Locks/Mutexes: A lock (mutex) is like a special key that only one process or thread can hold at a time, guaranteeing exclusive access to a shared resource or critical section; everyone else must wait until the key is released.
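A small sketch of the lock in action: without it, the two threads could interleave inside `counter += 1` (a read-modify-write sequence) and lose updates; the mutex makes the critical section exclusive. The counts are arbitrary example values.

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        with lock:          # acquire the "key"; other threads wait here
            counter += 1    # critical section: shared variable updated

threads = [threading.Thread(target=deposit, args=(50000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: no updates lost with the lock held
```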
● Semaphores: A semaphore is a counter that permits up to a fixed number of processes or threads to access a shared resource simultaneously. Semaphores provide mechanisms for mutual exclusion, signaling, and coordination.
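A sketch of the counting behaviour: a semaphore initialized to 2 admits at most two threads at once. The thread count is an arbitrary example; the `peak` counter just records the highest concurrency observed.

```python
import threading

sem = threading.Semaphore(2)   # at most 2 holders at any moment
active = 0
peak = 0
guard = threading.Lock()       # protects the bookkeeping counters

def use_resource():
    global active, peak
    with sem:                  # blocks while 2 threads already hold it
        with guard:
            active += 1
            peak = max(peak, active)
        with guard:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 2)  # True: the semaphore never admitted more than 2
```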
Deadlocks
A deadlock is a situation where a set of processes is blocked because each
process is holding a resource and waiting for another resource acquired
by some other process.
A deadlock can occur only if four conditions hold simultaneously:
● Mutual Exclusion: At least one resource is held in a non-shareable mode, so only one process can use it at a time.
● Hold and Wait: A process is holding at least one resource and waiting for additional resources.
● No Preemption: A resource can be released only voluntarily by the process holding it; it cannot be taken away forcibly.
● Circular Wait: A set of processes is waiting for each other in a circular
form.
● Banker's Algorithm: A deadlock-avoidance technique named after the way a banking system works, where a bank never allocates available cash in a way that it can no longer satisfy the requirements of all its customers.
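The heart of the Banker's Algorithm is its safety check: the system grants a request only if every process can still finish in some order using the currently available resources plus whatever finished processes release. A hedged sketch; the resource matrices are textbook-style example data, not from the notes.

```python
def is_safe(available, max_need, allocated):
    """Return True if the system is in a safe state."""
    # Remaining need of each process = its maximum claim minus what it holds.
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocated)]
    work = list(available)
    finished = [False] * len(allocated)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish; it releases everything it holds.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Assumed 5-process, 3-resource example:
print(is_safe(available=[3, 3, 2],
              max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              allocated=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
```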
Memory Management
Memory Management in an operating system involves organizing and
optimizing the use of a computer's memory. It includes allocating memory
to different processes, ensuring they don't interfere with each other, and
freeing up memory when it's no longer needed.
There are two main schemes for partitioning memory among processes:
● Fixed Partitioning
● Dynamic Partitioning
Fixed Partitioning
In fixed partitioning, main memory is divided into a fixed number of partitions of predetermined size, and each partition holds exactly one process. This method is simple to implement and allows for quick memory allocation. However, it may lead to inefficient use of memory: when a process is smaller than its partition, the unused space inside that partition is wasted, creating what's called internal fragmentation.
Dynamic Partitioning
In dynamic partitioning, partitions are created at load time and sized to fit each process exactly, which eliminates internal fragmentation (though external fragmentation can still occur). A free block for an incoming process can be chosen with several allocation algorithms:
● First Fit: This algorithm allocates the first available memory block that
is large enough for a process. It begins the search from the beginning
of the memory and selects the first suitable block encountered. While
simple and providing fast allocation, it may result in larger leftover
fragments, impacting memory efficiency.
● Best Fit: The best-fit algorithm looks for the smallest available memory
block that can accommodate a process. It aims to minimize leftover
fragments by selecting the most optimal block. Although it can lead to
better overall memory utilization, it may require more time-consuming
searches to find the perfect fit.
● Worst Fit: The worst-fit algorithm assigns the largest available memory
block to a process intentionally. This strategy keeps larger fragments
for potential future larger processes. Despite seeming counterintuitive,
it helps reduce fragmentation caused by small processes and
enhances overall memory utilization.
Each algorithm comes with its trade-offs, impacting speed, efficiency, and
the extent of leftover memory fragments.
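The three strategies can be compared side by side on one free-block list. A sketch with made-up block sizes (in KB); each function returns the index of the chosen block, or None if nothing fits.

```python
def first_fit(blocks, size):
    # First block from the start that is large enough.
    return next((i for i, b in enumerate(blocks) if b >= size), None)

def best_fit(blocks, size):
    # Smallest block that still fits.
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(fits)[1] if fits else None

def worst_fit(blocks, size):
    # Largest block available.
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(fits)[1] if fits else None

free = [100, 500, 200, 300, 600]
print(first_fit(free, 212))  # 1 -> 500 KB block (first big enough)
print(best_fit(free, 212))   # 3 -> 300 KB block (smallest that fits)
print(worst_fit(free, 212))  # 4 -> 600 KB block (largest available)
```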
Paging
Paging is a storage strategy in Operating Systems that involves bringing
processes from secondary storage into the main memory as pages. The
concept revolves around dividing each process into pages, and
concurrently, the main memory is divided into frames. Each page of a
process is stored in one frame of the memory. Because any page can be placed in any free frame, the frames holding a process's pages need not be contiguous. Pages are brought into the main memory only when needed; otherwise, they stay in secondary storage.
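The mapping from a virtual address to a physical one can be sketched with a tiny page table. The page size and table contents below are invented for illustration.

```python
PAGE_SIZE = 4096                 # 4 KB pages (an assumed size)
page_table = {0: 5, 1: 9, 2: 3}  # page number -> frame number

def translate(vaddr):
    # Split the virtual address into a page number and an offset
    # within that page, then swap the page number for its frame.
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault: page %d not in memory" % page)
    return page_table[page] * PAGE_SIZE + offset

print(translate(8200))  # page 2, offset 8 -> frame 3 -> 12296
```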
Page Replacement Algorithms
Page Replacement Algorithms are strategies used by operating systems to
decide which pages to replace in memory when a page fault occurs. Here
are explanations for three of them:
● FIFO (First-In, First-Out): FIFO maintains a queue of pages in the order they were brought into memory. When a page fault occurs, the earliest page in the queue (the one brought in first) is selected for replacement: the new page is brought into memory, and the oldest page in the queue is removed.
● Optimal: This algorithm replaces the page that is not expected to be used for the longest duration in the future. It yields the lowest possible fault rate but requires knowledge of future references, so it serves mainly as a benchmark.
● LRU (Least Recently Used): LRU replaces the page that has been least recently used, on the principle that recently accessed pages are more likely to be used again soon.
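The FIFO policy described above can be simulated in a few lines; the reference string is a classic textbook example, chosen because it also illustrates Belady's anomaly (adding a frame can increase FIFO's faults).

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults when replacing pages in first-in, first-out order."""
    queue, faults = deque(), 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.popleft()      # evict the oldest resident page
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, frames=3))  # 9 faults
print(fifo_faults(refs, frames=4))  # 10 faults: Belady's anomaly
```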
Page Fault
A page fault occurs when a running program attempts to access a memory
page that is part of its virtual address space but is not currently loaded in the
computer's physical memory. This triggers an interrupt at the hardware level.
Thrashing
Thrashing is a condition in computer systems where a substantial amount
of time and resources is consumed continuously swapping pages (Page
Fault) between the computer's physical memory (RAM) and secondary
storage, like a hard disk. This excessive paging activity results in the system
being preoccupied with moving pages in and out of memory rather than
performing useful tasks, leading to a significant decline in overall
performance.
Demand Paging
Demand paging is like fetching parts of a book only when you're about to
read them, rather than bringing the entire book at once. Here's how it
works:
● Page Fault Trap: When the program references a page that is not currently in memory, the hardware raises a page fault and hands control to the operating system.
● Memory Reference Check: The OS verifies that the reference is valid and locates the needed page in secondary storage.
● Page-In Operation: If all checks pass, a quick operation happens to
bring that needed part into the memory.
● Restart Instruction: The program goes back to where it left off, now
with the required information in hand.
So, demand paging helps the computer be efficient by only bringing in the
necessary information when it's actually needed, making things smoother
and saving memory space.
Virtual Memory
Virtual memory is a concept that expands a computer's usable memory
beyond its physical capacity. It forms an imaginary memory space by
combining RAM (physical memory) with secondary storage like a hard disk.
Segmentation
Segmentation is a memory management technique that divides a process into variable-length segments corresponding to logical units of the program, such as its code, data, and stack. The length of each segment is determined by its purpose within the user program, so, unlike fixed-size paging, segmentation produces no internal fragmentation (though external fragmentation can occur). The segments of a process do not need to be placed in contiguous memory locations.
Fragmentation
Fragmentation refers to the inefficient use of memory, leading to reduced
capacity and performance. This wastage occurs in two main forms:
Internal Fragmentation: Systems with fixed-size allocation units experience internal fragmentation, where a process is allocated a block larger than it needs and the unused space inside that block is wasted.
External Fragmentation: Systems with variable-size allocation units
experience external fragmentation. This phenomenon arises when free
memory exists in non-contiguous chunks, making it challenging to allocate
larger memory blocks. Despite the total available memory being sufficient,
the scattered nature of free space limits the system's ability to accommodate
processes with larger memory requirements. Over time, external
fragmentation can result in inefficient memory utilization and impact overall
system performance.
Disk Management
Disk management is a critical aspect of operating systems that involves
efficient handling and organization of disk storage resources. It
encompasses various tasks, including disk partitioning, formatting, and
implementing disk scheduling algorithms to optimize Input/Output (I/O)
operations.
Common disk scheduling algorithms include:
● FCFS (First-Come, First-Served)
○ Description: Processes disk requests in the order they arrive.
○ Consideration: Does not account for disk head position or
request proximity, potentially causing delays with long seek
times.
● SSTF (Shortest Seek Time First)
○ Description: Services the request closest to the current head position, minimizing seek time per move.
○ Consideration: Requests far from the head can suffer starvation.
● SCAN (Elevator)
○ Description: The head sweeps across the disk in one direction, servicing requests along the way, then reverses.
○ Consideration: Gives more uniform wait times than SSTF.
● LOOK
○ Description: Like SCAN, but the head reverses at the last pending request instead of travelling to the end of the disk.
Each algorithm has its strengths and weaknesses, and the choice depends
on specific system requirements and characteristics.
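To see the trade-off concretely, here is a sketch comparing total head movement under FCFS (serve requests in arrival order) against a shortest-seek-first strategy (always serve the closest pending cylinder). The starting cylinder and request queue are made-up example data.

```python
def fcfs_seek(start, requests):
    """Total cylinders moved when serving requests in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    """Total cylinders moved when always serving the closest request."""
    total, pos, pending = 0, start, list(requests)
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))  # closest cylinder
        total += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return total

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(53, reqs))  # 640 cylinders of head movement
print(sstf_seek(53, reqs))  # 236 cylinders: far less seeking
```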
Recap
Operating System Essentials in Short:
1. Processes and Threads: A process is a program in execution; a thread is a lightweight unit of execution within a process.
2. Memory Management: Fixed Partitioning divides memory into fixed-sized partitions; Dynamic Partitioning sizes partitions to fit each process.
3. Replacement Algorithms: FIFO, Optimal, and LRU decide which page to evict on a page fault.
4. CPU Scheduling: Algorithms such as FCFS, SJF, Round Robin, Priority, HRRN, MLQ, and MLFQ decide which process runs next.
5. Operating Systems: Batch and Time Sharing systems; examples include Windows, macOS, Linux, Unix, Android, and iOS.
6. Process Synchronization: Mutual exclusion, progress, and bounded waiting, enforced with locks and semaphores.
7. Deadlocks: Require mutual exclusion, hold and wait, no preemption, and circular wait.
8. Virtual Memory: Extends RAM with secondary storage.
9. Fragmentation: Internal (fixed-size allocation) and external (variable-size allocation) waste memory.
10. Disk Management: Scheduling algorithms such as FCFS, SSTF, SCAN, and LOOK optimize I/O.
Thank You!