Operating System Notes

These notes provide an overview of operating systems, detailing their functions, organization, architecture, and operations. They cover process management, scheduling, interprocess communication, synchronization, and memory management, highlighting key concepts such as deadlocks and system calls, and discuss the main scheduling algorithms along with the importance of resource allocation and security.

UNIT 1 – INTRODUCTION

1. What Operating Systems Do

• An Operating System (OS) is software that acts as an intermediary between a user and the computer hardware.
• It manages hardware resources (CPU, memory, I/O devices), provides a user
interface, and allows application execution.
• Two main functions:
o Resource Manager: Allocates/deallocates CPU, memory, I/O devices.
o Control Program: Controls the execution of user programs and prevents
errors.

Main tasks:

1. Process management: Creating, scheduling, and terminating processes.


2. Memory management: Keeps track of memory, allocation/deallocation.
3. File system management: Organizes data into files/directories.
4. Device management: Coordinates and assigns I/O devices.
5. Security & protection: Ensures secure access and prevents unauthorized access.
6. User interface: Through CLI or GUI.

2. Computer-System Organization

• A computer system has one or more CPUs, device controllers, and memory.
• All devices are connected via a common bus.
• Components:
1. CPU (Central Processing Unit): Executes instructions.
2. Memory: Stores data and instructions.
3. Device Controllers: Each controls a specific I/O device.
4. Buses: Used for communication between components.
• Each device controller has a local buffer and communicates via interrupts.

3. Computer-System Architecture

• Defines how hardware components are organized and communicate.


• Types:
1. Single-processor systems: One main CPU.
2. Multiprocessor systems (Parallel Systems):
▪ Symmetric: All CPUs equal.
▪ Asymmetric: One master CPU controls others.
3. Clustered systems: Multiple systems working together.
• Advantages of multiprocessor systems:
o Increased throughput
o Cost-effective
o High reliability

4. Operating System Operations

• OS is interrupt-driven: Reacts to hardware and software interrupts.


• Supports multiprogramming: Keeps several jobs in memory.
• Time-sharing (multitasking): Switches rapidly between tasks for user
interaction.
• OS maintains:
o Job scheduling
o Memory management
o File & I/O handling
o Security features

5. Operating System Services

• OS provides services to users and programs.

Main Services:

1. Program execution: Load and run programs.


2. I/O operations: Access to hardware I/O devices.
3. File system manipulation: Create, read, write, and delete files.
4. Communication: Between processes via shared memory/messages.
5. Error detection: Detects and handles errors in CPU, memory, etc.
6. Resource allocation: Efficient distribution of resources.
7. Security and protection: Prevents unauthorized access.

6. System Calls

• A system call is the way programs request services from the OS.
• Acts as interface between the process and the OS.
• Implemented via traps or software interrupts.

Categories:

1. Process control (e.g., fork, exec, exit)


2. File management (e.g., open, read, write, close)
3. Device management (e.g., request, release device)
4. Information maintenance (e.g., get/set time or attributes)
5. Communication (e.g., send, receive messages)
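
A minimal C sketch of the process-control category on a UNIX-like system, combining fork, exec, and wait (the "ls" command is just an example target program):

/* syscall_demo.c - process-control system calls: fork, exec, wait */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>   /* wait */
#include <unistd.h>     /* fork, execlp */

int main(void) {
    pid_t pid = fork();                  /* create a child process */
    if (pid < 0) {
        perror("fork");                  /* creation failed */
        exit(1);
    } else if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* child: run "ls" */
        perror("execlp");                /* reached only if exec fails */
        exit(1);
    }
    wait(NULL);                          /* parent waits for the child */
    printf("child %d finished\n", (int)pid);
    return 0;                            /* exit: process control */
}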

7. System Programs

• System programs provide a convenient environment for program development and execution.

Types of System Programs:

1. File management: Create, delete, copy files.


2. Status information: Date/time, memory usage.
3. File modification: Text editors.
4. Programming language support: Compilers, interpreters.
5. Program loading and execution: Loaders, linkers.
6. Communication: E-mail, network programs.
7. Application programs: Not part of OS but run on top of it.

8. Operating System Design and Implementation

• Design depends on user needs and system goals (efficiency, user-friendliness).

Steps:

1. Design goals: Decide between user-friendly or efficient system.


2. Mechanisms and policies:
o Mechanisms: How to do things (e.g., file writing).
o Policies: What to do (e.g., who gets CPU).
3. Implementation: Written in C, C++, and some assembly.
o Monolithic kernel: All services in one program.
o Microkernel: Minimal kernel + user-mode services.

9. Operating System Structure

• Determines how components of the OS are organized.

Types of Structures:

1. Simple structure: No modularity (e.g., MS-DOS).


2. Layered approach: OS divided into layers (e.g., THE OS).
3. Microkernel: Only core services in the kernel.
4. Modules: Loadable modules that extend OS functionality.

10. Operating System Generation

• OS is configured (tailored) to a specific machine setup.


• System generation (SYSGEN) uses:
o Configuration files
o Compiled modules
• Takes input about hardware (CPU type, memory size) and creates customized OS
version.

11. System Boot

• Booting = Starting a computer by loading the OS into memory.


• On power-up, bootstrap program in ROM executes.
• Bootstrap:
o Initializes system hardware
o Loads OS kernel from disk into memory
• Two types:
o Cold boot: From powered-off state.
o Warm boot: Restart without cutting power.

UNIT 2 – PROCESS CONCEPT

Process Concept

• A process is more than a program; it’s an active entity.


• It includes the program code, current activity (program counter, registers), stack
(temporary data), data section (global variables), and heap (dynamically
allocated memory).
• The process is created and managed by the Operating System (OS).
• Each process has a unique Process Control Block (PCB) that contains:
o Process state (new, ready, running, waiting, terminated)
o Program counter (next instruction)
o CPU registers and scheduling info
o Memory management info (page tables, segment tables)
o Accounting info (CPU usage, time limits)
o I/O status (devices used, open files)
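
As a rough illustration, the PCB can be pictured as a C struct; the field names and sizes below are hypothetical simplifications (real kernels, e.g. Linux's task_struct, hold far more):

/* A simplified, hypothetical PCB */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* unique process id             */
    enum proc_state state;            /* new/ready/running/...         */
    unsigned long   program_counter;  /* address of next instruction   */
    unsigned long   registers[16];    /* saved CPU registers           */
    int             priority;         /* CPU-scheduling information    */
    void           *page_table;       /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information        */
    int             open_files[16];   /* I/O status: open descriptors  */
    struct pcb     *next;             /* link in a scheduling queue    */
};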

Process Scheduling

• The OS handles scheduling using a scheduler that selects which process should
run next from the ready queue.
• Schedulers:
o Long-term scheduler: selects processes from job pool (controls degree
of multiprogramming)
o Short-term scheduler: selects from ready queue (executes frequently)
o Medium-term scheduler (optional): suspends/resumes processes to
improve performance

Operations on Processes

• Process Creation: A parent process creates child processes, using system calls (e.g., fork(), exec() in UNIX).
o Child can be independent or share resources with the parent.
• Process Termination:
o Process can terminate itself (normal exit)
o Parent can terminate child (if needed or if child exceeds limits)
o Termination leads to release of resources

Interprocess Communication (IPC)

• When processes need to exchange data or signals.


• Two main models:
1. Shared Memory: Both processes access a common memory space.
▪ Faster, but needs proper synchronization to avoid conflicts.
2. Message Passing: Processes communicate via messages sent by the OS.
▪ Slower but more structured and safe.
• IPC is needed in all multitasking environments like OS shells, browsers, or client-
server models.

Example of IPC System

• Producer-Consumer Problem:
o Producer adds items to a buffer.
o Consumer takes items from the buffer.
o If buffer is full, producer waits. If empty, consumer waits.
o Solved using bounded buffer with synchronization tools like
semaphores or mutexes.

Communication in Client-Server System

• Client sends request; server responds with data.


• Communication methods:
o Pipes: Unidirectional/bidirectional communication (used in same
system)
o Sockets: For network communication between different systems (IP-
based)
o Remote Procedure Calls (RPCs): Let a program call a function on a
remote system as if it's local
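
A minimal C sketch of pipe communication on a POSIX system; the parent plays the client and the child the server, and the request text is made up:

/* pipe_demo.c - one-way client/server exchange over a pipe */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                           /* fd[0]=read end, fd[1]=write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                      /* child = "server" */
        close(fd[1]);                    /* close unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("server got: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                        /* parent = "client" */
    const char *req = "GET /status";     /* made-up request */
    write(fd[1], req, strlen(req));
    close(fd[1]);
    wait(NULL);
    return 0;
}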

PROCESS SCHEDULING

Basic Concepts

• CPU Scheduling is the process of choosing which process gets CPU time.
• The dispatcher gives CPU control to selected process.
• Important when multiple processes are ready but only one CPU is available.

Scheduling Criteria

1. CPU Utilization – Maximize use of CPU (keep it busy)


2. Throughput – Number of processes completed per unit time
3. Turnaround Time – Total time taken from submission to completion
4. Waiting Time – Time a process spends in the ready queue
5. Response Time – Time from request submission to the first response

Scheduling Algorithms

1. FCFS (First-Come-First-Serve):
o Non-preemptive
o Simple, but short tasks can wait behind long ones (the convoy effect); see the worked sketch after this list
2. SJF (Shortest Job First):
o Non-preemptive or preemptive
o Gives the minimum average waiting time (provably optimal)
o Needs prior knowledge of burst time
3. Priority Scheduling:
o Each process assigned a priority
o Higher priority runs first
o Risk of starvation (solved using aging)
4. Round Robin (RR):
o Preemptive
o Each process gets a fixed time slice (quantum)
o Good for time-sharing systems
5. Multilevel Queue Scheduling:
o Different queues for different process types
o E.g., system, interactive, batch
o Each queue has its own scheduling algorithm
6. Multilevel Feedback Queue:
o Processes can move between queues
o Based on history and behavior
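
A worked FCFS sketch in C with made-up burst times (24, 3, 3) and all arrivals at time 0; it prints waiting and turnaround times (average waiting time is 17 here, which would drop to 3 if the short jobs ran first):

/* fcfs_demo.c - waiting/turnaround times under FCFS */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* example CPU bursts */
    int n = sizeof burst / sizeof burst[0];
    int elapsed = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {             /* serve in arrival order */
        int waiting    = elapsed;             /* time in the ready queue  */
        int turnaround = elapsed + burst[i];  /* submission to completion */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
        total_wait += waiting;
        total_tat  += turnaround;
        elapsed    += burst[i];
    }
    printf("avg waiting=%.2f avg turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}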

Thread Scheduling

• Threads (within a process) also need scheduling.


• Two models:
o Process-Contention Scope (PCS): User-level, within same process
o System-Contention Scope (SCS): Kernel-level, across all threads in
system

Multiple Processor Scheduling

• In multi-CPU systems, scheduling becomes more complex.


• Types:
1. Asymmetric Multiprocessing (AMP): One processor handles all
scheduling
2. Symmetric Multiprocessing (SMP): Each processor schedules
independently
• Can use processor affinity (process sticks to one CPU to improve cache
performance)

Real-Time CPU Scheduling

• Used in systems like medical equipment, aircraft, etc.


• Hard Real-Time: Must meet deadlines strictly (e.g., pacemaker)
• Soft Real-Time: Deadline missed occasionally is okay (e.g., video streaming)
• Scheduling algorithms used:
o Rate-Monotonic Scheduling (RMS)
o Earliest Deadline First (EDF)

Algorithm Evaluation

• Evaluate using:
o Deterministic modeling (using set formulas)
o Queueing models (statistical analysis)
o Simulations
o Implementation and real-world measurement
• Goals: reduce waiting time, maximize CPU use, balance fairness

UNIT 3 – SYNCHRONIZATION

Background

• In modern OS, many processes execute concurrently (at the same time or
interleaved).
• When multiple processes share data/resources (e.g., variables, files), race
conditions can occur if not synchronized.
• A race condition happens when multiple processes access and manipulate
shared data simultaneously, and the final outcome depends on the order of
execution.
• To avoid this, OS must ensure mutual exclusion and synchronization.
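
A small pthreads sketch of a race condition in C: counter++ is a non-atomic read-modify-write, so with two unsynchronized threads the final value is usually below the expected 2,000,000:

/* race_demo.c - unsynchronized shared counter (compile with -lpthread) */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                 /* shared data */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}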

The Critical Section Problem

• The critical section is the part of a process where it accesses shared resources
(like variables or files).
• Problem: ensure only one process is in its critical section at any time.
• Conditions to solve it:
1. Mutual Exclusion – Only one process in the critical section.
2. Progress – If no process is in the critical section, and some want to enter,
one must be allowed.
3. Bounded Waiting – Limit the number of times other processes can
enter their critical sections before a waiting process is allowed.

Peterson’s Solution

• Software-based solution for two processes.


• Uses two shared variables:
o flag[i] – true if process i wants to enter critical section.
o turn – indicates which process’s turn it is.
• Each process:
1. Sets its flag to true.
2. Gives the other process the turn.
3. Waits until the other’s flag is false or it's their turn.
• Ensures mutual exclusion and satisfies all three conditions, but is not reliable on modern CPUs, where compilers and hardware may reorder memory operations (see the sketch below).
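
A C sketch of the algorithm; per the caveat above, treat it as illustrative only, since volatile alone does not stop hardware reordering:

/* peterson.c - Peterson's algorithm for threads 0 and 1 (sketch) */
#include <stdbool.h>

volatile bool flag[2] = {false, false};   /* flag[i]: i wants to enter */
volatile int  turn = 0;                   /* whose turn it is */

void enter_critical(int i) {              /* i is 0 or 1 */
    int other = 1 - i;
    flag[i] = true;                       /* step 1: announce intent */
    turn = other;                         /* step 2: yield the turn  */
    while (flag[other] && turn == other)  /* step 3: wait            */
        ;                                 /* busy-wait (spin)        */
}

void exit_critical(int i) {
    flag[i] = false;                      /* clear intent on exit */
}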
Semaphores

• A semaphore is a synchronization tool introduced by Dijkstra.


• It’s an integer variable accessed only through:
o wait(S) (also called P(S)) – decreases the value. If < 0, the process blocks.
o signal(S) (also called V(S)) – increases the value. If any process is waiting, it
gets unblocked.
• Types:
o Counting semaphore – value can range over unrestricted domain.
o Binary semaphore – value is either 0 or 1 (acts like a lock).
• Used to solve synchronization problems like producer-consumer, reader-writer,
etc.
• Must be used carefully to avoid issues like deadlock or starvation.
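
A runnable C sketch with POSIX semaphores, where sem_wait/sem_post play the roles of wait(S)/signal(S); initialized to 1, the semaphore acts as a binary semaphore (lock) and fixes the kind of race shown in the Background section:

/* sem_demo.c - binary-semaphore lock (compile with -lpthread) */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;                  /* value 1 = lock is free */
static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        sem_wait(&mutex);            /* wait(S)/P(S): may block */
        counter++;                   /* critical section        */
        sem_post(&mutex);            /* signal(S)/V(S): release */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);          /* initial value 1 -> binary */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* now always 2000000 */
    sem_destroy(&mutex);
    return 0;
}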

Classic Problems of Synchronization

1. Bounded-Buffer (Producer-Consumer) Problem:
o Producers produce items and put them in a fixed-size buffer.
o Consumers remove items.
o Semaphores ensure the producer doesn’t insert when the buffer is full and the consumer doesn’t remove when it is empty (see the sketch after this list).
2. Readers-Writers Problem:
o Multiple readers can read simultaneously.
o Writers need exclusive access.
o Ensure no writer writes while a reader is reading.
3. Dining Philosophers Problem:
o 5 philosophers sit at a table with one fork between each.
o To eat, each must pick up the fork on the left and right.
o Must be solved to avoid deadlock and starvation using semaphores or
monitors.
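
A bounded-buffer sketch in C with POSIX semaphores; the buffer size (8) and item count (20) are arbitrary example values:

/* bounded_buffer.c - producer/consumer (compile with -lpthread) */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 8                          /* buffer capacity */

static int   buffer[N];
static int   in = 0, out = 0;        /* insert / remove positions */
static sem_t empty_slots;            /* free slots, starts at N   */
static sem_t full_slots;             /* filled slots, starts at 0 */
static sem_t mutex;                  /* protects buffer indices   */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);      /* block while buffer is full */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);       /* one more item available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);       /* block while buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);      /* one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}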

DEADLOCKS

System Model
• A system has multiple resources (CPU cycles, memory, I/O devices).
• Processes request and release resources.
• A deadlock occurs when each process holds one resource and waits to acquire
another, forming a cycle.
• Resources can be:
o Preemptable (can be taken away like CPU)
o Non-preemptable (cannot be taken away like printers)

Deadlock Characterization

Four necessary conditions for deadlock to occur:

1. Mutual Exclusion – Only one process can use a resource at a time.


2. Hold and Wait – A process is holding at least one resource and waiting for
others.
3. No Preemption – A resource can’t be taken away.
4. Circular Wait – A set of processes waiting for each other in a circular chain.

• All four must hold simultaneously for a deadlock to occur.

Methods for Handling Deadlocks

1. Ignore it (Ostrich Algorithm) – Used in most systems (like UNIX), since deadlocks are rare.
2. Prevention – Design system to never allow any of the 4 conditions.
3. Avoidance – Ensure system doesn’t enter unsafe state.
4. Detection and Recovery – Allow deadlock, but detect and recover from it.

Deadlock Prevention

Prevent at least one of the 4 conditions:

• Mutual Exclusion – Not possible for non-shareable resources.


• Hold and Wait – Process requests all resources at once.
• No Preemption – Take resource back if not all can be allocated.
• Circular Wait – Impose an ordering on resource types and force processes to
request in that order.

Deadlock Avoidance

• Use information about future resource requests to avoid unsafe states.


• Use Banker’s Algorithm:
o Each process must declare the maximum resources it may need.
o The system checks whether allocation keeps system in a safe state.
o Safe state: there is a sequence of processes that can finish one by one
without deadlock.
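
A C sketch of the safety check at the core of the Banker's algorithm, using made-up Allocation/Max matrices for 5 processes and 3 resource types; it prints a safe sequence or reports an unsafe state:

/* bankers_safety.c - is the current state safe? */
#include <stdio.h>
#include <stdbool.h>

#define P 5
#define R 3

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[R]    = {3,3,2};              /* free instances per type */
    bool finished[P] = {false};
    int safe_seq[P], count = 0;

    while (count < P) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;             /* need = max - alloc <= avail? */
            for (int r = 0; r < R; r++)
                if (max[p][r] - alloc[p][r] > avail[r]) { can_run = false; break; }
            if (can_run) {                   /* assume p runs to completion */
                for (int r = 0; r < R; r++)
                    avail[r] += alloc[p][r]; /* ...and releases its resources */
                finished[p] = true;
                safe_seq[count++] = p;
                progressed = true;
            }
        }
        if (!progressed) { printf("unsafe state\n"); return 1; }
    }
    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
    printf("\n");                            /* P1 P3 P4 P0 P2 for this data */
    return 0;
}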

Deadlock Detection

• Even if deadlocks are allowed, system must detect them:


• Single Instance per resource: Use Wait-for Graph.
• Multiple Instances: Use a matrix-based algorithm (similar to Banker’s).
• If deadlock found, proceed to recovery.

Recovery from Deadlock

1. Process Termination:
o Kill all deadlocked processes.
o Or terminate one by one until deadlock breaks.
2. Resource Preemption:
o Take a resource away from a process.
o Requires rollback of that process.
o Need to avoid starvation.

UNIT 4 – MEMORY MANAGEMENT

Background

• Memory is a key resource; it stores both data and instructions.


• An OS must manage memory efficiently — to allocate, track, and protect it.
• Goals of memory management:
o Maximize CPU utilization
o Increase throughput (no. of processes completed)
o Provide protection and isolation between processes
• Logical address: Used by processes.
• Physical address: Actual location in memory.
• MMU (Memory Management Unit) maps logical to physical addresses.

Swapping

• Swapping: Moving processes between main memory and disk.


• Used when memory is full. Swaps out idle processes to disk and swaps them back
later.
• Roll out, roll in: a swapping variant for priority scheduling; a lower-priority process is rolled out so a higher-priority one can be loaded and run.
• Drawback: High I/O overhead and slow if used frequently.
• Needs proper memory protection to avoid one process interfering with
another’s data.

Contiguous Memory Allocation

• Each process occupies a single continuous block of memory.


• Memory is divided into two parts:
o OS memory (usually low addresses)
o User processes (rest of the memory)
• Allocation types:
1. Single partition – only one user process at a time.
2. Multiple partitions – divides memory into fixed or variable-size
partitions.
• Problem: External fragmentation (unused small gaps between allocated
memory).
• Solutions:
o Compaction: Move processes to make free memory contiguous.
o Best-fit, First-fit, Worst-fit allocation strategies.
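
A tiny C sketch of the first-fit strategy over a made-up list of hole sizes:

/* first_fit.c - pick the first hole large enough for a request */
#include <stdio.h>

int first_fit(const int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)     /* first big-enough hole wins */
            return i;
    return -1;                       /* no hole fits: allocation fails */
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};   /* free hole sizes in KB */
    int idx = first_fit(holes, 5, 212);
    if (idx >= 0)
        printf("212K -> hole %d (%dK)\n", idx, holes[idx]);  /* hole 1 */
    return 0;
}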

Segmentation

• Logical memory is divided into segments based on types (e.g., code, data,
stack).
• Each segment has its own base and limit (starting address and size).
• Addresses are of the form ⟨segment number, offset⟩.
• Advantages:
o Supports modularity and protection.
o Segments can grow independently.
• Problems:
o External fragmentation possible.
o Requires complex hardware (segment tables).

Paging

• Breaks memory into fixed-size blocks:


o Pages (logical) and frames (physical).
• Page size = Frame size.
• Logical address: ⟨page number, offset⟩
• A page table maps page numbers to frame numbers.
• Solves external fragmentation, only internal fragmentation may occur.
• Allows non-contiguous allocation.
• Needs hardware support like TLB (Translation Lookaside Buffer) to speed
up lookup.
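
A C sketch of the translation, assuming 4 KB pages and a made-up four-entry page table (on real hardware the MMU does this lookup, sped up by the TLB):

/* paging_demo.c - logical -> physical address translation */
#include <stdio.h>

#define PAGE_SIZE 4096u                       /* 4 KB pages */

int main(void) {
    unsigned page_table[4] = {7, 2, 9, 0};    /* page -> frame (made up) */
    unsigned logical = 2 * PAGE_SIZE + 123;   /* an address on page 2 */

    unsigned page     = logical / PAGE_SIZE;  /* high bits of address */
    unsigned offset   = logical % PAGE_SIZE;  /* low bits, unchanged  */
    unsigned frame    = page_table[page];     /* page-table lookup    */
    unsigned physical = frame * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);  /* 8315 -> 36987 here */
    return 0;
}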

VIRTUAL MEMORY MANAGEMENT

Background
• Virtual memory lets a process execute even if not fully loaded in physical
memory.
• Stores large programs by keeping only necessary parts in memory.
• Uses demand paging to load pages when required.
• Helps with:
o Running larger programs.
o Multiprogramming (more processes in memory).
o Reducing I/O by loading pages only when needed.

Demand Paging

• A page is loaded only when needed (on a page fault).


• Steps on page fault:
1. OS checks if page is valid.
2. If not in memory, brings it from disk.
3. Updates page table and resumes process.
• Efficient, but too many page faults → Thrashing.

Copy-on-Write (COW)

• Used in process creation (e.g., fork system call).


• Parent and child share same pages initially.
• On modification, a copy is made → avoids unnecessary duplication.
• Improves performance and memory use.
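
A short C sketch on a POSIX system: after fork() each process logically has its own copy of x; with COW the pages stay shared until the child's write forces a private copy:

/* cow_demo.c - fork shares pages copy-on-write */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 42;
    pid_t pid = fork();               /* child initially shares pages */
    if (pid == 0) {
        x = 99;                       /* write triggers a private copy */
        printf("child:  x = %d\n", x);
        return 0;
    }
    wait(NULL);
    printf("parent: x = %d (unchanged)\n", x);   /* still 42 */
    return 0;
}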

Page Replacement

• When a page fault occurs and memory is full, OS must replace a page.
• Algorithms:
1. FIFO (First-In-First-Out) – Oldest page replaced.
2. LRU (Least Recently Used) – Page not used for longest time.
3. Optimal – Replaces page not needed for longest time in future
(theoretical).
4. Clock – Circular pointer like second-chance algorithm.
• Goal: Reduce page faults.
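
A C sketch counting page faults under FIFO for a made-up reference string with 3 frames (9 faults for this data):

/* fifo_faults.c - FIFO page replacement */
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7,0,1,2,0,3,0,4,2,3};   /* reference string */
    int n = sizeof refs / sizeof refs[0];
    int frames[FRAMES] = {-1,-1,-1};      /* -1 = empty frame */
    int next = 0, faults = 0;             /* next = oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {                       /* page fault */
            frames[next] = refs[i];       /* evict oldest, load new */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);  /* 9 */
    return 0;
}
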
Allocation of Frames

• Decide how many frames to give each process.


• Fixed Allocation: Equal or proportional number of frames.
• Priority Allocation: Higher priority processes get more frames.
• Use Global or Local Replacement:
o Global: Process can take frames from others.
o Local: Replaces only its own pages.

Thrashing

• Occurs when system spends more time paging than executing.


• Causes:
o Too many processes.
o Insufficient frames.
• Results in:
o Low CPU utilization.
o Increased page faults.
• Solutions:
o Use working set model to track pages actively in use.
o Adjust degree of multiprogramming.
o Page Fault Frequency (PFF) control.

UNIT 5 – FILE SYSTEM & MASS STORAGE STRUCTURE

File Concepts

• A file is a collection of related information stored on secondary storage.


• Types of files: text, binary, executable, etc.
• File attributes: name, type, location, size, protection, timestamps (creation,
modification).
• File operations: create, open, read, write, reposition, delete, truncate.
• OS provides interface to perform these operations.
• Each file is represented internally by a file control block (FCB) containing
metadata.

Access Methods

• Ways to read/write data from a file:

1. Sequential access – data is read/written in order (e.g., text files).


2. Direct (random) access – jump to any location in the file (e.g., databases).
3. Indexed access – uses an index to jump to data blocks (e.g., ISAM).

Directory and Disk Structure

• Directory: Maintains a list of all files.


• Functions:
o Organize files.
o Store metadata (file names, locations).
• Types of directory structures:
1. Single-level – all files in one directory.
2. Two-level – separate directory for each user.
3. Tree-structured – hierarchical, supports subdirectories.
4. Acyclic-graph – allows shared files (hard/soft links).
5. General graph – supports cycles but needs extra control.
• Disk structure: Made of platters and sectors.
o Each disk has a boot block, volume, and partition info.

File Sharing

• Allows multiple users or processes to access the same file.


• File systems must support:
o File locking to avoid conflicts.
o User IDs and permissions for access control.
o Network File System (NFS) for remote sharing.
• Types:
o Controlled access (read, write permissions)
o Simultaneous access using locks.

Protection

• Prevents unauthorized access to files.


• Access control matrix: shows rights of each user for each file.
• Access types: read, write, execute, delete.
• Owner, group, others model used in many systems.
• OS also uses passwords and encryption for protection.

IMPLEMENTING FILE SYSTEM

File System Structure

• A file system typically has:


1. Boot control block (for booting OS).
2. Volume control block (info about partitions, size).
3. Directory structure (file names, FCBs).
4. File data blocks.
• File system sits between user programs and physical disk.

File System Implementation

• Requires:
o Structures for directories and metadata.
o Allocation techniques to manage space.
o Data structures like file table, inode, bitmap.
• Goal: fast access and reliable storage.

Directory Implementation

• Two main methods:


1. Linear list: simple, but slow for large directories.
2. Hash table: faster lookup using hash keys.
• Needs efficient lookup, insert, and delete operations.

Allocation Methods

• Determines how disk blocks are assigned to files.

1. Contiguous allocation:
o File blocks are stored together.
o Fast access.
o Issues: external fragmentation and size prediction.
2. Linked allocation:
o Each block points to next.
o No fragmentation but slow access (no random access).
3. Indexed allocation:
o Uses an index block with pointers to all blocks.
o Supports direct access and efficient for large files.

Free Space Management

• Tracks unused blocks.


• Methods:
1. Bitmap/Bit vector – 0 = free, 1 = used (compact and efficient).
2. Linked list – free blocks linked together.
3. Grouping – the first free block stores the addresses of the next n free blocks; the last of those points to another such block of addresses.
4. Counting – keeps track of consecutive free blocks.
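
A C sketch of a bit-vector free list using the convention above (0 = free, 1 = used), one bit per block:

/* bitmap_demo.c - free-space bitmap */
#include <stdio.h>
#include <string.h>

#define BLOCKS 64

static unsigned char bitmap[BLOCKS / 8];   /* 1 bit per disk block */

static int  is_used(int b)  { return (bitmap[b / 8] >> (b % 8)) & 1; }
static void set_used(int b) { bitmap[b / 8] |=  (unsigned char)(1u << (b % 8)); }
static void set_free(int b) { bitmap[b / 8] &= (unsigned char)~(1u << (b % 8)); }

static int alloc_block(void) {             /* first-free allocation */
    for (int b = 0; b < BLOCKS; b++)
        if (!is_used(b)) { set_used(b); return b; }
    return -1;                             /* disk full */
}

int main(void) {
    memset(bitmap, 0, sizeof bitmap);      /* all blocks start free */
    set_used(0);                           /* e.g., the boot block */
    int b = alloc_block();
    printf("allocated block %d\n", b);     /* prints 1 */
    set_free(b);
    return 0;
}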

MASS STORAGE STRUCTURE

Overview of Mass-Storage Structure

• Refers to non-volatile, secondary storage devices like:


o Magnetic disks, SSDs, tapes, optical disks.
• Provides large storage but slower than RAM.
• Storage hierarchy: Registers → Cache → RAM → SSD/HDD → Tape.

Disk Structure

• Disk = platters + read/write heads.


• Each platter has tracks and sectors.
• Cylinder = the set of tracks at the same arm position across all platters.
• Addressing methods:
o CHS (cylinder-head-sector).
o LBA (logical block addressing) – most common today.
• Each disk has disk controller that manages transfer and buffering.

Disk Scheduling

• Determines order in which disk I/O requests are served.


• Common algorithms:
1. FCFS (First Come First Serve) – simple, but inefficient.
2. SSTF (Shortest Seek Time First) – chooses closest request.
3. SCAN – like elevator, moves back and forth.
4. C-SCAN – only services in one direction, then jumps back.
5. LOOK and C-LOOK – variants of SCAN with intelligent movement.
• Goal: minimize seek time and rotational delay.
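
A C sketch of SSTF: from a starting head position, repeatedly serve the closest pending request and sum the head movement (236 cylinders for this made-up queue):

/* sstf_demo.c - total head movement under SSTF */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[]  = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof queue / sizeof queue[0];
    int served[8] = {0};                   /* 8 = number of requests */
    int head = 53, total = 0;              /* start at cylinder 53 */

    for (int k = 0; k < n; k++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {      /* closest pending request */
            if (served[i]) continue;
            int d = abs(queue[i] - head);
            if (best == -1 || d < best_dist) { best = i; best_dist = d; }
        }
        served[best] = 1;
        total += best_dist;
        head = queue[best];                /* move the head there */
    }
    printf("total head movement: %d cylinders\n", total);   /* 236 */
    return 0;
}
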
Disk Management

• Includes:
o Formatting (low-level, logical).
o Partitioning (dividing disk into sections).
o Boot block (to load OS).
o Bad block management (mark unusable blocks).
• Uses tools like defragmentation to improve performance.
• OS handles mounting and unmounting of file systems.
