
Operating Systems – Comprehensive Revision Notes


I. OS Concepts, Fundamentals, and Services
1. Definition and Objectives
Operating System (OS): A collection of software that manages hardware resources and provides common services for computer programs.
Objectives:
User Convenience: Provide intuitive interfaces (CLI or GUI) to simplify interaction.
Resource Utilization: Efficient allocation of CPU, memory, and I/O devices.
Program Execution: Create a stable environment for running applications.
Security and Protection: Enforce access controls to prevent unauthorized use.
Core Functions:
Process, memory, file, and I/O management.
System security and resource accounting.

2. OS as a Resource Manager
Resource Allocation: Dynamically assigns CPU time, memory, and I/O resources to processes.
Resource Tracking: Uses data structures (like process tables) to monitor resource usage.
Protection: Isolates processes to prevent interference and unauthorized access.
Scheduling: Balances load through CPU scheduling and ensures fairness.

3. Abstract View of Computer System Components


Hardware Layer: Physical components (CPU, memory, I/O devices).
Operating System and System Programs:
OS Kernel: Runs in privileged mode to manage hardware.
System Utilities: Compilers, editors, and device drivers.
Application Programs: User-level software (browsers, word processors).
Modes of Operation: Separation of user mode and kernel mode enhances security.

II. Types of Operating Systems


1. Classification by Usage and Design
Batch OS: Executes jobs in groups without interactive user intervention.
Time-Sharing Systems: Enable multiple users to interact concurrently through rapid context switching.
Real-Time Operating Systems (RTOS):
Features: Deterministic behavior, minimal latency, and efficient scheduling.
Applications: Industrial control, avionics, and embedded systems.
Multiprocessor OS: Manages systems with multiple CPUs (e.g., symmetric vs. asymmetric multiprocessing).
Personal and Server OS: Optimized for end-user tasks (Windows, macOS, Linux) versus network and resource sharing.
Embedded OS: Designed for devices with limited resources (routers, smart appliances).

2. Specialized Systems
Real-Time OS vs. Distributed OS:
RTOS: Focus on meeting strict timing deadlines.
Distributed OS: Manage a network of computers as one unified system with transparency and fault tolerance.

III. System Calls


1. Overview of System Calls
Definition: The programming interface that allows a user program to request services from the OS kernel.
How It Works:
Invocation: User program calls a library function (e.g., read(), write(), fork()).
Mode Switching: Executes a trap instruction to change from user mode to kernel mode.
Kernel Dispatch: The OS uses a system call table to invoke the correct service routine.
Return: Once the operation completes, the kernel returns control to the user program.
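The invocation path above can be exercised from user code. A minimal sketch, assuming a POSIX system: Python's os module provides thin wrappers over the library functions named above, so each call below ends in a trap into the kernel.

```python
import os
import tempfile

def round_trip(payload: bytes) -> bytes:
    """Each os.* call below traps into the kernel for the named service."""
    fd, path = tempfile.mkstemp()          # open(): kernel creates the file
    try:
        os.write(fd, payload)              # write(): trap, dispatch, return
        os.lseek(fd, 0, os.SEEK_SET)       # lseek(): rewind the file offset
        return os.read(fd, len(payload))   # read(): copy bytes to user space
    finally:
        os.close(fd)                       # close(): release the descriptor
        os.unlink(path)                    # unlink(): remove the temp file
```

From the program's point of view these look like ordinary function calls; the mode switch and kernel dispatch happen underneath each one.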

2. Types of System Calls


Process Control: Creation and termination (fork(), exec(), exit(), wait()).
File Management: Open, read, write, close, and lseek().
Device Management: Interface with peripheral devices.
Information Maintenance: Retrieve or update system data (time, system status).
Communication: Enable inter-process communication via pipes, sockets, etc.
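As an illustration of the communication category, a minimal sketch of a pipe. For brevity one process holds both ends here; normally the two ends live in different processes.

```python
import os

def send_over_pipe(msg: bytes) -> bytes:
    """Bytes written to the write end emerge, in order, at the read end."""
    r, w = os.pipe()         # pipe(): kernel allocates a one-way channel
    os.write(w, msg)         # write() on the write end
    os.close(w)              # closing the write end signals EOF to the reader
    data = os.read(r, 4096)  # read() on the read end
    os.close(r)
    return data
```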

3. System Call Execution Steps


1. Parameter Preparation: The application places the parameters in registers or on the stack, depending on the calling convention.
2. Library Call: A standard function prepares the system call number.
3. Trap Instruction: Mode switching to kernel mode.
4. Kernel Dispatch: The OS dispatches the request based on the system call table.
5. Execution and Return: The OS performs the service and returns control to user mode.
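The dispatch step can be modeled with a toy system call table. The call numbers and handlers below are invented for illustration; they are not real Linux system call numbers.

```python
# Toy kernel dispatch: the trap handler indexes the system call table
# with the call number and invokes the matching service routine.
SYSCALL_TABLE = {
    0: lambda *args: ("read", args),
    1: lambda *args: ("write", args),
    60: lambda *args: ("exit", args),
}

def trap(call_number, *args):
    """Steps 3-5: 'switch' to kernel mode, dispatch, execute, return."""
    handler = SYSCALL_TABLE.get(call_number)
    if handler is None:
        return "ENOSYS"            # unknown system call number
    return handler(*args)
```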

4. Process Termination via System Call


Process Exit: A process calls exit() with an exit status.
Resource Deallocation: The OS cleans up all allocated resources and updates the PCB.
Notification: Parent processes may be notified (using wait()/waitpid()).
Removal: The process is removed from scheduling queues, freeing up CPU and memory.
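A minimal sketch of this hand-off, assuming a POSIX system: os.fork and os.waitpid wrap the fork() and waitpid() calls discussed above.

```python
import os

def run_child(code: int) -> int:
    """Child terminates via _exit(code); parent reaps it and reads the status."""
    pid = os.fork()
    if pid == 0:                      # child process
        os._exit(code)                # exit(): status handed to the kernel
    _, status = os.waitpid(pid, 0)    # parent blocks until the child exits
    return os.WEXITSTATUS(status)     # status recovered before PCB removal
```

Until the parent calls waitpid(), the kernel keeps the terminated child's PCB around (a "zombie") so the exit status is not lost.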

IV. Virtual Machines and Virtualization


1. Virtual Machines (VMs)
Concept: Software emulations of physical computers, enabling multiple OS instances to run concurrently on one physical machine.
Key Elements:
Hardware Abstraction: Each VM gets virtual CPU, memory, and storage.
Hypervisor: Manages and allocates physical resources among VMs.
Types of VMs:
Type 1 (Bare-Metal): Runs directly on hardware (e.g., VMware ESXi, Hyper-V).
Type 2 (Hosted): Runs atop a host OS (e.g., Oracle VirtualBox).

2. Virtualization Types
Hardware Virtualization: Creating complete virtual machines.
OS-Level Virtualization (Containers): Isolates user-space instances (e.g., Docker, LXC) sharing the same kernel.
Application Virtualization: Encapsulates applications from the OS.
Network and Storage Virtualization: Abstracts network resources and aggregates storage into a single pool.

V. Process Management and Scheduling


1. Multiprogramming, Multiprocessing, and Timesharing
Multiprogramming:
Multiple programs are loaded into memory and the CPU switches between them when one waits for I/O.
Focus: Maximizes CPU utilization.
Multiprocessing:
Uses multiple CPUs to run processes in parallel.
Enhances performance and fault tolerance.
Timesharing:
Divides CPU time into small slices to provide interactive response to multiple users.
Focus: Enhances user interactivity and quick response.

2. Time Sharing vs. Multiprogramming


Objective:
Multiprogramming maximizes throughput.
Timesharing provides interactive responsiveness.
Mechanism:
Multiprogramming switches processes on I/O waits.
Timesharing uses rapid context switching (e.g., Round Robin) to give each user a slice of time.
Usage:
Multiprogramming is typically used in batch systems.
Timesharing is used in multi-user and interactive systems.

VI. Processes, Threads, and Concurrency


1. Process Concepts and Life Cycle
Process: An executing instance of a program with its own state, program counter, and resources.
States:
Running: Actively executing.
Ready: Waiting for CPU allocation.
Blocked: Waiting for an external event.
5-State Model: New, Ready, Running, Waiting (Blocked), Terminated.
State Transitions: Include transitions like Running → Blocked, Running → Ready, and Blocked → Ready.
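The legal transitions can be captured as a small table; any (state, event) pair not listed would indicate a scheduler bug. The event names here are illustrative, not from any particular kernel.

```python
# Transition table for the 5-state model: (current state, event) -> next state.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",      # preempted / time slice expired
    ("running", "io_wait"): "blocked",    # waits for an external event
    ("blocked", "io_done"): "ready",      # awaited event occurred
    ("running", "exit"): "terminated",
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS[(state, event)]
```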

2. Process Control Block (PCB)


Purpose: Data structure that holds all information needed for process management.
Key Fields:
Process ID, process state, program counter, CPU registers, scheduling information, memory management details, and I/O status.
Significance: Enables efficient context switching and resource allocation.

3. Scheduling Algorithms
Preemptive Scheduling:
Round Robin (RR): Fixed time slices for each process.
Shortest Remaining Time First (SRTF): Chooses the process with the least remaining burst time.
Non-Preemptive Scheduling:
First Come, First Served (FCFS): Processes run to completion once started.
Criteria for a Good Scheduler: Maximizes CPU utilization, minimizes waiting and turnaround time, ensures fairness, and reduces overhead.

4. Threads and Multithreading


Thread: The smallest unit of execution within a process; has its own execution context but shares the process’s resources.
Lightweight Nature: Faster to create and switch between than processes due to shared memory and resources.
Types of Threads:
User-level Threads (ULT): Managed in user space.
Kernel-level Threads (KLT): Managed by the OS kernel.
Hybrid Models: Combine ULT and KLT.
Advantages: Enhanced concurrency, improved responsiveness, and lower overhead.
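A short sketch of the shared-memory advantage: all threads append to one list in the process's address space, something separate processes could not do without IPC.

```python
import threading

def run_workers(n: int) -> list:
    results = []                     # lives in the shared address space

    def worker(i: int) -> None:
        results.append(i)            # every thread sees the same list

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                     # wait for all threads to finish
    return sorted(results)
```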

5. Concurrency, Context Switching, and Process Cooperation


Race Conditions: Occur when multiple threads/processes access shared data concurrently and the final result depends on the order in which they execute.
Critical Section: Code segment where shared resources are accessed; must be executed exclusively.
Context Switching: The process of saving and restoring process or thread states when switching execution.
Process Cooperation: Processes that share resources and synchronize via inter-process communication (IPC) mechanisms. They improve performance, enable modular design, and enhance system reliability.
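A sketch of critical-section protection with a mutex, assuming a shared counter: `count += 1` is a read-modify-write on shared data, so without mutual exclusion interleaved threads can lose updates.

```python
import threading

def locked_counter(n_threads: int, n_iters: int) -> int:
    count = 0
    lock = threading.Lock()

    def worker() -> None:
        nonlocal count
        for _ in range(n_iters):
            with lock:               # critical section: one thread at a time
                count += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count                     # deterministic thanks to the lock
```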

Operating Systems – Processes Revision Notes


I. Process Concepts and States
Definition of a Process
Process: An executing instance of a program; a dynamic entity with its own program counter, CPU registers, and allocated resources (memory, files, I/O devices).
Program vs. Process: A program is a static set of instructions, whereas a process is that program in action.

Process States
Running: Actively executing on the CPU.
Ready: Loaded in memory and waiting for CPU time.
Blocked (Waiting): Suspended because it’s waiting for an external event (e.g., I/O completion).

Process Life Cycle & State Transitions


Basic Transitions:
Running → Blocked: When the process waits for an external event.
Running → Ready: When a process’s time slice expires or is preempted.
Ready → Running: When the scheduler dispatches the process.
Blocked → Ready: When the awaited event occurs.
5-State Model:
New: Process is being created.
Ready: Process is waiting in memory for CPU allocation.
Running: Process is currently executing.
Waiting (Blocked): Process is waiting for an event.
Terminated: Process has completed execution or has been aborted.

II. Process Control Block (PCB)


What is a PCB?
PCB (Process Control Block): A data structure used by the OS to manage and control processes. It holds all essential information about a process.

Key Components of PCB


Process Identification: Unique process ID (PID).
Process State: Indicates current status (new, ready, running, waiting, terminated).
Program Counter: Address of the next instruction.
CPU Registers: Saved values during context switching.
Scheduling Information: Priority levels, pointers to scheduling queues, and time slice details.
Memory Management Information: Base/limit registers, page tables, or segment tables.
Accounting Information: CPU usage, execution time, and resource usage.
I/O Status Information: Open files and allocated I/O devices.
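A hypothetical, much-simplified PCB mirroring the fields above; real kernels (e.g. Linux's task_struct) hold far more state.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                        # process identification
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved on context switch
    priority: int = 0                               # scheduling information
    base: int = 0                                   # memory management:
    limit: int = 0                                  #   base/limit registers
    cpu_time_used: float = 0.0                      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information
```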

Significance
Process Management: Helps the OS track and manage processes.
Context Switching: Stores state information to switch processes seamlessly.
Resource Allocation: Centralizes management of process resources.
III. Process Scheduling and Algorithms
Scheduler and Scheduling
Scheduler: A component that decides which process gets CPU time.
Scheduling: The method of ordering process execution to ensure efficient resource use.

Types of Schedulers
1. Long-Term Scheduler (Job Scheduler):
Controls which processes are admitted into the system.
Regulates the degree of multiprogramming.
2. Short-Term Scheduler (CPU Scheduler):
Selects processes from the ready queue for CPU allocation.
Makes frequent, rapid decisions.
3. Medium-Term Scheduler (Swapping Scheduler):
Manages the swapping of processes between main memory and secondary storage.
Balances memory usage by moving inactive processes.

Criteria for a Good Scheduling Algorithm


CPU Utilization: Maximize busy CPU time.
Throughput: Maximize the number of processes completed per time unit.
Turnaround Time: Minimize the time from process submission to completion.
Waiting Time: Reduce the time processes spend waiting.
Response Time: Minimize the delay before process execution starts.
Fairness: Ensure equitable CPU allocation.
Overhead: Keep the scheduling overhead low.

Scheduling Algorithms
Preemptive Scheduling:
Example – Round Robin (RR): Each process gets a fixed time quantum; if unfinished, it returns to the end of the queue.
Example – Shortest Remaining Time First (SRTF): The process with the smallest remaining CPU burst is chosen.
Non-Preemptive Scheduling:
Example – First Come, First Served (FCFS): Processes are scheduled in the order they arrive and run to completion.
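The FCFS behavior can be worked through numerically. A small simulation, assuming all processes arrive at time 0:

```python
def fcfs(bursts: list) -> tuple:
    """Return (waiting, turnaround) times for processes served in arrival order."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent queued before first run
        clock += burst               # runs to completion once started
        turnaround.append(clock)     # arrival (t=0) to completion
    return waiting, turnaround

# Bursts [5, 3, 2] give waiting [0, 5, 8] and turnaround [5, 8, 10].
```

Note how the short 2-unit job waits 8 units behind the long one: the "convoy effect" that preemptive schedulers avoid.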

Detailed Examples
Round Robin:
Mechanism: Processes rotate with fixed time slices.
Advantages: Fairness, improved responsiveness.
Disadvantages: Overhead due to frequent context switches.
Priority Scheduling:
Mechanism: Each process is assigned a priority; the highest priority process is selected next.
Variants: Can be preemptive or non-preemptive.
Advantages: High-priority tasks get prompt service.
Disadvantages: Risk of starvation for lower-priority processes; possible priority inversion.
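For contrast with FCFS, a Round Robin simulation (illustrative only, assuming all processes arrive at time 0):

```python
from collections import deque

def round_robin(bursts: list, quantum: int) -> list:
    """Return each process's completion time under a fixed time quantum."""
    ready = deque(enumerate(bursts))     # FIFO ready queue of (pid, remaining)
    completion = [0] * len(bursts)
    clock = 0
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)    # run for at most one quantum
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))   # preempted: back of queue
        else:
            completion[pid] = clock      # finished within this slice
    return completion
```

This model charges no cost for context switches; in practice, a smaller quantum improves responsiveness but raises switching overhead.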

IV. Threads and Multithreading


What is a Thread?
Thread: The smallest unit of execution within a process with its own program counter, registers, and stack, yet sharing the process’s address space and resources.
Lightweight Process: Threads are called lightweight because they require less overhead in creation and context switching due to shared resources.

Resources Used in Thread Creation


Thread Control Block (TCB): Contains thread-specific data (ID, state, etc.).
Execution Context: Program counter, CPU registers, and private stack.
Scheduling Information: Priority and time-slice details.
Shared Process Resources: Common address space, heap, and open files.

Thread Structure and Benefits


Execution Context: Maintains individual execution history.
Shared Resources: Enables efficient inter-thread communication.
Advantages:
Enhanced concurrency and parallelism.
Improved responsiveness in interactive applications.
Lower overhead compared to processes.

Multi-threading and Types of Threads


Multi-threading: Running multiple threads concurrently within a single process.
Types:
User-level Threads (ULT): Managed in user space; faster context switching, but invisible to the kernel, so one blocking system call can stall the whole process.
Kernel-level Threads (KLT): Managed directly by the OS kernel; support true parallelism.
Hybrid Models: Combine aspects of ULT and KLT for balanced efficiency and control.

V. Concurrency and Context Switching


Key Definitions
Race Condition: Occurs when processes or threads access shared data concurrently and the outcome depends on the execution order.
Critical Section: A portion of code that accesses shared resources and must be executed by only one process or thread at a time to prevent inconsistencies.
Context Switching: The process of saving the state of a currently running process and loading the state of the next process to resume its execution.

Context Switching in Brief


Mechanism: Involves saving registers, program counter, and other state data to the PCB/TCB, then loading the new process’s state.
Overhead: Causes delays as no useful work is done during switching.
Performance Factors: Depends on hardware support, number of registers, and the OS’s efficiency in managing PCBs/TCBs.

VI. Process Cooperation


Cooperating Processes
Definition: Processes that can interact by sharing data, synchronizing activities, or communicating via IPC mechanisms.
Advantages:
Improved Performance: By dividing tasks among processes, execution can occur concurrently.
Resource Sharing: Minimizes redundancy and optimizes resource usage.
Modularity: Breaks down complex tasks into manageable, independent modules.
Reliability: The failure of one process does not necessarily collapse the entire system.
Scalability: Facilitates expansion based on workload.

Independent vs. Cooperating Processes


Independent Processes:
Operate in isolation.
Do not share data or state.
Have separate resources.
Cooperating Processes:
Interact and share resources.
Require inter-process communication (IPC).
Must be synchronized to avoid issues like race conditions.
