Operating System OSG202 FPT - English
Chapter 1: Introduction
1.1 Introduction
Memory:
- Hierarchy: Ranges from fast, small storage like registers to slower, larger storage like
disks.
- Main Memory (RAM): Volatile memory used for running programs.
- Secondary Storage: Non-volatile storage like hard drives and SSDs.
I/O Devices:
- Device Controllers: Manage specific types of devices.
- Interrupts: Mechanism for devices to signal the CPU for attention.
- DMA (Direct Memory Access): Allows devices to transfer data directly to/from memory
without CPU intervention.
Buses:
- Types of Buses: System bus, local bus, PCI, USB, etc., used for communication between
CPU, memory, and I/O devices.
OS Components:
- Process Management: Handles creation, scheduling, and termination of processes.
Provides mechanisms for synchronization and communication between processes.
- Main Memory Management: Manages the allocation and deallocation of memory space to
processes.
- File Management: Manages file creation, deletion, and manipulation. Maps files onto
secondary storage.
- I/O Management: Manages I/O operations and device drivers. Provides caching, buffering,
and spooling.
- Secondary Storage Management: Manages free space, storage allocation, and disk
scheduling.
Metric Prefixes:
- Used to denote the size and scale of measurements in computing, such as kilobytes (KB),
megabytes (MB), gigabytes (GB), etc.
This summary covers the key points and concepts from each section of the slides, providing
a comprehensive overview of Chapter 1.
Chapter 2: Processes and Threads
2.1 Processes
Process Concept:
● An operating system executes various programs including batch jobs and user tasks
in time-shared systems.
● A process is a program in execution and must progress sequentially.
● A process includes resources such as address space, CPU time, program counter,
registers, stack, and other resources like open files and child processes.
Process Model:
Process States:
Process Creation:
● Processes can be created due to system initialization, system calls, user requests, or
batch job initiation.
● Unix examples include fork() to create a new process and exec() to replace the
process’s memory space with a new program.
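The fork()/exec() pattern above can be sketched in a few lines. This is a minimal Python illustration of the same Unix calls (`spawn` is a hypothetical helper name, not from the slides); it runs only on POSIX systems:

```python
import os

def spawn(program, args):
    """Sketch of the Unix fork/exec pattern: duplicate the process, then
    replace the child's memory image with a new program."""
    pid = os.fork()                 # child gets a copy of the address space
    if pid == 0:
        # Child: replace this process image with the requested program.
        os.execvp(program, [program] + args)
    # Parent: wait for the child and return its exit code.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    print(spawn("true", []))        # → 0 (child ran the `true` utility)
```

Note that `exec()` does not create a new process; it reuses the child created by `fork()`, which is why the two calls are almost always paired.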
Process Termination:
● Processes can terminate due to normal exit, fatal error, error exit, or being killed by
another process.
Process Hierarchies:
● Processes can create child processes forming a hierarchy. Unix calls this a "process
group."
● Windows treats all processes as equal without hierarchy.
● Each process is represented by a PCB which includes process ID, program counter,
register set, and scheduling information.
Context Switch:
● Interrupts cause the OS to change the CPU from its current task to run a kernel
routine, performing a state save and restore (context switch).
Implementation of Processes:
● The OS maintains a process table with entries for each process, known as process
control blocks (PCBs).
2.2 Threads
Thread Concept:
● A thread is a unit of execution within a process, which can have one or many
threads.
● Threads share the same address space but have their own program counter,
registers, and stack.
Thread Model:
Benefits of Threads:
Implementing Threads:
● User Space Threads: Managed by a thread library within user space, fast thread
management but with issues like blocking system calls blocking the entire process.
● Kernel Space Threads: Managed by the OS kernel, allowing better support for
multiprocessor systems, but slower thread management.
● Hybrid Implementations: Combine user-level and kernel-level threads, providing
flexibility but still facing issues like blocking kernel calls affecting user threads.
Process Relationship:
Race Conditions:
Critical Regions:
● Parts of the program where shared memory is accessed, requiring mutual exclusion.
● Busy Waiting: Using techniques like lock variables, strict alternation, and Peterson’s
solution.
● Hardware Support: Disabling interrupts and using TSL (Test-and-Set Lock)
instructions.
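The race condition and mutual-exclusion ideas above can be demonstrated with a shared counter. This is a minimal Python sketch (the variable names are illustrative): four threads increment a shared counter inside a critical region guarded by a lock, so the final value is deterministic.

```python
import threading

counter = 0                      # shared state: a potential race condition
lock = threading.Lock()          # enforces mutual exclusion

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # critical region: one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 400000 with the lock; often less without it
```

Removing the `with lock:` line reintroduces the race: two threads can read the same old value of `counter` and one update is lost.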
Synchronization Solutions:
Message Passing:
Introduction to Scheduling:
Levels of Scheduling:
Scheduling Decisions:
Types of Scheduling:
Scheduling Criteria:
Optimization Goals:
Scheduling Algorithms:
● Hard Real-Time Systems: Must complete critical tasks within a guaranteed time.
● Soft Real-Time Systems: Prioritize critical tasks but do not guarantee strict timing.
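As a concrete illustration of evaluating one scheduling criterion, the sketch below computes per-process and average waiting time under First-Come-First-Served, a standard algorithm from this chapter (the burst times are a common textbook example, not from the slides):

```python
def fcfs_waiting_times(burst_times):
    """First-Come-First-Served: each job waits for every job ahead of it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)    # this job waits for all earlier bursts
        elapsed += burst
    return waits

bursts = [24, 3, 3]
waits = fcfs_waiting_times(bursts)
print(waits, sum(waits) / len(waits))   # [0, 24, 27] average 17.0
```

Reordering the same jobs shortest-first ([3, 3, 24]) drops the average wait to 3.0, which is why burst order matters so much to the scheduling criteria above.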
Thread Scheduling:
Dining Philosophers Problem:
● Description: Five philosophers sit at a table with five forks. Each philosopher needs
two forks to eat and can think while waiting for forks.
● Challenges: Deadlock and resource starvation.
● Solutions: Various algorithms to avoid deadlock, such as picking up both forks
simultaneously or using a waiter to control fork access.
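Besides the two solutions listed above, a third standard fix is resource ordering: every philosopher picks up the lower-numbered fork first, which breaks the circular wait. A minimal Python sketch under that assumption:

```python
import threading

forks = [threading.Lock() for _ in range(5)]   # one fork between each pair

def dine(i, meals):
    # Resource ordering: always acquire the lower-numbered fork first,
    # so no circular chain of waiting philosophers can form.
    first, second = sorted((i, (i + 1) % 5))
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                pass                           # eat
        # forks released here; philosopher thinks until the next loop

philosophers = [threading.Thread(target=dine, args=(i, 1000)) for i in range(5)]
for p in philosophers:
    p.start()
for p in philosophers:
    p.join()
print("all philosophers finished")             # no deadlock occurred
```

With naive left-then-right acquisition the same program can hang forever when all five grab their left fork at once.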
Readers-Writers Problem:
● Description: Readers can read data simultaneously, but writers need exclusive
access.
● Challenges: Ensuring no writer is starved by readers and vice versa.
● Solutions: Priority strategies such as giving priority to readers or writers, and using
synchronization mechanisms like semaphores or mutexes.
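The classic readers-preference solution can be sketched as follows (a minimal Python illustration; note that, as the challenge bullet above warns, this variant can starve writers if readers keep arriving):

```python
import threading

read_count = 0
read_count_lock = threading.Lock()   # protects read_count itself
resource = threading.Lock()          # held by one writer, or by readers as a group

def start_read():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:          # first reader locks writers out
            resource.acquire()

def end_read():
    global read_count
    with read_count_lock:
        read_count -= 1
        if read_count == 0:          # last reader lets writers back in
            resource.release()

def write(action):
    with resource:                   # writers need exclusive access
        action()

start_read(); start_read()           # two readers may read concurrently
end_read(); end_read()
print(resource.locked())             # False: a writer could now proceed
```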
Chapter 3: Memory Management
3.1 Basic Memory Management
● Simple memory organization with one user process and the operating system.
● Fixed Memory Partitions: Each partition has separate input queues or a single input
queue.
● Dynamic Relocation: Uses a relocation register to map logical addresses to
physical addresses.
● Base and Limit Registers: Ensure a program stays within its allocated memory
space and does not interfere with other processes.
● MMU (Memory Management Unit): Dynamically maps logical addresses to physical
addresses.
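The base/limit translation described above amounts to one comparison and one addition per memory access. A minimal Python sketch of what the MMU does in hardware (function and parameter names are illustrative):

```python
def translate(logical_addr, base, limit):
    """Dynamic relocation with base and limit registers (MMU sketch)."""
    if logical_addr >= limit:
        # Address falls outside the process's allocated space: trap.
        raise MemoryError("address outside the process's allocated space")
    return base + logical_addr       # relocated physical address

print(translate(100, base=4000, limit=1024))   # → 4100
```

Because every address is checked against the limit register first, a buggy process cannot read or write another process's memory.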
3.2 Swapping
Swapping:
Multiple-Partition Allocation:
● Memory Management with Bit Maps: Tracks free and allocated memory.
● Memory Management with Linked Lists: Easier to manage memory allocation and
merging free spaces.
● Paging: Divides physical memory into fixed-sized blocks (page frames) and logical
memory into pages. Uses a page table to translate logical addresses to physical
addresses.
● Segmentation: Divides the memory into segments of variable size, allowing each
segment to grow or shrink independently.
● Page Replacement Algorithms:
○ FIFO (First-In-First-Out): Replaces the oldest page.
○ Optimal Algorithm: Replaces the page that will not be used for the longest
time.
○ LRU (Least Recently Used): Replaces the least recently used page.
Multilevel Page Tables:
● Breaks the page table into multiple levels to handle larger address spaces efficiently.
Local vs. Global Replacement:
● Local Replacement: Each process selects replacement pages only from its own set.
● Global Replacement: Pages are selected from the overall set of pages, allowing
more flexibility but with potential fairness issues.
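The FIFO and LRU policies listed above can be compared on a reference string with a short simulation. A minimal Python sketch (the reference string is an illustrative example, not from the slides):

```python
from collections import OrderedDict

def count_faults(refs, frames, policy):
    """Count page faults for FIFO or LRU on a page reference string."""
    memory = OrderedDict()                 # pages currently in frames, in order
    faults = 0
    for page in refs:
        if page in memory:
            if policy == "lru":
                memory.move_to_end(page)   # hit: mark most recently used
        else:
            faults += 1                    # page fault
            if len(memory) == frames:
                memory.popitem(last=False) # evict oldest (FIFO) / least recent (LRU)
            memory[page] = None
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3, "fifo"), count_faults(refs, 3, "lru"))  # 10 9
```

The optimal algorithm cannot be simulated this way online, since it needs future references; it is used as a yardstick against which FIFO and LRU are measured.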
Page Size:
Cleaning Policy:
Page Fault Handling:
● The operating system handles page faults by determining the required page,
swapping it in, and updating the page table.
Backing Store:
● Static swap area or dynamic backing store used for pages swapped out of main
memory.
Separation of Policy and Mechanism:
● Keeps the policy (what to do) separate from the mechanism (how to do it) for easier
updates and maintenance.
3.6 Segmentation
Segmentation:
● Segments: Allows programs to be divided into logical units (e.g., code, data, stack)
that can be independently managed.
● Comparison with Paging:
○ Paging uses fixed-size pages; segmentation uses variable-size segments.
○ Paging can lead to internal fragmentation; segmentation can lead to external
fragmentation.
This summary covers the essential points and concepts of Chapter 3, focusing on memory
management, swapping, virtual memory, paging systems, implementation issues, and
segmentation.
Chapter 4: File Systems
File Concept:
● Long-term Information Storage: Files are used to store large amounts of data that
persist even after the process that created them has ended. Multiple processes can
access the information concurrently.
● File System: Part of the operating system responsible for managing files. It includes
two independent parts: the set of files and the directory structure that organizes and
provides information about all files in the system.
File Naming:
● Files are typically named with extensions indicating their type, such as .txt for text
files, .exe for executable files, etc.
File Structure:
File Types:
● Contiguous Logical Address Space: Types of files include data files (numeric,
character, binary) and program files (source, object, executable).
● Examples: An executable file and an archive file.
File Attributes:
● Common attributes include name, identifier, type, location, size, protection, time,
date, and user identification.
File Operations:
Memory-mapped Files:
4.2 Directories
Single-Level Directory:
● All files are contained in a single directory, which can lead to naming conflicts and
difficulties in managing files.
Two-Level Directory:
● Each user has their own directory, helping to avoid naming conflicts but still
somewhat limited in scalability.
Hierarchical Directory:
● Allows directories to contain other directories, forming a tree structure. This is the
most common directory structure.
Path Names:
● Absolute Path: The complete path from the root directory to the file.
● Relative Path: The path relative to the current working directory.
Directory Operations:
In-Memory Structures:
● The operating system maintains various in-memory structures to manage files and
directories efficiently.
Allocation Methods:
● Contiguous Allocation: Files are stored in contiguous blocks. Simple and provides
good performance but can lead to fragmentation and difficulty in file growth.
● Linked Allocation: Each file is a linked list of disk blocks scattered anywhere on the
disk. Simple but does not support efficient random access.
● Indexed Allocation: Uses an index block to store pointers to data blocks. Supports
random access and dynamic file sizes but requires additional overhead for index
management.
Example of i-node:
● An i-node (index node) is a data structure used to represent a file in Unix-like file
systems. It contains attributes and disk block addresses of the file.
● Combines direct, single indirect, double indirect, and triple indirect pointers to
manage files of various sizes efficiently.
Free Space Management:
● Techniques to manage free space on the disk include linked lists and bit maps.
Efficient management is crucial to avoid fragmentation and ensure optimal disk
performance.
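The i-node scheme described above can be sized with a quick calculation: direct pointers plus one, two, and three levels of indirection. A Python sketch (the block size, address size, and pointer count are hypothetical example values):

```python
def max_file_size(block_size, addr_size, n_direct):
    """Maximum file size for a Unix-style i-node with direct, single-,
    double-, and triple-indirect block pointers."""
    ptrs_per_block = block_size // addr_size   # addresses per indirect block
    blocks = (n_direct                         # directly addressed blocks
              + ptrs_per_block                 # single indirect
              + ptrs_per_block ** 2            # double indirect
              + ptrs_per_block ** 3)           # triple indirect
    return blocks * block_size                 # total addressable bytes

# 1 KB blocks, 4-byte block addresses, 10 direct pointers (example values)
print(max_file_size(1024, 4, 10) / 2**30)      # roughly 16 GB
```

The calculation shows why small files are cheap (direct pointers only) while the indirect levels extend the scheme to very large files.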
ISO 9660:
● Standard format for CD-ROM file systems. Includes specific directory entries and
attributes.
MS-DOS File System:
● Simple file system used in older operating systems like MS-DOS. Uses a File
Allocation Table (FAT) for managing file storage.
Windows 98 File System:
● Extended version of the MS-DOS file system with support for long file names.
UNIX V7 File System:
● Classical file system used in early Unix systems. Uses i-nodes to manage files and
supports hierarchical directory structures.
This summary covers the essential points and concepts of Chapter 4, focusing on file
systems, file and directory management, file system implementation, and examples of
various file systems.
Chapter 5: Input-Output
5.1 Principles of I/O Hardware
● Block Devices: Store information in fixed-size blocks (e.g., disk drives). Commands
include read, write, and seek. Memory-mapped file access is possible.
● Character Devices: Deliver or accept a stream of characters (e.g., keyboards,
mice). Commands include get and put.
Common Concepts:
● I/O Device Controller: Handles communication between the device and the
computer.
● I/O Port: A register in the device interface.
● I/O Bus: Connects the CPU, memory, and I/O devices.
Device Controllers:
● I/O devices have both mechanical and electronic components. The electronic
component is the device controller, which can handle multiple devices and perform
tasks like error correction and data transfer to main memory.
● Programmed I/O: The CPU actively polls the device until it is ready for data transfer.
● Interrupt-Driven I/O: The device sends an interrupt signal to the CPU when it is
ready, allowing the CPU to perform other tasks in the meantime.
● Direct Memory Access (DMA): A DMA controller transfers data directly between I/O
devices and memory, freeing the CPU for other tasks.
5.2 Principles of I/O Software
Goals of I/O Software:
● Device Independence: Programs can access any I/O device without specifying the
device in advance.
● Uniform Naming: The name of a file or device is simply a string or an integer and
does not depend on the machine.
● Error Handling: Errors are handled as close to the hardware as possible.
● Synchronous vs. Asynchronous Transfers: Blocked transfers vs. interrupt-driven.
● Buffering: Data coming off a device is temporarily stored before being sent to its
final destination.
● Sharable vs. Dedicated Devices: Some devices can be shared among multiple
users, while others cannot.
Programmed I/O:
● Steps involved in printing a string using programmed I/O, which ties up the CPU until
the I/O operation is complete.
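The polling loop at the heart of programmed I/O can be sketched as follows (a Python illustration with a hypothetical toy device object; a real driver would read a hardware status register instead):

```python
class ToyPrinter:
    """Hypothetical device model: always ready, records bytes written."""
    def __init__(self):
        self.buffer = []
    def ready(self):
        return True              # a real status register may say "busy"
    def put(self, byte):
        self.buffer.append(byte)

def programmed_io_write(device, data):
    """Programmed I/O: the CPU polls the device status for every byte,
    so it is fully occupied until the whole transfer completes."""
    for byte in data:
        while not device.ready():    # busy-wait (polling)
            pass                     # CPU does no useful work here
        device.put(byte)

printer = ToyPrinter()
programmed_io_write(printer, b"ABC")
print(bytes(printer.buffer))         # b'ABC'
```

The busy-wait loop is exactly what interrupt-driven I/O and DMA, described next, are designed to eliminate.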
Interrupt-Driven I/O:
● Writing a string to a printer using interrupt-driven I/O, which is more efficient than
programmed I/O but still has overhead due to interrupts.
DMA:
● Using DMA for I/O operations reduces the number of interrupts and allows the CPU
to perform other tasks.
I/O Software Layers:
1. Interrupt Handlers: Manage interrupts and notify the driver of I/O completion.
2. Device Drivers: Device-specific code for controlling I/O devices.
3. Device-Independent I/O Software: Provides a uniform interface for device drivers,
handles buffering, error reporting, and device allocation.
4. User-Space I/O Software: Includes libraries and spooling systems to manage I/O
operations.
Interrupt Handlers:
● Perform tasks like saving registers, setting up context, acknowledging interrupts, and
running the service procedure.
Device Drivers:
● Logical position of device drivers, which communicate with device controllers over
the bus.
Buffering:
Disk Hardware:
RAID Levels:
● RAID (Redundant Array of Independent Disks) levels 0 through 5, which provide
different methods of data redundancy and performance improvements.
Display Hardware:
● Memory-mapped displays where the driver writes directly into the display's video
RAM.
● Character-Oriented Terminals: Use RS-232 serial communication for transmitting
data one bit at a time.
Clocks:
This summary covers the essential points and concepts of Chapter 5, focusing on I/O
hardware, software principles, software layers, and various I/O devices.
Chapter 6: Deadlock
Chapter Objectives
6.1 Resources
Resource Types:
● Preemptable Resources: Can be taken away from a process without causing failure
(e.g., memory).
● Nonpreemptable Resources: Cannot be taken away without causing process
failure (e.g., a Blu-ray recorder).
● Resources are requested and released via system calls like open/close file,
allocate/free memory, and wait/signal.
Definition:
● A set of processes is deadlocked if each process is waiting for an event that only
another process in the set can cause, usually the release of a currently held
resource. In a deadlock, none of the processes can run, release resources, or be
awakened.
Deadlock Modeling:
● Resource-allocation graphs model deadlocks as directed graphs: nodes represent
processes and resources, and edges represent resource allocations and requests.
● Deadlock occurs when there is a circular chain of processes, each waiting for a
resource held by the next in the chain.
The Ostrich Algorithm:
● Pretend there is no problem; this is reasonable if deadlocks occur very rarely and
the cost of prevention is high. This approach is taken by UNIX and Windows.
Detection:
Recovery:
Resource Trajectories:
Banker's Algorithm:
● Determines whether granting a resource request will leave the system in a safe state.
If so, the request is granted; otherwise, it is postponed.
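The safe-state test behind this avoidance strategy can be sketched as the banker's algorithm safety check (a minimal Python illustration; the matrix values are a standard textbook example, not from the slides):

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check.
    available: free units per resource type
    allocation[i], need[i]: process i's current holdings and remaining claims."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, need_i in enumerate(need):
            if not finished[i] and all(n <= w for n, w in zip(need_i, work)):
                # Process i can run to completion and return its resources.
                for j, held in enumerate(allocation[i]):
                    work[j] += held
                finished[i] = True
                progress = True
    return all(finished)          # safe iff every process can finish

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # True: a safe sequence exists
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process is made to wait.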
Other Issues
Two-Phase Locking:
Nonresource Deadlocks:
● Possible when processes wait for each other to perform tasks, such as in semaphore
usage.
Starvation:
This summary covers the essential points and concepts of Chapter 6, focusing on
deadlocks, their detection, avoidance, prevention, and recovery.