Operating System OSG202 FPT - English

A condensed English-language summary


Here is a detailed summary of the key content from the provided slides in English:

Chapter 1: Introduction
1.1 Introduction

What is an Operating System?


- Definition: An operating system (OS) is a program that acts as an intermediary between a
user and computer hardware. It manages hardware resources and provides an environment
for application programs.
- Components of a Computer System:
- Hardware: The physical components, such as the CPU, memory, and I/O devices.
- System Programs: Programs that manage the hardware.
- Application Programs: Programs that perform tasks for users, such as word processors, spreadsheets, and web browsers.

Functions of an Operating System


- Extended Machine: Hides the complexities of hardware, presenting a simpler interface.
- Resource Manager: Allocates time and space for various programs to run efficiently.

1.2 History of Operating Systems

Generations of Operating Systems:


- First Generation (1945-1955): No OS; vacuum tubes and plugboards.
- Second Generation (1955-1965): Batch systems with transistors; early OSes such as FMS (Fortran Monitor System) and IBSYS.
- Third Generation (1965-1980): Introduction of integrated circuits, multiprogramming, and time-sharing systems like OS/360, MULTICS, and Unix.
- Fourth Generation (1980-present): Personal computers with GUI-based operating systems like CP/M, DOS, Windows, and Unix.
- Fifth Generation (1990-present): Mobile computers; OSes for smartphones such as Symbian, iOS, and Android.

1.3 Computer Hardware Review

Components of a Computer System:


- CPU (Central Processing Unit): Executes instructions from memory.
- Registers: Small, fast storage locations within the CPU.
- Pipeline and Superscalar: Techniques to improve CPU performance by executing multiple
instructions simultaneously.
- Dual-mode Operation: Distinguishes between user mode and kernel mode for security
and protection.

Memory:
- Hierarchy: Ranges from fast, small storage like registers to slower, larger storage like disks.
- Main Memory (RAM): Volatile memory used for running programs.
- Secondary Storage: Non-volatile storage like hard drives and SSDs.

I/O Devices:
- Device Controllers: Manage specific types of devices.
- Interrupts: Mechanism for devices to signal the CPU for attention.
- DMA (Direct Memory Access): Allows devices to transfer data directly to/from memory without CPU intervention.

Buses:
- Types of Buses: System bus, local bus, PCI, USB, etc., used for communication between
CPU, memory, and I/O devices.

1.4 The Operating System Zoo

Types of Operating Systems:


- Mainframe Operating Systems: Designed for batch processing, multiprogramming, and
time-sharing.
- Server Operating Systems: Optimized for handling network resources and providing
services.
- Multiprocessor Operating Systems: Manage multiple CPUs for high throughput and
availability.
- Personal Computer Operating Systems: Interface with a single user, handling a variety of
I/O devices.
- Real-time Operating Systems: Ensure timely processing, used in industrial control systems.
- Embedded Operating Systems: Used in devices with limited resources like PDAs and
smartphones.
- Smart Card Operating Systems: Small OS for devices with limited processing power and
memory.

1.5 Operating System Concepts

OS Components:
- Process Management: Handles creation, scheduling, and termination of processes.
Provides mechanisms for synchronization and communication between processes.
- Main Memory Management: Manages the allocation and deallocation of memory space to
processes.
- File Management: Manages file creation, deletion, and manipulation. Maps files onto
secondary storage.
- I/O Management: Manages I/O operations and device drivers. Provides caching, buffering,
and spooling.
- Secondary Storage Management: Manages free space, storage allocation, and disk
scheduling.

1.6 System Calls

Making System Calls:


- User Mode vs. Kernel Mode: System calls transition the system from user mode to kernel
mode for executing privileged instructions.
- POSIX System Calls: Standardized API for process management, file management,
directory management, and miscellaneous tasks.
- Examples:
- Process Management: `fork()`, `execve()`, `waitpid()`
- File Management: `open()`, `read()`, `write()`, `close()`
- Directory Management: `mkdir()`, `rmdir()`, `opendir()`, `closedir()`
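As a rough sketch (in Python, not from the slides), the file-management calls listed above can be invoked through the `os` module, which wraps the underlying POSIX system calls directly, without the buffering of the built-in `open()`:

```python
import os
import tempfile

# Exercise the POSIX file-management syscalls open()/write()/read()/close()
# via Python's os module wrappers.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # open() syscall
os.write(fd, b"hello, kernel")                       # write() syscall
os.close(fd)                                         # close() syscall

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)                               # read() syscall
os.close(fd)
print(data.decode())  # hello, kernel
```

Each of these calls crosses from user mode into kernel mode and back, which is exactly the transition described in this section.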
1.7 Operating System Structure
Architectures:
- Monolithic Systems: Single large kernel with all OS services.
- Layered Systems: Divides OS into layers, each built on top of lower layers.
- Virtual Machines: Emulates hardware to run multiple OS instances simultaneously.
- Client-Server Model: Separates functionalities into client processes and server processes.
- Microkernel: Minimal kernel with most services running in user space for modularity and
reliability.
1.8 Metric Units

Metric Prefixes:
- Used to denote the size and scale of measurements in computing, such as kilobytes (KB),
megabytes (MB), gigabytes (GB), etc.

This summary covers the key points and concepts from each section of the slides, providing
a comprehensive overview of Chapter 1 from the provided presentation. If you need further
details or explanations on specific topics, feel free to ask!

Chapter 2: Processes and Threads


2.1 Processes

Process Concept:

● An operating system executes various programs including batch jobs and user tasks
in time-shared systems.
● A process is a program in execution and must progress sequentially.
● A process includes resources such as address space, CPU time, program counter,
registers, stack, and other resources like open files and child processes.

Process Model:

● Multiprogramming of four programs is demonstrated, showing that only one program is active at any given instant.
● Processes each have their own flow of control, making progress over time, but only one is running at any given moment.

Process States:

● Processes can be in states such as running, blocked, or ready.


● Transitions between states occur due to scheduling decisions or I/O events.

Process Creation:
● Processes can be created due to system initialization, system calls, user requests, or
batch job initiation.
● Unix examples include fork() to create a new process and exec() to replace the
process’s memory space with a new program.
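A minimal sketch of the fork()/wait() pattern above, using Python's POSIX-only `os.fork()` and `os.waitpid()` (assumes a Unix-like system):

```python
import os

# Parent creates a child with fork(); the child terminates with a status
# code that the parent collects with waitpid().
pid = os.fork()
if pid == 0:
    # Child: in a real shell, exec() (e.g. os.execvp) would replace this
    # child's memory image with a new program here.
    os._exit(7)                     # child terminates with status 7
else:
    _, status = os.waitpid(pid, 0)  # parent blocks until the child exits
    print(os.WEXITSTATUS(status))   # 7
```

After fork() both processes run the same code; the return value (0 in the child, the child's PID in the parent) is what lets each take its own branch.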

Process Termination:

● Processes can terminate due to normal exit, fatal error, error exit, or being killed by
another process.

Process Hierarchies:

● Processes can create child processes forming a hierarchy. Unix calls this a "process
group."
● Windows treats all processes as equal without hierarchy.

Process Control Block (PCB):

● Each process is represented by a PCB which includes process ID, program counter,
register set, and scheduling information.

Context Switch:

● Interrupts cause the OS to change the CPU from its current task to run a kernel
routine, performing a state save and restore (context switch).

Implementation of Processes:

● The OS maintains a process table with entries for each process, known as process
control blocks (PCBs).

2.2 Threads

Thread Concept:

● A thread is a unit of execution within a process, which can have one or many
threads.
● Threads share the same address space but have their own program counter,
registers, and stack.

Thread Model:

● Threads may exist in separate processes, or multiple threads may run within a single process.

Benefits of Threads:

● Responsiveness: Allows a program to continue running even if part of it is blocked.


● Resource Sharing: Threads share the same address space and resources.
● Economy: Creating and context switching threads is more economical than
processes.
● Utilization of Multiprocessor Architectures: Multiple threads can run in parallel on
different processors.
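The shared-address-space point can be illustrated with a small sketch (Python's `threading` module, not from the slides): two threads write to the same list object, something two separate processes could not do without explicit IPC.

```python
import threading

# Two threads in one process share the address space, so both can append
# to the same list; each still has its own stack and program counter.
shared = []
lock = threading.Lock()

def worker(name, n):
    for i in range(n):
        with lock:                 # mutual exclusion on the shared list
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 100))
t2 = threading.Thread(target=worker, args=("B", 100))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))  # 200
```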

Thread Usage Examples:


● Word processors using threads to interact with the user and reformat documents in
the background.
● Multithreaded web servers using dispatcher and worker threads for efficient
processing.

Implementing Threads:

● User Space Threads: Managed by a thread library in user space; thread management is fast, but a single blocking system call blocks the entire process.
● Kernel Space Threads: Managed by the OS kernel, allowing better support for
multiprocessor systems, but slower thread management.
● Hybrid Implementations: Combine user-level and kernel-level threads, providing
flexibility but still facing issues like blocking kernel calls affecting user threads.

2.3 Interprocess Communication (IPC)

Process Relationship:

● Independent Processes: Do not affect each other’s execution.


● Cooperating Processes: Can affect each other’s execution for information sharing,
computation speed-up, modularity, and convenience.

Race Conditions:

● Occur when multiple processes access shared memory simultaneously, requiring mechanisms like mutual exclusion to prevent data inconsistency.

Critical Regions:

● Parts of the program where shared memory is accessed, requiring mutual exclusion.

Mutual Exclusion Solutions:

● Busy Waiting: Using techniques like lock variables, strict alternation, and Peterson’s
solution.
● Hardware Support: Disabling interrupts and using TSL (Test-and-Set Lock)
instructions.

Synchronous Solutions:

● Sleep and Wakeup: OS primitives for blocking and waking processes.


● Semaphores: Introduced by Dijkstra, using down() and up() operations for
synchronization.
● Monitors: High-level synchronization constructs that ensure mutual exclusion and
use condition variables.

Message Passing:

● Processes communicate by sending and receiving messages, establishing links automatically, typically bi-directional.

Classical Synchronization Problems:

● Bounded-Buffer Problem (Producer-Consumer)


● Readers and Writers Problem
● Dining-Philosophers Problem
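The bounded-buffer (producer-consumer) problem can be sketched with semaphores, where Dijkstra's down() and up() correspond to `acquire()` and `release()` in Python's `threading.Semaphore` (a sketch, not the slides' code):

```python
import threading
from collections import deque

# Bounded buffer: `empty` counts free slots, `full` counts filled slots,
# and a mutex semaphore guards the buffer itself.
N = 5                                 # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)
empty = threading.Semaphore(N)        # free slots
full = threading.Semaphore(0)         # filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()               # down(empty): wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                # up(full): signal a filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()                # down(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()               # up(empty): free a slot

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True
```

The semaphores block the producer when the buffer is full and the consumer when it is empty, which is exactly the sleep-and-wakeup behavior the primitives above describe.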

Chapter 2 (continued): Scheduling and IPC Problems


2.4 Scheduling

Introduction to Scheduling:

● CPU Utilization: Achieved through multiprogramming.


● CPU-I/O Burst Cycle: Alternating periods of CPU execution and I/O wait.
○ CPU-bound Process: Long CPU bursts.
○ I/O-bound Process: Short CPU bursts.

Levels of Scheduling:

● Long-term Scheduling: Determines which processes are admitted to the system (job scheduling).
● Medium-term Scheduling: Handles swapping processes in and out of memory
(swapping).
● Short-term Scheduling: Decides which of the ready processes will be executed
next (CPU scheduling).

Scheduling Decisions:

● Occur when a process:
1. Switches from running to waiting.
2. Switches from running to ready.
3. Switches from waiting to ready.
4. Terminates.

Types of Scheduling:

● Non-preemptive Scheduling: A process runs until it blocks or releases the CPU voluntarily.
● Preemptive Scheduling: A process can be interrupted and moved to the ready state
by the scheduler.

Scheduling Criteria:

● CPU Utilization: Keep the CPU as busy as possible.


● Throughput: Number of processes that complete their execution per time unit.
● Turnaround Time: Total time to execute a particular process.
● Waiting Time: Total time a process spends in the ready queue.
● Response Time: Time from the submission of a request until the first response is
produced.

Optimization Goals:

● Maximize CPU utilization and throughput.


● Minimize turnaround time, waiting time, and response time.

Scheduling Algorithms:

● First-Come, First-Served (FCFS):
○ Non-preemptive; processes are executed in the order they arrive.
○ Can lead to the convoy effect, where short processes get stuck behind long
processes.
● Shortest Job First (SJF):
○ Non-preemptive or preemptive.
○ Process with the shortest next CPU burst is selected.
○ Optimal in terms of minimum average waiting time but requires knowledge of
future CPU bursts.
● Round Robin (RR):
○ Each process gets a small unit of CPU time (time quantum).
○ Processes are preempted and placed back in the ready queue after their time
quantum expires.
○ Balances responsiveness and efficiency.
● Priority Scheduling:
○ Each process is assigned a priority.
○ CPU is allocated to the highest priority process.
○ Can be preemptive or non-preemptive.
○ Issues of starvation can be addressed with aging, which increases the priority
of processes waiting for a long time.
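A small sketch (assuming all processes arrive at time 0 and only burst times are known) makes the FCFS-vs-SJF waiting-time comparison concrete:

```python
# Average waiting time under FCFS vs non-preemptive SJF, processes
# all arriving at time 0, given only their CPU burst times.
def avg_wait_fcfs(bursts):
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed        # each process waits for all earlier bursts
        elapsed += b
    return wait / len(bursts)

def avg_wait_sjf(bursts):
    # SJF is simply FCFS over the bursts sorted shortest-first
    return avg_wait_fcfs(sorted(bursts))

bursts = [24, 3, 3]
print(avg_wait_fcfs(bursts))   # 17.0  -> (0 + 24 + 27) / 3: convoy effect
print(avg_wait_sjf(bursts))    # 3.0   -> (0 + 3 + 6) / 3
```

Running the long burst first forces both short jobs to wait behind it (the convoy effect); SJF reorders them and minimizes the average wait.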

Scheduling in Real-Time Systems:

● Hard Real-Time Systems: Must complete critical tasks within a guaranteed time.
● Soft Real-Time Systems: Prioritize critical tasks but do not guarantee strict timing.

Scheduling Policy vs. Mechanism:

● Policy: Decides what is to be done.


● Mechanism: Decides how it is to be done.
● The kernel provides mechanisms, while user processes set policies.

Thread Scheduling:

● Local Scheduling: Thread library decides which thread to run.


● Global Scheduling: Kernel decides which kernel thread to run next.

2.5 Classic Interprocess Communication Problems

The Dining Philosophers Problem:

● Description: Five philosophers sit at a table with five forks. Each philosopher needs
two forks to eat and can think while waiting for forks.
● Challenges: Deadlock and resource starvation.
● Solutions: Various algorithms to avoid deadlock, such as picking up both forks
simultaneously or using a waiter to control fork access.

The Readers and Writers Problem:

● Description: Readers can read data simultaneously, but writers need exclusive
access.
● Challenges: Ensuring no writer is starved by readers and vice versa.
● Solutions: Priority strategies, such as giving priority to readers or writers, and synchronization mechanisms like semaphores or mutexes.

Chapter 3: Memory Management


3.1 Basic Memory Management

Memory Hierarchy:

● Ideal Memory: Large, fast, and non-volatile.


● Hierarchy:
○ Small, fast, expensive memory (cache)
○ Medium-speed, medium-priced main memory (RAM)
○ Slow, cheap disk storage

Logical vs. Physical Address Space:

● Logical Address: Generated by the CPU, also known as a virtual address.


● Physical Address: Address seen by the memory unit.
● The logical address space is bound to a separate physical address space.

Monoprogramming without Swapping or Paging:

● Simple memory organization with one user process and the operating system.

Multiprogramming with Fixed Partitions:

● Fixed Memory Partitions: Each partition has separate input queues or a single input
queue.
● Dynamic Relocation: Uses a relocation register to map logical addresses to
physical addresses.

Relocation and Protection:

● Base and Limit Registers: Ensure a program stays within its allocated memory
space and does not interfere with other processes.
● MMU (Memory Management Unit): Dynamically maps logical addresses to physical
addresses.

3.2 Swapping

Swapping:

● Swapping processes in and out of memory to ensure proper memory utilization.


● External Fragmentation: Memory is fragmented into small holes.
● Internal Fragmentation: Allocated memory may be slightly larger than requested.

Multiple-Partition Allocation:

● Memory Management with Bit Maps: Tracks free and allocated memory.
● Memory Management with Linked Lists: Easier to manage memory allocation and
merging free spaces.

Dynamic Storage-Allocation Algorithms:

● First-Fit: Allocate the first hole that is big enough.


● Next-Fit: Continue searching from the last allocated position.
● Best-Fit: Allocate the smallest hole that fits the request.
● Worst-Fit: Allocate the largest hole, leaving the largest leftover hole.
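The hole-selection policies above can be sketched as functions that return the index of the chosen hole from a list of free-hole sizes (a sketch, not the slides' code):

```python
# Each function picks a hole index for a request, or None if nothing fits.
def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i               # first hole big enough
    return None

def best_fit(holes, request):
    fits = [i for i, size in enumerate(holes) if size >= request]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, request):
    fits = [i for i, size in enumerate(holes) if size >= request]
    return max(fits, key=lambda i: holes[i]) if fits else None

holes = [6, 12, 5]
print(first_fit(holes, 5))  # 0 (first adequate hole)
print(best_fit(holes, 5))   # 2 (smallest adequate hole)
print(worst_fit(holes, 5))  # 1 (largest hole)
```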

3.3 Virtual Memory


Virtual Memory:

● Paging: Divides physical memory into fixed-sized blocks (page frames) and logical
memory into pages. Uses a page table to translate logical addresses to physical
addresses.
● Segmentation: Divides the memory into segments of variable size, allowing each
segment to grow or shrink independently.
● Page Replacement Algorithms:
○ FIFO (First-In-First-Out): Replaces the oldest page.
○ Optimal Algorithm: Replaces the page that will not be used for the longest
time.
○ LRU (Least Recently Used): Replaces the least recently used page.
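FIFO and LRU can be compared with a short fault-counting sketch (the reference string below is the classic example used to illustrate these algorithms):

```python
from collections import OrderedDict

# Count page faults for FIFO and LRU over a reference string with a
# fixed number of frames.
def fifo_faults(refs, frames):
    queue, faults = [], 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)                # evict the oldest page
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    recent, faults = OrderedDict(), 0
    for page in refs:
        if page in recent:
            recent.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(recent) == frames:
                recent.popitem(last=False)  # evict least recently used
            recent[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(lru_faults(refs, 3))   # 10
```

The optimal algorithm cannot be implemented in practice because it needs future knowledge; it serves as the benchmark that FIFO and LRU approximate.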

Address Translation Scheme:

● Logical address divided into page number and page offset.


● Physical address generated by combining the base address from the page table and
the page offset.
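The translation step can be sketched with simple arithmetic, assuming a 4 KB page size and a hypothetical page table:

```python
# Split a logical address into (page number, offset), then combine the
# frame's base address from the page table with the same offset.
PAGE_SIZE = 4096

def translate(logical, page_table):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]              # page-table lookup
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 9, 2: 3}           # page -> frame (made-up mapping)
print(divmod(8195, PAGE_SIZE))            # (2, 3): page 2, offset 3
print(translate(8195, page_table))        # 3*4096 + 3 = 12291
```

Because page sizes are powers of two, real hardware does this split with bit masks and shifts rather than division.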

Two-Level Page Tables:

● Breaks the page table into multiple levels to handle larger address spaces efficiently.

3.4 Design Issues for Paging Systems

Local vs. Global Allocation Policies:

● Local Replacement: Each process selects pages from its own set.
● Global Replacement: Pages are selected from the overall set of pages, allowing
more flexibility but with potential fairness issues.

Page Size:

● Small Pages: Less internal fragmentation but larger page tables.


● Large Pages: More internal fragmentation but smaller page tables.

Cleaning Policy:

● Uses a background process (paging daemon) to clean up and free pages.

3.5 Implementation Issues

Page Fault Handling:

● The operating system handles page faults by determining the required page,
swapping it in, and updating the page table.

Backing Store:

● Static swap area or dynamic backing store used for pages swapped out of main
memory.

Separation of Policy and Mechanism:

● Keeps the policy (what to do) separate from the mechanism (how to do it) for easier
updates and maintenance.
3.6 Segmentation

Segmentation:

● Segments: Allows programs to be divided into logical units (e.g., code, data, stack)
that can be independently managed.
● Comparison with Paging:
○ Paging uses fixed-size pages; segmentation uses variable-size segments.
○ Paging can lead to internal fragmentation; segmentation can lead to external
fragmentation.

Segmentation with Paging:

● Combines both methods for more efficient memory management.


Chapter 4: File Systems


1. Files

File Concept:

● Long-term Information Storage: Files are used to store large amounts of data that
persist even after the process that created them has ended. Multiple processes can
access the information concurrently.
● File System: Part of the operating system responsible for managing files. It includes
two independent parts: the set of files and the directory structure that organizes and
provides information about all files in the system.

File Naming:

● Files are typically named with extensions indicating their type, such as .txt for text
files, .exe for executable files, etc.

File Structure:

● Files can have different structures:


○ Byte Sequence: A linear sequence of bytes.
○ Record Sequence: A series of fixed or variable-length records.
○ Tree: A hierarchical structure of records.

File Types:

● A file is a contiguous logical address space. Types of files include data files (numeric, character, binary) and program files (source, object, executable).
● Examples: An executable file and an archive file.

File Access Methods:


● Sequential Access: Read all bytes or records from the beginning sequentially.
● Random Access (Direct/Relative Access): Read bytes or records in any order.
Essential for database systems.

File Attributes:

● Common attributes include name, identifier, type, location, size, protection, time,
date, and user identification.

File Operations:

● Common operations include creating, deleting, opening, closing, reading, writing, appending, seeking, getting attributes, setting attributes, and renaming.

Memory-mapped Files:

● Memory-mapping allows a process to access files directly in memory, which can improve performance for large files.
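Python's `mmap` module gives a small, concrete sketch of the idea: once a file is mapped, ordinary memory operations (here, slice assignment) modify the file without explicit read()/write() calls.

```python
import mmap
import os
import tempfile

# Create a small file, map it into memory, and modify it in place.
path = os.path.join(tempfile.mkdtemp(), "mapped.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)     # map the whole file
    mm[0:5] = b"HELLO"                # a plain memory write updates the file
    mm.flush()
    mm.close()

with open(path, "rb") as f:
    data = f.read()
print(data)  # b'HELLO world'
```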

2. Directories

Single-Level Directory Systems:

● All files are contained in a single directory, which can lead to naming conflicts and
difficulties in managing files.

Two-Level Directory Systems:

● Each user has their own directory, helping to avoid naming conflicts but still
somewhat limited in scalability.

Hierarchical Directory Systems:

● Allows directories to contain other directories, forming a tree structure. This is the
most common directory structure.

Path Names:

● Absolute Path: The complete path from the root directory to the file.
● Relative Path: The path relative to the current working directory.

Directory Operations:

● Common operations include creating, deleting, opening, closing, reading, and renaming directories, and creating or removing links to files.

3. File System Implementation

File System Layout:

● A typical file system layout includes:


○ Master Boot Record (MBR): Contains the partition table and bootstrap
loader.
○ Partition Table: Stores information about partitions on the disk.
○ File System Superblock: Contains key parameters about the file system.

In-Memory File System Structures:

● The operating system maintains various in-memory structures to manage files and
directories efficiently.

Allocation Methods:

● Contiguous Allocation: Files are stored in contiguous blocks. Simple and provides
good performance but can lead to fragmentation and difficulty in file growth.
● Linked Allocation: Each file is a linked list of disk blocks scattered anywhere on the
disk. Simple but does not support efficient random access.
● Indexed Allocation: Uses an index block to store pointers to data blocks. Supports
random access and dynamic file sizes but requires additional overhead for index
management.

Example of i-node:

● An i-node (index node) is a data structure used to represent a file in Unix-like file
systems. It contains attributes and disk block addresses of the file.

Combined Scheme (UNIX):

● Combines direct, single indirect, double indirect, and triple indirect pointers to
manage files of various sizes efficiently.
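A back-of-the-envelope sketch of the combined scheme's reach, assuming (hypothetically) 12 direct pointers, 4 KB blocks, and 4-byte block addresses:

```python
# Maximum file size addressable via 12 direct pointers plus single,
# double, and triple indirect blocks (assumed 4 KB blocks, 4-byte addresses).
BLOCK = 4096
PTRS = BLOCK // 4                # pointers per indirect block = 1024

max_blocks = 12 + PTRS + PTRS**2 + PTRS**3
max_bytes = max_blocks * BLOCK
print(max_blocks)                     # 1074791436 blocks
print(round(max_bytes / 2**40, 2))    # 4.0 (roughly 4 TB)
```

The triple-indirect term dominates: each indirection level multiplies the reachable block count by the number of pointers per block.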

4. File System Management and Optimization

Disk Space Management:

● Techniques to manage free space on the disk include linked lists and bit maps.
Efficient management is crucial to avoid fragmentation and ensure optimal disk
performance.

5. Example File Systems

CD-ROM File Systems (ISO 9660):

● Standard format for CD-ROM file systems. Includes specific directory entries and
attributes.

MS-DOS File System:

● Simple file system used in older operating systems like MS-DOS. Uses File
Allocation Table (FAT) for managing file storage.

Windows 98 File System:

● Extended version of the MS-DOS file system with support for long file names.

UNIX V7 File System:

● Classical file system used in early Unix systems. Uses i-nodes to manage files and
supports hierarchical directory structures.


Chapter 5: Input-Output
5.1 Principles of I/O Hardware

Types of I/O Devices:

● Block Devices: Store information in fixed-size blocks (e.g., disk drives). Commands
include read, write, and seek. Memory-mapped file access is possible.
● Character Devices: Deliver or accept a stream of characters (e.g., keyboards,
mice). Commands include get and put.

Common Concepts:

● I/O Device Controller: Handles communication between the device and the
computer.
● I/O Port: A register in the device interface.
● I/O Bus: Connects the CPU, memory, and I/O devices.

Device Controllers:

● I/O devices have both mechanical and electronic components. The electronic
component is the device controller, which can handle multiple devices and perform
tasks like error correction and data transfer to main memory.

Data Transfer Methods:

● Programmed I/O: The CPU actively polls the device until it is ready for data transfer.
● Interrupt-Driven I/O: The device sends an interrupt signal to the CPU when it is
ready, allowing the CPU to perform other tasks in the meantime.
● Direct Memory Access (DMA): A DMA controller transfers data directly between I/O
devices and memory, freeing the CPU for other tasks.

5.2 Principles of I/O Software

Goals of I/O Software:

● Device Independence: Programs can access any I/O device without specifying the
device in advance.
● Uniform Naming: The name of a file or device is a string or an integer, not
depending on the machine.
● Error Handling: Errors are handled as close to the hardware as possible.
● Synchronous vs. Asynchronous Transfers: Blocked transfers vs. interrupt-driven.
● Buffering: Data coming off a device is temporarily stored before being sent to its
final destination.
● Sharable vs. Dedicated Devices: Some devices can be shared among multiple
users, while others cannot.

Programmed I/O:

● Steps involved in printing a string using programmed I/O, which ties up the CPU until
the I/O operation is complete.
Interrupt-Driven I/O:

● Writing a string to a printer using interrupt-driven I/O, which is more efficient than
programmed I/O but still has overhead due to interrupts.

DMA:

● Using DMA for I/O operations reduces the number of interrupts and allows the CPU
to perform other tasks.

5.3 I/O Software Layers

Layers of the I/O Software System:

1. Interrupt Handlers: Manage interrupts and notify the driver of I/O completion.
2. Device Drivers: Device-specific code for controlling I/O devices.
3. Device-Independent I/O Software: Provides a uniform interface for device drivers,
handles buffering, error reporting, and device allocation.
4. User-Space I/O Software: Includes libraries and spooling systems to manage I/O
operations.

Interrupt Handlers:

● Perform tasks like saving registers, setting up context, acknowledging interrupts, and
running the service procedure.

Device Drivers:

● Logical position of device drivers, which communicate with device controllers over
the bus.

Buffering:

● Different methods of buffering: unbuffered input, buffering in user space, buffering in the kernel, and double buffering in the kernel.

5.4 I/O Devices

Types of I/O Devices:

● Storage Devices: Hard disks, CD-ROMs, DVDs.


● Display Devices: Character-oriented terminals, graphical user interfaces.
● Clocks: Used for timing and scheduling tasks.

Disk Hardware:

● Structure of disk drives and various disk arm scheduling algorithms:


○ First-Come, First-Served (FCFS): Processes requests in the order they
arrive.
○ Shortest Seek First (SSF): Selects the request with the shortest seek time.
○ Elevator Algorithm: Moves the disk arm in one direction fulfilling requests
until it reaches the end, then reverses direction.
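The FCFS-vs-SSF trade-off can be sketched by totaling head movement over a request queue (the cylinder numbers below are a commonly used textbook example, starting at cylinder 53):

```python
# Total head movement (in cylinders) under FCFS vs Shortest Seek First.
def fcfs_movement(head, requests):
    total = 0
    for r in requests:
        total += abs(head - r)    # seek to each request in arrival order
        head = r
    return total

def ssf_movement(head, requests):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)   # always take the shortest seek
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_movement(53, queue))  # 640 cylinders
print(ssf_movement(53, queue))   # 236 cylinders
```

SSF cuts total movement sharply, but requests far from the head can starve, which is the problem the elevator algorithm addresses.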

RAID Levels:
● RAID (Redundant Array of Independent Disks) levels 0 through 5, which provide
different methods of data redundancy and performance improvements.

Display Hardware:

● Memory-mapped displays where the driver writes directly into the display's video
RAM.
● Character-Oriented Terminals: Use RS-232 serial communication for transmitting
data one bit at a time.

Clocks:

● Programmable clocks for timing and scheduling.


Chapter 6: Deadlock
Chapter Objectives

● Develop an understanding of deadlocks, which prevent sets of concurrent processes from completing their tasks.
● Present different methods for preventing or avoiding deadlocks in a computer
system.

6.1 Resources

Types of Resources:

● Examples: Printers, tape drives, and slots in internal system tables.
● Usage Sequence: Request, use, release.
● Processes need access to resources in a specific order. For example, a process
holding resource X may request resource Y, while another process holding Y
requests X, leading to a deadlock if both are blocked.

Resource Types:

● Preemptable Resources: Can be taken away from a process without causing failure
(e.g., memory).
● Nonpreemptable Resources: Cannot be taken away without causing process
failure (e.g., a Blu-ray recorder).

Request and Release:

● Resources are requested and released via system calls like open/close file,
allocate/free memory, and wait/signal.

6.2 Introduction to Deadlocks

Definition:

● A set of processes is deadlocked if each process is waiting for an event that only
another process in the set can cause, usually the release of a currently held
resource. In a deadlock, none of the processes can run, release resources, or be
awakened.

Four Conditions for Deadlock:

1. Mutual Exclusion: Each resource is assigned to one process or is available.


2. Hold and Wait: Processes holding resources can request additional resources.
3. No Preemption: Resources cannot be forcibly taken away from a process.
4. Circular Wait: A circular chain of two or more processes exists, each waiting for a
resource held by the next process in the chain.

6.3 Deadlock Modeling

Resource-Allocation Graph (RAG):

● Used to model deadlocks with directed graphs. Nodes represent processes and
resources, and edges represent resource allocation and requests.
● Deadlock occurs when there is a circular chain of processes, each waiting for a
resource held by the next in the chain.

6.4 The Ostrich Algorithm

Ignoring the Problem:

● Pretend there is no problem; this is reasonable if deadlocks occur very rarely and the cost of prevention is high. This approach is taken by UNIX and Windows.

6.5 Deadlock Detection and Recovery

Detection:

● One Resource of Each Type: Use a wait-for graph to detect deadlocks.


● Multiple Resources of Each Type: Use a matrix-based detection algorithm that repeatedly marks processes whose remaining requests can be satisfied; any unmarked processes are deadlocked.
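With one resource of each type, detection reduces to finding a cycle in the wait-for graph. A depth-first-search sketch (process → process it waits on):

```python
# Detect a cycle in a wait-for graph represented as an adjacency dict.
def has_cycle(graph):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:          # back edge: cycle (deadlock) found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
healthy = {"P1": ["P2"], "P2": ["P3"], "P3": []}
print(has_cycle(deadlocked))  # True
print(has_cycle(healthy))     # False
```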

Recovery:

● Preemption: Take a resource from another process.


● Rollback: Periodically checkpoint processes and restart them if a deadlock occurs.
● Killing Processes: Terminate one or more processes in the deadlock cycle.

6.6 Deadlock Avoidance

Resource Trajectories:

● Graphical approach to visualize resource allocation and avoid deadlocks by careful resource allocation.

The Banker's Algorithm:

● Determines whether granting a resource request will leave the system in a safe state.
If so, the request is granted; otherwise, it is postponed.
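The safety check at the heart of the Banker's Algorithm can be sketched as follows: repeatedly find a process whose remaining need fits within the available resources, pretend it runs to completion, and reclaim its allocation; the state is safe iff every process can finish. The numbers below are a classic textbook-style example, not from the slides.

```python
# Safety check: available resources, per-process allocation, per-process need.
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish: reclaim everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Example state: 5 processes, 3 resource types.
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maxim = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maxim, alloc)]
print(is_safe([3, 3, 2], alloc, need))  # True: a safe sequence exists
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits.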

6.7 Deadlock Prevention

Attacking the Four Conditions:


1. Mutual Exclusion: Spool devices to avoid exclusive access.
2. Hold and Wait: Require processes to request all resources before starting or release
resources before requesting new ones.
3. No Preemption: Not viable in most cases, especially with nonpreemptable
resources.
4. Circular Wait: Impose an ordering on resource types and require processes to
request resources in a specified order.
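Attacking circular wait can be sketched in a few lines: if every thread acquires its locks in the same global order (here, ordered by `id`, an arbitrary but fixed choice), a cycle of waiting threads cannot form even when callers list the locks differently.

```python
import threading

# Always acquire locks in a fixed global order to rule out circular wait.
def acquire_in_order(locks):
    for lock in sorted(locks, key=id):
        lock.acquire()

def release_all(locks):
    for lock in locks:
        lock.release()

a, b = threading.Lock(), threading.Lock()
done = []

def task(name, locks):
    for _ in range(1000):
        acquire_in_order(locks)   # both threads lock in the same order
        release_all(locks)
    done.append(name)

# The two threads name the locks in opposite orders, which would risk
# deadlock with naive acquisition; the ordering discipline prevents it.
t1 = threading.Thread(target=task, args=("t1", [a, b]))
t2 = threading.Thread(target=task, args=("t2", [b, a]))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['t1', 't2']
```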

Other Issues

Two-Phase Locking:

● A technique to avoid deadlocks in database systems: acquire all locks in the first phase, then perform updates in the second phase.

Nonresource Deadlocks:

● Possible when processes wait for each other to perform tasks, such as in semaphore
usage.

Starvation:

● Occurs when a process is perpetually denied necessary resources. Can be mitigated using a first-come, first-served policy.

