Course Module CpE Operating System 1
Tuguegarao City
COURSE MODULE ON
COME 1123 – OPERATING SYSTEM
Prepared by:
Reviewed by:
Recommended by:
Approved by:
Modern operating systems require advanced interrupt handling features beyond the basic mechanism. These
features include:
1. Deferring interrupts: The ability to temporarily disable interrupts during critical tasks to ensure their
completion without interruption.
2. Efficient dispatching: Efficient methods to locate the correct interrupt handler for a device, potentially
using techniques like interrupt chaining.
3. Multilevel interrupts: A system that prioritizes interrupts, allowing high-priority interrupts to preempt
lower-priority ones for a more responsive system.
1.2.2 Storage Structure
Basic Unit:
o Bit: The fundamental unit (0 or 1).
o Byte: A collection of 8 bits (common storage unit).
o Word: A computer-specific unit consisting of multiple bytes (e.g., 64-bit word).
Storage Measurements:
o Kilobyte (KB): 1,024 bytes (often rounded to 1,000 bytes).
o Megabyte (MB): 1,024 KB (often rounded to 1 million bytes).
o Gigabyte (GB): 1,024 MB (often rounded to 1 billion bytes).
o Terabyte (TB): 1,024 GB.
o Petabyte (PB): 1,024 TB.
Main Memory (RAM):
o Volatile (loses data on power off).
o Fast access.
o Limited storage capacity.
o Interacts with CPU via load/store instructions.
Multicore Processors:
Multiple processing cores reside on a single chip.
Appear to the operating system as N individual CPUs (one per core).
This puts pressure on developers to optimize software for efficient use of multiple cores (discussed in
Chapter 4).
Benefits of Multitasking:
Virtual Memory: Allows running programs larger than physical memory (covered in Chapter 10).
o Separates logical memory (user view) from physical memory.
o Frees programmers from memory size limitations.
1.4.2 Dual-Mode and Multimode Operation
User vs. Kernel Mode:
o Hardware distinguishes between user programs and the operating system itself using a mode bit.
o Kernel mode (privileged) is for core system tasks.
o User mode (less privileged) is for user applications.
Transitioning Between Modes:
o System boots and runs in kernel mode.
o User applications run in user mode.
o Traps, interrupts, or system calls trigger a switch to kernel mode.
Protection Mechanisms:
o Certain instructions (e.g., I/O control) are privileged and can only run in kernel mode.
o Attempting a privileged instruction in user mode triggers a trap to the operating system.
Error Handling:
o Hardware traps mode violations and program
errors (illegal instructions, memory access
issues) to the operating system.
o The operating system terminates the program
abnormally and may provide an error
message or memory dump.
1.4.3 Timer
Timers help operating systems maintain control of the CPU:
Preventing Infinite Loops and Stalled Programs:
o User programs might get stuck in infinite loops or fail to relinquish control to the OS.
Timer as a Safety Mechanism:
o A timer can be set to interrupt the CPU after a specific time interval.
o This ensures the OS regains control periodically.
o Memory Management:
Keeps multiple programs in memory for efficient CPU utilization and faster user experience.
Different memory management schemes exist with varying approaches.
o OS Responsibilities in Memory Management:
Tracking memory usage (which parts are used and by which processes).
Allocating and deallocating memory space as needed.
1.5.3 File-System Management
Files: A Logical View of Storage:
o The OS provides a consistent way (files) to view information regardless of physical storage
details.
o Files act as a logical storage unit.
Physical Storage Devices:
o The OS maps files onto physical storage media (e.g., hard drives).
o File management is a crucial part of the OS.
File Characteristics:
o Files can hold various data types (programs, numeric data, text, etc.).
o They can be structured (fixed format) or unstructured (free-form).
1.5.4 Mass-Storage Management (Secondary Storage)
o Importance of Secondary Storage:
Backs up main memory for persistent data storage.
Stores programs (compilers, web browsers, etc.) until loaded into memory.
1.5.5 Cache Management
Caching: copying frequently used data from slower storage into a faster storage system (the cache) so that subsequent accesses are served more quickly.
o Queues:
FIFO (First-In-First-Out) principle: items are removed in the order they were added.
Examples: waiting lines, print jobs.
1.9.2 Trees
o Trees:
Represent data with parent-child relationships, forming a hierarchy.
General trees: a parent can have any number of children.
o Binary Trees:
A special type of tree where a parent has at most two children (left and right).
o Binary Search Trees:
A binary tree with an ordering property: left child <= parent <= right child.
Support efficient searching, though an unbalanced tree degrades to worst-case O(n).
o Balanced Binary Search Trees:
Constructed using algorithms to ensure good search performance (worst-case
O(lg n)).
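To make the ordering property concrete, the following is a minimal illustrative sketch (not taken from the module) of a binary search tree in C; the node structure and the insert/search routines use names invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>

/* A node holds one integer key plus links to its left and right children. */
struct node {
    int key;
    struct node *left, *right;
};

/* Insert obeys the ordering property: keys <= parent go left, larger keys go right. */
struct node *insert(struct node *root, int key) {
    if (root == NULL) {
        struct node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key <= root->key)
        root->left = insert(root->left, key);
    else
        root->right = insert(root->right, key);
    return root;
}

/* Search walks down one branch per comparison: O(lg n) if balanced, O(n) if not. */
int search(struct node *root, int key) {
    while (root != NULL) {
        if (key == root->key) return 1;
        root = (key < root->key) ? root->left : root->right;
    }
    return 0;
}

int main(void) {
    struct node *root = NULL;
    int keys[] = {17, 5, 25, 2, 11};
    for (int i = 0; i < 5; i++)
        root = insert(root, keys[i]);
    printf("11 found? %d\n", search(root, 11));  /* prints 1 */
    printf("9 found?  %d\n", search(root, 9));   /* prints 0 */
    return 0;
}
```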
IDENTIFICATION
TRUE OR FALSE
-True or False: The operating system acts as a resource manager that allocates resources
like CPU time, memory, storage, and I/O devices.
TRUE
-True or False: Non-volatile storage (NVS) loses its data when the power is turned off.
FALSE
-True or False: Interrupts can be temporarily disabled to allow critical tasks to complete
without interruption.
TRUE
MULTIPLE CHOICE
A) To ensure that each program uses the least amount of memory possible.
B) To manage the computer’s audio-visual systems more efficiently.
C) To keep multiple programs in memory to enhance CPU utilization and improve user
experience.
D) To reduce the cost of installing new hardware.
-Which of the following best describes a file in the context of file-system management?
A) A collection of data stored in volatile memory.
B) A physical section of the hard drive.
C) A logical storage unit that provides a consistent way to view information, regardless of the
physical storage details.
D) A program that manages the inputs and outputs of the operating system.
An operating system serves as a mediator between computer users and hardware, facilitating the execution of
programs efficiently. It manages hardware resources, ensuring proper system operation and preventing program
interference. Operating systems vary in structure and design, with goals guiding their development. They offer
services, interfaces, and internal components that cater to users, programmers, and designers. Understanding
these aspects involves examining system services, interfaces, debugging methods, design methodologies, and
the process of creating and initializing operating systems.
Operating systems offer a platform for program execution and provide services to both programs and users.
Although the exact services vary between operating systems, common classes can be identified. Figure 2.1
illustrates the interrelation of these services, which also simplify the programming process for developers.
Operating systems provide various services to users and ensure the system's efficient operation.
1. User Interface: Operating systems offer graphical user interfaces (GUIs), command-line interfaces (CLIs),
or touch-screen interfaces for user interaction.
2. Program Execution: They load programs into memory, execute them, and handle termination, whether
normal or abnormal.
3. I/O Operations: Operating systems manage input/output operations for programs, including access to files
and I/O devices.
4. File-System Manipulation: They handle file and directory operations, such as reading, writing, creation,
deletion, searching, and permission management.
5. Communications: Operating systems facilitate communication between processes, either locally or across a
network, using shared memory or message passing.
6. Error Detection: They constantly detect and address errors in hardware, I/O devices, and user programs,
ensuring correct computing and taking appropriate actions when errors occur.
Users interact with operating systems through various interfaces, including command-line interfaces (CLI),
graphical user interfaces (GUI), and touch-screen interfaces. Here's a summary of each approach:
- Shells such as Bourne-Again shell (bash) provide functionality to execute commands for tasks like file
manipulation.
- Commands can be implemented within the interpreter itself or through system programs, offering flexibility
and ease of adding new commands.
- Users navigate through icons and menus to execute programs, select files or directories, or access system
functions.
- GUIs originated in the early 1970s (notably with the Xerox Alto) and became widespread in the 1980s with systems such as the Apple Macintosh.
- UNIX systems, traditionally CLI-dominated, now offer various GUI interfaces such as KDE and GNOME
desktops.
Touch-Screen Interface:
- Mobile systems like smartphones and tablets utilize touch-screen interfaces, allowing users to interact through
gestures on the screen.
- These interfaces simulate keyboards on the touch screen for text input and navigation.
- System administrators and power users typically prefer CLI for efficiency and programmability.
- Windows and macOS users predominantly use GUI interfaces, although CLI options exist.
- The book concentrates on providing adequate service to user programs without distinguishing between user
and system programs from the operating system's perspective.
System calls serve as a crucial interface for accessing the services provided by an operating system. These calls
are typically exposed to programmers through functions written in languages like C and C++. However, certain
low-level tasks (for example, those that must access hardware directly) may need to be written in assembly language.
Consider a simple task, such as copying data from one file to another. This seemingly straightforward operation
involves multiple system calls. For instance, the program needs to obtain the names of the input and output
files. These names can be provided as command-line arguments or interactively through user input prompts,
requiring sequences of system calls to display messages and gather input.
Once the file names are obtained, the program must open the input file and create the output file, each operation
requiring additional system calls. Error handling becomes critical at this stage, as the program must handle
scenarios such as non-existent files or permission issues gracefully.
With both files set up, the program enters a loop to read from the input file and write to the output file, with
each read and write operation being a system call. These operations also necessitate error checking to handle
conditions like reaching the end of the file or encountering hardware failures.
Finally, after copying the entire file, the program may close both files, provide feedback to the user, and
terminate normally, all of which involve further system calls.
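As a concrete illustration of how many system calls even a simple copy involves, the following is a minimal POSIX sketch (the buffer size and usage message are arbitrary choices for the example); each open(), read(), write(), and close() below is a direct system call, and the error checks mirror the failure cases described above.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {                      /* obtain the input and output file names */
        fprintf(stderr, "usage: %s <infile> <outfile>\n", argv[0]);
        return 1;
    }

    int in = open(argv[1], O_RDONLY);     /* system call: open the input file */
    if (in < 0) { perror("open input"); return 1; }

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create output file */
    if (out < 0) { perror("open output"); close(in); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0) {   /* system call: read a block */
        if (write(out, buf, n) != n) {              /* system call: write it back */
            perror("write");
            break;
        }
    }
    if (n < 0) perror("read");            /* hardware failure or other read error */

    close(in);                            /* system calls: close both files */
    close(out);
    return 0;
}
```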
Programmers often interact with the operating system through higher-level constructs known as Application
Programming Interfaces (APIs). These APIs define a set of functions, parameters, and return values, shielding
developers from the complexities of direct system calls and ensuring program portability across different
systems.
Behind the scenes, functions in APIs typically map to actual system calls. For example, a function like
`CreateProcess()` in the Windows API might invoke a system call like `NTCreateProcess()` in the Windows
kernel.
The Run-Time Environment (RTE) plays a crucial role in managing system-call interactions. It includes the
necessary software components for executing applications in a specific programming language, providing a
system-call interface to abstract away OS details. Parameter passing to system calls varies depending on the OS,
with methods like passing parameters in registers, storing them in memory blocks, or pushing them onto a stack.
Application Programming Interfaces (APIs) serve as a crucial abstraction layer between programmers and the
intricacies of system calls in operating systems. These APIs provide a set of functions with well-defined
parameters and return values, shielding developers from the complexities of direct system call invocation.
Programmers commonly design their applications around APIs, such as the Windows API for Windows
systems, the POSIX API for UNIX-based systems (including Linux and macOS), and the Java API for Java-
based applications. Access to these APIs is facilitated through libraries provided by the operating system, like
libc for UNIX and Linux programs written in C. Behind the scenes, functions within APIs typically translate to
actual system calls, such as Windows' `CreateProcess()` function invoking the `NTCreateProcess()` system call
in the Windows kernel. The preference for using APIs over direct system calls is rooted in several factors,
including program portability and ease of use. APIs ensure that applications can compile and run across systems
supporting the same API, abstracting away architectural differences.
Additionally, working with APIs is often simpler and more intuitive than dealing with low-level system calls,
which can be intricate and system-specific. Despite this, APIs often closely align with the underlying system calls.
A crucial component in managing system calls is the run-time environment (RTE), which includes the
necessary software components for executing applications in a given programming language. The RTE provides
a system-call interface that mediates between API function calls and actual system calls in the operating system
kernel.
System calls may require passing parameters in various ways, such as through registers, memory blocks, or the
stack, depending on the operating system and the specific call. These methods ensure that the necessary
information is provided to the system call for its execution, with different OSs employing different strategies
based on their design and requirements.
In system call handling, parameters are often managed through a combination of methods. In Linux, a blend of
register and block approaches is employed. When there are five or fewer parameters, registers are utilized.
However, for cases exceeding this limit, the block method is implemented, where parameters are stored in a
memory block, and the address of this block is passed as a parameter in a register. Additionally, parameters can
also be placed onto the stack by the program and later retrieved by the operating system. This flexibility is
advantageous as it avoids constraints on the number or length of parameters being passed.
Figure 2.7: Diagram illustrating the combination of register and block methods for passing parameters in Linux system calls.
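To make the register method tangible, the sketch below invokes the Linux write system call directly through the C library's syscall() wrapper; the three parameters (file descriptor, buffer address, byte count) are placed in registers by the system-call convention, as described above. The call and constants used are standard Linux/glibc facilities, but the example itself is illustrative rather than taken from the module.

```c
#define _GNU_SOURCE
#include <sys/syscall.h>   /* SYS_write */
#include <unistd.h>        /* syscall() */
#include <string.h>

int main(void) {
    const char *msg = "hello via a raw system call\n";

    /* syscall() places the system-call number and the three parameters
     * (fd = 1 for standard output, buffer pointer, byte count) into the
     * registers defined by the Linux system-call convention and traps
     * into the kernel. The usual write() library function does the same
     * thing behind the scenes. */
    long n = syscall(SYS_write, 1, msg, strlen(msg));

    return n < 0 ? 1 : 0;
}
```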
1. Process Control:
Create Process: Initiates a new process.
Terminate Process: Halts the execution of a process.
Load, Execute: Loads and executes another program within a process.
Get Process Attributes, Set Process Attributes: Retrieves or modifies attributes of a process, such as its
priority or execution time.
Wait Event, Signal Event: Allows processes to synchronize by waiting for or signaling events.
Allocate and Free Memory: Manages memory allocation and deallocation for processes.
2. File Management:
Create File, Delete File: Creates or removes files.
Open, Close: Opens or closes files for reading or writing.
Read, Write, Reposition: Reads from, writes to, or repositions within files.
Get File Attributes, Set File Attributes: Retrieves or sets attributes of files, such as permissions or
timestamps.
3. Device Management:
Request Device, Release Device: Requests or releases access to devices.
Read, Write, Reposition: Performs input/output operations on devices, like reading from or writing to
disks.
Get Device Attributes, Set Device Attributes: Retrieves or sets attributes of devices, such as status or
configuration.
4. Information Maintenance:
Get Time or Date, Set Time or Date: Retrieves or updates system time or date information.
Get System Data, Set System Data: Retrieves or updates various system-wide data.
Get Process, File, or Device Attributes: Retrieves attributes of processes, files, or devices.
Set Process, File, or Device Attributes: Modifies attributes of processes, files, or devices.
5. Communications:
Create, Delete Communication Connection: Establishes or terminates communication connections
between processes.
Send, Receive Messages: Transmits messages between processes.
Transfer Status Information: Exchanges status information between processes.
Attach or Detach Remote Devices: Associates or disassociates remote devices with a system.
6. Protection:
Get File Permissions, Set File Permissions: Retrieves or modifies permissions associated with files.
Allow User, Deny User: Grants or denies user access to resources.
These system calls enable processes to interact with the operating system and manage various aspects of the
system, such as processes, files, devices, and communication channels while ensuring the security and
protection of resources.
QUESTIONS:
A) Process control
B) File organization
C) Device management
D) Information maintenance
2. In which model of inter-process communication are messages exchanged directly between processes?
A) Message-passing model
B) Shared-memory model
C) Hybrid model
D) Synchronous model
1. System calls for file management include operations such as creating, deleting, and closing files.
2. Device management system calls are only relevant for physical devices, not virtual ones.
3. Information maintenance system calls can be used to modify the behavior of processes.
Identification Questions:
1. What system call category deals with operations such as creating, deleting, opening, reading, writing, and
closing files?
2. Which type of system calls involves obtaining time and date information, retrieving system data, and
accessing process, file, or device attributes?
3. What are the primary functions of system calls in the process control category?
1. B) File organization
2. A) Message-passing model
1. True
2. False
3. False
Identification Questions:
1. File management.
2. Information Maintenance
3. Managing processes
Process Concept
Even on a computer that can execute only one program at a time, such as an embedded device that does not
support multitasking, the operating system may need to support its own internal programmed
activities, such as memory management. In many respects, all these activities are similar, so we call
all of them processes.
Process
Operations on Processes
Process Creation
A parent process creates child processes, which, in turn, create other processes, forming a tree of
processes.
Process Identifier (PID) – a unique value used to identify each process.
Traditional UNIX systems identify the process init (also known as System V init) as the root of all child
processes.
Two possibilities for execution exist:
The parent continues to execute concurrently with its children.
The parent waits until some or all of its children have terminated.
Two address-space possibilities exist for the new process:
The child process is a duplicate of the parent process (it has the same program and data as the parent).
The child process has a new program loaded into it.
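These options correspond directly to the POSIX fork(), exec(), and wait() calls. The sketch below is illustrative (the program being loaded, /bin/ls, is an arbitrary choice): the child starts as a duplicate of the parent, then loads a new program, while the parent waits for it to terminate.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child: a duplicate of the parent */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {              /* child: load a new program into this process */
        execlp("/bin/ls", "ls", NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {                            /* parent: wait until the child terminates */
        int status;
        waitpid(pid, &status, 0);       /* without this, a finished child is a zombie */
        printf("child %d completed\n", (int)pid);
    }
    return 0;
}
```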
Process Termination
Cascading Termination- a process terminates (normally or abnormally), then all its children must also be
terminated.
Zombie Process- A process that has terminated, but whose parent has not yet called wait().
Orphans – child processes left behind when a parent terminates without invoking wait().
Android Process Hierarchy
Interprocess Communication
Independent Process – a process that doesn’t share data with any other processes executing in the
system.
Cooperating Process - a process which can affect or be affected by the other processes executing in the
system. Any process that shares data with other processes.
By providing an environment that supports process cooperation, developers can create more efficient, scalable,
and modular systems that better meet the needs of users and applications.
IPC Model
Interprocess communication (IPC) facilitates the exchange of data between cooperating processes. There are
two primary models for IPC: shared memory and message passing.
a. Shared-Memory Model: In this model, a region of memory shared by the cooperating processes is
established, and processes exchange information by reading and writing data in that shared region. Once
the region is set up, communication needs no further kernel intervention, which makes it fast, but the
processes themselves must synchronize their access to the shared data.
b. Message Passing Model: In this model, communication occurs through messages exchanged between
processes. Processes send messages containing data to other processes, which receive and process these
messages. Message passing provides a more structured approach to communication, making it easier to
manage and control data flow between processes. However, it may incur overhead due to message
passing operations and buffer management.
Both models have their advantages and trade-offs, and the choice between them depends on factors such as the
nature of the application, performance requirements, and programming preferences. Figure 3.11 illustrates
the differences between these two IPC models.
Producer - is a process or thread responsible for generating data, items, or resources that are consumed by
another process or thread called the consumer.
Consumer - is a process or thread responsible for consuming or utilizing the items produced by a producer.
2. Message Passing:
Message passing is useful in distributed environments where processes may reside on different
computers connected by a network.
It involves operations like send(message) and receive(message).
4. Naming:
Processes communicate using direct or indirect communication.
Direct communication involves explicit naming of sender and receiver.
Indirect communication involves communication via mailboxes or ports.
5. Synchronization:
Message passing operations can be blocking or non-blocking (synchronous or asynchronous).
Blocking operations wait until the message is sent or received, while non-blocking operations continue
immediately.
6. Buffering:
Messages exchanged by processes reside in temporary queues.
Queues can be zero-capacity (no buffering), bounded capacity (finite length), or unbounded capacity
(potentially infinite).
Each of these points addresses different aspects of message passing systems, including communication
mechanisms, synchronization options, and buffering strategies, providing a comprehensive understanding of
message passing in operating systems.
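The send/receive pattern described above can be sketched with an ordinary POSIX pipe, one of the simplest message-passing channels the kernel provides. In this illustrative example the parent sends one message and the child blocks until it receives it; the message text is arbitrary, and a real design would add framing and fuller error handling.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                               /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                          /* child: the receiver */
        close(fd[1]);                        /* not writing */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* blocking receive */
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                                 /* parent: the sender */
        close(fd[0]);                        /* not reading */
        const char *msg = "greetings";       /* send(message) */
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}
```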
QUESTIONS
MC
1. In what state is a process that is in secondary memory but is available for execution as soon as it
is loaded into main memory?
a. Blocked
b. Ready/Suspended
c. Ready
d. Blocked/Suspended
2. A ____ is a unit of activity characterized by the execution of a sequence of instructions, a current state,
and an associated set of system resources.
a. Identifier
b. Process
c. State
d. Kernel
3. The portion of the operating system that selects the next process to run.
a. Trace
b. Process Control Block
c. Dispatcher
d. PSW
TRUE or FALSE
1. The Process Control Block is the key tool that enables the OS to support multiple processes and to
provide for multiprocessing. TRUE
2. A process switch may occur any time that the OS has gained control from the currently running process.
TRUE
3. If a system does not employ virtual memory each process to be executed must be fully loaded into main
memory. TRUE
IDENTIFICATION
1. A process is in the _____ state when it is in secondary memory and awaiting an event.
Ans: Blocked/Suspended
2. A significant point about the ______ is that it contains sufficient information so that it is possible to
interrupt a running process and later resume execution as if the interruption had not occurred.
Ans: Process Control Block
3. It is a layer of software between the application and the computer hardware that supports applications
and utilities.
Ans: Operating System (OS)
4.1 Overview
A thread, consisting of a thread ID, program counter, register set, and stack, is a fundamental element of CPU
utilization. It shares code and resources with other threads within the same process, enabling parallel execution
of tasks. Unlike traditional single-threaded processes, which have a single thread of control, multithreaded
processes can perform multiple tasks simultaneously. This distinction is illustrated in Figure 4.1, highlighting
the efficiency gains of multithreading in modern computing environments.
4.1.1 Motivation
Multithreading is ubiquitous in modern computing, enabling software applications to perform multiple tasks
simultaneously. Examples include photo thumbnail generation, where separate threads process individual
images, and web browsers that concurrently display content while fetching data from the network. Leveraging
multicore systems, applications can execute CPU-intensive tasks in parallel, enhancing performance.
In scenarios like web server management, where multiple clients access the server concurrently, multithreading
offers efficiency over traditional single-threaded processes. Instead of creating separate processes for each client
request, a multithreaded server creates threads to handle requests, reducing resource overhead and improving
responsiveness.
Moreover, multithreading extends to operating system kernels, where multiple threads manage diverse tasks
such as device management and memory handling. Additionally, many applications benefit from multiple
threads, including sorting algorithms and data processing tasks.
Overall, multithreading is essential for maximizing computing resources, improving responsiveness, and
optimizing performance across various computing domains.
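A minimal POSIX threads sketch of the idea, assuming a pthreads environment (the worker function and thread count are invented for the example): the process creates several threads that run the same function concurrently, sharing the process's code and data, and then waits for all of them to finish.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* Each thread runs this function with its own stack but shares the
 * process's code, globals, and heap with the other threads. */
static void *worker(void *arg) {
    long id = (long)arg;
    printf("thread %ld doing its share of the work\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);   /* spawn workers */

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);                          /* wait for each to finish */

    return 0;
}
```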
4.1.2 Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness. Multithreading improves application responsiveness by allowing it to stay active
during time-consuming operations. This is especially important for user interfaces, where uninterrupted
responsiveness is vital. Unlike single-threaded applications, which can become unresponsive during
such tasks, multithreading enables concurrent processing of tasks and user interactions. Thus, in a
multithreaded application, performing a time-consuming operation, like clicking a button, doesn't hinder
user interaction, as the application can continue responding to other inputs simultaneously.
2. Resource sharing. Processes share resources through explicit techniques like shared memory and
message passing, arranged by the programmer. Threads, on the other hand, inherently share the memory
and resources of their process. This sharing enables multiple threads to operate within the same address
space, facilitating efficient code and data sharing within applications.
3. Economy. Thread creation is more economical than process creation due to shared resource allocation
within processes. Threads share memory and resources, making them more efficient to create and switch
between compared to processes. While measuring overhead differences can be challenging, thread
creation generally consumes less time and memory, with faster context switching between threads than
between processes.
4. Scalability. Multithreading offers significant benefits in multiprocessor architectures, where threads can
run concurrently on different cores. Unlike single-threaded processes limited to a single processor,
multithreading maximizes utilization of available processing cores, enhancing performance. Further
exploration of this topic is discussed in the following section.
Concurrency involves multiple tasks making progress simultaneously, while parallelism entails actual
simultaneous task execution. Thus, concurrency can exist without parallelism. In single-processor systems
before multiprocessor and multicore architectures, CPU schedulers facilitated concurrency by rapidly switching
between processes, giving the illusion of parallelism. Despite running concurrently, processes were not
executing in parallel.
The many-to-one threading model allows unlimited creation of user threads but lacks parallelism due to single-
threaded kernel scheduling. The one-to-one model enhances concurrency but requires cautious thread creation
due to system limitations. Conversely, the many-to-many model addresses these issues, allowing unrestricted
user thread creation while enabling parallel execution on multiprocessors and efficient handling of blocking
system calls. A variation, the two-level model, combines multiplexing and user-to-kernel thread binding. While
the many-to-many model offers flexibility, its complexity makes implementation challenging. Despite this, with
modern systems having more processing cores, the importance of limiting kernel threads has diminished.
Consequently, most operating systems now favor the one-to-one model, although contemporary concurrency
libraries utilize the many-to-many model for task mapping.
3. This involves examining applications to find areas that can be divided into separate, concurrent tasks.
a) Identifying Tasks
b) Balance
c) Data Splitting
d) Data Dependency
2. The benefits of multithreaded programming are identifying tasks, achieving balance, splitting data, and data
dependency. False
3. Many applications can also take advantage of multiple threads, including basic sorting, trees, and graph
algorithms. True
III. Identification
1. It provides a mechanism for more efficient use of multiple computing cores and improved concurrency.
Multithreaded Programming
3. What type of parallelism involves distributing not data but tasks (threads) across multiple computing cores?
Task parallelism
CPU–I/O Burst Cycle: Process execution alternates between CPU bursts and I/O waits. Processes start with
CPU bursts, followed by I/O bursts, and the cycle repeats until termination.
CPU Scheduler: When the CPU is idle, the CPU scheduler selects a process from the ready queue for
execution. The ready queue can be implemented in various ways, such as FIFO, priority queue, etc.
Preemptive and Non-preemptive Scheduling: Scheduling decisions occur when processes switch states, such
as from running to waiting or ready. Non-preemptive scheduling allows a process to keep the CPU until it
voluntarily releases it, while preemptive scheduling forcibly reallocates the CPU, even if the process doesn't
release it voluntarily.
Dispatcher: The dispatcher is responsible for switching control of the CPU to the selected process. It involves
context switching, switching to user mode, and jumping to the appropriate location in the user program.
Dispatch latency refers to the time taken for these operations.
Context Switches: Context switches occur when the CPU switches between processes. They can be system-
wide or specific to individual processes. Context switches can be voluntary (when a process gives up control
due to resource unavailability) or nonvoluntary (when the CPU is taken away from a process).
To evaluate CPU-scheduling algorithms, typically, many processes with sequences of CPU bursts and
I/O bursts are considered. However, for simplicity, examples often focus on a single CPU burst per
process, with the average waiting time being a common metric for comparison. More complex
evaluation mechanisms are discussed in detail in subsequent sections.
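As an illustration of the average-waiting-time metric, the sketch below computes FCFS waiting times for three processes arriving at time 0; the burst lengths are made-up values for the example, and each process waits for the total burst time of everything ahead of it in the queue.

```c
#include <stdio.h>

int main(void) {
    /* CPU burst lengths (ms) of three processes arriving at time 0,
     * served in first-come, first-served order. */
    int burst[] = {24, 3, 3};
    int n = 3;

    int waiting = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, waiting);
        total_wait += waiting;
        waiting += burst[i];          /* the next process also waits behind this burst */
    }

    /* (0 + 24 + 27) / 3 = 17 ms; reordering the queue would change this. */
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}
```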
QUESTIONS:
Multiple Choice
1. Which scheduling algorithm is non-preemptive?
a) First-come, First-Served (FCFS)
b) Shortest-Job-First (SJF)
c) Round-Robin (RR)
Answer: a) First-come, First-Served (FCFS)
2. What is the primary goal of CPU scheduling?
a) Maximizing CPU burst length
b) Minimizing context switches
c) Maximizing CPU utilization
Answer: c) Maximizing CPU utilization
3. Which scheduling algorithm allows processes to move between queues based on their CPU burst
characteristics?
a) Priority Scheduling
b) Multilevel Queue Scheduling
c) Multilevel Feedback Queue Scheduling
Answer: c) Multilevel Feedback Queue Scheduling
True or False
4. Shortest-Job-First (SJF) Scheduling is optimal for minimizing average waiting time.
Answer: True
BACKGROUND:
Synchronization tools play a crucial role in today's interconnected digital landscape, where individuals and
organizations rely on accessing and sharing information across multiple devices and platforms seamlessly.
These tools serve as the backbone for ensuring data consistency, collaboration, and accessibility, offering users
the ability to synchronize files, documents, and other data in real time or at scheduled intervals. At their core,
synchronization tools leverage various technologies such as cloud storage, peer-to-peer networking, and version
control systems to enable smooth and efficient synchronization processes. Cloud storage-based synchronization
services, like Dropbox, Google Drive, OneDrive, and iCloud, store data in centralized servers accessible from
anywhere with an internet connection. Users can upload, modify, or delete files on one device, and these
changes are automatically propagated to all synchronized devices linked to the same account.
One of the key benefits of synchronization tools is their ability to facilitate collaboration among multiple users
or teams. By granting access permissions and sharing capabilities, collaborators can work on the same
documents simultaneously, track changes, and maintain version history. This fosters productivity, streamlines
workflows, and reduces the risk of version conflicts or data loss.
Moreover, synchronization tools offer robust backup functionalities, serving as a safety net against data loss due
to device failure, accidental deletion, or unforeseen events. By continuously syncing data to the cloud or other
devices, users can ensure that their valuable information remains intact and accessible even in the event of
hardware failures or disasters. Additionally, synchronization tools are instrumental in enabling cross-platform
compatibility, allowing users to seamlessly transition between different devices and operating systems without
sacrificing data integrity or accessibility. Whether accessing files from a desktop computer, laptop, smartphone,
or tablet, users can expect a consistent and synchronized experience across all devices.
However, while synchronization tools offer numerous benefits, they also pose certain considerations, such as
privacy and security concerns. Users must be mindful of the data they choose to synchronize, understand the
terms of service and privacy policies of the chosen synchronization service, and implement appropriate security
measures to safeguard sensitive information.
CRITICAL SECTION
The Critical-Section Problem is a fundamental challenge in computer science, particularly in the context of
synchronization tools and concurrent programming. At its core, the problem arises when multiple concurrent
processes or threads access shared resources, leading to potential conflicts and inconsistencies. Synchronization
tools must address this challenge to ensure the integrity and correctness of data and operations.
In the realm of synchronization tools, the Critical-Section Problem manifests when multiple users or processes
attempt to access or modify shared data or resources simultaneously. Without proper synchronization
mechanisms in place, these concurrent operations can result in race conditions, data corruption, or unpredictable
behavior.
To mitigate the Critical-Section Problem, synchronization tools employ various techniques and synchronization
primitives, such as locks, semaphores, and mutexes. These mechanisms help coordinate access to critical
sections of code or shared resources, ensuring that only one process or thread can execute within the critical
section at a time.
For example, consider a cloud-based synchronization tool that allows multiple users to edit a shared document
simultaneously. Without proper synchronization, two users might attempt to save conflicting changes to the
document simultaneously, leading to data corruption or loss. By employing synchronization techniques such as
locks or version control systems, the synchronization tool can enforce a sequential execution of edits, ensuring
that changes are applied in a coordinated and consistent manner.
However, implementing synchronization mechanisms introduces its own set of challenges, such as deadlock,
livelock, and contention. Deadlock occurs when two or more processes are unable to proceed because each is
waiting for the other to release a resource. Livelock occurs when processes continuously change their states in
response to each other's actions, but none make progress. Contention arises when multiple processes compete
for access to the same resource, potentially leading to performance bottlenecks.
Peterson's Solution is a classic algorithm used to address the Critical-Section Problem in concurrent
programming. Proposed by Gary L. Peterson in 1981, this solution provides a simple yet effective way to
synchronize concurrent processes or threads accessing shared resources.
At its core, Peterson's Solution relies on two shared variables: a flag array and a turn variable. Each process or
thread that wishes to enter the critical section sets its flag to indicate its desire to access the critical section.
Additionally, it sets the turn variable to the other process's index, yielding priority to that process if it also wants to enter.
1. Each process sets its flag to indicate its intent to enter the critical section and sets the turn variable to the
other process.
2. Before entering the critical section, a process checks the flags of other processes and the turn variable to
determine if it can proceed.
3. If a process finds that it is not its turn or that another process also wishes to enter the critical section, it waits
until conditions change.
4. Once a process exits the critical section, it resets its flag, allowing other processes to proceed.
Peterson's Solution ensures that only one process can enter the critical section at a time while avoiding issues
such as deadlock and livelock. However, it has limitations, particularly in scenarios involving more than two
processes or in distributed systems.
Despite its simplicity, Peterson's Solution serves as a foundational concept in concurrent programming and
synchronization, laying the groundwork for more sophisticated synchronization techniques and algorithms.
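A minimal two-thread sketch of the algorithm in C follows. It is illustrative only; as the discussion above notes, the algorithm assumes loads and stores are not reordered, so on modern hardware a faithful implementation would need memory barriers or atomic operations.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared state for exactly two threads, indexed 0 and 1. */
volatile int flag[2] = {0, 0};   /* flag[i]: thread i wants to enter */
volatile int turn = 0;           /* which thread defers to the other */
int counter = 0;                 /* the shared resource being protected */

static void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;

    for (int k = 0; k < 100000; k++) {
        flag[i] = 1;                       /* announce intent to enter */
        turn = j;                          /* give the other thread priority */
        while (flag[j] && turn == j)       /* wait while the other wants in and has the turn */
            ;                              /* busy wait */

        counter++;                         /* critical section */

        flag[i] = 0;                       /* exit: let the other thread proceed */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* With no reordering, mutual exclusion holds and the count is exact. */
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}
```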
1. ATOMIC INSTRUCTIONS: Modern processors often provide atomic instructions, such as compare-and-
swap (CAS) or test-and-set, which allow for indivisible operations on memory locations. These atomic
operations are essential for implementing synchronization primitives like locks, semaphores, and barriers
efficiently.
2. MEMORY BARRIERS: Hardware memory barriers, also known as memory fences, ensure the ordering
and visibility of memory operations across multiple processors or cores. They prevent reordering of memory
accesses and enforce consistency, which is crucial for maintaining the correctness of concurrent programs.
3. CACHE COHERENCE PROTOCOLS: Multi-core processors typically employ cache coherence protocols
to ensure that multiple processor cores have consistent views of shared memory. These protocols manage data
coherence and synchronization between processor caches, ensuring that updates made by one core are visible to
other cores in a timely and coherent manner.
4. TRANSACTIONAL MEMORY: Some modern processors feature support for transactional memory,
which allows programmers to define regions of code as transactions. Transactions provide a higher-level
abstraction for synchronization, enabling atomic and isolated execution of groups of instructions. Hardware-
based transactional memory implementations aim to reduce contention and overhead associated with traditional
locking mechanisms.
5. MEMORY ORDERING GUARANTEES: Hardware architectures define memory ordering guarantees that
specify the ordering constraints for memory accesses performed by different processor cores or threads. These
guarantees ensure that memory operations are observed in a consistent and predictable order, which is essential
for synchronization and data consistency.
Overall, hardware support for synchronization plays a crucial role in enabling efficient and scalable concurrent
programming on modern computer systems. By leveraging these hardware-level features, developers can
implement synchronization mechanisms that are both efficient and reliable, ensuring the correct and optimal
execution of concurrent programs.
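To show how an atomic instruction translates into a usable lock, here is an illustrative spinlock built from C11's atomic test-and-set operation (atomic_flag_test_and_set); the lock, counter, and function names are chosen for the example and are not part of any particular system's API.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */
long counter = 0;

static void acquire(void) {
    /* Atomically set the flag and return its previous value; loop while
     * it was already set, i.e., while some other thread holds the lock. */
    while (atomic_flag_test_and_set(&lock))
        ;  /* spin (busy wait) */
}

static void release(void) {
    atomic_flag_clear(&lock);          /* make the lock available again */
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        acquire();
        counter++;                     /* critical section */
        release();
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```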
A mutex (short for mutual exclusion) lock is a synchronization mechanism used to ensure that only one
thread can access a shared resource at a time, preventing race conditions and data corruption. Here's a summary:
1. Exclusive Access: A mutex lock allows only one thread to enter a critical section of code at a time. Other
threads attempting to enter the critical section will be blocked until the mutex is released.
2. Locking and Unlocking: Threads acquire a mutex lock before accessing the shared resource and release it
afterward. This ensures that only one thread can execute the critical section of code at any given time.
3. Blocking: If a thread attempts to acquire a mutex lock that is already held by another thread, it will be
blocked until the lock is released. This prevents multiple threads from accessing the critical section
simultaneously.
4. Deadlocks: Improper use of mutex locks can lead to deadlocks, where two or more threads are waiting for
each other to release resources they need. Careful design and programming practices are necessary to avoid
deadlocks.
5. Performance Considerations: Mutex locks incur some overhead due to context switching and
synchronization, so they should be used judiciously. In some cases, other synchronization mechanisms such as
semaphores or condition variables may be more appropriate.
Overall, mutex locks are a fundamental tool for ensuring thread safety and preventing data corruption in multi-
threaded programs.
Spinlocks: Mutex locks that use busy waiting are often referred to as spinlocks. While spinlocks can be
efficient in certain scenarios, they can lead to performance issues if the critical section is held for an extended
period.
Alternatives: To avoid busy waiting, systems may implement other synchronization mechanisms, such as
semaphores or condition variables, which allow processes to sleep and be awakened when the lock becomes
available.
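The lock/unlock pattern described above maps directly onto POSIX mutexes. The following is a minimal sketch under that assumption; the shared balance variable is invented for the example.

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int balance = 0;                       /* shared resource protected by the mutex */

static void *deposit(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* blocks if another thread holds the lock */
        balance++;                     /* critical section: only one thread at a time */
        pthread_mutex_unlock(&lock);   /* release so a waiting thread can proceed */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d (expected 200000)\n", balance);
    return 0;
}
```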
6.6 Semaphores
Semaphores are another synchronization tool used in multi-threaded programming, offering more flexibility
than mutex locks. Here's a summary:
1. Counting Mechanism: Unlike mutex locks, which provide exclusive access to a shared resource,
semaphores can control access to a resource by multiple threads simultaneously. They maintain a count to track
the number of resources available.
- Binary Semaphores: Also known as mutex semaphores, these have a count of either 0 or 1, effectively
behaving like mutex locks, allowing only one thread to access a resource at a time.
- Counting Semaphores: These have a count greater than 1, allowing multiple threads to access a resource
concurrently, up to a specified limit.
- Wait (P) Operation: Decreases the semaphore count by 1. If the count is already zero, the calling thread
may be blocked until the count becomes greater than zero.
- Signal (V) Operation: Increases the semaphore count by 1, potentially unblocking a thread that is waiting
on the semaphore.
4. Flexibility: Semaphores can be used to solve a variety of synchronization problems, including producer-
consumer, readers-writers, and dining philosophers problems.
5. Performance: While semaphores offer more flexibility than mutex locks, they also incur slightly higher
overhead due to maintaining a count and potentially managing multiple threads accessing the same resource
simultaneously.
In summary, semaphores are a powerful synchronization mechanism that can handle more complex scenarios
than mutex locks, making them suitable for a wide range of multi-threaded programming tasks.
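A short illustrative sketch of the counting behavior using POSIX unnamed semaphores: the semaphore is initialized to 3, so at most three of the five threads can be inside the "resource" section at once. The resource is simulated with a short sleep, and all names are chosen for the example.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t slots;                            /* counting semaphore: number of free resources */

static void *user(void *arg) {
    long id = (long)arg;
    sem_wait(&slots);                   /* P: take a slot, block if the count is zero */
    printf("thread %ld using a resource\n", id);
    sleep(1);                           /* hold the resource briefly */
    sem_post(&slots);                   /* V: return the slot, waking a waiter if any */
    return NULL;
}

int main(void) {
    sem_init(&slots, 0, 3);             /* 3 identical resources available */

    pthread_t tid[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&tid[i], NULL, user, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(tid[i], NULL);

    sem_destroy(&slots);
    return 0;
}
```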
6.7 Monitors
Monitors are abstract data types (ADTs) that include a set of programmer-defined operations with mutual
exclusion. They encapsulate variables and functions that operate on those variables, ensuring that only one
process at a time can be active within the monitor.
Monitors:
- Provide effective process synchronization with mutual exclusion.
- Address timing errors inherent in semaphore and mutex lock usage.
- Introduce higher-level synchronization construct called monitors.
Monitor Usage:
- Monitors encapsulate data with a set of functions for independent operation.
- Ensure mutual exclusion within the monitor, allowing only one active process at a time.
- The monitor's mutual-exclusion constraint is enforced by the construct itself and need not be coded explicitly by the programmer.
- Incorporate condition variables for more complex synchronization schemes.
6.8 Liveness
Liveness
- Processes must make progress during their execution life cycle.
Deadlock
- Occurs when processes wait indefinitely for an event caused by another waiting process.
Priority Inversion
- Higher-priority processes wait for lower-priority processes to finish with resources.
- Priority inheritance protocol temporarily elevates the priority of processes accessing resources needed by
higher-priority processes.
6.9 Evaluation
QUESTIONS
Multiple Choice
1. Who proposed the classic algorithm used to address the Critical-Section Problem in concurrent programming?
a. Gary Booch
b. Gary Peterson
c. Gary Peters
d. Peter Garson
2. It temporarily elevates the priority of processes accessing resources needed by higher priority processes.
a. Moderate Contention
b. Uncontended
c. High Contention
d. Priority Inheritance Protocol
3. All are hardware features that support synchronization in modern computer systems EXCEPT which one?
a. MEMORY BARRIERS
b. ATOMIC INSTRUCTIONS
c. TRANSACTIONAL MEMORY
TRUE OR FALSE
3. Both binary and counting semaphores allow multiple threads to access a resource concurrently.
IDENTIFICATION
2. This classic algorithm is commonly employed to tackle the Critical-Section Problem in concurrent
programming.
3. It provides a higher-level abstraction for synchronization, enabling atomic and isolated execution of groups of
instructions.
ANSWER KEY
MULTIPLE CHOICE
1. B
2. D
3. D
TRUE OR FALSE
1. False
3. False
IDENTIFICATION
1. Uncontended
2. Peterson's Solution
3. Transactions (transactional memory)
CHAPTER 07
Synchronization Examples
Bounded-Buffer Problem: This problem involves coordinating a producer process that generates data and
a consumer process that consumes it, both sharing a fixed-size buffer (bounded buffer). The goal is to
ensure that the producer doesn't write data into the buffer when it's full and the consumer doesn't read from
it when it's empty. The solution typically involves using semaphores or mutex locks to control access to the
buffer.
Readers-Writers Problem: In a scenario where multiple processes access a shared database, some may
only read while others may both read and write. The challenge is to ensure that writers have exclusive
access to the database while they are writing, to avoid data inconsistency. Various versions of this problem
exist, including prioritizing readers or writers, and solutions often involve semaphores or mutex locks to
manage access.
Dining-Philosophers Problem: This classic synchronization problem involves five philosophers seated
around a circular table, each alternating between thinking and eating. Each philosopher needs two
chopsticks to eat, but there are only five chopsticks available (one between each pair of philosophers). The
challenges in this problem include preventing deadlock, where each philosopher holds one chopstick and
waits indefinitely for the other, and avoiding starvation, where a philosopher may never get a chance to eat.
Solutions include using semaphores to represent chopsticks and implementing strategies such as
asymmetrical chopstick acquisition or utilizing monitors to control access to chopsticks.
- Semaphore Solution: One approach is to represent each chopstick with a semaphore, where
philosophers try to acquire chopsticks through wait() operations and release them through signal()
operations. However, this approach can lead to a deadlock if all philosophers simultaneously pick up
one chopstick each.
- Monitor Solution: Another solution involves using monitors to manage access to chopsticks.
Philosophers can only pick up both chopsticks if both are available, and they must release both
chopsticks when done eating. This approach ensures deadlock-free execution but may still lead to
starvation, where a philosopher never gets a chance to eat. Further solutions may be required to address
this issue
The synchronization mechanisms within the kernel of Windows and Linux operating systems are crucial
for ensuring proper coordination and resource management among concurrent threads or processes. Here's a
summary of synchronization in both Windows and Linux kernels:
Synchronization in Windows:
Single-Processor System: When accessing global resources, Windows temporarily masks interrupts for all
interrupt handlers that may also access the resource.
Multiprocessor System: Windows uses spinlocks to protect access to global resources. The kernel ensures
that a thread will not be preempted while holding a spinlock for efficiency reasons.
Thread Synchronization: Windows provides dispatcher objects for thread synchronization, including
mutex locks, semaphores, events, and timers. Threads synchronize by acquiring ownership of these objects,
and shared data is protected accordingly.
Dispatcher Object States: Dispatcher objects can be in a signaled or non-signaled state. Signaled objects
are available, while non-signaled objects cause threads to block until they become signaled.
Critical-Section Objects: These are user-mode mutexes that can often be acquired and released without
kernel intervention. On multiprocessor systems, spinlocks are initially used, and if spinning takes too long, a
kernel mutex is allocated.
Synchronization in Linux:
Kernel Preemption: Linux kernels can be preemptive, allowing tasks to be preempted even when running
in kernel mode. Preemption is controlled using system calls like preempt_disable() and preempt_enable().
Synchronization Mechanisms: Linux provides various synchronization mechanisms within the kernel,
including atomic integers, mutex locks, spinlocks, and semaphores.
Atomic Integers: Represented by atomic_t data type, atomic integers ensure that all math operations are
performed without interruption. They are useful for updating shared variables efficiently.
Mutex Locks: Tasks must acquire mutex locks before entering critical sections and release them afterward.
If a lock is unavailable, the task is put into a sleep state until the lock becomes available.
Spinlocks: Used on SMP (Symmetric Multiprocessing) machines for short-duration locking. On single-
processor systems, spinlocks are replaced by enabling and disabling kernel preemption.
Preempt Count: Each task in Linux has a preempt count to indicate the number of locks held. If a task is
holding a lock, kernel preemption is disabled to ensure safety.
QUESTIONS:
Multiple Choice
1. In the classic Dining-Philosophers Problem, which of the following is NOT a challenge faced in
synchronization?
B) Preventing deadlock.
C) Avoiding starvation.
2. Which synchronization mechanism is primarily used in Linux for short-duration locking on SMP machines?
A) Mutex locks
B) Semaphores
C) Spinlocks
D) Atomic integers
A) Dispatched threads
B) Mutex locks
C) Semaphore events
D) Kernel mutexes
True or False
3. Atomic integers in Linux ensure that all math operations are performed without interruption.
Identification
1. What classic synchronization problem involves coordinating a producer process and a consumer process
sharing a fixed-size buffer?
2. Which kernel synchronization mechanism in Windows ensures that a thread will not be preempted while
holding a lock?
3. What synchronization mechanism within the Linux kernel is used to protect critical sections by putting tasks
into a sleep state if the lock is unavailable?
ANSWER KEY:
Multiple Choice
1. A
2. C
3. C
True or False
1. False
2. False
3. True
Identification
1. Bounded-Buffer Problem
2. Spinlocks
3. Mutex Locks
CHAPTER 8: DEADLOCKS
In a multiprogramming environment, multiple threads may compete for limited resources. When a thread
requests resources that are unavailable, it enters a waiting state. Sometimes, a waiting thread remains stuck
because the requested resources are held by other waiting threads, resulting in a deadlock. Deadlock occurs
when every process in a set is waiting for an event caused only by another process in the same set.
A real-world example of deadlock comes from a Kansas law: “When two trains approach each other at a
crossing, both must come to a full stop and neither can start moving until the other has passed.”
To address deadlocks, application developers and operating-system programmers can employ prevention
techniques. While some applications can identify potential deadlocks, operating systems typically lack built-in
deadlock-prevention features. It remains the responsibility of programmers to design deadlock-free programs.
As demand for increased concurrency and parallelism grows on multicore systems, dealing with deadlock issues
becomes more challenging.
System Model
Under the normal mode of operation, a thread may utilize a resource in only the following sequence:
1. Request:
o When a process or thread requires access to a resource (such as CPU time, memory, or I/O devices),
it makes a request for that resource.
o The request indicates that the process needs to utilize the resource to perform its designated task.
o For example, a process requesting access to a file or a network interface is making a resource request.
2. Use:
o After obtaining permission (i.e., when the requested resource becomes available), the
process uses the resource.
o During the usage phase, the process performs operations or computations using the resource.
o For instance, a process using CPU cycles to execute instructions or reading data from a file is in
the usage state.
3. Release:
o Once the process completes its work with the resource, it releases the resource.
o Releasing a resource means making it available for other processes to use.
o For example, when a process finishes reading from a file, it releases the file resource.
Deadlock Characterization
Deadlocks can arise due to improper resource management, leading to situations where processes cannot
proceed. Remember the necessary conditions for deadlock:
1. Mutual Exclusion: Resources are non-shareable (only one process can use them at a time).
2. Hold and Wait: A process holds at least one resource while waiting for others.
3. No Preemption: Resources cannot be forcibly taken from a process unless voluntarily released.
4. Circular Wait: A set of processes waits for each other in a circular manner.
Banker’s Algorithm
Ensure safe state transitions by checking if resource allocation leads to a safe state.
Requires prior knowledge of resource needs.
Processes request resources incrementally, and the system checks if granting the request will lead to a
safe state.
A well-known approach for deadlock avoidance.
Resource-allocation Graph
A graphical representation that helps detect whether a system is in a
deadlock state. It provides a visual depiction of the resource allocation
and resource requests among processes.
Components of a Resource Allocation Graph:
Vertices: Represent processes (or threads) and resources.
Edges: Represent resource requests or resource allocations.
An edge from a process to a resource indicates that the process has
requested that resource.
An edge from a resource to a process indicates that the resource has
been allocated to that process.
Deadlock Prevention
Mutual Exclusion
Deadlock conditions such as mutual exclusion, hold and wait, no preemption, and circular wait are discussed.
Prevention strategies involve ensuring that at least one of these conditions cannot hold. The mutual-exclusion
condition applies only to nonsharable resources, so sharable resources cannot be involved in a deadlock; however,
deadlock generally cannot be prevented by denying mutual exclusion, because some resources, such as mutex
locks, are intrinsically nonsharable.
No Preemption
Deadlock can be prevented by invalidating the no-preemption condition, that is, by allowing already allocated
resources to be preempted. Protocols entail preempting a thread's held resources if it must wait for a new resource,
or preempting resources from other waiting threads if necessary. This strategy is applicable to resources with
easily saved/restored states, like CPU registers and database transactions.
Circular Wait
The prevention options above are generally impractical; the circular-wait condition, however, offers a practical
alternative. It can be invalidated by imposing a total ordering of resource types and requiring threads to request
resources in increasing order of enumeration, ensuring deadlock cannot occur.
Safe State
The system is in a safe state if resources can be allocated to each thread without deadlock. A safe
sequence of threads ensures resource requests can be satisfied without deadlock. Unsafe states may lead
to deadlock, but not all unsafe states result in deadlock. Algorithms aim to keep the system in a safe
state, granting requests only if they don't lead to deadlock.
Figure: Relationship of the safe, unsafe, and deadlock state spaces (deadlock states form a subset of the unsafe states).
Resource-Allocation-Graph Algorithm
A variant of the resource-allocation graph for deadlock avoidance in systems with one instance of each
resource type is introduced. Claim edges indicating potential future resource requests are utilized.
Resources must be claimed in advance to prevent deadlock.
- Algorithm Description:
When a thread requests a resource, it can be granted only if it doesn't create a cycle in the graph. Cycle
detection ensures the system remains in a safe state. If no cycle is detected, resource allocation leaves
the system safe; otherwise, the thread must wait.
Banker’s Algorithm
The Banker’s algorithm is applicable to systems with multiple instances of each resource type. Threads
must declare maximum resource needs, and the system determines if resource allocation will leave it in
a safe state.
- Data Structures:
Data structures such as the Available vector and the Max, Allocation, and Need matrices define the state of the resource-allocation system. These structures vary in size and value over time.
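Below is a compact sketch of the safety check at the heart of the Banker's algorithm; the thread count, resource count, and the Available, Allocation, and Need values are made-up sample data. The routine looks for an order in which every thread's remaining needs can be satisfied from the work vector.

```c
#include <stdbool.h>
#include <stdio.h>

#define N 3   /* number of threads (sample)        */
#define M 3   /* number of resource types (sample) */

int available[M]     = {3, 3, 2};
int allocation[N][M] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
int need[N][M]       = {{7, 3, 3}, {1, 2, 2}, {5, 0, 2}};   /* need = max - allocation */

bool is_safe(void) {
    int work[M];
    bool finish[N] = {false};
    for (int j = 0; j < M; j++) work[j] = available[j];

    for (int count = 0; count < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {               /* thread i can finish; reclaim its resources */
                for (int j = 0; j < M; j++) work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
                count++;
            }
        }
        if (!progressed) return false;   /* no thread can proceed: state is unsafe */
    }
    return true;                         /* a safe sequence of all N threads exists */
}

int main(void) {
    printf("state is %s\n", is_safe() ? "safe" : "unsafe");
    return 0;
}
```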
Deadlock Detection
If a system employs neither deadlock-prevention nor deadlock-avoidance algorithms, deadlocks may occur. In that case, the system must provide an algorithm that examines its state to detect deadlocks and a scheme to recover from them.
Multiple Choice:
1. Which necessary condition for deadlock requires that a set of processes wait for one another in a circular chain?
o A) Mutual exclusion
o B) Hold and wait
o C) No preemption
o D) Circular wait
o Answer: D) Circular wait
True or False:
1. True or False: Deadlocks can occur when processes compete for resources, leading to situations where
none of them can proceed.
o True
2. True or False: The Banker’s Algorithm is used for deadlock avoidance by ensuring safe state transitions.
o True
3. True or False: Deadlock detection algorithms periodically check the system state to identify deadlocks.
o True
Identification:
1. Identify the necessary conditions for deadlock.
o Answer: The necessary conditions for deadlock are mutual exclusion, hold and wait, no
preemption, and circular wait.
2. Identify one method for handling deadlocks other than deadlock prevention or avoidance.
o Answer: Deadlock detection and recovery.
3. Identify the graphical representation used to detect deadlock by analyzing resource requests and allocations.
o Answer: Resource Allocation Graph.
This chapter discusses memory management. CPU scheduling improves CPU utilization and the computer's response speed, but realizing that improvement requires keeping several processes in memory, that is, sharing memory among them. The chapter explores various memory-management algorithms, from a primitive bare-machine approach to paging strategies, along with their advantages and disadvantages. The choice of method depends on the hardware design, and many systems integrate hardware and operating-system memory management.
CHAPTER OBJECTIVES
Explain the difference between a logical and a physical address and the role of the memory
management unit (MMU) in translating addresses.
Apply first-, best-, and worst-fit strategies for allocating memory contiguously.
Explain the distinction between internal and external fragmentation.
Translate logical to physical addresses in a paging system that includes a translation look-aside
buffer (TLB).
Describe hierarchical paging, hashed page tables, and inverted page tables.
Describe address translation for IA-32, x86-64, and ARMv8 architectures.
9.1 Background
This module delves into the fundamentals of memory management in computer systems, highlighting the role of
memory as an array of bytes with each byte having its own address. It explains how CPUs fetch instructions
from memory, including fetching and storing operands, and outlines the typical instruction-execution cycle. It
emphasizes that the memory unit only perceives a stream of memory addresses and is unaware of their origin or
purpose within the program. The module concludes with a discussion on dynamic linking and shared libraries.
Figure 9.1 A base and a limit register define a logical address space.
The CPU uses specialized registers, a base register and a limit register, to separate and execute processes. The
base register indicates the starting point of a process's memory area, while the limit register defines the
maximum memory accessible to the process. The CPU scrutinizes every memory access in user mode against
these registers, alerting the operating system if a process attempts to access memory beyond its allotted range.
This prevents processes from maliciously altering each other's memory or that of the operating system. Only the operating system, executing in kernel mode, can load these registers, so user programs cannot change the memory boundaries. The kernel itself has unrestricted access to all memory, which it needs in order to load user programs, dump programs when errors occur, and perform essential tasks such as process switching in multiprocess systems.
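A minimal sketch of this hardware check, modeled in C: the base and limit values are illustrative, and exit() stands in for the trap that the hardware would raise to the operating system.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical register values loaded by the kernel for the running process. */
static uint32_t base_reg  = 300040;
static uint32_t limit_reg = 120900;   /* size of the process's legal address range */

/* Conceptual model of the check performed on every user-mode memory access. */
uint32_t check_access(uint32_t address) {
    if (address < base_reg || address >= base_reg + limit_reg) {
        fprintf(stderr, "trap: addressing error at %u\n", address);
        exit(EXIT_FAILURE);            /* in hardware, a trap to the operating system */
    }
    return address;                    /* access proceeds to memory */
}

int main(void) {
    check_access(300100);   /* legal: falls inside [base, base + limit) */
    check_access(500000);   /* illegal: traps                           */
    return 0;
}
```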
Figure 9.2 Hardware address protection with base and limit registers.
The CPU generates a logical address, while the memory unit sees the physical address, which is loaded into the
memory-address register.
Dynamic loading is a memory management technique that loads routines into memory only when needed, rather
than loading the entire program at once. Routines are stored on disk in a relocatable load format, and when
needed, the calling routine checks if it's already in memory. If not, the routine is loaded, and the program's
address tables are updated. This technique enhances memory-space utilization, especially for programs with
extensive codebases.
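On UNIX-like systems, an application can achieve a similar load-on-demand effect itself with the dlopen()/dlsym() interface (linked with -ldl on Linux); in this sketch the library name libmathext.so and the routine heavy_compute are hypothetical.

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the library only when the routine is actually needed. */
    void *handle = dlopen("libmathext.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the routine's address and call it through a function pointer. */
    double (*heavy_compute)(double) =
        (double (*)(double)) dlsym(handle, "heavy_compute");
    if (heavy_compute)
        printf("result: %f\n", heavy_compute(42.0));

    dlclose(handle);   /* the routine can be unloaded when no longer needed */
    return 0;
}
```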
Dynamic linking is a technique in operating systems where system libraries are linked to user programs at
runtime, reducing memory usage. It is commonly used in Windows and Linux systems. Dynamically linked
libraries (DLLs) are shared libraries that can be updated with bug fixes or new versions to ensure compatibility
between programs and libraries. Unlike dynamic loading, DLLs require assistance from the operating system to
manage memory protection and ensure multiple processes can access the same memory addresses.
Contiguous memory allocation is a memory management technique where main memory is divided into two
partitions: one for the operating system and one for user processes. The OS's location in memory depends on
factors like the interrupt vector. The goal is to efficiently allocate memory to multiple user processes, each
allocated a single contiguous section. This method simplifies memory management but can lead to
fragmentation issues. It is also crucial to address memory protection, ensuring that each process has its own memory space and cannot access memory outside its allocated range.
Memory allocation assigns processes to memory partitions, initially available as a large hole. Processes are
allocated space based on requirements and when they terminate, memory is returned to the set. Strategies like
first-fit, best-fit, and worst-fit are used to efficiently allocate memory while minimizing fragmentation. First-fit
allocates the first suitable hole, best-fit allocates the smallest suitable hole, and worst-fit allocates the largest
suitable hole. Simulations generally favor first-fit and best-fit over worst-fit due to their efficiency in time and
storage utilization. While neither is definitively superior, first-fit is often faster.
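A small sketch of the first-fit strategy; the array of free holes and their sizes are made up for illustration. Best fit and worst fit would differ only in scanning every hole and choosing the smallest or the largest suitable one.

```c
#include <stdio.h>

#define HOLES 4

/* Hypothetical free holes (sizes in KB), kept in address order. */
static int hole_size[HOLES] = {100, 500, 200, 300};

/* First fit: return the index of the first hole large enough, or -1 if none. */
int first_fit(int request) {
    for (int i = 0; i < HOLES; i++) {
        if (hole_size[i] >= request) {
            hole_size[i] -= request;   /* the leftover remains as a smaller hole */
            return i;
        }
    }
    return -1;
}

int main(void) {
    printf("212 KB -> hole %d\n", first_fit(212));   /* hole 1 (500 KB), leaving 288 KB */
    printf("417 KB -> hole %d\n", first_fit(417));   /* no hole is large enough: -1     */
    return 0;
}
```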
9.2.3 Fragmentation
Memory fragmentation occurs when memory is broken into small, scattered pieces over time, making it difficult
to find contiguous space for new processes. Internal fragmentation occurs when a process receives more
memory than needed, leaving unused space. To address external fragmentation, compaction can be used to
move processes and consolidate free memory into a single block, or processes can be placed wherever space is
available, emphasizing the importance of efficient memory management.
Paging is a memory-management technique that permits a process's physical address space to be non-contiguous, avoiding external fragmentation. It is widely used in operating systems for its efficiency and relies on cooperation between the operating system and the hardware.
Paging divides physical memory into fixed-size frames and logical memory
into equally sized pages. This separation allows a process to have a logical address space larger than physical
memory. Each CPU-generated address consists of a page number and offset, simplifying address translation.
Paging creates the illusion of a single contiguous memory space for the programmer, despite fragmented
physical memory. The operating system controls address-translation hardware, converting logical addresses to
physical addresses, and managing physical memory using a frame table. This process can increase context-
switch time.
The page number is an index into a per-process page table, which stores the base address of the frame holding each page in physical memory. The offset specifies the location within that frame. Translation takes the page number from the logical address, obtains the corresponding frame's base address from the page table, and combines it with the offset to form the physical memory address.
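A sketch of this translation in C, assuming 4 KB pages (a 12-bit offset) and a tiny made-up page table; hardware performs the same shift-and-mask steps.

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12                         /* 4 KB pages (assumed)            */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Hypothetical per-process page table: page number -> frame number. */
static uint32_t page_table[] = {5, 6, 1, 2};

uint32_t translate(uint32_t logical) {
    uint32_t page   = logical >> OFFSET_BITS;  /* page number                     */
    uint32_t offset = logical &  OFFSET_MASK;  /* offset within the page          */
    uint32_t frame  = page_table[page];        /* frame base from the page table  */
    return (frame << OFFSET_BITS) | offset;    /* physical address                */
}

int main(void) {
    uint32_t logical = (2u << OFFSET_BITS) | 0x14;   /* page 2, offset 0x14 */
    printf("logical 0x%x -> physical 0x%x\n", logical, translate(logical));
    return 0;
}
```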
Each process in a computer system has its own page table, which maps logical addresses to physical addresses.
This page table is stored in memory, and a register called the page-table base register (PTBR) points to the page
table of the currently running process. When a process is selected for execution, the CPU scheduler updates the PTBR to point
to the process's page table, enabling quick translation of logical addresses to physical addresses. Some systems
store the page table in high-speed hardware registers for faster access. Modern systems with large page tables
store the page table in main memory, with only the PTBR stored in a register.
Keeping the page table in main memory means every data access requires two memory accesses (one for the page-table entry and one for the data itself), which is unacceptably slow. To speed
up the translation process, a Translation Look-aside Buffer (TLB) is used. TLB stores a subset of page table
entries and checks the TLB when a logical address is generated. If the TLB is full, a replacement policy is used
to select an entry for eviction. Some TLBs allow certain entries to be "wired down" for critical kernel code.
Some TLBs also store an address-space identifier (ASID) for process identification.
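A sketch of the TLB check that precedes the page-table walk; the four-entry TLB, its contents, and the omission of ASIDs and a replacement policy are simplifications for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 4

struct tlb_entry {
    bool     valid;
    uint32_t page;    /* virtual page number    */
    uint32_t frame;   /* physical frame number  */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Returns true on a TLB hit and fills *frame; on a miss the caller must walk
 * the page table and then install the translation into the TLB. */
bool tlb_lookup(uint32_t page, uint32_t *frame) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;    /* hit: the page-table access is skipped */
            return true;
        }
    }
    return false;                     /* miss */
}

int main(void) {
    tlb[0] = (struct tlb_entry){true, 2, 7};   /* cached translation: page 2 -> frame 7 */
    uint32_t frame;
    printf("page 2: %s\n", tlb_lookup(2, &frame) ? "TLB hit" : "TLB miss");
    printf("page 9: %s\n", tlb_lookup(9, &frame) ? "TLB hit" : "TLB miss");
    return 0;
}
```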
9.3.4 Shared Pages
Paging is a technique that enables the sharing of common code among multiple
processes, especially in environments with many processes. It is particularly useful for
reentrant code, which does not modify itself during execution. By sharing the same
physical memory pages for this code, multiple processes can execute the same code
simultaneously without interfering with each other. This reduces memory usage
significantly, as only one copy of libc is kept in memory and mapped to each process's
address space. This approach can also be applied to other commonly used programs,
such as compilers, window systems, and database systems, resulting in further memory
savings.
This section delves into common techniques for structuring page tables, such as hierarchical paging, hashed
page tables, and inverted page tables.
Modern computer systems often have a large logical address space, which leads to an excessively large page table. With a 32-bit logical address space and 4 KB pages, for example, the page table alone may occupy up to 4 MB of physical memory per process. To address this, a two-level paging algorithm pages the page table itself: the 20-bit page number is divided into a 10-bit index into an outer page table and a 10-bit index into an inner page table, with the remaining 12 bits serving as the page offset. This avoids having to allocate the entire page table contiguously in main memory.
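A sketch of the 10/10/12 split described above for a 32-bit logical address; the sample address is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* A 32-bit logical address split as | p1:10 | p2:10 | offset:12 |. */
void split_two_level(uint32_t logical) {
    uint32_t offset = logical & 0xFFF;           /* low 12 bits                     */
    uint32_t p2     = (logical >> 12) & 0x3FF;   /* index into an inner page table  */
    uint32_t p1     = (logical >> 22) & 0x3FF;   /* index into the outer page table */
    printf("p1=%u  p2=%u  offset=%u\n", p1, p2, offset);
}

int main(void) {
    split_two_level(0x00ABCDEF);
    return 0;
}
```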
A hashed page table is a method for handling address spaces larger than 32 bits, using a virtual page number as
the hash value. Each entry contains a linked list of elements that hash to the same location. The algorithm
compares the virtual page number with field 1 in the linked list, and if a match is found, the corresponding page
frame (field 2) is used to form the desired physical address.
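A sketch of the lookup just described; the bucket count, the entry layout, and the sample mapping are assumptions for illustration.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BUCKETS 64   /* hypothetical hash-table size */

struct hpt_entry {
    uint32_t vpage;            /* field 1: virtual page number   */
    uint32_t frame;            /* field 2: physical frame number */
    struct hpt_entry *next;    /* field 3: next entry in chain   */
};

static struct hpt_entry *table[BUCKETS];

/* Returns the frame for a virtual page, or -1 if it is not mapped. */
long hpt_lookup(uint32_t vpage) {
    for (struct hpt_entry *e = table[vpage % BUCKETS]; e != NULL; e = e->next)
        if (e->vpage == vpage)   /* compare with field 1 */
            return e->frame;     /* match: use field 2   */
    return -1;
}

int main(void) {
    struct hpt_entry entry = { .vpage = 1234, .frame = 42, .next = NULL };
    table[1234 % BUCKETS] = &entry;
    printf("frame for page 1234: %ld\n", hpt_lookup(1234));
    return 0;
}
```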
Processes typically have a page table with one entry for each virtual page, which the operating system uses to translate virtual addresses into physical memory addresses. This approach can consume large amounts of physical memory. To address this, an inverted page table is used, which has one entry for each real page (frame) of memory, containing the virtual address of the page stored in that frame and information about the process that owns it. As a result, only one page table exists in the system, with a single entry for each frame of physical memory.
Inverted page tables are a method used to map physical memory to different address spaces, ensuring a logical
page for a process is mapped to the corresponding physical page frame. IBM was the first major company to use
inverted page tables, starting with the IBM System 38 and continuing through the RS/6000 and current IBM
Power CPUs. For the IBM RT, each virtual address in the system consists of a triple <process-id, page-number, offset>, and each inverted page-table entry is a pair <process-id, page-number>, where the process-id assumes the role of the address-space identifier. When a memory reference
occurs, part of the virtual address is presented to the memory subsystem, and the inverted page table is searched
for a match. If no match is found, an illegal address access is attempted. This scheme decreases the amount of
memory needed to store each page table but increases the time needed to search the table when a page reference
occurs.
Solaris, a 64-bit operating system, uses multiple levels of page tables to address virtual memory issues without
consuming all its physical memory. Two hash tables are used for kernel and user processes, mapping memory
addresses from virtual to physical memory. Each hash-table entry represents a contiguous area of mapped
virtual memory, making it more efficient than having separate hash-table entries for each page. The CPU
implements a translation look-aside buffer (TLB) that holds translation table entries (TTEs) for fast hardware
lookups. A cache of recently accessed TTEs is kept in a translation storage buffer (TSB).
9.5 Swapping
Process instructions and data must be in memory for execution, but a process can be temporarily swapped to a
backing store and then back into memory, allowing the total physical address space of all processes to exceed
the system's real physical memory.
Multiple Choice:
1. Which register is used to indicate the starting point of a process's memory area?
B) Base Register
C) Limit Register
Answer: B) Base Register
3. Which memory management technique divides physical memory into fixed frames and logical memory into
equally sized pages?
B) Swapping
C) Paging
Answer: C) Paging
True or False:
4. Dynamic linking reduces memory usage by linking system libraries to user programs at compile time.
Answer: False
5. Inverted page tables have one entry for each virtual address.
Answer: False
Identification Questions:
7. What registers are used by the CPU to separate and execute processes?
Answer: Base Register and Limit Register
8. What memory management technique loads routines into memory only when needed?
Answer: Dynamic Loading
9. What is the purpose of the Translation Look-Aside Buffer (TLB) in memory management?
Answer: TLB caches address translations for faster memory access.
While absolute protection from malicious abuse is impossible, deterrents and detection measures can minimize
security breaches. Countermeasures include physical security measures, network protection, and addressing
vulnerabilities within the operating system. Ultimately, a layered approach to security is essential for mitigating
risks and protecting valuable assets.
16.2 PROGRAM THREATS
Malware, or malicious software, includes programs designed to exploit, disable, or damage computer systems.
One common type is the Trojan horse, which pretends to be legitimate but carries out harmful actions once executed.
Malware thrives when systems violate the principle of least privilege, granting excessive user or process
privileges. This allows malware to spread, evade detection, and exploit vulnerabilities. Design flaws in
operating systems and software contribute to these breaches, highlighting the need for strict access control and
robust security measures.
Malware authors may exploit trap doors or back doors intentionally left in software, providing unauthorized
access. Rigorous security testing and code review processes are crucial to detect and mitigate such threats.
Malware poses a pervasive and evolving threat to computer security, requiring proactive measures to prevent
exploitation and minimize damage.
One common form of code injection is a buffer overflow, where a program writes beyond the bounds of a
buffer, potentially corrupting adjacent memory. The consequences of a buffer overflow vary depending on
factors such as the extent of the overflow and the program's memory layout. In some cases, overflows may go
unnoticed, while in others, they can lead to program crashes or enable attackers to execute arbitrary code.
Developers can mitigate buffer overflow risks by using safer functions like `strncpy()` instead of vulnerable
ones like `strcpy()`, and by implementing bounds checking. However, such precautions are often overlooked,
leaving programs vulnerable to exploitation.
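A short illustration of the difference: `strcpy()` writes past the end of a fixed-size buffer when handed a longer input, while the bounded copy stays within it. The 16-byte buffer and the sample input are made up.

```c
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);            /* no bounds check: input longer than 15 characters
                                      overflows buf and corrupts adjacent memory */
    printf("%s\n", buf);
}

void safer(const char *input) {
    char buf[16];
    strncpy(buf, input, sizeof(buf) - 1);   /* copy at most 15 bytes            */
    buf[sizeof(buf) - 1] = '\0';            /* strncpy may not null-terminate   */
    printf("%s\n", buf);
}

int main(void) {
    const char *attacker_controlled = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
    safer(attacker_controlled);           /* truncated but safe */
    /* vulnerable(attacker_controlled);      would overflow buf */
    return 0;
}
```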
To execute code injection, attackers typically craft shellcode, small code segments that perform specific actions,
such as spawning a shell or establishing network connections. By overwriting return addresses or function
pointers with addresses pointing to shellcode, attackers can redirect program execution to their malicious code.
Shellcode can be obfuscated to evade detection, and techniques like NOP sleds (sequences of no-operation
instructions) can facilitate code execution even in the presence of alignment constraints.
While launching code-injection attacks may require programming skills, tools like shellcode compilers and
exploit kits make it accessible even to less experienced attackers, known as "script kiddies." Moreover, code-
injection attacks can bypass traditional security measures like firewalls and may go undetected within
communication protocols.
Buffer overflows are just one avenue for code injection; heap overflows and mismanagement of memory
buffers can also lead to exploitable vulnerabilities. Vigilance in secure coding practices and thorough testing are
essential to mitigate the risks posed by code injection.
Viruses are typically specific to particular architectures, operating systems, and applications, with PCs being particularly
vulnerable due to their widespread use. UNIX and other multiuser systems are generally less susceptible to
viruses due to better protection mechanisms for executable programs.
Common vectors for virus transmission include spam emails, phishing attacks, and downloading infected files
from the internet. Viruses often exploit macros in programs like Microsoft Office to execute malicious actions,
such as formatting the hard drive.
When a virus reaches a target machine, a virus dropper inserts it into the system. Viruses can belong to various
categories, including file viruses, boot viruses, macro viruses, rootkits, and polymorphic viruses. Each category
has distinct characteristics and methods of propagation.
The proliferation of viruses has led to the development of sophisticated variants, including encrypted, stealth,
multipartite, and armored viruses. These variants aim to evade detection by antivirus software and complicate
disinfection efforts.
The existence of a computing monoculture, particularly in Microsoft products, raises concerns about the
widespread impact of virus attacks. Vulnerability information is traded on the dark web, increasing the value of
attacks that can target multiple systems within a monoculture.
Addressing the threat posed by viruses and worms requires robust security measures, including antivirus
software, regular system updates, and user education to mitigate risks associated with malicious code.
The widespread use of broadband and WiFi has made tracking attackers more challenging. Even simple desktop
machines can become valuable targets due to their bandwidth or network access. Wireless networks enable
attackers to remain anonymous or target unprotected networks through "WarDriving."
Network intrusion detection systems continually evolve to detect and mitigate port-scanning techniques,
reflecting the ongoing battle between attackers and defenders.
Among encryption algorithms, two primary categories are delineated: symmetric and asymmetric. Symmetric
encryption employs a singular key for both encryption and decryption, exemplified by DES and AES.
Conversely, asymmetric encryption employs distinct keys for these functions, with RSA standing as a
prominent example.
Authentication, serving to validate the identity of message senders, is elucidated as a complementary facet of
cryptography. Authentication algorithms utilize keys to generate authenticators for messages, thereby ensuring
that only legitimate senders can be duly verified.
Key distribution is underscored as a critical facet of cryptography, particularly salient in symmetric encryption
where shared access to the same key is requisite. Asymmetric encryption offers a paradigm shift by leveraging
public-private key pairs, albeit necessitating measures to authenticate public keys.
Cryptography is implemented within network protocols, and its placement spans various layers of the
protocol stack. TLS (Transport Layer Security) serves as a quintessential cryptographic protocol for
secure communication over the Internet. The key exchange process and session key generation mechanism
within TLS are expounded upon, highlighting its pivotal role in ensuring confidentiality, authenticity, and
integrity in online transactions.
One-time passwords provide an additional layer of security by generating unique passwords for each
authentication session, reducing the risk of password exposure. Biometric authentication, such as fingerprint
readers, offers a more robust method of user authentication by utilizing unique physical characteristics for identification.
Secure hashing techniques, such as those used in UNIX systems, protect passwords by storing hashed values instead of plain-text passwords, making it computationally infeasible for attackers to recover passwords from the stored values. Salt values are added to passwords during hashing to thwart dictionary attacks and ensure unique hash values for identical passwords.
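A sketch of salted hashing using the POSIX crypt() routine (on glibc it is declared in <crypt.h> and linked with -lcrypt); the fixed salt string is only for illustration, since a real system generates a random salt and stores it alongside the hash.

```c
#include <crypt.h>    /* crypt(); on glibc, compile with -lcrypt */
#include <stdio.h>
#include <string.h>

int main(void) {
    /* "$6$" selects glibc's SHA-512-based crypt; "examplesalt" is the salt. */
    const char *salt = "$6$examplesalt$";

    /* crypt() returns a pointer to static storage, so copy the stored hash. */
    const char *h = crypt("correct horse battery staple", salt);
    if (!h) { perror("crypt"); return 1; }
    char stored[128];
    strncpy(stored, h, sizeof(stored) - 1);
    stored[sizeof(stored) - 1] = '\0';

    /* At login, hash the candidate password with the same salt and compare. */
    const char *attempt = crypt("guess123", salt);
    printf("login %s\n", attempt && strcmp(stored, attempt) == 0 ? "accepted" : "rejected");
    return 0;
}
```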
Educate users about safe computing: don't attach devices of unknown origin to the computer, don't share passwords, use strong passwords, avoid falling for social-engineering appeals, and realize that an e-mail is not necessarily a private communication.
Educate users about how to prevent phishing attacks: don't click on e-mail attachments or links from unknown (or even known) senders; authenticate (for example, via a phone call) that a request is legitimate.
Use secure communication when possible.
Physically protect computer hardware.
Configure the operating system to minimize the attack surface; disable all unused services.
Configure system daemons, privileged applications, and services to be as secure as possible.
Use modern hardware and software, as they are likely to have up-to-date security features.
Keep systems and applications up to date and patched.
Only run applications from trusted sources (such as those that are code signed).
Enable logging and auditing; review the logs periodically, or automate alerts.
Install and use antivirus software on systems susceptible to viruses, and keep the software up to date.
Use strong passwords and passphrases, and don’t record them where they could be found.
Use intrusion detection, firewalling, and other network-based protection systems as appropriate.
For important facilities, use periodic vulnerability assessments and other testing methods to test security
and response to incidents.
Encrypt mass-storage devices, and consider encrypting important individual files as well.
Have a security policy for important systems and facilities, and keep it up to date.
Identification Questions:
1. Name one common type of code injection attack.
Answer: Buffer overflow attack
3. What is the term for programs designed to exploit, disable, or damage computer systems?
Answer: Malware
Different processor architectures implement privilege separation. In Intel architectures, user mode code operates
in ring 3, while kernel mode code resides in ring 0, with access controlled by special register bits. With
virtualization, Intel introduced an additional ring (-1) for hypervisors, granting them more capabilities than
guest operating system kernels.
ARM processors initially had user and kernel modes (USR and SVC), but ARMv7 introduced TrustZone, adding an additional ring for a trusted execution environment. TrustZone offers exclusive access to hardware-backed resources such as cryptographic keys.
Employing a trusted execution environment prevents attackers from accessing cryptographic keys if the kernel
is compromised, mitigating brute-force attacks. ARMv8 architecture further extends this model with four
exception levels (EL0 through EL3), with EL3 reserved for the most privileged secure monitor (TrustZone).
This setup allows running separate operating systems concurrently.
The secure monitor, operating at a higher execution level, is ideal for integrity checks on kernels, as seen in
Samsung's Realtime Kernel Protection for Android and Apple's WatchTower (KPP) for iOS.
With static protection domains, a process may execute in different phases that require different access rights (for example, read access in one phase and write access in another). Defining the domain to include every right the process might ever need would violate the need-to-know principle, so a mechanism must be available to change the domain's contents so that it always reflects the minimum necessary access rights.
For dynamic protection domains, a mechanism is available to allow domain switching, enabling the process to
switch from one domain to another. The content of a domain can also be changed, or if the content cannot be
changed, a new domain can be created with the changed content and switched to when needed.
However, setuid programs pose security risks, as they grant potentially powerful privileges to users. These executables need to be carefully written so that they affect only the necessary files and resist tampering or subversion. Despite such efforts, many setuid programs have been subverted in the past, leading to security breaches and privilege escalation for attackers. Measures to limit the damage from bugs in setuid programs are discussed further in Section 17.8.
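A sketch of the defensive pattern a carefully written setuid-root program might follow: perform the single privileged operation, then permanently drop back to the invoking user's UID and verify that root privileges cannot be regained. The privileged work itself is left as a placeholder comment.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    uid_t real_uid  = getuid();    /* the invoking user                       */
    uid_t effective = geteuid();   /* root (0) if the setuid bit is in effect */

    /* ... perform the one privileged operation here while still privileged ... */

    /* Permanently drop privileges: real, effective, and saved UIDs become real_uid. */
    if (setuid(real_uid) != 0) {
        perror("setuid");
        exit(EXIT_FAILURE);
    }
    /* Defensive check: regaining root must now fail. */
    if (effective == 0 && setuid(0) == 0) {
        fprintf(stderr, "privileges were not fully dropped\n");
        exit(EXIT_FAILURE);
    }

    printf("continuing as uid %d\n", (int) getuid());
    return 0;
}
```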
To further enhance security, Android modifies the kernel to restrict certain operations, such as networking
sockets, to members of specific GIDs, such as AID_INET (GID 3003). Moreover, Android defines certain UIDs as
"isolated," preventing them from initiating Remote Procedure Call (RPC) requests to any services beyond a
minimal set. These mechanisms collectively bolster the security and isolation of applications on Android
devices.
3. What determines the set of objects that can be accessed if each procedure is a domain?
a) Identity of the user
b) Identity of the process
c) Local variables defined within the procedure
d) All of the above
Answer: c) Local variables defined within the procedure
True or False:
2. Protection techniques aim to detect interface errors early and prevent system contamination.
Answer: True
3. The separation of policy and mechanism in protection systems allows for flexibility in adapting to new
policies without impacting underlying mechanisms.
Answer: True
Identification Questions:
1. What mechanism in UNIX allows a user to temporarily assume the identity of the file owner, typically
root, to perform privileged operations without requiring root access?
Answer: The setuid bit
2. What does Android assign to each application during installation to ensure isolation, security, and
privacy?
Answer: User ID (UID) and Group ID (GID)
3. In Android, what GID restricts certain operations like networking sockets to its members?
Answer: AID_INET (GID 3003)