
WAM UNIVERSITY COTONOU

INSTITUT SUPÉRIEUR OUEST AFRICAIN DE MICROFINANCE


Computer Science
Operating Systems
300 Level

Course Description:

This course provides a comprehensive introduction to the fundamental concepts and principles of operating systems (OS). We will explore the core functionalities of an OS,
including process management, memory management, file management, input/output
(I/O) management, and device management. Through lectures, discussions,
programming assignments, and projects, you will gain a solid understanding of the
internal workings of an operating system and how it manages system resources to
provide a platform for user applications.
----------------

Course Outline:

1: Introduction to Operating Systems


• 🔷What is an Operating System (OS)? Definitions, Functions, and
History of Operating Systems.
• 🔷Types of Operating Systems: Batch Processing, Multitasking, Real-
Time, Distributed Systems, and Mobile Operating Systems.
• 🔷System Architecture: Layered architecture of an OS, Kernel Mode vs.
User Mode.
• 🔷Operating System Services: Process Management, Memory
Management, File Management, I/O Management, Security.

2: Process Management
• 🔷Processes and Threads: Concepts of Processes, Multiprogramming,
Context Switching.
• 🔷Process States: Running, Waiting, Ready, Blocked, Terminated states
and their transitions.
• 🔷Process Scheduling: Scheduling algorithms (FCFS, SJF, Priority,
Round Robin, Multilevel Queue), Scheduling Criteria.
• 🔷Inter-process Communication (IPC): Shared Memory, Semaphores,
Message Passing for process coordination.
• 🔷Deadlocks: Concepts, Prevention, Detection, and Recovery
mechanisms.
• 🔷Programming Assignment 1: Implement basic process scheduling
algorithms in a chosen programming language.

3: Memory Management
• 🔷Memory Hierarchy: Cache, Main Memory, Secondary Storage
(HDD/SSD).
• 🔷Memory Allocation Techniques: Contiguous Allocation, Paging,
Segmentation, Virtual Memory.
• 🔷Address Translation: Virtual Memory concepts, Translation
Lookaside Buffer (TLB).
• 🔷Memory Protection: Memory Protection mechanisms (Memory
Management Units - MMU) to prevent process interference.
• 🔷Memory Replacement Policies: FIFO, LRU, Optimal Page
Replacement Algorithms.
• 🔷Programming Assignment 2: Simulate memory allocation and
replacement algorithms in a chosen programming language.

4: File Management
• 🔷File System Concepts: File Structure, Directory Management, Access
Methods (Sequential, Indexed).
• 🔷File System Implementation: Disk Scheduling Algorithms (FCFS,
SCAN, C-SCAN).
• 🔷File Allocation Methods: Contiguous Allocation, Linked Allocation,
Indexed Allocation.
• 🔷File Sharing and Access Control: Mechanisms for secure file access
and sharing.
• 🔷File System Protection: Techniques to prevent unauthorized access
and data corruption.
• 🔷Programming Assignment 3: Implement basic file system operations
(create, read, write, delete) in a chosen programming language.

5: Security and I/O Management


• 🔷System Security Concepts: User Authentication, Access Control
Mechanisms, Security Policies.
• 🔷Security Threats: Malware, Viruses, Worms, Denial-of-Service
attacks, Mitigation Strategies.
• 🔷I/O Management: I/O Devices, Device Drivers, I/O Operations
(Synchronous, Asynchronous).
• 🔷I/O Scheduling Algorithms: FCFS, SCAN, C-SCAN for handling I/O
requests.
• 🔷Case Studies: Exploring different types of operating systems (e.g.,
Windows, Linux, Android)

Course Objectives:

• Define the concept of an operating system and its role in a computer system.
• Explain the different types of operating systems (e.g., batch, interactive, real-
time, distributed).
• Describe the core functionalities of an OS, including process management,
memory management, file management, I/O management, and device
management.
• Analyze different process scheduling algorithms (e.g., FCFS, SJF, priority,
round-robin).
• Apply memory management techniques like paging and segmentation.
• Explain file system concepts like directory structures, file allocation methods,
and access control.
• Understand the principles of I/O management, including device drivers,
interrupts, and buffering.
• Implement basic operating system functionalities in a chosen programming
language (e.g., simulate process scheduling algorithms).
• Evaluate the performance and trade-offs of different operating system design
choices.
• Appreciate the historical development of operating systems.

__________________________________
1: Introduction to Operating Systems
• 🔷What is an Operating System (OS)? Definitions, Functions, and
History of Operating Systems.

This section provides a foundational understanding of operating systems (OS), their core functions, and a historical perspective on their development.

1.1 What is an Operating System (OS)?

An operating system (OS) is the core software that manages computer hardware resources and provides a platform for running application programs. It acts as an intermediary between the hardware and the user, facilitating communication and resource utilization.

Key Functions of an OS:

• Process Management: Creates and manages processes (running programs), including scheduling their execution, allocating resources, and handling inter-process communication.
• Memory Management: Allocates and manages memory for running
programs, ensuring efficient utilization of available memory resources.
• File Management: Provides a structured organization for data storage
and retrieval, handling file creation, deletion, access control, and
directory management.
• Device Management (I/O Management): Controls communication
with peripheral devices like printers, disks, and displays, ensuring
proper data transfer and device utilization.
• Security: Provides mechanisms for user authentication, access
control, and protection against unauthorized access or malicious
software.
• User Interface: Presents an interface for users to interact with the
system, typically through a command-line interface (CLI) or a graphical
user interface (GUI).

Diagram of an Operating System in a Computer System:

1.2 Definitions of Operating Systems

There are various definitions for operating systems, emphasizing different aspects of their functionality:

• Resource Manager: An OS acts as a central manager for all computer resources (CPU, memory, storage, devices), allocating them to running programs as needed.
• Platform for Applications: An OS provides a platform for running
application programs, handling their execution, memory allocation, and
interaction with hardware devices.
• User Interface: An OS provides an interface (CLI or GUI) for users to
interact with the computer system, issuing commands and managing
files.
• Abstraction Layer: An OS sits between the hardware and the user,
hiding the complexities of hardware interaction and providing a
simplified interface for applications and users.

1.3 History of Operating Systems

The history of operating systems reflects the evolution of computing from early mainframes to modern personal computers. Here's a brief timeline:

• Early Systems (1950s-1960s): Simple batch processing systems with limited user interaction. Jobs were submitted on punch cards and processed sequentially.
• Multitasking Systems (1960s-1970s): Introduction of multitasking
operating systems that could handle multiple programs running
concurrently, allowing for better resource utilization.

• Time-Sharing Systems (1960s-1970s): Enabled multiple users to
share a single computer system by dividing processing time among
them, giving the illusion of simultaneous use.
• Personal Computer Operating Systems (1970s-Present):
Development of operating systems specifically designed for personal
computers, with user-friendly interfaces like the Apple II's command
line and later the graphical user interface (GUI) popularized by the
Macintosh.
• Modern Operating Systems (Present): Operating systems have
become increasingly sophisticated, supporting networking,
multitasking, security features, and virtual memory management for
efficient resource utilization.

Examples of Modern Operating Systems:

• Windows (Microsoft) - Used on a wide range of personal computers.
• macOS (Apple) - Primarily used on Apple computers
(Macintosh).
• Linux - A free and open-source operating system with various
distributions used on personal computers, servers, and
embedded systems.
• Android (Google) - Primarily used on smartphones and tablets.
• iOS (Apple) - Used on iPhones and iPads.

This section provides a basic understanding of operating systems. In the following sections, we will delve deeper into the core functionalities of an OS, including process management, memory management, file management, and security.

• 🔷Types of Operating Systems: Batch Processing, Multitasking, Real-Time, Distributed Systems, and Mobile Operating Systems.

Types of Operating Systems


Operating systems come in various flavors, each suited to specific needs and
functionalities. Here, we'll explore the most common types of operating
systems:

1. Batch Processing Systems:

• Description: Batch processing systems process jobs in batches that are submitted to the system in advance. The operating system queues the jobs and executes them one after another, often without user interaction.

• Characteristics:
o Suitable for high-volume, repetitive tasks (e.g., payroll
processing, scientific calculations).
o Less user interaction compared to interactive systems.
o Efficient for utilizing system resources by minimizing idle time.

2. Multitasking Systems:

• Description: Multitasking systems allow users to run multiple programs concurrently. The operating system manages the allocation of CPU time and resources between these programs, creating the illusion of simultaneous execution.
• Characteristics:
o Supports running multiple applications at the same time.
o Improves user productivity by allowing tasks to overlap.
o Requires efficient process management techniques (e.g.,
scheduling).

3. Real-Time Systems:

• Description: Real-time systems prioritize tasks based on strict time constraints. They guarantee predictable response times to events within deadlines, making them crucial for applications requiring immediate response (e.g., industrial control systems, medical equipment).
• Characteristics:
o Focuses on meeting strict deadlines for task completion.
o Prioritizes time-sensitive tasks over others.
o Requires specialized hardware and software for deterministic
performance.

4. Distributed Systems:

• Description: Distributed systems consist of multiple interconnected computers working together as a single system. They distribute tasks and resources across the network, offering scalability and fault tolerance.
• Characteristics:
o Composed of multiple independent computers working
collaboratively.
o Provides functionalities like resource sharing and
communication between computers.
o Offers increased processing power and data storage capacity
compared to single-user systems.

• 🔷System Architecture: Layered architecture of an OS, Kernel Mode vs.
User Mode.

This section dives into the fundamental structure of operating systems (OS)
by exploring the layered architecture and the distinction between kernel mode
and user mode.

1. Layered Architecture of an Operating System

An operating system is often organized as a layered architecture. This modular approach breaks down the OS functionalities into distinct layers, each with specific responsibilities. Benefits of layered architecture include:

• Modularity: Each layer can be developed and maintained independently, promoting easier development and debugging.
• Abstraction: Higher layers are shielded from the complexities of lower
layers, simplifying programming and system interaction.
• Flexibility: New functionalities can be added by introducing new layers
or modifying existing ones.

Diagram of Layered OS Architecture:

Description of Layers:

• Application Layer: Provides an interface for users and application programs to interact with the system. This layer includes programs like web browsers, word processors, and games.
• API (Application Programming Interface) Layer: Defines the set of
functions and calls that applications can use to access system services
offered by the lower layers. This layer acts as a bridge between
applications and the OS.
• Services Layer: Provides core functionalities like file systems,
networking, security, and process management. This layer interacts
with both the API layer and the device driver layer.
• Device Driver Layer: Handles communication with specific hardware
devices (e.g., printers, disks, network cards). This layer translates
generic requests from higher layers into device-specific instructions.
• Hardware Layer: The physical components of the computer system,
including the CPU, memory, storage devices, and peripherals.

2. Kernel Mode vs. User Mode


Operating systems operate in two distinct modes: kernel mode and user
mode. These modes manage access to system resources and ensure secure
operation:

• Kernel Mode (Supervisor Mode): The privileged mode with unrestricted access to all system resources (hardware, memory). Only the core operating system kernel executes in kernel mode. Kernel mode is responsible for:
o Process management (scheduling, memory allocation)
o Memory management (allocation, protection)
o Device driver management
o Security (access control, protection mechanisms)
• User Mode: The mode in which user applications and processes run.
User mode programs have limited access to system resources and are
restricted from directly manipulating hardware. This isolation protects
the system from application errors or malicious code. User programs
interact with the kernel through system calls, which are requests for
specific services provided by the kernel.

Diagram of Kernel Mode vs. User Mode:

Summary:

The layered architecture and distinction between kernel mode and user mode
are fundamental concepts in operating systems. This layered approach
promotes modularity, security, and efficient resource management.
Understanding these concepts is crucial for appreciating how operating
systems provide a platform for user applications to interact with the underlying
hardware.

• 🔷Operating System Services: Process Management, Memory Management, File Management, I/O Management, Security.

This section dives into the core functionalities provided by an operating system (OS). These services manage resources like processes, memory, files, and devices, enabling efficient and secure system operation.

1. Process Management

Process management deals with the creation, execution, and termination of processes within the system. A process is an instance of a computer program in execution. The OS oversees:

• Process Creation: Spawning new processes based on user requests
or program execution.
• Process Scheduling: Deciding which process gets access to the CPU
for execution at any given time. Scheduling algorithms like FCFS (First
Come, First Served), SJF (Shortest Job First), and Priority scheduling
determine the order of process execution.
• Process Termination: Ending processes that have completed
execution, encountered errors, or been terminated by the user.
• Inter-process Communication (IPC): Facilitating communication and
resource sharing between different processes. Mechanisms like shared
memory, semaphores, and message passing enable processes to
coordinate and exchange data.

Process State Diagram:

2. Memory Management

Memory management oversees the allocation and utilization of system memory (RAM) for efficient program execution. The OS handles:

• Memory Allocation: Assigning memory space to processes for storing their code, data, and the program stack. Techniques like contiguous allocation, paging, and segmentation are used to manage memory divisions.

• Virtual Memory: Creating a virtual memory space that allows
processes to utilize more memory than physically available on the
system. This is achieved by using secondary storage (hard disk) as an
extension of main memory.
• Memory Protection: Preventing processes from interfering with each
other's memory space by employing Memory Management Units
(MMU) to isolate processes and enforce memory access restrictions.

Memory Management Techniques:

• Contiguous Allocation: Allocates a contiguous block of memory to each process. Simple but inefficient as memory fragmentation can occur.

+-----------+-----------+-----------+-----------+
| Process 1 | Process 2 |   Free    | Process 3 |
+-----------+-----------+-----------+-----------+
                              ^
                     free memory fragment

• Paging: Divides both physical memory and process memory space into fixed-size blocks (frames in physical memory, pages in process memory). Allows for non-contiguous memory allocation for processes.

Process Memory            Physical Memory
+----+----+----+----+     +----+----+------+----+
| P1 | P1 | P2 | P3 |     | P2 | P3 | Free | P1 |
+----+----+----+----+     +----+----+------+----+
        (mapped via the page table)

Process 1 spreads across non-contiguous frames in physical memory.

• Segmentation: Divides a process's logical memory into variable-sized segments based on functionality (code, data, stack). Offers flexibility but requires additional management overhead.
+--------------+--------------+---------------+
| Code Segment | Data Segment | Stack Segment |
+--------------+--------------+---------------+
        (mapped via the segment table)

Process memory is divided into logical, variable-sized segments.

3. File Management

File management deals with the organization, storage, retrieval, and manipulation of files on a computer system. The OS provides:

• File System: A structured way to organize and store files on storage devices (hard disks, SSDs). Common file systems include NTFS (Windows), ext4 (Linux), and FAT32 (older systems).
• File Operations: Creating, reading, writing, deleting, and renaming
files based on user requests or program instructions.
• Directory Management: Organizing files hierarchically using
directories (folders) for efficient access and navigation.
• Access Control: Enforcing user permissions for accessing, modifying,
or deleting files to ensure data security.

File System Hierarchy:

/ (root directory)
+--- Documents
|    +--- report.txt
|    +--- images
|         +--- picture1.jpg
|         +--- picture2.png
+--- Downloads
|    +--- software.exe
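
The file operations above map directly onto services the OS exposes. As a minimal illustration (a Python sketch; the file name report.txt is just an example), creating, reading, renaming, and deleting a file exercises the file system, directory management, and access control services:

import os

# Create and write a file; the OS allocates blocks and updates the directory.
with open("report.txt", "w") as f:
    f.write("quarterly summary\n")

# Read the data back through the same file system interface.
with open("report.txt") as f:
    print(f.read())

# Rename and delete are directory-management operations.
os.rename("report.txt", "report_old.txt")
os.remove("report_old.txt")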

2: Process Management
• 🔷Processes and Threads: Concepts of Processes, Multiprogramming,
Context Switching.

This section dives into the fundamental concepts of processes and threads, a
cornerstone of operating system functionality. We will explore how processes
and threads manage program execution, enabling multitasking and efficient
resource utilization.

1. Processes

A process is an instance of a computer program that is in execution. It represents a unit of work with its own execution context, including:

• Program Code: The instructions to be executed.


• Data Segment: Stores data specific to the process (variables,
temporary data).
• Stack: Manages function calls and local variables during execution.
• Program Counter (PC): Keeps track of the current instruction being
executed.
• Open Files: Any files currently accessed by the process.
• OS Resources: Allocated memory space, CPU time, and other
resources granted by the operating system.

Process States:

A process can be in various states throughout its lifecycle:

• Running: The process is actively executing instructions on the CPU.


• Ready: The process is waiting for the CPU and is ready to run when
allocated.
• Waiting: The process is paused due to an external event (e.g., waiting
for I/O operation to complete).
• Blocked: The process is unable to proceed due to an event beyond its
control (e.g., waiting for a resource held by another process).
• Terminated: The process has finished execution and its resources are
released.

Diagram of Process States:

2. Multiprogramming
Operating systems employ multiprogramming to manage multiple processes
concurrently. While only one process can actively execute on the CPU at a
time, the OS can rapidly switch between processes, creating the illusion of
simultaneous execution.

Here's how multiprogramming works:

1. The OS maintains a pool of ready processes.


2. The OS allocates the CPU to a ready process for a short time slice (e.g., a few milliseconds).
3. When the time slice expires or the process encounters an event
requiring it to wait (e.g., I/O), the OS saves its state (registers, memory
pointers).
4. The OS selects another ready process and restores its saved state,
allowing it to resume execution from where it left off.
5. This rapid context switching between processes creates the impression
of multiple programs running simultaneously.

Benefits of Multiprogramming:

• Increased CPU Utilization: The CPU is rarely idle, as another process can take over when one process waits.

• Improved System Response Time: Users perceive faster response
times as the OS can quickly switch to a ready process when the
current process requires I/O.
• Efficient Resource Management: System resources like memory are
utilized effectively by running multiple processes concurrently.

3. Threads
A thread is a lightweight unit of execution within a process. Multiple threads
can exist and execute concurrently within a single process, sharing the same
address space and resources like open files. This allows for finer-grained
control within a process compared to separate heavyweight processes.

Benefits of Threads:

• Improved Responsiveness: A process with multiple threads can remain responsive even if one thread is blocked, as another thread can continue execution.
• Efficient Resource Sharing: Threads share the process address
space, reducing memory overhead compared to separate processes.
• Simplified Synchronization: Threads within a process can
synchronize access to shared resources more efficiently than separate
processes.
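
To make the shared-address-space point concrete, here is a minimal Python sketch (an illustration, not part of the course materials): two threads in one process update the same counter, with a lock providing the synchronization discussed above.

import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:           # synchronize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: both threads wrote to the same address space

Without the lock, the two threads would race on counter and the final value would usually be less than 200000.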

Context Switching:

Context switching refers to the process of saving the state of one process (or
thread) and restoring the state of another when the CPU needs to switch
execution. This involves saving and restoring registers, program counters,
and other process-specific data. Context switching between threads is
generally faster than context switching between processes due to the shared
address space.

This breakdown provides a foundational understanding of processes, threads, multiprogramming, and context switching. These concepts are crucial for comprehending how operating systems manage program execution and resource allocation in a multitasking environment.

• 🔷Process States: Running, Waiting, Ready, Blocked, Terminated states and their transitions.

This section dives into the concept of process states, a fundamental aspect of
process management in operating systems. We will explore the different
states a process can be in and the transitions that occur between them.

1. Processes and Process States

A process is an instance of a computer program that is actively executing. It represents a unit of work within the system with its own execution context
(CPU registers, program counter, memory allocation). The operating system
manages multiple processes concurrently, keeping track of their state and
allocating resources efficiently.

A process goes through various states throughout its lifecycle. These states
define the current activity or execution status of the process:

• Running: The process is actively executing instructions on the CPU. It has control of the processor and is carrying out its computations. (Only one process can be truly running on a single CPU core at any given time.)
• Ready: The process is waiting to be assigned the CPU. It has all the
necessary resources (memory, I/O devices) and is ready to run as
soon as the CPU becomes available. Ready processes are typically
placed in a ready queue, where the OS uses a scheduling algorithm to
determine which process gets the CPU next.
• Waiting: The process is temporarily suspended and cannot proceed
until a certain event occurs. There are various reasons a process might
enter the waiting state:
o I/O Wait: The process is waiting for an I/O operation to complete
(e.g., reading data from a disk, waiting for network response).
o Resource Wait: The process is waiting for a specific resource
to become available (e.g., waiting for a semaphore, waiting for
exclusive access to a shared file).
o Event Wait: The process is waiting for a specific event to
happen (e.g., waiting for a signal from another process).
• Blocked: This state is often used interchangeably with waiting, but it
can have a slightly different connotation. A blocked process is typically
waiting for an event that is entirely out of its control and cannot proceed
independently.
• Terminated: The process has finished its execution and has released
all its resources. The process may terminate normally by reaching its
end or abnormally due to errors, signals, or external events.

2. Transitions Between Process States


Processes can transition between these states based on various events and
resource availability. Here's a breakdown of some common transitions:

• New to Ready: When a program is created, the operating system allocates resources and places it in the ready state, waiting for its turn to be assigned the CPU.
• Ready to Running: The OS selects a process from the ready queue
using a scheduling algorithm and assigns it the CPU. The process then
moves to the running state and starts executing instructions.
• Running to Waiting: A running process might need to enter the
waiting state for various reasons:
o It needs to perform an I/O operation and must wait for the I/O to
complete before continuing.
o It requires a specific resource that is currently unavailable (e.g.,
waiting for a file lock to be released).
o It is waiting for a signal or event from another process.
• Waiting to Ready: When the event causing the wait is satisfied (I/O
completion, resource becomes available, signal received), the process
transitions back to the ready state, waiting for its turn to run on the
CPU.
• Running/Waiting to Terminated: A process can terminate due to
various reasons:
o It reaches the end of its program and finishes execution
normally.
o An error occurs during execution (e.g., segmentation fault,
division by zero).
o It receives a signal from the operating system or another
process to terminate.
• Terminated State: Once a process terminates, it releases all its allocated resources, and the operating system removes its entry from the process table.

• 🔷Process Scheduling: Scheduling algorithms (FCFS, SJF, Priority, Round Robin, Multilevel Queue), Scheduling Criteria.

Process scheduling is a crucial function of an operating system (OS) that involves selecting which process should be granted access to the CPU at any
given time. The goal of scheduling is to optimize system performance,
fairness, and resource utilization. This section will explore various scheduling
algorithms commonly used in operating systems.

1. Scheduling Concepts
• Process: A program in execution. Each process requires CPU,
memory, and other resources to run.
• Scheduling Queue: A data structure that holds processes waiting for
CPU allocation.
• Context Switching: The process of saving the state of the current
running process and loading the state of the newly selected process for
execution.
• Scheduling Criteria: Factors considered when selecting a process for
CPU allocation, such as:
o Waiting Time: The amount of time a process has spent waiting
for the CPU.
o Turnaround Time: The total time taken for a process to
complete its execution (from submission to completion).
o Response Time: The time it takes for a process to start running
after it submits a request for CPU access.
o Throughput: The number of processes completed per unit time.
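
As a worked example of these criteria: a process that arrives at time 0 with a 5 ms CPU burst and completes at time 12 ms has a turnaround time of 12 ms and a waiting time of 12 - 5 = 7 ms.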

2. Common Scheduling Algorithms

Here, we will explore several popular scheduling algorithms with their advantages and disadvantages:

o First-Come, First-Served (FCFS):


▪ Concept: The simplest scheduling algorithm. Processes are
executed in the order they arrive in the ready queue.
▪ Diagram:

+----------+   +----------+   +----------+   +----------+
| Process 1|-->| Process 2|-->| Process 3|-->| Process 4|
+----------+   +----------+   +----------+   +----------+

▪ Advantages: Easy to implement; fair in the sense that processes run strictly in arrival order.
▪ Disadvantages: Long waiting times for later-arriving processes; a long job at the head of the queue delays all the short jobs behind it (the convoy effect).
o Shortest Job First (SJF):
▪ Concept: The process with the shortest execution time is
chosen next.
▪ Diagram:

+----------+     +----------+     +----------+
| Process 2| --> | Process 1| --> | Process 3|
+----------+     +----------+     +----------+
 (shortest)   (ready queue, ordered by burst time)

▪ Advantages: Minimizes average waiting time and turnaround time.
▪ Disadvantages: Requires knowing process execution times in advance, which is rarely feasible; long processes can starve if short jobs keep arriving.
o Priority Scheduling:
▪ Concept: Processes are assigned priorities. Higher priority
processes are executed first.
▪ Diagram:

+----------+     +----------+     +----------+
|  High P  | --> |  Med P   | --> |  Low P   |
+----------+     +----------+     +----------+
(ready queue, ordered by priority)

▪ Advantages: Useful for real-time systems where certain
processes require guaranteed execution.
▪ Disadvantages: Starvation can occur for lower priority
processes if high priority processes are constantly arriving.
o Round Robin (RR):
▪ Concept: Processes are allocated a fixed time slice (quantum).
When a process's time slice is complete, it is preempted and
placed at the back of the ready queue. The next process in the
queue is then given the CPU.
▪ Advantages: Ensures fairness for all processes, good for
interactive systems.
▪ Disadvantages: Context switching overhead can reduce overall
performance for CPU-bound processes.
o Multilevel Queue Scheduling:
▪ Concept: Combines multiple scheduling algorithms. Processes
are organized into different queues with different priorities and
scheduling algorithms applied to each queue.
▪ Advantages: Provides flexibility to handle different types of
processes effectively.
▪ Disadvantages: Complexity increases with the number of queues, and processes in lower-priority queues can starve.
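
To see one of these policies in code, here is a minimal Round Robin sketch in Python (an illustration assuming a 2-unit quantum and a simple burst-time table; it ignores arrival times):

from collections import deque

def round_robin(bursts, quantum=2):
    # Simulate RR; bursts maps process name -> total CPU time needed.
    queue = deque(bursts.items())
    time, schedule = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, time, time + run))   # (process, start, end)
        time += run
        if remaining - run > 0:
            queue.append((name, remaining - run))   # back of the queue
    return schedule

print(round_robin({"P1": 5, "P2": 3, "P3": 1}))
# [('P1', 0, 2), ('P2', 2, 4), ('P3', 4, 5), ('P1', 5, 7), ('P2', 7, 8), ('P1', 8, 9)]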

• 🔷Inter-process Communication (IPC): Shared Memory, Semaphores, Message Passing for process coordination.

Inter-Process Communication (IPC) is a fundamental concept in operating systems that allows processes to communicate and synchronize their actions.
This communication is essential for tasks like sharing data, requesting
resources, and coordinating activities between different processes running
concurrently on a system. Here, we will explore three common IPC
mechanisms: Shared Memory, Semaphores, and Message Passing.

1. Shared Memory
Shared memory is a technique where processes can directly access and
modify the same portion of memory. This allows for efficient data exchange
between processes without the need for explicit data copying.

Diagram of Shared Memory IPC:

Advantages of Shared Memory:

• Fast communication: Direct memory access allows for high-speed data exchange.
• Efficient for large data: Suitable for sharing large data structures that
would be expensive to copy.

Disadvantages of Shared Memory:

• Synchronization required: Processes need proper synchronization mechanisms to avoid data corruption when accessing shared memory concurrently.
• Security concerns: Improper access control can lead to security
vulnerabilities.
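
As a minimal sketch of shared memory in practice (using Python's multiprocessing module; the counter and process count are arbitrary), two processes below increment one integer that lives in memory visible to both, with the Value object's built-in lock providing the required synchronization:

from multiprocessing import Process, Value

def add(shared, n):
    for _ in range(n):
        with shared.get_lock():    # avoid lost updates on the shared int
            shared.value += 1

if __name__ == "__main__":
    total = Value("i", 0)          # an integer placed in shared memory
    procs = [Process(target=add, args=(total, 10_000)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(total.value)             # 20000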

2. Semaphores

Semaphores are synchronization primitives that act as shared flags or counters used to control access to shared resources. They can ensure that at most a fixed number of processes (often just one) access a critical section of code or a resource at a time, preventing race conditions and data corruption.

Diagram of Semaphore IPC:

Types of Semaphores:

• Binary Semaphore: Only takes values of 0 (resource busy) or 1 (resource available).
• Counting Semaphore: Can have a value greater than 1, allowing a
limited number of processes to access a shared resource.

Advantages of Semaphores:

• Simple and efficient: Suitable for basic synchronization needs.


• Mutual exclusion: Ensures only one process accesses a critical
section at a time.

Disadvantages of Semaphores:

• Limited functionality: Not ideal for complex synchronization scenarios.
• Deadlock potential: Careless semaphore usage can lead to
deadlocks.
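
A minimal sketch of a counting semaphore (Python's threading.Semaphore; the limit of two concurrent holders and the sleep are arbitrary choices for illustration):

import threading
import time

slots = threading.Semaphore(2)      # counting semaphore: at most 2 holders

def use_resource(worker_id):
    with slots:                     # wait (P) on entry, signal (V) on exit
        print(f"worker {worker_id} in critical section")
        time.sleep(0.1)

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

With Semaphore(1), this behaves as a binary semaphore enforcing mutual exclusion.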

3. Message Passing

Message passing is an IPC mechanism where processes exchange data by sending and receiving messages. Messages can contain data structures, commands, or any information needed for communication.

Diagram of Message Passing IPC:

Types of Message Passing:

• Direct: Messages are sent directly to a specific destination process.
• Indirect: Messages are sent to a mailbox or queue, and the receiving
process retrieves them.

Advantages of Message Passing:

• Modular design: Processes can communicate without tight coupling,


promoting modularity.
• Flexibility: Supports complex communication patterns and data
exchange.
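
The sketch below models indirect message passing, with Python's multiprocessing.Queue standing in for the mailbox (the message format and the None sentinel are illustrative assumptions):

from multiprocessing import Process, Queue

def producer(mailbox):
    mailbox.put({"cmd": "process", "data": [1, 2, 3]})   # send a message
    mailbox.put(None)                                    # sentinel: no more messages

def consumer(mailbox):
    while (msg := mailbox.get()) is not None:            # receive messages
        print("received:", msg)

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=producer, args=(q,)),
             Process(target=consumer, args=(q,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()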

• 🔷Deadlocks: Concepts, Prevention, Detection, and Recovery mechanisms.

Deadlocks are a critical concept in operating systems, representing a situation where a set of processes is permanently blocked due to resource allocation. Each process holds resources and waits for resources currently held by other waiting processes, creating a circular dependency.

1. Conditions for Deadlock

Four necessary conditions must be met simultaneously for a deadlock to occur in an operating system:

• Mutual Exclusion: At least one resource must be held in a non-sharable mode. A process can only use the resource exclusively, and no other process can access it while the first process is holding it.
• Hold and Wait: A process holding at least one resource is waiting to
acquire additional resources currently held by other processes.
• No Preemption: Resources cannot be forcibly taken away from a
process holding them. They must be released voluntarily by the
process.
• Circular Wait: There exists a circular chain of waiting processes,
where each process is waiting for a resource held by the next process
in the chain.

Diagram of a Deadlock Scenario:

Here, Process P1 holds Resource R1 and is waiting for Resource R2 held by Process P2. Process P2, in turn, holds Resource R2 and waits for Resource R1 held by Process P1, creating a circular dependency.

2. Deadlock Prevention

Since deadlocks can significantly impact system performance, prevention is crucial. Here are some strategies for deadlock prevention:

• Mutual Exclusion Relaxation: Allow resource sharing when safe (e.g., the readers-writers problem).
• Hold and Wait Restriction: Require a process to request all the resources it needs at once, or to release the resources it already holds before requesting new ones.
• Allowing Preemption: Break the no-preemption condition by letting the OS forcibly reclaim resources from a waiting process. This must be applied carefully, since preempted work may have to be redone.
• Resource Ordering: Impose a global ordering on resource types and
acquire resources only in a specific order to avoid circular wait.
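
Resource ordering is the easiest of these strategies to apply in everyday code. In the Python sketch below (two hypothetical locks used purely for illustration), every thread acquires lock_a before lock_b, so a circular wait can never form:

import threading

lock_a = threading.Lock()
lock_b = threading.Lock()   # global rule: always acquire lock_a before lock_b

def task():
    with lock_a:            # every thread follows the same acquisition order,
        with lock_b:        # so no cycle of waiting threads can arise
            pass            # ... critical section ...

threads = [threading.Thread(target=task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

If one thread instead took lock_b first, two threads could each hold one lock and wait forever for the other.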

3. Deadlock Detection and Recovery

Despite prevention measures, deadlocks can still occur. Here's how to handle
them:

• Deadlock Detection: Use algorithms like resource-wait graphs or
dependency matrices to detect circular wait situations.
• Deadlock Recovery: Once a deadlock is detected, various
approaches can be taken:
o Process Termination: Terminate one or more processes
involved in the deadlock, releasing their held resources. This
should be a last resort due to potential data loss.
o Resource Preemption: Forcefully take away resources from
some processes and allocate them to resolve the deadlock. This
can be risky if the process hasn't completed its critical section
yet.
o Rollback: Roll back processes involved in the deadlock to a
safe state and restart them, potentially losing some progress.

By understanding deadlocks, their conditions, prevention strategies, and recovery techniques, you can ensure a more robust and efficient operating system environment.

• 🔷Programming Assignment 1: Implement basic process scheduling algorithms in a chosen programming language.

This assignment introduces you to process scheduling by implementing basic scheduling algorithms in a chosen programming language. You'll explore how different algorithms prioritize and manage processes, impacting their execution order and overall system performance.

Learning Objectives:

• Understand the concept of process scheduling algorithms.


• Implement First-Come-First-Served (FCFS), Shortest-Job-First (SJF),
and Priority scheduling algorithms in your chosen programming
language.
• Analyze the behavior and performance metrics (average waiting time,
average turnaround time) of each algorithm.

Choosing a Programming Language:

You can choose a language you're comfortable with, such as C, C++, Java, or
Python. Each language has its specific libraries or functions for simulating
process execution. Refer to your chosen language's documentation for details
on process management functionalities.

Implementation Steps:

1. Process Representation: Define a data structure to represent a process. It should typically include attributes like process ID, arrival time, burst time (CPU execution time), and priority (if applicable).
2. Scheduling Algorithm Implementation: For each chosen scheduling
algorithm (FCFS, SJF, Priority):
o Implement a function that takes a queue of processes as input.
o Simulate the scheduling process according to the algorithm's
logic.
o Maintain relevant data structures (e.g., queues) to track process
states (running, waiting, terminated).
o Calculate the turnaround time for each process as the difference between its completion time and arrival time.
o Calculate the waiting time for each process as its turnaround time minus its burst time (a minimal FCFS sketch follows this list).
3. Performance Analysis: After simulating each algorithm, calculate and
print the average waiting time and average turnaround time for all
processes. Analyze how these metrics vary between different
algorithms.
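
As a starting point, here is a minimal FCFS sketch in Python (assuming each process is a (pid, arrival, burst) tuple; adapt it to your chosen language and extend it for SJF and Priority):

def fcfs(processes):
    # processes: list of (pid, arrival_time, burst_time), in any order
    time, results = 0, []
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst           # CPU may idle until arrival
        turnaround = time - arrival                 # completion - arrival
        results.append((pid, turnaround - burst, turnaround))  # (pid, wait, tat)
    return results

runs = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)])
for pid, wait, tat in runs:
    print(pid, "waiting:", wait, "turnaround:", tat)
print("average waiting time:", sum(w for _, w, _ in runs) / len(runs))  # 3.33...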

Remember:

• This is a basic example. You'll need to adapt it to your chosen language and desired functionalities.
• Include comments in your code to explain each step and improve
readability.
• Experiment with different process data sets to analyze how scheduling
algorithms impact waiting and turnaround times.

This assignment provides a foundation for understanding process scheduling concepts. As you progress through the course, you'll explore more advanced scheduling algorithms and their trade-offs.

3: Memory Management
• 🔷Memory Hierarchy: Cache, Main Memory, Secondary Storage
(HDD/SSD).

The memory hierarchy is a fundamental concept in operating systems design. It refers to the organization of computer memory into different levels based on speed, size, and cost. This tiered structure allows the system to balance the need for fast access with the requirement for large storage capacity.

Levels of Memory Hierarchy:

The memory hierarchy typically consists of the following levels, arranged from
fastest and smallest to slowest and largest:

1. Registers: These are built-in storage locations within the CPU that can be accessed very quickly. They hold the temporary data and instructions that the CPU is currently working on. (Size: bytes to a few kilobytes in total)
2. Cache: This is a small, high-speed memory located between the CPU and main memory. It stores frequently accessed data and instructions from main memory, allowing the CPU to retrieve them much faster. (Size: Kilobytes (KB) to Megabytes (MB))
3. Main Memory (RAM): This is the primary memory of the computer,
where programs and data are stored while they are actively being
used. It is faster than secondary storage but slower than cache and
registers. (Size: Gigabytes (GB))
4. Secondary Storage (HDD/SSD): This is non-volatile storage that
retains data even when the computer is powered off. It is slower than
main memory but provides much larger storage capacity for data and
programs that are not actively in use. Secondary storage includes:
o Hard Disk Drive (HDD): Uses magnetic platters to store data,
offering high capacity but slower access times.
o Solid State Drive (SSD): Uses flash memory chips and
provides faster access times than HDDs but often has lower
capacity and higher cost per gigabyte. (Size: Terabytes (TB) and
beyond)

Diagram of Memory Hierarchy:

Benefits of Memory Hierarchy:


• Improved Performance: The presence of cache reduces the average
access time to data by storing frequently used items closer to the CPU.
• Cost-Effectiveness: By utilizing a tiered system, expensive high-
speed memory can be used sparingly while leveraging larger, less
expensive storage for infrequently accessed data.
• Efficient Memory Utilization: Main memory can be used more
effectively by keeping only actively used data and programs loaded,
allowing for faster processing of frequently accessed information.

Trade-offs in Memory Hierarchy:

• Speed vs. Capacity: As we move down the hierarchy, access speed decreases but storage capacity increases.
• Cost: Faster memory levels are typically more expensive per unit of
storage.

Memory Access in Operating Systems:

The operating system plays a crucial role in managing the memory hierarchy.
When the CPU needs data, it first looks in its registers and then in the cache. If the data is not in the cache (a cache miss), the hardware fetches it from main memory. If the containing page is not in main memory at all (a page fault), the OS must fetch it from secondary storage (HDD/SSD), the slowest level. This involves disk I/O operations, which can significantly impact performance.

By optimizing cache management strategies and data placement techniques, the operating system strives to minimize cache misses and page faults, leading to faster overall system performance.

• 🔷Memory Allocation Techniques: Contiguous Allocation, Paging,
Segmentation, Virtual Memory.

Memory management is a crucial component of an operating system (OS), responsible for allocating and managing the computer's main memory (RAM) efficiently. Various memory allocation techniques exist, each with its advantages and disadvantages. Here, we will explore four common techniques: contiguous allocation, paging, segmentation, and virtual memory.

1. Contiguous Allocation
Contiguous allocation is a straightforward approach where a process is
allocated a single contiguous block of memory to hold its entire code, data,
and stack. The OS maintains a free memory list to track available memory
blocks.

Diagram of Contiguous Allocation:

Advantages:

• Simple to implement
• Fast memory access (contiguous block allows for sequential access)

Disadvantages:

• External fragmentation: Over time, free memory becomes fragmented into small, scattered blocks unusable for larger processes, leading to wasted memory space.

• Internal fragmentation: Memory allocated to a process may not be
fully utilized, leaving unused space within the allocated block.
• Difficulty in loading new processes: Finding a large enough
contiguous block to fit a new process can be challenging, especially
with external fragmentation.
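
One common policy for contiguous allocation is first fit, sketched below in Python (the free-list representation as [base, length] holes is an assumption for illustration):

def first_fit(free_list, size):
    # free_list: list of [base, length] holes; returns a base address or None
    for hole in free_list:
        if hole[1] >= size:
            base = hole[0]
            hole[0] += size          # carve the allocation off the front
            hole[1] -= size          # what remains is a smaller hole
            return base
    return None                      # no hole fits: external fragmentation

holes = [[0, 100], [300, 50], [500, 200]]
print(first_fit(holes, 120))   # 500: the first hole large enough
print(holes)                   # [[0, 100], [300, 50], [620, 80]]

Note how repeated allocations leave ever-smaller holes, which is exactly the external fragmentation described above.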

2. Paging
Paging is a memory management technique that divides both physical
memory (RAM) and logical memory (process address space) into fixed-size
blocks called frames and pages, respectively. The OS maintains a page table
that maps logical page numbers from a process's address space to physical
frame numbers in RAM.

Diagram of Paging:

Advantages:

• Eliminates external fragmentation: Memory is allocated in fixed-size frames, reducing wasted space.
• Simplified memory management: OS only needs to track the page
table for each process.

Disadvantages:

• Internal fragmentation: Similar to contiguous allocation, internal fragmentation can occur within a page if the process doesn't fully utilize the allocated frame size.
• Page table overhead: Maintaining a page table for each process adds
some memory overhead.

3. Segmentation
Segmentation is another memory allocation technique that divides a process's
logical memory into variable-sized segments based on its logical structure
(e.g., code, data, stack). Each segment has its own base address and length.
A segment table maps logical segment addresses to physical memory
addresses.

Diagram of Segmentation:

Advantages:

• Efficient memory utilization: Segments can be sized according to the process's needs, minimizing internal fragmentation.
• Logical memory management: Segmentation reflects the logical
structure of a process, simplifying memory management for
programmers.

Disadvantages:

• External fragmentation: Similar to contiguous allocation, free memory can become fragmented over time as segments are loaded and unloaded.
• Increased complexity: Managing variable-sized segments and the
segment table adds complexity compared to paging.

4. Virtual Memory
Virtual memory extends a process's logical address space beyond the physically installed RAM by backing memory with secondary storage. Its mechanisms (address translation and the TLB) are covered in the next section.

• 🔷Address Translation: Virtual Memory concepts, Translation Lookaside Buffer (TLB).

Virtual memory is a memory management technique that allows a computer system to address more memory than it physically has. This creates the illusion of a larger contiguous memory space for processes, enabling them to run efficiently even when physical memory is limited. A key component in virtual memory is Address Translation, which involves mapping virtual addresses used by processes to physical memory locations. Here's a breakdown of these concepts:

1. Virtual Memory Fundamentals

• Limited Physical Memory: Modern computer systems typically have limited physical RAM (Random Access Memory).
• Large Address Space: Programs and data can be quite large,
requiring more memory than physically available.
• Virtual Memory Solution: Virtual memory creates a larger logical
address space that processes can use, even if the physical memory is
fragmented.

Benefits of Virtual Memory:

• Process Isolation: Each process has its own virtual address space,
preventing interference between processes.
• Efficient Memory Utilization: Allows processes to use more memory
than physically available, improving memory utilization.

• Simplified Memory Management: Programs can access memory
using virtual addresses without needing to know the physical layout.

2. Address Translation Process

Virtual memory relies on a mechanism called address translation to map virtual addresses used by processes to physical memory locations. This translation happens in two stages:

• Page Table Lookup: The virtual address is divided into two parts: a
Virtual Page Number (VPN) and an Offset. The VPN is used as an
index to look up a page table entry in memory. The page table entry
contains a Physical Frame Number (PFN), which points to the physical
memory frame where the corresponding virtual page is located.
• Adding Offset: The offset part of the virtual address remains
unchanged. It is added to the physical frame number obtained from the
page table to get the final physical memory address.

Diagram of Address Translation:
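
A minimal software model of this two-stage translation, assuming a hypothetical 4 KB page size (so the low 12 bits of an address are the offset) and a toy page table:

PAGE_SIZE = 4096                        # 4 KB pages -> 12-bit offset

page_table = {0: 5, 1: 2, 2: 7}         # VPN -> PFN (toy mapping)

def translate(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)   # split into VPN and offset
    if vpn not in page_table:
        raise RuntimeError(f"page fault: VPN {vpn} not resident")
    return page_table[vpn] * PAGE_SIZE + offset     # frame base + unchanged offset

print(hex(translate(0x1234)))           # VPN 1 maps to PFN 2, giving 0x2234

A real MMU performs this lookup in hardware, consulting the TLB described next before walking the page table.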

3. Translation Lookaside Buffer (TLB)

To speed up the address translation process, a special cache called the Translation Lookaside Buffer (TLB) is used. The TLB stores recently used virtual-to-physical address translations, reducing the need to access the page table for every memory access.

Benefits of TLB:

• Faster Address Translation: TLB lookups are significantly faster than accessing the page table in memory.
• Improved Performance: Reduced address translation time improves
overall system performance.

TLB Management:

• TLB Entries: The TLB has a limited number of entries, and entries are
filled dynamically based on recently accessed virtual addresses.
• TLB Misses: If the virtual address is not found in the TLB, a page table
lookup is required, and the TLB may be updated with the new
translation.

In conclusion, virtual memory and address translation are essential concepts in modern operating systems. They enable efficient memory utilization, process isolation, and simplified memory management, even with limited physical memory resources. The TLB further optimizes address translation by caching frequently used mappings, leading to improved system performance.

• 🔷Memory Protection: Memory Protection mechanisms (Memory Management Units - MMU) to prevent process interference.

Memory protection is a crucial mechanism in operating systems that ensures the safe and secure execution of multiple processes on a computer system. It prevents processes from interfering with each other's memory space, safeguarding data integrity and system stability. Here's a detailed breakdown of memory protection concepts and techniques:

1. Why Memory Protection is Needed


• Process Isolation: Without memory protection, a process could
accidentally or maliciously overwrite another process's memory,
leading to program crashes, data corruption, or even system instability.
• Security: Memory protection helps prevent unauthorized access to
sensitive data. One process cannot access another's memory space
unless explicitly granted permission.
• Efficient Resource Management: Memory protection allows the
operating system to efficiently allocate memory resources to different
processes, preventing one process from hogging all available memory.

2. Memory Protection Mechanisms


Operating systems employ various techniques to enforce memory protection.
Here are some key mechanisms:

• Memory Management Unit (MMU): This hardware component resides within the CPU and plays a central role in memory protection. The MMU translates virtual memory addresses used by processes into physical memory addresses. It also enforces access permissions based on information stored in a Memory Management Unit table (MMU table).

Diagram of Memory Protection with MMU:

Explanation:

1. Processes use virtual memory addresses for memory access.


2. The MMU table stores information about each process's memory
allocation and access permissions (read-only, read-write, execute).
3. When a process tries to access memory, the virtual address is sent to
the MMU.
4. The MMU uses the MMU table to translate the virtual address to a
physical memory address.

5. The MMU also checks the access permissions associated with the
memory location based on the MMU table.
6. If the access is permitted, the MMU allows the memory access to
proceed.
7. If the access violates permissions (e.g., trying to write to a read-only
memory region), the MMU raises a memory protection fault, and the
operating system intervenes.
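
A toy software model of the permission check in steps 4-7 (the table layout and permission strings are assumptions for illustration, not a real MMU interface):

mmu_table = {0: ("read-write", 5), 1: ("read-only", 2)}   # VPN -> (perms, PFN)

def access(vpn, mode):
    perms, pfn = mmu_table[vpn]
    if mode == "write" and perms == "read-only":
        raise PermissionError("memory protection fault")  # OS would intervene
    return pfn                      # translation proceeds if access is allowed

access(1, "read")                   # permitted
try:
    access(1, "write")              # violates read-only protection
except PermissionError as e:
    print(e)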

3. Types of Memory Protection

1. Read-Only Protection: Prevents processes from writing to specific memory regions, protecting critical system files or code segments.
2. Read-Write Protection: Grants processes permission to both read
and write data in certain memory regions, typically used for process
data and code that can be modified.
3. Execute Only Protection: Allows processes to execute code stored in
a specific memory region but prevents them from modifying the code
itself.

4. Benefits of Memory Protection

1. Improved System Stability: Memory protection reduces the risk of program crashes and system instability due to process interference.
2. Enhanced Security: It helps safeguard sensitive data by preventing
unauthorized access from other processes.
3. Efficient Resource Management: Memory protection allows for
controlled allocation and utilization of memory resources among
multiple processes.

• 🔷Memory Replacement Policies: FIFO, LRU, Optimal Page Replacement Algorithms.

This section dives into memory replacement policies, a crucial aspect of virtual memory management in operating systems. When a process needs a page
page that's not currently in physical memory (RAM), the OS needs to decide
which existing page to evict to make space. Effective replacement policies
aim to minimize the number of page faults (situations where a required page
isn't in memory) and improve overall system performance.

1. First-In-First-Out (FIFO)

FIFO (First-In-First-Out) is a simple replacement policy that evicts the page that has been in memory the longest (the first one loaded). It operates like a first-come, first-served queue.

Diagram of FIFO Replacement Policy:

Advantages of FIFO:

• Simple to implement.
• Fair to all pages, preventing starvation (a page being constantly
replaced before it can be used).

Disadvantages of FIFO:

• Ignores how recently or how often a page has been used, so pages still in active use can be evicted, causing unnecessary page faults.
• Exhibits Belady's Anomaly: increasing the number of physical frames can, counterintuitively, increase the number of page faults for some reference strings.

2. Least Recently Used (LRU)

LRU (Least Recently Used) replaces the page that hasn't been used for the
longest duration. The OS keeps track of page usage and prioritizes keeping
recently used pages in memory based on the assumption that recently used
pages are more likely to be accessed again soon.

Implementation of LRU:

• Each page has a "use" bit that gets set whenever the page is
accessed.
• A "clock" or timestamp mechanism can also be used to track the last
access time for each page.
• When a page replacement is needed, the OS identifies the page with
the least recently set "use" bit or the oldest timestamp and evicts it.
Diagram of LRU Replacement Policy:

Advantages of LRU:

• Generally outperforms FIFO by keeping recently used pages in memory.
• Reduces page faults compared to FIFO.

Disadvantages of LRU:

• Requires maintaining additional data structures (use bits or timestamps) for tracking usage, which can add some overhead.
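
A compact way to model LRU is an ordered dictionary whose insertion order doubles as recency, as in this Python sketch (assuming a fixed number of frames and a list of page references):

from collections import OrderedDict

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used
            memory[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 4, 2], frames=3))   # 5 page faults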

3. Optimal Page Replacement (OPT)

OPT (Optimal Page Replacement) is a theoretical concept that serves as a benchmark for all other replacement policies. OPT has perfect knowledge of
future page references and replaces the page that won't be used for the
longest time in the future. While not practical to implement in real systems, it
helps us understand the best-case scenario for page replacement.

Concept of OPT:

• The OS has knowledge of the entire reference string (sequence of future page accesses).
• It evicts the page that will be used the farthest in the future, ensuring
the most optimal memory usage.

Properties of OPT:

• Impossible to implement in real systems because the OS cannot predict future page references.
• Useful for understanding the theoretical limit of page replacement performance.

Choosing a Replacement Policy:

The choice of a replacement policy depends on various factors, including:

• System workload characteristics (e.g., random vs. sequential access
patterns).
• Hardware limitations (e.g., complexity of maintaining additional data
structures).
• Performance trade-offs (balancing simplicity with efficiency).

FIFO is simple to implement but can suffer from Belady's Anomaly. LRU offers
better performance but adds some overhead for tracking usage. While OPT is
the best-case scenario, it's not practical for real systems. In practice,
variations and combinations of these basic policies are often used to achieve
optimal performance for specific systems.

• 🔷Programming Assignment 2: Simulate memory allocation and replacement algorithms in a chosen programming language.

This assignment focuses on simulating memory allocation and replacement algorithms in a chosen programming language. Here's a breakdown to guide you:

Objectives:

• Implement functions to simulate memory allocation techniques (contiguous allocation, paging, segmentation).
• Develop code to simulate memory replacement algorithms (FIFO, LRU,
Optimal).
• Evaluate the performance of different memory allocation and
replacement strategies.

Tasks:

1. Memory Allocation Simulation:

• Create functions to simulate memory allocation for a set of processes with varying memory requirements.
• Implement functions for the following allocation techniques:
o Contiguous Allocation: Simulate allocating contiguous
memory blocks to processes. Keep track of free memory blocks
and assign them to processes as needed.
o Paging: Simulate dividing memory into fixed-size frames and
processes into fixed-size pages. Track the page table that maps
logical page numbers to physical frame numbers.
o Segmentation: Simulate dividing memory into variable-sized
segments for each process. Maintain a segment table that
stores information about segment sizes and base addresses.
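As a starting point for the contiguous-allocation task, here is a minimal first-fit sketch in Python (the hole layout and sizes are made up for illustration):

def first_fit(free_blocks, size):
    """Allocate `size` units from the first hole big enough.
    free_blocks: list of [start, length] holes sorted by start address.
    Returns the start address of the allocation, or None on failure."""
    for block in free_blocks:
        if block[1] >= size:
            start = block[0]
            block[0] += size            # shrink the hole from the front
            block[1] -= size
            if block[1] == 0:
                free_blocks.remove(block)
            return start
    return None                         # may fail due to external fragmentation

free = [[0, 100], [150, 50], [300, 200]]
print(first_fit(free, 120))   # 300 -- the first hole of at least 120 units
print(first_fit(free, 80))    # 0
print(free)                   # [[80, 20], [150, 50], [420, 80]]

Best-fit and worst-fit variants differ only in which hole they pick.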

2. Memory Replacement Simulation:

• Design a function to simulate memory access requests from processes. Each request specifies a process ID and the memory address being accessed (a logical address for paging and segmentation).
• Implement functions for the following replacement algorithms:
o First-In-First-Out (FIFO): Replace the oldest page/segment
that has been in memory when a new page/segment needs to
be loaded. Maintain a queue to track the order of page/segment
arrivals.
o Least Recently Used (LRU): Replace the page/segment that
has not been used for the longest time when a new
page/segment needs to be loaded. Keep track of the last access
time for each page/segment to identify the least recently used
one.
o Optimal Page Replacement: (This is for reference only, not
practical for real systems due to its requirement for knowing
future access patterns). Simulate the optimal replacement
algorithm that replaces the page that will not be used for the
longest time in the future, minimizing page faults.

3. Performance Evaluation:

• Track and record the number of page faults (for paging and
segmentation) that occur during the simulation for each replacement
algorithm. A page fault happens when a required page/segment is not
currently in memory and needs to be loaded from secondary storage.
• Calculate the page fault ratio (number of page faults / total memory
access requests) for each algorithm. This metric indicates the
efficiency of the memory replacement strategy. Lower page fault ratios
are desirable.

4. Code Structure:

• Organize your code into well-defined functions with clear explanations and comments.
• Use appropriate data structures (e.g., arrays, lists) to represent
memory blocks, page tables, segment tables, and process information.
• Consider using random number generation to simulate memory access
requests with varying process IDs and memory addresses.

5. Testing and Analysis:

• Test your program with different scenarios involving varying numbers of processes and memory requirements.
• Analyze the results and compare the performance (page fault ratio) of
different memory allocation and replacement algorithms.
• Discuss the trade-offs between these techniques in terms of memory
utilization and efficiency.

Remember:

• This is a guideline, and the specific implementation details may vary depending on your chosen programming language.
• Focus on clear and well-documented code that demonstrates your
understanding of memory allocation and replacement concepts.
• Feel free to modify or extend this assignment to explore additional
memory management techniques or performance metrics.

4: File Management
• 🔷File System Concepts: File Structure, Directory Management, Access
Methods (Sequential, Indexed).

File systems are a crucial component of operating systems, enabling users to organize, store, and retrieve data efficiently. This section delves into the fundamental concepts of file systems, including file structure, directory management, and access methods.

1. File Structure

A file is a named collection of related information stored on a secondary storage device (e.g., hard disk drive, solid-state drive). The file structure defines how data is organized within a file.

• Logical vs. Physical Structure:
o Logical Structure: How users perceive the file (e.g., text document, image file).
o Physical Structure: How the data is actually stored on the storage device (e.g., data blocks, sectors).

Diagram of File Structure:

+-----------------+
|    File Name    |  (Logical View)
+-----------------+
|     Header      |  (Optional - file metadata)
+-----------------+
|      Data       |  (Actual content of the file)
+-----------------+

• File Metadata: Optional header information stored at the beginning of
a file containing details like file size, creation date, access permissions,
etc.
• Data: The actual content of the file, which can be text, images, videos,
programs, or any other digital information.

2. Directory Management
Directories (also called folders) organize and group files within a file system. They form a hierarchical tree structure, allowing users to create subdirectories within parent directories for nested organization.

Directory Structure:

/ (Root Directory)
|- Folder1
|  |- File1.txt
|  |- File2.jpg
|  |- Subfolder1
|     |- File3.doc
|- Folder2
|  |- File4.mp4
|- File5.exe

Directory Operations:

• Creating Directories: Makes new directories within the existing directory structure.
• Deleting Directories: Removes empty directories.
• Renaming Directories: Changes the name of existing directories.
• Moving Directories: Repositions directories within the directory structure.
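At the user level, these operations map directly onto standard library calls. Here is a short Python sketch (the directory names are hypothetical):

from pathlib import Path
import shutil

base = Path("demo")
(base / "Folder1" / "Subfolder1").mkdir(parents=True)                   # create (nested)
(base / "Folder1" / "Subfolder1").rename(base / "Folder1" / "Renamed")  # rename
shutil.move(str(base / "Folder1" / "Renamed"), str(base))               # move up a level
(base / "Renamed").rmdir()                                              # delete (must be empty)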

3. Access Methods

Access methods determine how data is retrieved from files. Here are two
common access methods:

• Sequential Access:
o Data is accessed sequentially, one unit (e.g., byte) after another,
similar to reading a book page by page.
o Suitable for files where data needs to be read or written in a
specific order (e.g., log files, text documents).
o Not efficient for randomly accessing specific parts of a file.
• Indexed Access:
o Files are divided into fixed-size blocks, and an index table keeps
track of the location of each block.
o Allows for random access of any part of the file by directly
referencing the block address in the index table.
o More efficient for accessing specific parts of a large file quickly.

Diagram of Indexed Access:

Choosing an Access Method:

The choice of access method depends on the type of data and how it will be
accessed. Sequential access is suitable for reading data in a specific order, while
indexed access is ideal for random access to any part of the file.
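The difference shows up clearly in code. Below is a hedged Python sketch: the file names and the in-memory index table are hypothetical stand-ins:

# Sequential access: read records one after another, in order.
with open("app.log", "rb") as f:
    for line in f:                 # the file position advances automatically
        pass                       # process(line) -- hypothetical handler

# Indexed access: an index maps record IDs to byte offsets,
# so any record can be reached with a single seek.
index = {"rec42": 1024, "rec99": 4096}    # hypothetical index table
with open("data.bin", "rb") as f:
    f.seek(index["rec42"])         # jump straight to the record
    record = f.read(128)           # read one fixed-size record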

• 🔷File System Implementation: Disk Scheduling Algorithms (FCFS,
SCAN, C-SCAN).

Disk scheduling algorithms are an essential part of file system implementation in operating systems. They determine the order in which the disk head serves read/write requests for data on the storage disk. Here, we will explore three common disk scheduling algorithms: First Come First Served (FCFS), SCAN (Elevator Algorithm), and C-SCAN (Circular SCAN).

1. First Come First Served (FCFS)

FCFS is the simplest scheduling algorithm. It processes requests in the order they arrive in the queue. The disk head simply moves to the location of the next request on the disk, regardless of its direction or distance from the previous request.

Advantages:

• Easy to understand and implement.
• Fair to all requests, as they are served in arrival order.

Disadvantages:

• Can lead to long seek times if requests are scattered across the disk.
• Requests located far from the current head position can wait behind many nearer requests, suffering long delays, although true starvation cannot occur since requests are served strictly in arrival order.

Diagram of FCFS Scheduling:

2. SCAN (Elevator Algorithm)

SCAN, also known as the Elevator Algorithm, aims to minimize seek time by
servicing requests in a specific direction. The disk head moves in one
direction (e.g., from inner tracks to outer tracks) until it reaches the end, then
reverses direction and services remaining requests in the opposite direction.

Advantages:

• Reduces seek time compared to FCFS by minimizing head movement.
• More efficient when requests are clustered in a specific area of the disk.

Disadvantages:

• Requests just behind the head can experience long delays, since the head must sweep to the far end and reverse before reaching them.

Diagram of SCAN Scheduling:

3. C-SCAN (Circular SCAN)

C-SCAN is a variation of SCAN where the head movement is circular. After reaching the end in one direction, the head immediately jumps back to the beginning of the disk and services the remaining requests moving in the same direction.

Advantages:

• Guarantees that all requests are served within one full sweep cycle, giving more uniform wait times and avoiding starvation.
• Similar seek-time efficiency to SCAN when requests are relatively evenly distributed.

Disadvantages:

• Might have higher seek times than SCAN if most requests are
clustered in one half of the disk.

Diagram of C-SCAN Scheduling:
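A compact way to compare the three policies is to compute total head travel for a sample request queue. The sketch below is simplified (the head reverses or wraps at the last pending request rather than at the physical disk edge), and the queue values are illustrative:

def head_travel(requests, head, policy):
    """Return (service order, total cylinders traveled) for one policy."""
    if policy == "FCFS":
        order = list(requests)
    else:
        up = sorted(r for r in requests if r >= head)
        down = sorted((r for r in requests if r < head), reverse=True)
        if policy == "SCAN":                 # sweep up, then reverse and sweep down
            order = up + down
        else:                                # C-SCAN: sweep up, wrap, sweep up again
            order = up + sorted(r for r in requests if r < head)
    travel = sum(abs(b - a) for a, b in zip([head] + order, order))
    return order, travel

queue = [98, 183, 37, 122, 14, 124, 65, 67]
for policy in ("FCFS", "SCAN", "C-SCAN"):
    order, travel = head_travel(queue, 53, policy)
    print(f"{policy}: order {order}, travel {travel} cylinders")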

• 🔷File Allocation Methods: Contiguous Allocation, Linked Allocation, Indexed Allocation.

File allocation methods are techniques used by operating systems to manage disk space and store files efficiently. These methods determine how data is physically organized on a storage device (like a hard drive) and how the operating system keeps track of those locations. Here, we will explore three common file allocation methods: Contiguous Allocation, Linked Allocation, and Indexed Allocation.

1. Contiguous Allocation

In contiguous allocation, a file is stored as a continuous run of sectors on the disk. The entire file occupies a single, uninterrupted segment of storage space.

Advantages:

• Fast access: Since the file is stored contiguously, accessing any part of it requires minimal head movement, leading to faster read and write operations.
• Simple implementation: Keeping track of a file's location is straightforward, as the starting sector and file length suffice.

Disadvantages:

• External Fragmentation: Over time, as files are created, deleted, and modified, unused gaps (holes) appear between allocated regions. This wastes storage space, because free space becomes split into holes too small to hold new files even when the total free space is sufficient.
• Difficulty in dynamic allocation: Allocating space for a new file requires finding a contiguous block large enough to hold the entire file, which becomes increasingly difficult as the disk fills with fragmented free space.

Diagram of Contiguous Allocation:

2. Linked Allocation

In linked allocation, each file is stored in a series of non-contiguous sectors on the disk. Each sector holds the file's data and a pointer to the next sector in the sequence. These pointers link the separate sectors together to form the complete file.

Advantages:

• Reduced external fragmentation: Since files don't need to be stored contiguously, free space can be utilized more efficiently, minimizing external fragmentation.
• Dynamic allocation: Allocating space for a new file is simpler, as the OS only needs to find enough free sectors, regardless of their location on the disk.

Disadvantages:

• Slower random access: Reaching an arbitrary location within a file requires following the pointer chain from the beginning, with head movements across the disk, so access is slower than with contiguous allocation.
• Overhead: Extra storage space is needed in each sector to hold the pointer, which is proportionally wasteful for small files.
• Pointer loss: If a pointer is corrupted or lost, the remaining parts of the file may become difficult or impossible to access.

Diagram of Linked Allocation:
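To see how the pointer chain works, here is a toy sketch modeled loosely on a FAT-style table (the block numbers and file name are made up):

# next_block[i] holds the pointer stored in block i; None marks end-of-file.
next_block = {7: 12, 12: 3, 3: None}    # a file occupying blocks 7 -> 12 -> 3
directory = {"notes.txt": 7}            # the directory entry stores the first block

def block_chain(name):
    """Follow the pointers to list a file's blocks in order."""
    block = directory[name]
    chain = []
    while block is not None:
        chain.append(block)
        block = next_block[block]       # a corrupted pointer breaks the chain here
    return chain

print(block_chain("notes.txt"))   # [7, 12, 3]

3. Indexed Allocation

In indexed allocation, a separate index block (or table) for each file records the locations of all of its data blocks. Any block can then be reached directly through the index, making random access efficient, at the cost of extra space for the index itself and added complexity in the file system structure.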

• 🔷File Sharing and Access Control: Mechanisms for secure file access
and sharing.

This section dives into the mechanisms used by operating systems to enable
secure file access and sharing between users and applications. Effective file
sharing and access control are crucial for protecting sensitive data and
maintaining system integrity.

1. File System Permissions


Operating systems typically rely on a permission-based access control
system. Permissions determine who (users, groups) can access a file or
directory and what actions they can perform (read, write, execute). These
permissions are assigned to the file owner, group, and others (world).

• File Owner: The user who created the file.
• Group: A collection of users with shared access privileges.

Common Permissions:

• Read (r): Allows viewing the contents of the file.
• Write (w): Allows modifying the contents of the file.
• Execute (x): Allows running the file if it's an executable program.

Permission Modes:

• User (u): Permissions for the file owner.
• Group (g): Permissions for the file's group members.
• Other (o): Permissions for all other users.

Permission Representation:

Permissions are commonly represented as a nine-character string made up of three triplets for user, group, and other, respectively (for example, rwxr-x---). Within each triplet:

• r: Read permission is granted.
• w: Write permission is granted.
• x: Execute permission is granted (for executable files).
• -: The corresponding permission is denied.

Changing Permissions:

Most operating systems provide tools for users with appropriate privileges to
change file permissions. This allows for granular control over file access.
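For example, the sketch below renders a numeric (octal) mode as the familiar nine-character string and applies it with os.chmod (the file name is hypothetical):

import os
import stat

def mode_string(mode):
    """Render the low nine permission bits as a rwxrwxrwx-style string."""
    bits = (stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR,
            stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP,
            stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH)
    return "".join(c if mode & b else "-" for b, c in zip(bits, "rwxrwxrwx"))

print(mode_string(0o754))          # rwxr-xr--
os.chmod("report.txt", 0o640)      # owner: rw-, group: r--, others: ---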

2. User Management

Operating systems typically employ user accounts to identify and authenticate users accessing the system. Each user account has a unique username and password (or other authentication mechanism). User accounts can be assigned to groups for easier permission management.

• Authentication: Verifying the user's identity through credentials like username and password.
• Authorization: Determining what actions a user is allowed to perform based on their assigned permissions.

Benefits of User Management:

• Accountability: Tracks user activity and identifies who accessed specific files.
• Security: Restricts unauthorized access to sensitive data.
• Access Control: Allows for assigning different permission levels to different users.

3. Access Control Lists (ACLs)

ACLs provide a more flexible way to manage file access control by explicitly
specifying which users and groups can access a file and their corresponding
permissions. ACLs offer finer-grained control compared to basic owner-group-
other permissions.

Structure of an ACL:

• Entries: Each entry specifies a user or group and their associated permissions (read, write, execute).
• Inheritance: ACL permissions can be inherited from parent directories to child files, simplifying access control management.

Benefits of ACLs:

• Granular Control: Enables assigning specific permissions to individual users and groups.
• Flexibility: Allows for complex access control scenarios.
• Inheritance: Reduces administrative overhead by inheriting permissions from parent directories.

Diagram of an ACL Entry:
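A minimal sketch of an ACL lookup in Python (the entry format and names are our own invention):

# Each ACL entry grants a set of permissions to one user or one group.
acl = [
    {"who": "alice", "type": "user",  "perms": {"read", "write"}},
    {"who": "staff", "type": "group", "perms": {"read"}},
]

def allowed(user, groups, perm):
    """True if the user, or any group they belong to, holds the permission."""
    for entry in acl:
        matches = (entry["type"] == "user" and entry["who"] == user) or \
                  (entry["type"] == "group" and entry["who"] in groups)
        if matches and perm in entry["perms"]:
            return True
    return False

print(allowed("alice", {"staff"}, "write"))   # True  (user entry grants write)
print(allowed("bob",   {"staff"}, "write"))   # False (group entry grants read only)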

4. File Sharing Mechanisms

Operating systems provide various mechanisms for sharing files between users and allowing collaborative access. Here are some common approaches:

• Shared Folders: Designating specific directories as shared and granting access permissions to other users.

• Network File Systems (NFS): A distributed file system protocol that
allows users on different machines to access shared files over a
network.
• Cloud Storage: Storing files on remote servers (cloud) and sharing
them with others via access links or permissions.

Security Considerations for File Sharing:

• Authentication: Ensure users are who they claim to be before granting access.
• Authorization: Assign appropriate permission levels based on user
needs.
• Encryption: Consider encrypting sensitive files for additional
protection.
• Auditing: Monitor file access logs to detect unauthorized activity.

By effectively utilizing file sharing and access control mechanisms, operating systems enable secure collaboration while safeguarding sensitive data.

• 🔷File System Protection: Techniques to prevent unauthorized access and data corruption.

File systems are critical components of an operating system, responsible for storing, organizing, and retrieving data. However, these systems are vulnerable to various threats that can compromise data integrity and security. This section explores techniques employed by operating systems to protect file systems from unauthorized access and data corruption.

1. User Authentication and Access Control

• User Authentication: This is the first line of defense, ensuring only authorized users can access the system and its files. Common authentication methods include:
o Passwords: Users are required to provide a secret password to
gain access.
o Biometrics: Utilizes fingerprint scanners, facial recognition, or
iris scans for user identification.
o Multi-Factor Authentication (MFA): Requires a combination of
factors like password and a code from a security token or mobile
app for additional verification.
• Access Control: Defines what actions authorized users can perform
on specific files or directories. This is typically implemented through:
o Access Control Lists (ACLs): Explicitly define access
permissions (read, write, execute) for individual users or user
groups to specific files or directories.
o Capabilities: Grant users specific capabilities to perform certain
actions on files, limiting their ability to perform unauthorized
operations.

2. File System Permissions

• Operating systems assign permissions to files and directories, controlling what users can do with them. Standard permissions include:
o Read: Allows users to view the contents of a file.
o Write: Allows users to modify the contents of a file.
o Execute: Allows users to run a file as a program (if applicable).
• Permissions are typically assigned based on user roles or groups. For
example, a system administrator might have full read/write/execute
permissions on all files, while regular users might only have read
access to their own documents.

Diagram of File Permissions:

3. Encryption
• Encryption scrambles the contents of a file using a secret key, making
it unreadable to anyone without the proper decryption key. This
protects data confidentiality, even if unauthorized users gain access to
the file system.
• Different encryption techniques are available:
o Symmetric Encryption: Uses a single key for both encryption
and decryption.
o Asymmetric Encryption: Uses a public key for encryption and
a private key for decryption, enhancing security as the private
key remains confidential.
• File system encryption can be implemented at different levels:

o Full Disk Encryption: Encrypts the entire disk volume,
protecting all files.
o Individual File Encryption: Encrypts specific files, allowing
selective protection of sensitive data.
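As a simple illustration of symmetric encryption, here is a sketch using the third-party "cryptography" package (assumed installed via pip install cryptography); one key both encrypts and decrypts:

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # the secret key; losing it means losing the data
cipher = Fernet(key)

token = cipher.encrypt(b"quarterly payroll data")
print(token)                    # unreadable ciphertext without the key
print(cipher.decrypt(token))    # b'quarterly payroll data'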

4. Auditing and Logging


• Auditing tracks user activity on the file system, recording information
like who accessed what files and when. This allows administrators to
identify potential security breaches or suspicious behavior.
• System logs record important events related to the file system, such as
file creation, deletion, and permission changes. Logs can be analyzed
for security audits and troubleshooting purposes.

5. Data Integrity and Recovery Techniques

• Checksums: A mathematical value calculated from the file's content. Any change to the file alters the checksum, allowing detection of data corruption.
• Journaling File Systems: Maintain a log of file system changes,
enabling rollback to a previous state in case of errors or crashes.
• Disk Mirroring and RAID (Redundant Array of Independent Disks):
Creates redundant copies of data across multiple disks, providing fault
tolerance and allowing data recovery in case of disk failure.
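As shown in the sketch below, computing a checksum takes only a few lines in Python (the file path is hypothetical):

import hashlib

def checksum(path):
    """SHA-256 digest of a file, read in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Store the digest; recomputing it later detects any change to the file.
print(checksum("backup.img"))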

These techniques, implemented in combination, create a layered defense system to protect file systems from unauthorized access, data corruption, and accidental deletion. The specific techniques employed by an operating system may vary depending on the platform and its security requirements.

• 🔷Programming Assignment 3: Implement basic file system operations (create, read, write, delete) in a chosen programming language.

File management is a crucial component of an operating system (OS), responsible for organizing, storing, retrieving, and manipulating files on secondary storage devices (hard disk drives, solid-state drives). Before implementing the assignment, this section reviews the core file system concepts and implementation details that your program should exercise.
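As a starting point, here is a minimal Python sketch of the four required operations (the file name and contents are placeholders):

import os

# Create / write: mode "w" creates the file (or truncates an existing one).
with open("notes.txt", "w") as f:
    f.write("first line\n")

# Read the contents back.
with open("notes.txt") as f:
    print(f.read())

# Append without destroying the existing content.
with open("notes.txt", "a") as f:
    f.write("second line\n")

# Delete the file.
os.remove("notes.txt")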

1. File System Concepts

A file system provides a structured way to organize data on a storage device. It defines:

• File Structure: The logical organization of a file, including its name, attributes (size, creation date, permissions), and data content.

• Directory Management: A hierarchical structure using directories
(folders) to group related files and subdirectories for better
organization.
• Access Methods: Techniques for accessing file content. Common
methods include:
o Sequential Access: Reading/writing data sequentially from the
beginning of the file.
o Indexed Access: Using an index to locate specific data blocks
within the file efficiently (e.g., accessing a particular record in a
database file).

Diagram of a File System:

2. File System Implementation


The OS implements the file system using various techniques to manage data
storage and retrieval on physical storage devices. Here's a breakdown of
some key aspects:

• Disk Scheduling Algorithms: Since disk access is slower than main memory access, the OS employs disk scheduling algorithms (FCFS - First Come First Served; SCAN - the head sweeps back and forth; C-SCAN - the head services requests in one direction only, then wraps around) to optimize the order of I/O (Input/Output) requests, minimizing seek time (the time it takes for the disk head to move to the desired location).
• File Allocation Methods: Techniques for allocating storage space for
files on the disk. Common methods include:
o Contiguous Allocation: Files are stored in a single, contiguous
block of sectors on the disk. This is efficient for sequential
access but can lead to external fragmentation (wasted space)
over time.

o Linked Allocation: Files are stored in non-contiguous sectors,
with each sector containing a pointer to the next sector in the
file. This eliminates external fragmentation but can introduce
internal fragmentation (wasted space within sectors) and
overhead for managing pointers.
o Indexed Allocation: A separate index table keeps track of the
data block locations for a file. This allows efficient random
access but adds complexity to the file system structure.

Diagram of File Allocation Methods:

• Contiguous Allocation:

• Linked Allocation:

• Indexed Allocation:

5: Security and I/O Management
• 🔷System Security Concepts: User Authentication, Access Control
Mechanisms, Security Policies.

This section dives into the crucial aspects of operating system security,
focusing on user authentication, access control mechanisms, and security
policies. Understanding these concepts is essential for protecting computer
systems from unauthorized access and maintaining data integrity.

1. User Authentication

User authentication verifies the identity of a user attempting to access a computer system. Here are some common authentication mechanisms:

• Password-based Authentication: Users provide a username and password combination to gain access. Passwords should be complex and unique to each system.
• Multi-factor Authentication (MFA): Requires two or more
authentication factors for added security. Examples include passwords
combined with fingerprint scanners, one-time codes sent via SMS, or
security tokens.
• Biometric Authentication: Utilizes unique physical or behavioral
characteristics like fingerprints, facial recognition, or iris scans for user
identification.

Diagram of User Authentication:
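Password checks should never compare or store plaintext. Here is a minimal sketch of salted password hashing with Python's standard library (the iteration count and function names are our own choices):

import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=200_000):
    """PBKDF2-HMAC-SHA256; the random salt defeats precomputed-table attacks."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify(password, salt, stored):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))   # True
print(verify("wrong guess", salt, stored))                    # False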

2. Access Control Mechanisms
Access control mechanisms regulate how users and processes can access
system resources (files, devices, memory) based on their permissions. Here
are some common methods:

• Discretionary Access Control (DAC): File owners determine access permissions (read, write, execute) for other users and groups.
• Mandatory Access Control (MAC): A centralized system enforces
access control policies, overriding user-defined permissions. Often
used in high-security environments.
• Role-Based Access Control (RBAC): Users are assigned roles with
predefined permissions, simplifying access control management.

Diagram of Discretionary Access Control Mechanism (DAC):

3. Security Policies

Security policies are formal documents that outline the rules and procedures
for maintaining system security. These policies define acceptable use,
password management practices, data classification, and incident response
procedures.

Security Policy Components:

• Acceptable Use Policy (AUP): Specifies what activities are permitted and prohibited on the system.
• Password Policy: Defines password complexity requirements and
frequency of password changes.
• Data Classification Policy: Classifies data based on sensitivity and
defines access control levels for each category.
• Incident Response Policy: Outlines procedures for handling security
incidents (e.g., data breaches, malware attacks).

Benefits of Security Policies:

• Improved Security Posture: Clearly defined policies provide a framework for secure system use.
• Reduced Risk: Policies help to prevent accidental or intentional
security breaches.
• Compliance: Security policies may be required for compliance with
industry regulations or organizational standards.

By implementing robust user authentication, access control mechanisms, and security policies, organizations can significantly enhance their system security and protect sensitive data.

• 🔷Security Threats: Malware, Viruses, Worms, Denial-of-Service attacks, Mitigation Strategies.

This section covers two crucial aspects of operating systems: security and
Input/Output (I/O) management.

1. System Security Concepts

Operating systems are responsible for protecting computer systems from unauthorized access, data breaches, and malicious software. Here are some key security concepts:

• User Authentication: The process of verifying a user's identity before granting access to system resources. This can involve usernames, passwords, multi-factor authentication (MFA), and biometrics (fingerprint, facial recognition).
• Access Control Mechanisms: Techniques to restrict access to
specific resources based on user privileges. Common mechanisms
include Access Control Lists (ACLs), Role-Based Access Control
(RBAC), and capabilities.

• Security Policies: Defined guidelines outlining acceptable behavior
and security practices for users and administrators.

2. Security Threats
Operating systems are constantly under threat from malicious software and
attacks. Here are some common threats to be aware of:

• Malware (Malicious Software): A broad term encompassing various software designed to harm a system. Examples include:
o Viruses: Self-replicating programs that infect files and spread to
other systems.
o Worms: Similar to viruses, but propagate independently without
infecting files.
o Trojan Horses: Disguised software that appears legitimate but
performs malicious actions upon execution.
• Denial-of-Service (DoS) Attacks: Attempts to overwhelm a system
with traffic, making it unavailable to legitimate users.
• Social Engineering: Exploiting human psychology to trick users into
revealing sensitive information or installing malware.

Diagram of a Typical Security Threat Scenario:

3. I/O Management

Operating systems manage communication between the CPU and peripheral devices like printers, disks, and keyboards. Here's an overview:

I/O Devices: Hardware components that enable interaction with the external environment (e.g., input devices like keyboards, output devices like monitors).

Device Drivers: Software programs that act as translators between the
operating system and specific I/O devices, allowing the OS to
communicate and control them.

I/O Operations: Actions performed on I/O devices, such as reading data from a disk or sending output to a printer. These operations can be:

Synchronous: The CPU waits for the I/O operation to complete before
proceeding.

Asynchronous: The CPU initiates the I/O operation and continues processing other tasks while waiting for completion.

Diagram of I/O Management:

4. I/O Scheduling Algorithms

When multiple I/O requests are pending, the operating system employs
scheduling algorithms to optimize their execution:

FCFS (First-Come-First-Serve): Processes requests in the order they arrive, potentially leading to inefficient head movement for disk drives.

SCAN: The I/O head moves back and forth across the disk, servicing
requests in the order they are encountered.

C-SCAN: Similar to SCAN, but the head services requests in one direction only, then jumps back to the start, giving more uniform wait times.

These algorithms aim to balance factors like response time, throughput (requests served per unit time), and disk arm movement.

• 🔷I/O Management: I/O Devices, Device Drivers, I/O Operations (Synchronous, Asynchronous).

In computer systems, Input/Output (I/O) Management refers to the essential task of controlling communication between the Central Processing Unit (CPU) and various peripheral devices. This section will delve into I/O devices, device drivers, and the two main approaches to handling I/O operations: synchronous and asynchronous.

1. I/O Devices

I/O devices are hardware components that enable a computer to interact with
the external world. These devices can be categorized into various types
based on their function:

• Storage Devices: Hard Disk Drives (HDDs), Solid-State Drives (SSDs), USB flash drives, etc. Used for storing and retrieving data.
• Input Devices: Keyboard, mouse, scanner, microphone, etc. Provide
input data to the computer system.
• Output Devices: Monitor, printer, speakers, etc. Used to display or
output information from the computer.
• Communication Devices: Network adapters, modems, etc. Facilitate
communication with other computers or networks.

Characteristics of I/O Devices:

• Data Transfer Rate: Speed at which data is transferred between the device and the CPU (measured in Mbps, Gbps).
• Latency: Time it takes for the device to respond to a request
(measured in milliseconds, microseconds).
• Seek Time (for storage devices): Time it takes for the device to
locate a specific piece of data on the storage media.
• Device Drivers: Software programs that act as interpreters between
the OS and the device.

2. Device Drivers

Device drivers are specialized software programs that act as interfaces between the operating system (OS) and specific I/O devices. They translate generic OS commands into device-specific instructions that the I/O device can understand.

Functions of Device Drivers:

• Device Initialization: Prepares the device for operation during system startup.
• Data Transfer: Handles the transfer of data between the device and
memory.
• Error Handling: Detects and recovers from errors that may occur
during data transfer.
• Interrupt Handling: Responds to interrupt signals sent by the device
to notify the CPU of events.

Diagram of I/O System with Device Driver:

3. I/O Operations: Synchronous vs. Asynchronous

There are two main approaches for handling I/O operations in an operating
system: synchronous and asynchronous.

Synchronous I/O:

• The CPU initiates an I/O operation and waits for it to complete before
proceeding.
• The CPU is blocked and cannot perform other tasks while waiting for
the I/O operation.
• This approach is simpler to implement but can lead to inefficient use of
CPU resources.

Diagram of Synchronous Vs Asynchronous I/O:

Asynchronous I/O:
• The CPU initiates an I/O operation and then continues to execute other
instructions without waiting for the I/O to finish.
• The I/O device signals the CPU (through an interrupt) when the
operation is complete.
• This approach allows the CPU to utilize its time more efficiently while
I/O operations are ongoing.
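The contrast can be sketched with Python's asyncio (requires Python 3.9+ for asyncio.to_thread; the file name is hypothetical):

import asyncio

def slow_read(path):
    with open(path, "rb") as f:    # a blocking call, run in a worker thread
        return f.read()

async def main():
    # Asynchronous: start the I/O, then keep the CPU busy with other work.
    read_task = asyncio.create_task(asyncio.to_thread(slow_read, "big.bin"))
    print("CPU continues with other work while the read is in flight...")
    data = await read_task          # collect the result once the I/O completes
    print(len(data), "bytes read")

asyncio.run(main())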

• 🔷I/O Scheduling Algorithms: FCFS, SCAN, C-SCAN for handling I/O requests.

I/O (Input/Output) scheduling algorithms are critical components of operating systems, responsible for managing requests from devices like hard disks, printers, and network interfaces. These algorithms determine the order in which the operating system will service these requests, aiming to optimize performance and minimize waiting times. Here, we will explore three common I/O scheduling algorithms: First Come First Served (FCFS), SCAN (Elevator Algorithm), and C-SCAN (Circular SCAN).

1. First Come First Served (FCFS)

FCFS is a simple and intuitive algorithm that serves requests in the order they
arrive. The first request submitted gets processed first, followed by the
second, and so on.

Pros:

• Easy to understand and implement.
• Fair - all requests get a chance in the order they arrive.

Cons:

• Requests located far from the current head position incur long seek times and may wait behind many nearer requests.
• Ignores the relative positions of requests, potentially leading to inefficient head movement.

Diagram of FCFS:

2. SCAN (Elevator Algorithm)


SCAN, also known as the Elevator Algorithm, works similarly to an elevator
servicing different floors. The head starts at a specific position and moves in
one direction (e.g., towards the highest cylinder number) servicing all pending
requests it encounters. Once it reaches the end (highest or lowest cylinder), it
reverses direction and services remaining requests in the opposite direction.

Pros:

• Reduces seek time compared to FCFS by minimizing head movement in one direction.
• Favors requests closer to the current head position, servicing them promptly along the sweep.

Cons:

• Requests that lie just behind the head when the sweep begins may experience significant delays, since the head must reach the far end before reversing.

Example:

For a request queue of 170, 43, 140, 24, 60, and 85 with the head starting at cylinder 50 and moving upward:

1. The head moves toward the highest cylinder, servicing requests in increasing order: 60, 85, 140, and 170.
2. The head then reverses direction and services the remaining requests: 43, then 24.

Total head movement is (170 - 50) + (170 - 24) = 120 + 146 = 266 cylinders. SCAN outperforms FCFS here because head movement within each sweep is minimized.

3. C-SCAN (Circular SCAN)

C-SCAN is a variant of SCAN in which requests are serviced in only one direction. The head scans from one end of the disk to the other, servicing all requests in its path, then immediately jumps back to the starting end (without servicing requests on the return trip) to begin the next scan cycle.

Pros:

• Provides more uniform wait times than SCAN, since every region of the disk is visited once per sweep cycle.
• Ensures all requests are serviced eventually, without long delays for specific regions of the disk.

Cons:

• May introduce longer waiting times for requests placed near the
starting position if the head needs to make a full sweep before
servicing them.

Diagram of C-SCAN:

(The head scans continuously in one direction, then jumps back to the starting position.)

• 🔷Case Studies: Exploring different types of operating systems (e.g., Windows, Linux, Android)

In the previous sections of this course, we explored the fundamental concepts and functionalities of operating systems. Now, let's delve deeper into specific types of operating systems commonly used today. We will examine three prominent examples - Windows, Linux, and Android - highlighting their characteristics, design choices, and areas of application.
1. Windows Operating System

Overview:

Microsoft Windows is a family of proprietary operating systems developed by Microsoft Corporation. It is the dominant desktop operating system globally, known for its user-friendly graphical user interface (GUI) and extensive software ecosystem.

Characteristics:

• User Interface: Primarily uses a graphical user interface (GUI) with windows, icons, menus, and a pointer (mouse).
• Multitasking: Allows multiple programs to run concurrently and switch
between them easily.
• Memory Management: Uses virtual memory to provide processes with
more memory than physically available.
• Security: Implements various security features such as user accounts,
access control, and firewalls.
• Hardware Compatibility: Designed to work with a wide range of
hardware components.

Design Choices:

• Focus on User Friendliness: Windows prioritizes a user-friendly interface with intuitive features for ease of use.
• Closed Source: Windows is a proprietary operating system whose source code is not publicly available.

Areas of Application:

• Personal Computers: Primarily used on desktops and laptops for personal and professional tasks.
• Gaming: Popular platform for PC gaming due to its compatibility with
various hardware and software.
• Business Environments: Widely used in businesses due to its
extensive software ecosystem and familiarity for users.

Diagram of Windows Desktop:


2. Linux Operating System


Overview:

Linux is a family of open-source operating systems based on the Linux kernel. It is known for its flexibility, security, and customization options. Linux distributions (distros) combine the Linux kernel with additional software packages tailored for specific needs.

Characteristics:

• Open Source: Linux source code is freely available, allowing modification and customization.
• Command-Line Interface (CLI): Primarily uses a command-line interface (CLI) for interaction, though graphical user interfaces (GUIs) are also available.

• Multitasking: Similar to Windows, Linux allows for multitasking with
multiple programs running concurrently.
• Security: Known for its robust security features due to its open-source
nature and continuous community scrutiny.
• Hardware Compatibility: Supports a wide range of hardware, but
compatibility can vary depending on the specific distro.

Design Choices:

• Openness and Customization: Prioritizes open-source development, allowing for customization and adaptation to different needs.
• Flexibility: Offers a wide range of distributions with varying functionalities and user interfaces.

Areas of Application:

• Servers: Widely used as server operating systems due to their reliability, security, and scalability.
• Web Development: Popular platform for web development and hosting due to its open-source nature and powerful tools.
• Embedded Systems: Used in embedded systems like routers, network devices, and smart devices due to its efficiency and flexibility.

Diagram of Linux Terminal:


3. Android Operating System

Overview:

Android is a mobile operating system based on the Linux kernel, developed by Google primarily for smartphones and tablets. It is an open-source platform with a large developer community creating a vast library of applications.

Characteristics:

• Touchscreen Interface: Designed for touchscreens, with a user
interface optimized for finger interaction.
• Multitasking: Allows multiple apps to run in the background, and users
can switch between them easily.
• Resource Management: Optimized for mobile devices with limited
resources, focusing on efficient memory and power usage.
• App Ecosystem: Offers a vast library of applications (apps) available
through the Google Play Store.
• Open Source (with modifications): While the Android platform is
based on open-source Linux, specific modifications by manufacturers
and service providers can affect the user experience.

Design Choices:

• Mobile Optimization: Prioritizes a user-friendly touch interface and efficient resource management for mobile devices.
• Openness and Customization: Leverages open-source development while allowing manufacturers some customization.

Areas of Application:

• Smartphones and Tablets: The dominant operating system for smartphones and tablets globally.
• Wearables: Increasingly used in smartwatches and other wearable devices.

Diagram of Android Home Screen:


Conclusion:

These three case studies showcase a variety of operating system designs and applications. Each type has its strengths and weaknesses, making it suitable for different needs. Understanding the distinctions between these operating systems is crucial for appreciating the range of design choices covered throughout this course.
