CSC 303 - Operating Systems - 78 Pages
Course Description:
Course Outline:
2: Process Management
• 🔷Processes and Threads: Concepts of Processes, Multiprogramming,
Context Switching.
• 🔷Process States: Running, Waiting, Ready, Blocked, Terminated states
and their transitions.
• 🔷Process Scheduling: Scheduling algorithms (FCFS, SJF, Priority,
Round Robin, Multilevel Queue), Scheduling Criteria.
• 🔷Inter-process Communication (IPC): Shared Memory, Semaphores,
Message Passing for process coordination.
• 🔷Deadlocks: Concepts, Prevention, Detection, and Recovery
mechanisms.
• 🔷Programming Assignment 1: Implement basic process scheduling
algorithms in a chosen programming language.
3: Memory Management
• 🔷Memory Hierarchy: Cache, Main Memory, Secondary Storage
(HDD/SSD).
• 🔷Memory Allocation Techniques: Contiguous Allocation, Paging,
Segmentation, Virtual Memory.
• 🔷Address Translation: Virtual Memory concepts, Translation
Lookaside Buffer (TLB).
• 🔷Memory Protection: Memory Protection mechanisms (Memory
Management Units - MMU) to prevent process interference.
• 🔷Memory Replacement Policies: FIFO, LRU, Optimal Page
Replacement Algorithms.
• 🔷Programming Assignment 2: Simulate memory allocation and
replacement algorithms in a chosen programming language.
4: File Management
• 🔷File System Concepts: File Structure, Directory Management, Access
Methods (Sequential, Indexed).
• 🔷File System Implementation: Disk Scheduling Algorithms (FCFS,
SCAN, C-SCAN).
• 🔷File Allocation Methods: Contiguous Allocation, Linked Allocation,
Indexed Allocation.
• 🔷File Sharing and Access Control: Mechanisms for secure file access
and sharing.
• 🔷File System Protection: Techniques to prevent unauthorized access
and data corruption.
• 🔷Programming Assignment 3: Implement basic file system operations
(create, read, write, delete) in a chosen programming language.
Course Objectives:
• Define the concept of an operating system and its role in a computer system.
• Explain the different types of operating systems (e.g., batch, interactive, real-
time, distributed).
• Describe the core functionalities of an OS, including process management,
memory management, file management, I/O management, and device
management.
• Analyze different process scheduling algorithms (e.g., FCFS, SJF, priority,
round-robin).
• Apply memory management techniques like paging and segmentation.
• Explain file system concepts like directory structures, file allocation methods,
and access control.
• Understand the principles of I/O management, including device drivers,
interrupts, and buffering.
• Implement basic operating system functionalities in a chosen programming
language (e.g., simulate process scheduling algorithms).
• Evaluate the performance and trade-offs of different operating system design
choices.
• Appreciate the historical development of operating systems.
__________________________________
1: Introduction to Operating Systems
• 🔷What is an Operating System (OS)? Definitions, Functions, and
History of Operating Systems.
An operating system (OS) acts as the core software that manages computer
hardware resources and provides a platform for running application programs.
It essentially acts as an intermediary between the hardware and the user,
facilitating communication and resource utilization.
1.2 Definitions of Operating Systems
• Time-Sharing Systems (1960s-1970s): Enabled multiple users to
share a single computer system by dividing processing time among
them, giving the illusion of simultaneous use.
• Personal Computer Operating Systems (1970s-Present):
Development of operating systems specifically designed for personal
computers, with user-friendly interfaces like the Apple II's command
line and later the graphical user interface (GUI) popularized by the
Macintosh.
• Modern Operating Systems (Present): Operating systems have
become increasingly sophisticated, supporting networking,
multitasking, security features, and virtual memory management for
efficient resource utilization.
1. Batch Systems:
• Characteristics:
o Suitable for high-volume, repetitive tasks (e.g., payroll
processing, scientific calculations).
o Less user interaction compared to interactive systems.
o Efficient for utilizing system resources by minimizing idle time.
2. Multitasking Systems:
3. Real-Time Systems:
4. Distributed Systems:
• 🔷System Architecture: Layered architecture of an OS, Kernel Mode vs.
User Mode.
This section dives into the fundamental structure of operating systems (OS)
by exploring the layered architecture and the distinction between kernel mode
and user mode.
Description of Layers:
Diagram of Kernel Mode vs. User Mode:
Summary:
The layered architecture and distinction between kernel mode and user mode
are fundamental concepts in operating systems. This layered approach
promotes modularity, security, and efficient resource management.
Understanding these concepts is crucial for appreciating how operating
systems provide a platform for user applications to interact with the underlying
hardware.
1. Process Management
• Process Creation: Spawning new processes based on user requests
or program execution.
• Process Scheduling: Deciding which process gets access to the CPU
for execution at any given time. Scheduling algorithms like FCFS (First
Come, First Served), SJF (Shortest Job First), and Priority scheduling
determine the order of process execution.
• Process Termination: Ending processes that have completed
execution, encountered errors, or been terminated by the user.
• Inter-process Communication (IPC): Facilitating communication and
resource sharing between different processes. Mechanisms like shared
memory, semaphores, and message passing enable processes to
coordinate and exchange data.
2. Memory Management
• Virtual Memory: Creating a virtual memory space that allows
processes to utilize more memory than physically available on the
system. This is achieved by using secondary storage (hard disk) as an
extension of main memory.
• Memory Protection: Preventing processes from interfering with each
other's memory space by employing Memory Management Units
(MMU) to isolate processes and enforce memory access restrictions.
Diagram of the Paging Table:

 Pages (logical view)          Frames (physical memory)
 +----+----+----+----+         +----+----+------+----+
 | P1 | P1 | P2 | P3 |  -->    | P2 | P3 | Free | P1 |
 +----+----+----+----+         +----+----+------+----+

Process 1: Spreads across non-contiguous pages in physical memory
3. File Management
/ (root directory)
+--- Documents
|    +--- report.txt
|    +--- images
|         +--- picture1.jpg
|         +--- picture2.png
+--- Downloads
     +--- software.exe
2: Process Management
• 🔷Processes and Threads: Concepts of Processes, Multiprogramming,
Context Switching.
This section dives into the fundamental concepts of processes and threads, a
cornerstone of operating system functionality. We will explore how processes
and threads manage program execution, enabling multitasking and efficient
resource utilization.
1. Processes
Process States:
2. Multiprogramming
Operating systems employ multiprogramming to manage multiple processes
concurrently. While only one process can actively execute on the CPU at a
time, the OS can rapidly switch between processes, creating the illusion of
simultaneous execution.
Benefits of Multiprogramming:
• Improved System Response Time: Users perceive faster response
times as the OS can quickly switch to a ready process when the
current process requires I/O.
• Efficient Resource Management: System resources like memory are
utilized effectively by running multiple processes concurrently.
3. Threads
A thread is a lightweight unit of execution within a process. Multiple threads
can exist and execute concurrently within a single process, sharing the same
address space and resources like open files. This allows for finer-grained
control within a process compared to separate heavyweight processes.
Benefits of Threads:
Context Switching:
Context switching refers to the process of saving the state of one process (or
thread) and restoring the state of another when the CPU needs to switch
execution. This involves saving and restoring registers, program counters,
and other process-specific data. Context switching between threads is
generally faster than context switching between processes due to the shared
address space.
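The shared address space described above can be illustrated with a minimal sketch in Python (the counter value, thread count, and iteration count are arbitrary choices for the example):

```python
import threading

counter = 0                 # shared state: every thread sees this same variable
lock = threading.Lock()     # protects the shared counter from races

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:          # critical section: one thread updates at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 — all four threads updated the same memory
```

Without the lock, concurrent increments could interleave and lose updates, which is exactly why threads sharing an address space need synchronization.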
This section dives into the concept of process states, a fundamental aspect of
process management in operating systems. We will explore the different
states a process can be in and the transitions that occur between them.
1. Processes and Process States
A process goes through various states throughout its lifecycle. These states
define the current activity or execution status of the process:
1. Scheduling Concepts
• Process: A program in execution. Each process requires CPU,
memory, and other resources to run.
• Scheduling Queue: A data structure that holds processes waiting for
CPU allocation.
• Context Switching: The process of saving the state of the current
running process and loading the state of the newly selected process for
execution.
• Scheduling Criteria: Factors considered when selecting a process for
CPU allocation, such as:
o Waiting Time: The amount of time a process has spent waiting
for the CPU.
o Turnaround Time: The total time taken for a process to
complete its execution (from submission to completion).
o Response Time: The time it takes for a process to start running
after it submits a request for CPU access.
o Throughput: The number of processes completed per unit time.
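These criteria can be made concrete with a small worked example. The sketch below computes waiting time and turnaround time for a hypothetical FCFS schedule (the process names, arrival times, and burst times are illustrative values, not from the text):

```python
# Processes as (name, arrival_time, burst_time) — illustrative example values
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

time = 0
metrics = {}
for name, arrival, burst in procs:       # FCFS: run in arrival order
    start = max(time, arrival)
    finish = start + burst
    metrics[name] = {
        "waiting": start - arrival,      # time spent in the ready queue
        "turnaround": finish - arrival,  # submission to completion
    }
    time = finish

print(metrics)
# P1 waits 0, P2 waits 4, P3 waits 6 — average waiting time is 10/3 ≈ 3.33
```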
2. Common Scheduling Algorithms
o First Come, First Served (FCFS):
▪ Concept: Processes are executed in the order they arrive in the
ready queue.
o Shortest Job First (SJF):
▪ Concept: The process with the shortest expected CPU burst
time is selected for execution first.
▪ Advantages: Minimizes average waiting time and turnaround
time.
▪ Disadvantages: Requires knowing process execution times in
advance, which may not be feasible; long processes can starve.
o Priority Scheduling:
▪ Concept: Processes are assigned priorities. Higher priority
processes are executed first.
▪ Diagram:
▪ Advantages: Useful for real-time systems where certain
processes require guaranteed execution.
▪ Disadvantages: Starvation can occur for lower priority
processes if high priority processes are constantly arriving.
o Round Robin (RR):
▪ Concept: Processes are allocated a fixed time slice (quantum).
When a process's time slice is complete, it is preempted and
placed at the back of the ready queue. The next process in the
queue is then given the CPU.
▪ Advantages: Ensures fairness for all processes, good for
interactive systems.
▪ Disadvantages: Context switching overhead can reduce overall
performance for CPU-bound processes.
o Multilevel Queue Scheduling:
▪ Concept: Combines multiple scheduling algorithms. Processes
are organized into different queues with different priorities and
scheduling algorithms applied to each queue.
▪ Advantages: Provides flexibility to handle different types of
processes effectively.
▪ Disadvantages: Complexity increases
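Of the algorithms above, Round Robin is the easiest to simulate directly. The sketch below runs processes in fixed time slices (the process names, burst times, and quantum are illustrative values):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin; returns the order in which processes finish.

    bursts: {pid: total CPU time needed}.
    """
    queue = deque(bursts.keys())        # ready queue, in arrival order
    remaining = dict(bursts)
    order = []
    while queue:
        pid = queue.popleft()
        remaining[pid] -= min(quantum, remaining[pid])  # run one time slice
        if remaining[pid] == 0:
            order.append(pid)           # process finished
        else:
            queue.append(pid)           # preempted: back of the ready queue
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))  # ['P3', 'P2', 'P1']
```

Note how the short job P3 finishes first even though it arrived last in the queue order shown — each process gets regular slices of the CPU.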
1. Shared Memory
Shared memory is a technique where processes can directly access and
modify the same portion of memory. This allows for efficient data exchange
between processes without the need for explicit data copying.
Diagram of Shared Memory IPC:
2. Semaphores
Types of Semaphores:
Advantages of Semaphores:
Disadvantages of Semaphores:
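A counting semaphore can be illustrated with Python's `threading.Semaphore` (the limit of two concurrent users and the thread count are arbitrary choices for the sketch):

```python
import threading
import time

slots = threading.Semaphore(2)       # counting semaphore initialized to 2
lock = threading.Lock()              # protects the bookkeeping counters
active = 0
peak = 0

def use_resource():
    global active, peak
    with slots:                      # wait (P): blocks once 2 threads are inside
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)             # hold the resource briefly
        with lock:
            active -= 1
                                     # signal (V) happens when 'with slots' exits

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("peak concurrent users:", peak)   # never exceeds the semaphore count of 2
```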
3. Message Passing
• Direct: Messages are sent directly to a specific destination process.
• Indirect: Messages are sent to a mailbox or queue, and the receiving
process retrieves them.
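Indirect message passing through a mailbox can be sketched with a thread-safe queue (here Python's `queue.Queue` stands in for an OS-managed mailbox; the messages themselves are illustrative):

```python
import queue
import threading

mailbox = queue.Queue()             # indirect: senders post to a shared mailbox

def sender(msgs):
    for m in msgs:
        mailbox.put(m)              # copy each message into the mailbox
    mailbox.put(None)               # sentinel: no more messages

def receiver(out):
    while True:
        m = mailbox.get()           # blocks until a message is available
        if m is None:
            break
        out.append(m)

received = []
t1 = threading.Thread(target=sender, args=(["ping", "pong"],))
t2 = threading.Thread(target=receiver, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['ping', 'pong']
```

Unlike shared memory, the processes (threads here) never touch each other's data directly — coordination happens entirely through the mailbox.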
Diagram of a Deadlock Scenario:
2. Deadlock Detection and Recovery
Despite prevention measures, deadlocks can still occur. Here's how to handle
them:
• Deadlock Detection: Use algorithms like resource-wait graphs or
dependency matrices to detect circular wait situations.
• Deadlock Recovery: Once a deadlock is detected, various
approaches can be taken:
o Process Termination: Terminate one or more processes
involved in the deadlock, releasing their held resources. This
should be a last resort due to potential data loss.
o Resource Preemption: Forcefully take away resources from
some processes and allocate them to resolve the deadlock. This
can be risky if the process hasn't completed its critical section
yet.
o Rollback: Roll back processes involved in the deadlock to a
safe state and restart them, potentially losing some progress.
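Deadlock detection on a wait-for graph reduces to cycle detection. A minimal sketch (the graphs shown are hypothetical examples):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph {process: [processes it waits on]}."""
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, []):
            if q in on_stack:                # back edge: circular wait found
                return True
            if q not in visited and dfs(q):
                return True
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits on P2, P2 on P3, P3 on P1 -> circular wait (deadlock)
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```

Once `has_deadlock` reports True, one of the recovery approaches above (termination, preemption, or rollback) must break the cycle.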
Learning Objectives:
You can choose a language you're comfortable with, such as C, C++, Java, or
Python. Each language has its specific libraries or functions for simulating
process execution. Refer to your chosen language's documentation for details
on process management functionalities.
Implementation Steps:
Remember:
3: Memory Management
• 🔷Memory Hierarchy: Cache, Main Memory, Secondary Storage
(HDD/SSD).
The memory hierarchy typically consists of the following levels, arranged from
fastest and smallest to slowest and largest:
1. Registers: These are built-in memory locations within the CPU that
can be accessed very quickly. They are used to store temporary data
and instructions that the CPU is currently working on. (Size: a few
hundred bytes in total)
2. Cache: This is a small, high-speed memory located between the CPU
and main memory. It stores frequently accessed data and instructions
from main memory, allowing the CPU to retrieve them much faster.
(Size: Megabytes (MB))
3. Main Memory (RAM): This is the primary memory of the computer,
where programs and data are stored while they are actively being
used. It is faster than secondary storage but slower than cache and
registers. (Size: Gigabytes (GB))
4. Secondary Storage (HDD/SSD): This is non-volatile storage that
retains data even when the computer is powered off. It is slower than
main memory but provides much larger storage capacity for data and
programs that are not actively in use. Secondary storage includes:
o Hard Disk Drive (HDD): Uses magnetic platters to store data,
offering high capacity but slower access times.
o Solid State Drive (SSD): Uses flash memory chips and
provides faster access times than HDDs but often has lower
capacity and higher cost per gigabyte. (Size: Terabytes (TB) and
beyond)
Diagram of Memory Hierarchy:
The operating system plays a crucial role in managing the memory hierarchy.
When the CPU needs data, it first checks the registers. If the data is not
found, it moves on to the cache. If the data is not present in the cache (cache
miss), the OS retrieves it from main memory. If the data is not in main
memory (main memory miss), the OS must then fetch it from secondary
storage (HDD/SSD), which is the slowest access. This process involves disk
I/O operations, which can significantly impact performance.
• 🔷Memory Allocation Techniques: Contiguous Allocation, Paging,
Segmentation, Virtual Memory.
1. Contiguous Allocation
Contiguous allocation is a straightforward approach where a process is
allocated a single contiguous block of memory to hold its entire code, data,
and stack. The OS maintains a free memory list to track available memory
blocks.
Advantages:
• Simple to implement
• Fast memory access (contiguous block allows for sequential access)
Disadvantages:
• Internal fragmentation: Memory allocated to a process may not be
fully utilized, leaving unused space within the allocated block.
• Difficulty in loading new processes: Finding a large enough
contiguous block to fit a new process can be challenging, especially
with external fragmentation.
2. Paging
Paging is a memory management technique that divides both physical
memory (RAM) and logical memory (process address space) into fixed-size
blocks called frames and pages, respectively. The OS maintains a page table
that maps logical page numbers from a process's address space to physical
frame numbers in RAM.
Diagram of Paging:
Advantages:
Disadvantages:
3. Segmentation
Segmentation is another memory allocation technique that divides a process's
logical memory into variable-sized segments based on its logical structure
(e.g., code, data, stack). Each segment has its own base address and length.
A segment table maps logical segment addresses to physical memory
addresses.
Diagram of Segmentation:
Advantages:
Disadvantages:
4. Virtual Memory
• Process Isolation: Each process has its own virtual address space,
preventing interference between processes.
• Efficient Memory Utilization: Allows processes to use more memory
than physically available, improving memory utilization.
• Simplified Memory Management: Programs can access memory
using virtual addresses without needing to know the physical layout.
• Page Table Lookup: The virtual address is divided into two parts: a
Virtual Page Number (VPN) and an Offset. The VPN is used as an
index to lookup a page table entry in memory. The page table entry
contains a Physical Frame Number (PFN), which points to the physical
memory frame where the corresponding virtual page is located.
• Adding Offset: The offset part of the virtual address remains
unchanged. It is added to the physical frame number obtained from the
page table to get the final physical memory address.
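The two steps above can be sketched in code. The 4 KB page size, the page-table contents, and the dictionary-based TLB are simplifying assumptions for illustration:

```python
PAGE_SIZE = 4096                      # assumed page size (4 KB)

page_table = {0: 7, 1: 3, 2: 9}       # VPN -> PFN (hypothetical mapping)
tlb = {}                              # small cache of recent translations

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # split into VPN and offset
    if vpn in tlb:                           # TLB hit: skip the table walk
        pfn = tlb[vpn]
    else:                                    # TLB miss: walk the page table
        pfn = page_table[vpn]
        tlb[vpn] = pfn                       # cache the translation
    return pfn * PAGE_SIZE + offset          # frame base + unchanged offset

print(translate(4100))   # VPN 1, offset 4 -> 3*4096 + 4 = 12292
```

Note how the offset passes through unchanged; only the page number is translated, and repeated accesses to the same page are served from the TLB.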
Benefits of TLB:
TLB Management:
• TLB Entries: The TLB has a limited number of entries, and entries are
filled dynamically based on recently accessed virtual addresses.
• TLB Misses: If the virtual address is not found in the TLB, a page table
lookup is required, and the TLB may be updated with the new
translation.
Explanation:
5. The MMU also checks the access permissions associated with the
memory location based on the MMU table.
6. If the access is permitted, the MMU allows the memory access to
proceed.
7. If the access violates permissions (e.g., trying to write to a read-only
memory region), the MMU raises a memory protection fault, and the
operating system intervenes.
1. First-In-First-Out (FIFO)
Diagram of FIFO Replacement Policy:
Advantages of FIFO:
• Simple to implement.
• Fair to all pages, preventing starvation (a page being constantly
replaced before it can be used).
Disadvantages of FIFO:
• Can suffer from Belady's Anomaly, where adding more page frames
can paradoxically increase the number of page faults. FIFO also
ignores how often a page is used, so heavily used pages may be
evicted simply because they were loaded earliest.
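A short simulation makes FIFO's behavior, including Belady's Anomaly, visible. The reference string below is a commonly used illustrative one:

```python
from collections import deque

def fifo_faults(refs, frame_count):
    """Count page faults for FIFO replacement over a reference string."""
    frames = deque()                       # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:             # page fault: page not resident
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()           # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_faults(refs, 4))  # 10 faults with 4 frames — Belady's Anomaly
```

Counterintuitively, giving FIFO one extra frame here produces more faults, which is exactly the anomaly described above.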
2. Least Recently Used (LRU)
LRU (Least Recently Used) replaces the page that hasn't been used for the
longest duration. The OS keeps track of page usage and prioritizes keeping
recently used pages in memory, based on the assumption that recently used
pages are more likely to be accessed again soon.
Implementation of LRU:
• Each page has a "use" bit that gets set whenever the page is
accessed.
• A "clock" or timestamp mechanism can also be used to track the last
access time for each page.
• When a page replacement is needed, the OS identifies the page with
the least recently set "use" bit or the oldest timestamp and evicts it.
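The bookkeeping above can be sketched compactly; here an `OrderedDict` stands in for the use bits or timestamps, and the reference string is illustrative:

```python
from collections import OrderedDict

def lru_faults(refs, frame_count):
    """Count page faults for LRU replacement over a reference string."""
    frames = OrderedDict()                 # least recently used at the front
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.popitem(last=False) # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```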
Diagram of LRU Replacement Policy:
Advantages of LRU:
Disadvantages of LRU:
3. Optimal Page Replacement (OPT)
Concept of OPT: OPT replaces the page that will not be needed for the
longest time in the future. Because it requires knowledge of future
references, it serves as a theoretical benchmark rather than a practical
policy.
Disadvantages of OPT:
• System workload characteristics (e.g., random vs. sequential access
patterns).
• Hardware limitations (e.g., complexity of maintaining additional data
structures).
• Performance trade-offs (balancing simplicity with efficiency).
FIFO is simple to implement but can suffer from Belady's Anomaly. LRU offers
better performance but adds some overhead for tracking usage. While OPT is
the best-case scenario, it's not practical for real systems. In practice,
variations and combinations of these basic policies are often used to achieve
optimal performance for specific systems.
Objectives:
Tasks:
2. Memory Replacement Simulation:
3. Performance Evaluation:
• Track and record the number of page faults (for paging and
segmentation) that occur during the simulation for each replacement
algorithm. A page fault happens when a required page/segment is not
currently in memory and needs to be loaded from secondary storage.
• Calculate the page fault ratio (number of page faults / total memory
access requests) for each algorithm. This metric indicates the
efficiency of the memory replacement strategy. Lower page fault ratios
are desirable.
4. Code Structure:
Remember:
4: File Management
• 🔷File System Concepts: File Structure, Directory Management, Access
Methods (Sequential, Indexed).
1. File Structure
+-----------------+
|  File Metadata  |
+-----------------+
|      Data       |
+-----------------+
• File Metadata: Optional header information stored at the beginning of
a file containing details like file size, creation date, access permissions,
etc.
• Data: The actual content of the file, which can be text, images, videos,
programs, or any other digital information.
2. Directory Management
Directories (also called folders) are used to organize and group files within a
file system. They act like a hierarchical tree structure, allowing users to create
subdirectories within parent directories for nested organization.
Directory Structure:
/ (Root Directory)
|- Folder1
|  |- File1.txt
|  |- File2.jpg
|  |- Subfolder1
|     |- File3.doc
|- Folder2
|  |- File4.mp4
|- File5.exe
Directory Operations:
3. Access Methods
Access methods determine how data is retrieved from files. Here are two
common access methods:
• Sequential Access:
o Data is accessed sequentially, one unit (e.g., byte) after another,
similar to reading a book page by page.
o Suitable for files where data needs to be read or written in a
specific order (e.g., log files, text documents).
o Not efficient for randomly accessing specific parts of a file.
• Indexed Access:
o Files are divided into fixed-size blocks, and an index table keeps
track of the location of each block.
o Allows for random access of any part of the file by directly
referencing the block address in the index table.
o More efficient for accessing specific parts of a large file quickly.
The choice of access method depends on the type of data and how it will be
accessed. Sequential access is suitable for reading data in a specific order, while
indexed access is ideal for random access to any part of the file.
• 🔷File System Implementation: Disk Scheduling Algorithms (FCFS,
SCAN, C-SCAN).
Advantages:
Disadvantages:
• Can lead to long seek times if requests are scattered across the disk.
• Ignores the current head position entirely, so overall throughput suffers
when many requests arrive from different regions of the disk in quick
succession.
Diagram of FCFS Scheduling:
SCAN, also known as the Elevator Algorithm, aims to minimize seek time by
servicing requests in a specific direction. The disk head moves in one
direction (e.g., from inner tracks to outer tracks) until it reaches the end, then
reverses direction and services remaining requests in the opposite direction.
Advantages:
Disadvantages:
• Requests at the opposite end of the disk from the current head position
can experience long delays when many requests lie in the current
direction of travel.
Diagram of SCAN Scheduling:
Advantages:
Disadvantages:
• Might have higher seek times than SCAN if most requests are
clustered in one half of the disk.
Diagram of C-SCAN Scheduling:
1. Contiguous Allocation
Advantages:
Disadvantages:
2. Linked Allocation
Advantages:
Disadvantages:
Diagram of Linked Allocation:
• 🔷File Sharing and Access Control: Mechanisms for secure file access
and sharing.
This section dives into the mechanisms used by operating systems to enable
secure file access and sharing between users and applications. Effective file
sharing and access control are crucial for protecting sensitive data and
maintaining system integrity.
Common Permissions:
Permission Modes:
Permission Representation:
Changing Permissions:
Most operating systems provide tools for users with appropriate privileges to
change file permissions. This allows for granular control over file access.
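One widely used representation is the Unix-style scheme, where an octal digit per class (owner, group, other) encodes read/write/execute bits. A minimal sketch (the mode 754 is an arbitrary example):

```python
def mode_to_rwx(octal_str):
    """Render an octal mode string such as '754' as owner/group/other rwx text."""
    symbols = ((4, "r"), (2, "w"), (1, "x"))
    triads = []
    for digit in octal_str:          # one digit per class: owner, group, other
        value = int(digit)
        triads.append("".join(s if value & bit else "-" for bit, s in symbols))
    return "".join(triads)

print(mode_to_rwx("754"))  # rwxr-xr-- : owner all, group read+execute, other read
```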
2. Access Control Lists (ACLs)
ACLs provide a more flexible way to manage file access control by explicitly
specifying which users and groups can access a file and their corresponding
permissions. ACLs offer finer-grained control compared to basic owner-group-
other permissions.
Structure of an ACL:
Benefits of ACLs:
Diagram of an ACL Entry:
• Network File Systems (NFS): A distributed file system protocol that
allows users on different machines to access shared files over a
network.
• Cloud Storage: Storing files on remote servers (cloud) and sharing
them with others via access links or permissions.
2. File System Permissions
3. Encryption
• Encryption scrambles the contents of a file using a secret key, making
it unreadable to anyone without the proper decryption key. This
protects data confidentiality, even if unauthorized users gain access to
the file system.
• Different encryption techniques are available:
o Symmetric Encryption: Uses a single key for both encryption
and decryption.
o Asymmetric Encryption: Uses a public key for encryption and
a private key for decryption, enhancing security as the private
key remains confidential.
• File system encryption can be implemented at different levels:
o Full Disk Encryption: Encrypts the entire disk volume,
protecting all files.
o Individual File Encryption: Encrypts specific files, allowing
selective protection of sensitive data.
• Directory Management: A hierarchical structure using directories
(folders) to group related files and subdirectories for better
organization.
• Access Methods: Techniques for accessing file content. Common
methods include:
o Sequential Access: Reading/writing data sequentially from the
beginning of the file.
o Indexed Access: Using an index to locate specific data blocks
within the file efficiently (e.g., accessing a particular record in a
database file).
o Linked Allocation: Files are stored in non-contiguous sectors,
with each sector containing a pointer to the next sector in the
file. This eliminates external fragmentation but makes random
access slow (the pointer chain must be traversed) and adds
overhead for storing and managing the pointers.
o Indexed Allocation: A separate index table keeps track of the
data block locations for a file. This allows efficient random
access but adds complexity to the file system structure.
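The difference between chain traversal (linked) and direct lookup (indexed) can be sketched as follows; the block numbers and contents are hypothetical:

```python
# A toy disk: block number -> (data, pointer to the next block in the file).
disk = {
    4: ("A", 9),      # block 4 holds data "A" and points to block 9
    9: ("B", 2),
    2: ("C", None),   # None marks the end of the file's chain
}

def read_linked(start):
    """Linked allocation: must follow pointers block by block."""
    data, block = [], start
    while block is not None:
        contents, block = disk[block]
        data.append(contents)
    return "".join(data)

index_table = [4, 9, 2]                    # indexed allocation: block i of file

def read_block_indexed(i):
    """Indexed allocation: one table lookup reaches any block directly."""
    return disk[index_table[i]][0]

print(read_linked(4))          # 'ABC' — whole file via the chain
print(read_block_indexed(2))   # 'C'   — random access without traversal
```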
• Contiguous Allocation:
• Linked Allocation:
• Indexed Allocation:
5: Security and I/O Management
• 🔷System Security Concepts: User Authentication, Access Control
Mechanisms, Security Policies.
This section dives into the crucial aspects of operating system security,
focusing on user authentication, access control mechanisms, and security
policies. Understanding these concepts is essential for protecting computer
systems from unauthorized access and maintaining data integrity.
1. User Authentication
2. Access Control Mechanisms
Access control mechanisms regulate how users and processes can access
system resources (files, devices, memory) based on their permissions. Here
are some common methods:
Diagram of Discretionary Access Control Mechanism (DAC):
3. Security Policies
Security policies are formal documents that outline the rules and procedures
for maintaining system security. These policies define acceptable use,
password management practices, data classification, and incident response
procedures.
This section covers two crucial aspects of operating systems: security and
Input/Output (I/O) management.
• Security Policies: Defined guidelines outlining acceptable behavior
and security practices for users and administrators.
2 Security Threats
Operating systems are constantly under threat from malicious software and
attacks. Here are some common threats to be aware of:
3 I/O Management
Device Drivers: Software programs that act as translators between the
operating system and specific I/O devices, allowing the OS to
communicate and control them.
Synchronous: The CPU waits for the I/O operation to complete before
proceeding.
When multiple I/O requests are pending, the operating system employs
scheduling algorithms to optimize their execution:
SCAN: The I/O head sweeps back and forth across the disk, servicing
the requests that lie along its path in each direction.
C-SCAN: Similar to SCAN, but the head services requests in one
direction only, then returns to the start without servicing, which
provides more uniform wait times across the disk.
1. I/O Devices
I/O devices are hardware components that enable a computer to interact with
the external world. These devices can be categorized into various types
based on their function:
3. Synchronous and Asynchronous I/O
There are two main approaches for handling I/O operations in an operating
system: synchronous and asynchronous.
Synchronous I/O:
• The CPU initiates an I/O operation and waits for it to complete before
proceeding.
• The CPU is blocked and cannot perform other tasks while waiting for
the I/O operation.
• This approach is simpler to implement but can lead to inefficient use of
CPU resources.
Diagram of Synchronous Vs Asynchronous I/O:
• The CPU initiates an I/O operation and then continues to execute other
instructions without waiting for the I/O to finish.
• The I/O device signals the CPU (through an interrupt) when the
operation is complete.
• This approach allows the CPU to utilize its time more efficiently while
I/O operations are ongoing.
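The contrast can be sketched with a toy simulation: a sleeping thread stands in for the device, and a `threading.Event` stands in for the completion interrupt (the durations and the overlapped computation are arbitrary):

```python
import threading
import time

def device_io(duration, done_event):
    """Stands in for an I/O device completing a request."""
    time.sleep(duration)
    done_event.set()               # the 'interrupt': signal that the I/O finished

# Synchronous: start the I/O and block until it completes.
done = threading.Event()
threading.Thread(target=device_io, args=(0.1, done)).start()
done.wait()                        # the CPU (this thread) does nothing meanwhile
print("sync I/O complete")

# Asynchronous: start the I/O, keep computing, check the signal later.
done = threading.Event()
threading.Thread(target=device_io, args=(0.1, done)).start()
overlap_work = sum(range(100_000)) # useful work overlapped with the I/O
done.wait()                        # only now wait for the completion signal
print("async I/O complete; overlapped result:", overlap_work)
```

In the asynchronous case, the computation runs while the simulated device is busy, which is precisely the efficiency gain described above.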
FCFS is a simple and intuitive algorithm that serves requests in the order they
arrive. The first request submitted gets processed first, followed by the
second, and so on.
Pros:
Cons:
• Can lead to long waits for every queued request when successive requests
are far apart on the disk (seek times can be high).
• Ignores the seek time between requests, potentially leading to inefficient head
movement.
Diagram of FCFS:
Pros:
Cons:
• Requests placed far away from the initial head position and then in the
opposite direction may experience significant delays.
Example:
For a sample set of requests (170, 43, 140, 24, 60, and 85) with a starting
head position of 50:
SCAN offers better performance than FCFS in this scenario due to minimized
head movement in one direction.
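This comparison can be checked with a short calculation over the request set above:

```python
def seek_distance(order, head):
    """Total head movement for servicing requests in the given order."""
    total = 0
    for track in order:
        total += abs(track - head)
        head = track
    return total

requests = [170, 43, 140, 24, 60, 85]
head = 50

fcfs_order = requests                                   # arrival order
up = sorted(r for r in requests if r >= head)           # SCAN: sweep upward...
down = sorted((r for r in requests if r < head), reverse=True)  # ...then back
scan_order = up + down

print("FCFS:", seek_distance(fcfs_order, head))   # 521 tracks of movement
print("SCAN:", seek_distance(scan_order, head))   # 266 tracks of movement
```

SCAN's total head movement (266 tracks) is roughly half of FCFS's (521 tracks) for this request set, confirming the claim above.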
Pros:
Cons:
• May introduce longer waiting times for requests placed near the
starting position if the head needs to make a full sweep before
servicing them.
Diagram of C-SCAN:
Overview:
Characteristics:
Design Choices:
Areas of Application:
Windows Desktop UI
Characteristics:
• Multitasking: Similar to Windows, Linux allows for multitasking with
multiple programs running concurrently.
• Security: Known for its robust security features due to its open-source
nature and continuous community scrutiny.
• Hardware Compatibility: Supports a wide range of hardware, but
compatibility can vary depending on the specific distro.
Design Choices:
Areas of Application:
Linux Terminal UI
Overview:
Characteristics:
• Touchscreen Interface: Designed for touchscreens, with a user
interface optimized for finger interaction.
• Multitasking: Allows multiple apps to run in the background, and users
can switch between them easily.
• Resource Management: Optimized for mobile devices with limited
resources, focusing on efficient memory and power usage.
• App Ecosystem: Offers a vast library of applications (apps) available
through the Google Play Store.
• Open Source (with modifications): While the Android platform is
based on open-source Linux, specific modifications by manufacturers
and service providers can affect the user experience.
Design Choices:
Areas of Application:
Conclusion: