
LSSC COMPUTER SCIENCE

CHAPTER 5: OPERATING SYSTEMS


I. Introduction to Operating Systems
A. Definition and Purpose of an Operating System:
- An operating system (OS) is software that acts as an intermediary between the hardware and the software
applications of a computer system.

- It provides a set of services and functions that enable the efficient and effective management of computer
resources and facilitates user interaction.

- The primary purpose of an operating system is to provide a convenient and reliable environment for users to
execute their programs and utilize system resources.

Example: When you turn on your computer, the operating system (e.g., Windows, macOS, Linux) is
responsible for initializing the hardware components, such as the CPU and memory, and providing a user-
friendly interface for you to interact with.

B. Overview of the Key Functions of an Operating System:


1. Process Management:
Process Management is one of the key functions of an operating system. It involves managing and executing
multiple processes concurrently, ensuring efficient utilization of system resources and providing a responsive
environment for users. The main aspects of process management include:

1.1. Process Scheduling:


The operating system is responsible for scheduling and allocating CPU time to different processes. It decides
which process should run next and how long each process can utilize the CPU. Various scheduling algorithms
are used to prioritize processes based on factors such as priority levels, deadlines, and fairness.

1.2. Process Creation and Termination:


The operating system facilitates the creation and termination of processes. It provides mechanisms for creating
new processes from existing ones, allowing for the execution of multiple tasks concurrently. It also handles
the proper termination of processes, ensuring that resources are released and system integrity is maintained.

1.3 Process Communication and Synchronization:


Processes often need to communicate and synchronize with each other to share data, coordinate activities, and
avoid conflicts. The operating system provides inter-process communication mechanisms, such as shared
memory, message passing, and synchronization primitives (e.g., semaphores, mutexes), to facilitate
communication and coordination between processes.
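The mutual-exclusion idea above can be sketched in Python. This is a minimal, illustrative example: the threading.Lock plays the role of an OS-provided mutex, and the counter and thread counts are arbitrary. Without the lock, the read-modify-write on the shared counter could interleave and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()  # mutex: at most one thread holds it at a time

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # acquire before touching shared state
            counter += 1  # critical section: read-modify-write
        # lock is released automatically on leaving the with-block

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; without it, updates could be lost
```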

1.4 Process State and Control:


The operating system keeps track of the state of each process, including information such as process identifier,
memory allocation, CPU registers, and I/O status. It is responsible for managing the state transitions of
processes, such as when a process is created, waiting for I/O, executing, or terminated. The operating system
ensures that processes are properly scheduled, synchronized, and managed throughout their lifecycle.

KAMADJE SADJIE DJIENA ALLEN 1


1.5 Process Protection and Security:
The operating system enforces process protection and security mechanisms to prevent unauthorized access or
interference between processes. It ensures that processes are isolated from each other and that their resources
are protected, maintaining the overall system security and integrity.

1.6 Process Resource Management:


The operating system manages system resources required by processes, including memory, CPU time, I/O
devices, and network connections. It allocates and deallocates resources based on process requirements,
priorities, and fairness policies to ensure optimal utilization and prevent resource contention.

1.7 Process Fault Handling:


In the event of process failures or errors, the operating system handles process faults, such as crashes or
exceptions. It may perform actions like terminating the faulty process, generating error reports, and taking
necessary steps to prevent cascading failures or data corruption.

Process management plays a crucial role in maintaining system efficiency, responsiveness, and stability. The
operating system's effective management of processes allows for multitasking, resource sharing, and
coordination among various tasks running simultaneously.

2. Memory Management:
Memory Management is another essential function of an operating system. It involves managing the
allocation, utilization, and deallocation of memory resources in a computer system. The primary aspects of
memory management include:

2.1 Memory Allocation:


The operating system is responsible for allocating memory to processes and managing the memory space
available in the system. It tracks the free and occupied memory blocks, ensuring that processes are assigned
appropriate memory regions for their execution.
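One common allocation policy is first fit: scan the list of free blocks and take the first hole large enough for the request. The sketch below is a simplified model (block sizes and the free-list representation are illustrative, not how a real kernel stores them):

```python
# First-fit allocation sketch: memory is modeled as a list of (start, size) free blocks.
def first_fit(free_list, request):
    """Return (start, new_free_list) for the first hole that fits, else (None, free_list)."""
    for i, (start, size) in enumerate(free_list):
        if size >= request:
            # split the hole: the unused remainder stays on the free list
            remainder = [(start + request, size - request)] if size > request else []
            return start, free_list[:i] + remainder + free_list[i + 1:]
    return None, free_list  # no hole is large enough

free = [(0, 100), (200, 50), (300, 200)]
addr, free = first_fit(free, 120)  # only the 200-byte hole at 300 fits
print(addr, free)                  # 300 [(0, 100), (200, 50), (420, 80)]
```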

2.2 Memory Addressing:


The operating system provides mechanisms for translating logical addresses used by processes into physical
addresses in the computer's memory. It performs address mapping and ensures that processes can access the
correct memory locations.
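With paging, this translation splits a logical address into a page number and an offset, then looks the page up in a per-process page table. A toy model (the 4 KiB page size and the page-table contents are assumptions for illustration):

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Toy page table: logical page number -> physical frame number
page_table = {0: 5, 1: 9, 2: 1}

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)  # split into page number + offset
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset    # frame base + offset

print(translate(4096 + 42))  # page 1 maps to frame 9: 9*4096 + 42 = 36906
```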

2.3 Memory Protection:


The operating system enforces memory protection mechanisms to prevent processes from accessing memory
areas that they do not have permission to access. It assigns access rights to memory regions and enforces
memory protection policies to maintain the security and integrity of the system.

2.4 Memory Sharing:


The operating system allows processes to share memory regions to facilitate inter-process communication and
data sharing. It provides mechanisms such as shared memory, allowing multiple processes to access and
modify shared data.

2.5 Memory Paging and Swapping:
To efficiently utilize memory resources, the operating system implements techniques like paging and
swapping. Paging divides the memory into fixed-size pages, and processes are allocated memory in page-
sized units. Swapping involves moving parts of a process's memory from the main memory to secondary
storage (e.g., hard disk) when the memory becomes scarce.

2.6 Memory Cleanup and Deallocation:


When a process terminates or no longer requires a specific memory region, the operating system deallocates
the memory and makes it available for other processes. It ensures efficient memory reuse to prevent memory
fragmentation and maximize memory utilization.

2.7 Memory Fragmentation Management:


The operating system handles memory fragmentation, which can occur due to the allocation and deallocation
of memory blocks over time. It employs techniques such as compaction, which relocates allocated blocks to
consolidate scattered free space, to reduce fragmentation and ensure efficient memory allocation.

2.8 Memory Hierarchy Management:


The operating system coordinates the management of various levels of memory hierarchy, including cache
memory, main memory (RAM), and secondary storage (hard disk, solid-state drives). It optimizes memory
access, data caching, and data transfer between different levels of memory to improve overall system
performance.

Memory management is crucial for efficient utilization of memory resources, ensuring that processes have
sufficient memory for execution, preventing memory conflicts, and optimizing performance. The operating
system's effective memory management allows for multitasking, efficient memory allocation, and seamless
data sharing among processes.

Example: If you have multiple applications running simultaneously, the operating system allocates memory
space to each process, ensuring they don't interfere with each other and cause crashes or errors.

3. Device Management:
Device Management is a key function of an operating system that involves managing and controlling
peripheral devices connected to the computer system. These devices can include input/output (I/O) devices
such as keyboards, mice, printers, disks, network interfaces, and other hardware components. The main aspects
of device management include:

3.1 Device Recognition and Configuration:


The operating system detects and recognizes connected devices, identifying their type, capabilities, and
characteristics. It automatically configures the devices for proper operation, including setting up appropriate
device drivers and allocating necessary system resources.

3.2 Device Driver Management:


The operating system manages device drivers, which are software components that facilitate communication
between the operating system and the connected devices. It provides a driver interface that allows device
drivers to interact with the operating system and provides a standardized mechanism for device driver
installation, upgrade, and removal.

3.3 Device Allocation and Deallocation:
The operating system handles the allocation of devices to processes or users, ensuring that multiple processes
can use devices concurrently without conflicts. It manages device access and enforces access control policies
to prevent unauthorized access or interference.

3.4 Device Input and Output Handling:


The operating system provides mechanisms for handling input from devices (e.g., keyboard, mouse) and
output to devices (e.g., displays, printers). It manages device interrupts, polling, and buffering to efficiently
handle device I/O operations, ensuring smooth and responsive interaction with the devices.

3.5 Device Scheduling:


When multiple processes or users require access to the same device, the operating system schedules and
prioritizes the device access requests. It employs scheduling algorithms to ensure fair and efficient device
utilization, preventing device starvation and optimizing overall system performance.

3.6 Device Error Handling:


The operating system monitors and handles device errors, such as hardware failures, communication errors,
or timeouts. It detects and reports device errors, takes appropriate actions (e.g., retrying operations, notifying
users or processes), and manages fault recovery to maintain system stability.

3.7 Device Power Management:


The operating system manages the power states of devices, including powering them on, off, or into low-
power modes when they are not in use. It employs power management techniques to conserve energy, extend
device lifespan, and optimize power usage in the system.

3.8 Device Virtualization:


In virtualized environments, the operating system provides mechanisms for virtualizing devices, allowing
multiple virtual machines or guest operating systems to share and access physical devices efficiently. It ensures
proper device isolation, resource allocation, and virtual device abstraction.

Device management enables the operating system to effectively control and coordinate the operation of
various peripheral devices connected to the computer system. It ensures proper device recognition,
configuration, allocation, and error handling, facilitating seamless interaction between users, processes, and
the connected devices.

Example: When you plug in a USB flash drive, the operating system detects the device, installs the necessary
drivers, and provides access to the files stored on the drive.
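The driver interface idea can be sketched as a registry of driver objects that all expose the same operations, so the rest of the system never talks to a device directly. This is a conceptual model only; the class and device names are invented for illustration:

```python
# Sketch of a driver registry: the OS talks to every device through one interface.
class Driver:
    def read(self): raise NotImplementedError
    def write(self, data): raise NotImplementedError

class KeyboardDriver(Driver):
    def read(self): return "key: A"  # stand-in for real hardware input
    def write(self, data): raise IOError("keyboard is input-only")

class PrinterDriver(Driver):
    def __init__(self): self.spool = []
    def read(self): raise IOError("printer is output-only")
    def write(self, data): self.spool.append(data)  # buffered output

drivers = {}  # device name -> driver instance, filled in at device recognition

def register(name, driver):
    drivers[name] = driver

register("kbd0", KeyboardDriver())
register("lp0", PrinterDriver())
drivers["lp0"].write("hello")
print(drivers["kbd0"].read(), drivers["lp0"].spool)
```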

4. File Management:
File Management is a crucial function of an operating system that involves the organization, storage, retrieval,
and manipulation of files and directories. It provides a structured and efficient way to store and manage data
on storage devices. The key aspects of file management include:

4.1 File Creation and Deletion:


The operating system allows users and processes to create new files and directories. It provides mechanisms
for specifying file attributes such as name, location, size, and permissions. Similarly, it facilitates the deletion
of files and directories, freeing up storage space and removing references to the deleted files.
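These operations are exposed to programs through system calls, which Python wraps in its standard library. A minimal sketch of the create/inspect/delete cycle (the file name and contents are arbitrary):

```python
import os, tempfile

# Create a file, write to it, read back its size, then delete it.
dirpath = tempfile.mkdtemp()
path = os.path.join(dirpath, "notes.txt")

with open(path, "w") as f:        # creation: the OS adds a directory entry and allocates space
    f.write("hello, file system")

size = os.path.getsize(path)      # metadata the OS tracks for the file: 18 bytes here
os.remove(path)                   # deletion: directory entry removed, space reclaimed
exists_after = os.path.exists(path)
os.rmdir(dirpath)
print(size, exists_after)
```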

4.2 File Naming and Directory Structure:
The operating system manages file naming conventions and directory structures. It provides rules and
guidelines for naming files, ensuring uniqueness and compatibility across the system. It also organizes files in
a hierarchical directory structure, allowing for efficient organization, navigation, and management of files.

4.3 File Access Control:


The operating system enforces access control mechanisms to protect files from unauthorized access. It assigns
permissions (read, write, execute) to files and directories, allowing or restricting access based on user or group
permissions. It ensures data security and privacy by managing file-level access control.

4.4 File Organization and Storage:


The operating system determines how files are stored on storage devices such as hard disks or solid-state
drives. It manages file allocation methods (e.g., contiguous, linked, indexed) to optimize storage space
utilization and access performance. It tracks file metadata (such as location, size, timestamps) for efficient
retrieval and management.

4.5 File Retrieval and Manipulation:


The operating system provides APIs and system calls that allow users and processes to retrieve, read, write,
and modify files. It handles file I/O operations, buffering, and caching to improve performance and ensure
data integrity. It also supports file locking mechanisms to manage concurrent access to shared files, preventing
conflicts and data corruption.

4.6 File Metadata Management:


The operating system maintains file metadata, including attributes such as file type, size, creation date,
modification date, and ownership information. It allows users and processes to access and modify file
metadata, enabling efficient file search, sorting, and organization.

4.7 File Backup and Recovery:


The operating system provides mechanisms for file backup and recovery. It allows users or system
administrators to create file backups, ensuring data protection and disaster recovery. It supports features like
versioning and incremental backups to save storage space and track file changes over time.

4.8 File System Maintenance:


The operating system performs file system maintenance tasks, including disk space management, error
detection, and disk defragmentation. It handles tasks like file system consistency checks, disk cleanup, and
repair to maintain the integrity and performance of the file system.

Effective file management provided by the operating system ensures efficient storage, organization, and
retrieval of data. It allows users and processes to create, access, and manipulate files and directories, ensuring
data security, reliability, and optimal utilization of storage resources.

Example: When you save or download a file, the operating system manages the storage location,
organizes the file system, and ensures that the file is stored and accessible for future use.

5. User Interface:
The operating system provides a user interface that allows users to interact with the computer system. It can
be a graphical user interface (GUI) with icons, windows, and menus, or a command-line interface (CLI) where
users enter commands.

C. Importance of Operating Systems in Managing Computer Resources:


- Efficient Resource Allocation: Operating systems ensure optimal utilization of computer resources such as
CPU time, memory, and storage.

- Conflict Prevention: They prevent conflicts and contention for resources among multiple software
applications running concurrently.

- Fair Resource Access: Operating systems provide fair access to resources, ensuring that each application
receives its required share of system resources.

- Process Synchronization: They enable applications to communicate and coordinate their activities, allowing
for efficient sharing of data and resources.

- Error Handling: Operating systems handle errors and exceptions, preventing crashes and providing
mechanisms for error recovery and system stability.

- Security Measures: They implement security features such as access controls, encryption, and authentication
to protect system resources and data.

- Stability and Reliability: Operating systems provide a stable and reliable environment for software
applications to execute, minimizing the risk of system failures or unexpected behavior.

- Device Management: They handle device drivers and facilitate communication between software
applications and hardware devices, ensuring proper utilization and control of peripherals.

- Virtualization: Operating systems support virtualization, enabling the creation and management of virtual
machines or environments, which enhances resource utilization and flexibility.

- Scalability: Operating systems allow for the scaling of resources to accommodate changing workload
demands, ensuring efficient resource management in dynamic environments.

Example: Without an operating system, each program would have to directly interact with the hardware,
leading to inefficiencies, conflicts, and potential security vulnerabilities. The operating system provides a
unified and controlled environment for software applications to run smoothly and utilize system resources in
a coordinated manner.

II. Management of System Resources


A. Role of an operating system in managing system resources
The operating system plays a crucial role in managing system resources to ensure efficient and fair utilization
of available hardware and software resources. It acts as an intermediary between user applications and the
underlying hardware, providing a layer of abstraction and control over system resources. Here are some key
aspects of the operating system's role in managing system resources:

1. CPU Management:
The operating system manages the central processing unit (CPU) resources in the system. It schedules and
allocates CPU time to different processes, ensuring fair and efficient utilization. It employs scheduling
algorithms to determine the order in which processes are executed and the amount of CPU time allocated to
each process. The operating system also handles CPU interrupts, context switching, and thread management
to facilitate multitasking and concurrency.

2. Memory Management:
The operating system is responsible for managing the system's memory resources. It allocates and deallocates
memory to processes, ensuring efficient utilization and preventing memory conflicts. It handles memory
paging, swapping, and virtual memory management to provide a larger address space for processes than the
physical memory available. The operating system also manages memory protection, ensuring that processes
cannot access memory regions they are not authorized to access.

3. Disk and File System Management:


The operating system manages disk and file system resources, including storage devices and file systems. It
handles disk I/O operations, buffering, and caching to optimize disk access and improve performance. The
operating system manages file allocation and organization on storage devices, ensuring efficient storage
utilization and providing file management capabilities such as file creation, deletion, and retrieval. It also
handles disk scheduling algorithms to optimize disk access and reduce disk fragmentation.

4. Device and I/O Management:


The operating system manages input/output (I/O) devices and their interactions with processes. It handles
device drivers, which are software components that facilitate communication between the operating system
and devices. The operating system coordinates device access, schedules I/O operations, and handles interrupts
and error handling. It provides mechanisms for device recognition, configuration, and access control, ensuring
proper utilization and coordination of I/O devices.

5. Network Management:
In networked systems, the operating system manages network resources and communication. It handles
network protocols, manages network connections, and coordinates data transmission and reception. The
operating system provides networking APIs and services for applications to communicate over the network,
ensuring secure and reliable network operations.

6. User and Process Management:


The operating system manages user accounts, authentication, and access control. It provides mechanisms for
user login, user session management, and user privilege management. It also manages processes, including
process creation, termination, scheduling, and synchronization. The operating system enforces process
isolation and protection, preventing unauthorized access or interference between processes.

7. System Security and Resource Protection:


The operating system plays a vital role in system security and resource protection. It enforces access control
policies, preventing unauthorized access to system resources. It handles user authentication, data encryption,

and security mechanisms to ensure the confidentiality, integrity, and availability of system resources. The
operating system also manages system-wide resource allocation and prioritization, preventing resource
contention and ensuring fair resource sharing among processes.

Overall, the operating system's role in managing system resources is to provide a controlled and efficient
environment for user applications, ensuring optimal utilization of hardware and software resources while
maintaining system stability, security, and performance.

B. Types of system resources (CPU, memory, devices, etc.)


System resources can be broadly categorized into several types. Here are some of the key types of system
resources:

1. CPU (Central Processing Unit):


The CPU is the central processing unit of a computer system. It performs the actual processing of instructions
and data, executing tasks and calculations. The CPU is a critical system resource, and the operating system
manages its allocation and scheduling to ensure efficient utilization among processes.

2. Memory (RAM):
Memory, often referred to as Random Access Memory (RAM), is used by the computer system to store data
and instructions temporarily during program execution. The operating system manages memory allocation,
deallocation, and swapping to ensure that processes have sufficient memory for execution.

3. Storage Devices:
Storage devices, such as hard disk drives (HDDs), solid-state drives (SSDs), and optical drives, are used for
long-term storage of data and program files. The operating system manages file systems and handles I/O
operations with storage devices, including reading and writing data to and from storage media.

4. Input/Output (I/O) Devices:


I/O devices include devices used for input (e.g., keyboards, mice, scanners) and output (e.g., displays, printers,
speakers). The operating system manages device drivers and handles I/O operations, ensuring proper
communication and interaction between processes and I/O devices.

5. Network Resources:
In networked systems, network resources include network interfaces, routers, switches, and communication
protocols. The operating system manages network resources, handles network connections, and facilitates data
transmission and reception over the network.

6. System Clock:
The system clock is a resource used to keep track of time and synchronize operations within the computer
system. The operating system manages the system clock and provides time-related services and functions.

7. System Files and Configuration:
System files and configuration information are also considered system resources. These include operating
system files, configuration files, registry settings, and other system-specific data required for the proper
functioning of the operating system and its components.

8. User Accounts and Permissions:


User accounts, access rights, and permissions are system resources managed by the operating system. The
operating system enforces user authentication, access control policies, and privileges to ensure proper user
management and resource protection.

These are some of the primary types of system resources managed by the operating system. Each type of
resource requires appropriate allocation, coordination, and control to ensure the smooth operation of the
computer system and the efficient execution of user applications.

C. Techniques and algorithms used for resource allocation and optimization


Resource allocation and optimization techniques and algorithms vary depending on the specific system
resource being managed. Here are some commonly used techniques and algorithms for different types of
resource allocation and optimization:

1. CPU Scheduling:
- First-Come, First-Served (FCFS): Assigns the CPU to processes in the order they arrive; each process runs
until it finishes or blocks.

- Shortest Job Next (SJN): Selects the process with the shortest burst time next to execute, minimizing
waiting time.

- Round Robin (RR): Allocates a fixed time slice to each process in a cyclic manner, ensuring fair CPU time
sharing.

- Priority Scheduling: Assigns priorities to processes and allocates the CPU to the highest priority process
first.

- Multilevel Queue Scheduling: Divides processes into multiple priority queues and assigns different
scheduling algorithms to each queue based on priority.
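The difference between FCFS and Shortest Job Next shows up directly in average waiting time. A simple calculation, assuming all processes arrive at time 0 and scheduling is non-preemptive (the burst times are the classic textbook values, chosen for illustration):

```python
def fcfs_waiting(bursts):
    """Average waiting time when processes run in arrival order (all arrive at t=0)."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed   # this process waited for everything before it
        elapsed += b
    return wait / len(bursts)

def sjn_waiting(bursts):
    """Shortest Job Next: run the shortest remaining job first (non-preemptive)."""
    return fcfs_waiting(sorted(bursts))

bursts = [24, 3, 3]
print(fcfs_waiting(bursts))  # (0 + 24 + 27) / 3 = 17.0
print(sjn_waiting(bursts))   # (0 + 3 + 6) / 3  = 3.0
```

Running the long job first makes every later process wait behind it; SJN minimizes average waiting time, at the cost of needing to know (or estimate) burst times in advance.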

2. Memory Management:
- Paging: Divides physical memory into fixed-size frames and each process's logical address space into
pages of the same size, allowing for efficient memory allocation and virtual memory management.

- Segmentation: Divides memory into variable-sized segments based on logical structures of programs,
providing flexibility in memory allocation.

- Demand Paging: Loads only necessary pages into memory, swapping in additional pages as needed, to
optimize memory usage.

- Page Replacement Algorithms: Various algorithms, such as Optimal, FIFO (First-In, First-Out), and LRU
(Least Recently Used), determine which pages to evict from memory when it becomes full.
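FIFO and LRU can be compared by counting page faults on a reference string. The sketch below uses the classic string that exhibits Belady's anomaly, where giving FIFO more frames produces more faults:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())  # evict the oldest-loaded page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)       # evict the least recently used page
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # Belady's anomaly reference string
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 — more frames, more faults!
print(lru_faults(refs, 3))                         # 10
```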

3. Disk Scheduling:
- First-Come, First-Served (FCFS): Processes disk requests in the order they arrive.

- Shortest Seek Time First (SSTF): Selects the request with the shortest seek time to minimize disk head
movement.

- SCAN: Services requests in one direction, servicing all requests in that direction before reversing.

- C-SCAN: Similar to SCAN, but after reaching the end of the disk the arm returns to the beginning without
servicing requests on the way back, then continues in the same direction, giving more uniform wait times.

- LOOK: Services requests in one direction, but reverses direction when there are no more requests in that
direction.
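The pay-off of a smarter policy is less head movement. A sketch comparing FCFS and SSTF on the classic textbook request queue (cylinder numbers and the starting head position are the usual illustrative values):

```python
def fcfs_seek(head, queue):
    """Total head movement servicing requests in arrival order."""
    total = 0
    for cyl in queue:
        total += abs(cyl - head)
        head = cyl
    return total

def sstf_seek(head, queue):
    """Shortest Seek Time First: always service the closest pending request."""
    pending, total = list(queue), 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(53, queue))  # 640 cylinders of head movement
print(sstf_seek(53, queue))  # 236 — far less, but distant requests risk starvation
```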

4. Network Resource Management:


- Quality of Service (QoS): Assigns priorities to network traffic based on parameters such as bandwidth,
latency, and packet loss, ensuring certain services receive preferential treatment.

- Traffic Shaping: Controls the rate of data transmission to smooth out network traffic and prevent
congestion.

- Routing Algorithms: Algorithms such as Shortest Path, Distance Vector, and Link State determine the
optimal routes for data packets in a network.

5. User and Process Management:


- Fair Share Scheduling: Ensures that system resources are allocated fairly among users or groups based on
predefined resource allocation policies.

- Resource Reservation: Allows processes to reserve system resources in advance to ensure availability and
avoid resource contention.

- Deadlock Detection and Avoidance: Algorithms, such as Banker's algorithm, detect and prevent resource
deadlocks by dynamically allocating resources to avoid circular wait conditions.
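The safety check at the heart of the Banker's algorithm can be sketched directly: a state is safe if some order exists in which every process can obtain its remaining need and finish, returning its resources. The matrices below are the standard textbook example (5 processes, 3 resource types), used only for illustration:

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: True if some order lets every process finish."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work, finished = list(available), [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # process i can run to completion, then releases everything it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], max_need, allocation))  # True: a safe sequence exists
print(is_safe([0, 0, 0], max_need, allocation))  # False: no process can finish first
```

Avoidance means granting a request only if the resulting state still passes this check.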

These are just a few examples of techniques and algorithms used for resource allocation and optimization in
operating systems. The choice of technique depends on the specific requirements, characteristics, and
constraints of the system resources being managed.

III. Management of Processes in a Computer


A. Role of an Operating System in Process Management:
The operating system plays a crucial role in process management, which involves the creation, execution, and
termination of processes within a computer system. Here are some key aspects of the operating system's role
in process management:

1. Process Creation:
The operating system provides mechanisms for creating new processes. It allows users or applications to
initiate the creation of new processes, either through system calls or by executing new programs. The operating

system assigns a unique process identifier (PID) to each new process, creates the necessary data structures,
and allocates resources for the process.

2. Process Scheduling:
The operating system is responsible for scheduling processes for execution on the CPU. It employs scheduling
algorithms to determine the order in which processes are executed and the amount of CPU time allocated to
each process. The scheduling algorithms consider factors such as process priorities, CPU burst times, and
fairness in resource allocation.

3. Process Execution:
The operating system manages the execution of processes on the CPU. It allocates CPU time to processes
based on the scheduling decisions and switches the CPU context between processes when necessary. The
operating system handles process state transitions, such as from running to waiting or from waiting to ready,
in response to events or system calls.

4. Process Synchronization:
The operating system provides synchronization mechanisms to coordinate the execution of concurrent
processes. It offers tools such as semaphores, mutexes, and condition variables that allow processes to safely
access shared resources and communicate with each other. These mechanisms prevent race conditions, data
inconsistencies, and conflicts in accessing critical resources.

5. Process Communication:
The operating system facilitates inter-process communication (IPC) to enable processes to exchange data and
coordinate their activities. It provides various IPC mechanisms such as shared memory, message passing,
pipes, and sockets. These mechanisms allow processes to collaborate, coordinate, and share information
efficiently.
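Of these mechanisms, a pipe is the simplest: a one-way byte channel with a write end and a read end. The sketch below creates both ends in one process for brevity; in a real system, the write end would be handed to a producer process and the read end to a consumer:

```python
import os

# One-way pipe: a producer writes bytes, a consumer reads them in order.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello from the producer")
os.close(write_fd)  # closing the write end signals end-of-data to the reader

message = os.read(read_fd, 1024).decode()
os.close(read_fd)
print(message)
```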

6. Process Termination:
The operating system handles the termination of processes. It frees up the resources allocated to a process
when it finishes execution or is explicitly terminated. The operating system updates the process state, releases
memory, closes open files, and performs any necessary cleanup tasks associated with the terminated process.

7. Process Monitoring and Control:


The operating system monitors the status and performance of processes. It collects information about resource
usage, CPU time, memory usage, and I/O operations performed by processes. The operating system provides
tools and utilities for process control, such as process suspension, resumption, and termination. It also enforces
policies and restrictions on process behavior, such as limiting CPU usage or preventing unauthorized access
to system resources.

8. Process Protection and Security:


The operating system enforces process protection and security measures. It ensures that processes are isolated
from each other, preventing unauthorized access or interference. The operating system manages user
privileges, access control, and authentication to ensure that processes operate within their authorized
boundaries and do not compromise system security.

Overall, the operating system's role in process management is to provide a controlled and efficient
environment for the creation, execution, synchronization, and termination of processes. It ensures fair
allocation of resources, coordination among processes, and system stability while maximizing CPU utilization
and responsiveness.

B. Concepts in Process Management:


1. Process Creation, Execution, and Termination:
1.1. Process Creation:
When a new process is created, the operating system gives it a unique identifier (PID) and allocates resources
like memory and files. The execution environment is set up, and the program code is loaded into memory.

1.2. Process Execution:


Once a process starts running, it fetches instructions from its program code and executes them. It performs
tasks like calculations, data manipulation, and interacting with other processes. It may also make requests to
the operating system for services like I/O operations.

1.3. Process Termination:


A process terminates when it finishes its tasks or is explicitly terminated. The operating system releases the
resources allocated to the process and updates its state to indicate termination. The process may also perform
cleanup actions like closing files or freeing memory before it ends.

In summary, process creation is when a new process is set up with its resources and program code. Process
execution is the actual running of the process, executing instructions and performing tasks. Process
termination is when a process finishes or is ended, and its resources are released.
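The three stages can be illustrated with Python's `subprocess` module, which asks the operating system to create, run, and reap a child process; the child program here is a made-up one-liner:

```python
# A sketch of the create -> execute -> terminate life cycle. The OS
# assigns the PID at creation, runs the program, and reclaims the
# child's resources when it exits.
import subprocess
import sys

# Creation: the OS builds a new process running the Python interpreter.
proc = subprocess.Popen([sys.executable, "-c", "print('working')"],
                        stdout=subprocess.PIPE, text=True)
print("child PID:", proc.pid)         # identifier assigned at creation

# Execution: wait for the child to run to completion and collect output.
out, _ = proc.communicate()

# Termination: the exit status is reported back to the parent.
print("output:", out.strip())         # working
print("exit code:", proc.returncode)  # 0
```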

2. Process States (Ready, Running, Waiting, etc.):


Processes in an operating system can exist in different states, representing their various stages of execution
and interaction with system resources. The common process states in an operating system are as follows:

2.1. New:
When a process is first created, it enters the "new" state. In this state, the operating system initializes the
necessary data structures for the process, assigns a unique process identifier (PID), and allocates resources
required by the process.

2.2. Ready:
A process in the "ready" state is prepared to execute but is waiting for the CPU to be allocated. It is in a queue
of processes that are eligible for execution, and the operating system's scheduler determines when the process
will be selected to run on the CPU.

2.3. Running:
When a process is selected from the ready queue and given the CPU for execution, it enters the "running"
state. The process's instructions are executed on the CPU, and it proceeds with its tasks until it voluntarily
releases the CPU or is preempted by a higher-priority process.

2.4. Waiting:
A process enters the "waiting" state (also known as "blocked" or "blocked on I/O") when it is unable to proceed
further until a certain event or condition occurs. Typically, this occurs when a process requests an I/O
operation, such as reading data from a disk, and the operating system places the process in a waiting state until
the I/O operation is completed.

2.5. Terminated:
When a process completes its execution or is explicitly terminated by the operating system or another process,
it enters the "terminated" state. In this state, the operating system releases the resources held by the process,
updates accounting information, and removes the process from the system.

2.6. Suspended:
In some operating systems, a process may enter a "suspended" state if it is temporarily removed from active
execution. This can happen when the process is waiting for certain resources or when the system needs to
prioritize other processes. A suspended process may transition back to the ready state once the necessary
conditions are met.

It's important to note that the exact naming and number of process states may vary depending on the specific
operating system and its process management model. Some operating systems may have additional states or
variations of the states mentioned above.

The transitions between these states depend on events such as the completion of I/O operations, scheduling
decisions made by the operating system, process requests, and various synchronization mechanisms. The
operating system manages these state transitions and ensures the efficient execution and coordination of
processes in the system.
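The transitions above can be sketched as a small table of allowed moves. The state names follow the text; in a real system the transitions are triggered by interrupts, scheduling decisions, and system calls rather than function calls:

```python
# A toy model of process-state transitions (the "suspended" state is
# omitted for brevity). Illegal transitions raise an error.
ALLOWED = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},  # preempt / block / exit
    "waiting":    {"ready"},                           # I/O completes
    "terminated": set(),
}

def transition(state, target):
    if target not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# Typical life cycle: admitted, dispatched, blocks on I/O, resumes, exits.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = transition(s, nxt)
print(s)  # terminated
```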

3. Process Control Block (PCB) and Its Components:


The Process Control Block (PCB) is a data structure used by the operating system to manage and control
individual processes. It contains essential information about a process, allowing the operating system to
manage and track its state. The specific components of a PCB may vary depending on the operating system,
but here are some common components:

1. Process Identifier (PID): A unique identifier assigned to each process to distinguish it from others in the
system.

2. Process State: Represents the current state of the process, such as "new," "ready," "running," "waiting," or
"terminated."

3. Program Counter (PC): A pointer that keeps track of the address of the next instruction to be executed by
the process.

4. CPU Registers: Stores the values of CPU registers, including general-purpose registers, stack pointers, and
program status registers.

5. Memory Management Information: Contains information about the memory allocated to the process, such
as the base address and limit registers.

6. Process Priority: Indicates the priority assigned to the process, which can be used by the scheduler to
determine its order of execution.

7. Scheduling Information: Includes details relevant to process scheduling, like the time spent executing, the
time remaining, and the scheduling algorithm used.

8. Open Files: Keeps track of the files opened by the process, including file descriptors and pointers to the file
table.

9. I/O Information: Stores information related to I/O devices used by the process, such as the list of devices
allocated to it.

10. Parent Process Identifier (PPID): Identifies the parent process that created the current process.

11. Inter-Process Communication (IPC) Mechanisms: Contains information about the inter-process
communication mechanisms used by the process, such as message queues or shared memory segments.

These components provide the necessary information for the operating system to manage and control
processes effectively. The PCB is typically stored in the operating system's process table, allowing quick
access to process information when needed.
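A PCB can be sketched as a plain record stored in a process table; the field names below mirror the components listed above, but the exact layout of a real kernel's PCB differs:

```python
# A sketch of a Process Control Block as a data structure. Real PCBs
# live inside the kernel and carry many more fields.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                    # process identifier
    state: str = "new"          # new / ready / running / waiting / terminated
    program_counter: int = 0    # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    memory_base: int = 0        # memory management information
    memory_limit: int = 0
    priority: int = 0           # scheduling priority
    open_files: list = field(default_factory=list)
    ppid: int = 0               # parent process identifier

# A minimal process table mapping PIDs to their PCBs.
process_table = {}
pcb = PCB(pid=42, priority=5, ppid=1)
process_table[pcb.pid] = pcb
pcb.state = "ready"
print(process_table[42].state)  # ready
```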


C. Scheduling Categories and Strategies in Process Management:


1. Categories
In process management, scheduling strategies can be broadly categorized into three main categories:

1.1. Preemptive Scheduling:


Preemptive scheduling allows the operating system to interrupt a running process and allocate the CPU to
another process. This interruption is typically based on priorities, time quantum, or other factors. Preemptive
scheduling ensures fairness and responsiveness but introduces overhead due to context switching. Examples
of preemptive scheduling strategies include Round Robin (RR), Shortest Remaining Time (SRT), and Priority
Scheduling with preemption.

1.2. Non-Preemptive Scheduling:


Non-preemptive scheduling, also known as cooperative scheduling, allows a running process to keep the CPU
until it voluntarily releases it (e.g., by blocking, completing, or yielding). The operating system does not
forcefully interrupt the process. Non-preemptive scheduling is simpler and has lower overhead but can result
in poor responsiveness if a long-running process monopolizes the CPU. Examples of non-preemptive
scheduling strategies include First-Come, First-Served (FCFS), Shortest Job Next (SJN), and Priority
Scheduling without preemption.

1.3. Hybrid Scheduling:


Hybrid scheduling combines elements of both preemptive and non-preemptive scheduling. It allows processes
to voluntarily release the CPU but also provides mechanisms for the operating system to preempt processes if
necessary. Hybrid scheduling strategies aim to strike a balance between fairness and responsiveness while
minimizing overhead. An example of a hybrid scheduling strategy is Multilevel Feedback Queue Scheduling.

These scheduling categories and strategies provide flexibility in managing processes, balancing resource
utilization, responsiveness, and overall system performance based on specific requirements and priorities. The
choice of a scheduling strategy depends on factors such as the nature of the workload, system characteristics,
and desired performance goals.

2. Strategies
Scheduling strategies play a crucial role in process management by determining the order in which processes
are executed and allocated CPU time. Here are some common scheduling strategies used in process
management:

2.1. First-Come, First-Served (FCFS):


In FCFS scheduling, the processes are executed in the order they arrive. The CPU is allocated to the first
process in the ready queue, and it continues executing until it completes or gets blocked. This strategy is
simple but can result in poor utilization of CPU time when long-running processes are scheduled first (known
as the "convoy effect").
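A short sketch of FCFS waiting times, using made-up burst times that reproduce the convoy effect:

```python
# FCFS: processes run in arrival order; each one waits for the sum of
# the bursts ahead of it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # time spent queued before this burst starts
        elapsed += b
    return waits

bursts = [24, 3, 3]                 # a long job arriving first (convoy effect)
waits = fcfs_waiting_times(bursts)
print(waits)                        # [0, 24, 27]
print(sum(waits) / len(waits))      # average waiting time: 17.0
```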

2.2. Shortest Job Next (SJN) or Shortest Job First (SJF):


SJN scheduling selects the process with the shortest burst time (execution time) next for execution. This
strategy is provably optimal in terms of minimizing the average waiting time. However, the actual burst time of
processes is usually unknown in advance, making practical implementation challenging.
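Running the same made-up workload as the FCFS sketch, non-preemptive SJN simply services jobs in ascending order of burst length:

```python
# SJN/SJF (non-preemptive): sort jobs by burst time; the waiting-time
# calculation is then the same as FCFS on the sorted order.
def sjf_average_wait(bursts):
    elapsed, total_wait = 0, 0
    for b in sorted(bursts):
        total_wait += elapsed  # this job waited for all shorter jobs
        elapsed += b
    return total_wait / len(bursts)

print(sjf_average_wait([24, 3, 3]))  # 3.0 -- versus 17.0 under FCFS
```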

2.3. Round Robin (RR):


Round Robin scheduling assigns a fixed time slice, called a time quantum, to each process in the ready queue.
The CPU is allocated to a process for a time quantum, and then it is preempted, and the next process in the
queue receives the CPU. This strategy ensures fairness by giving each process an equal opportunity to execute
but can result in increased overhead due to frequent context switching.
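A sketch of Round Robin with a fixed time quantum (the burst times and quantum are invented):

```python
# Round Robin: each process runs for at most `quantum` units, then is
# preempted and sent to the back of the ready queue.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))   # (pid, remaining burst)
    clock, finish = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        remaining -= run
        if remaining:
            queue.append((pid, remaining))  # preempted: back of the queue
        else:
            finish[pid] = clock             # record completion time
    return finish

print(round_robin([5, 3, 1], quantum=2))  # {2: 5, 1: 8, 0: 9}
```

Note how the short job (PID 2) finishes early even though it arrived last in the queue, illustrating the fairness the text describes.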

KAMADJE SADJIE DJIENA ALLEN 16


LSSC COMPUTER SCIENCE
2.4. Priority Scheduling:
Priority scheduling assigns a priority value to each process, and the CPU is allocated to the process with the
highest priority. Processes with the same priority are scheduled using another strategy like FCFS or RR.
Priority can be static or dynamic, where it may change during the execution based on factors like aging or
process behavior. However, priority scheduling can suffer from priority inversion and starvation issues if not
carefully implemented.

2.5. Multilevel Queue Scheduling:


Multilevel queue scheduling divides processes into multiple queues, each with a different priority level. Each
queue can have its own scheduling algorithm, such as FCFS, SJN, or RR. Processes are initially placed in a
particular queue based on their priority or characteristics. This strategy allows different treatment for processes
with different priorities or requirements.

2.6. Multilevel Feedback Queue Scheduling:


Multilevel feedback queue scheduling is an extension of multilevel queue scheduling. It allows processes to
move between different queues based on their behavior and resource needs. Processes that use excessive CPU
time may be moved to a lower priority queue, while interactive processes may move to a higher priority queue.
This strategy provides flexibility and adaptability to varying process requirements.

2.7. Lottery Scheduling:


Lottery scheduling assigns tickets to processes based on their priority or resource requirements. The CPU is
allocated to a process through a lottery mechanism where tickets are drawn randomly. Processes with more
tickets have a higher chance of winning the CPU lottery. This strategy provides a probabilistic approach to
scheduling, allowing for flexible allocation of CPU time based on process requirements.
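A sketch of the lottery draw itself (ticket counts are invented):

```python
# Lottery scheduling: each process holds tickets; a random draw over
# all outstanding tickets picks the winner of the next CPU slot.
import random

def hold_lottery(tickets, rng=random):
    # tickets: {pid: number of tickets}; draw one winning ticket.
    total = sum(tickets.values())
    winner = rng.randrange(total)
    for pid, count in tickets.items():
        if winner < count:
            return pid
        winner -= count

random.seed(0)  # fixed seed so repeated runs behave the same
tickets = {"A": 75, "B": 20, "C": 5}
draws = [hold_lottery(tickets) for _ in range(1000)]
# With 75% of the tickets, A should win roughly three quarters of draws.
print(draws.count("A") / 1000)
```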

These scheduling strategies aim to optimize system performance, response time, fairness, and resource
utilization based on different criteria and requirements. The choice of scheduling strategy depends on the
specific characteristics of the system and the desired goals of process management.

IV. Memory Management


A. Role of an Operating System in Memory Management:
In memory management, the operating system plays a crucial role in efficiently allocating and managing the
computer's memory resources. Here are some key roles of an operating system in memory management:

1. Memory Allocation:
The operating system is responsible for allocating memory to processes. It keeps track of the available memory
and assigns portions of it to processes when they are created or request additional memory. The operating
system uses memory allocation algorithms to determine the most suitable blocks of memory to allocate.

2. Memory Protection:
The operating system ensures memory protection by enforcing boundaries between processes' memory spaces.
It prevents one process from accessing or modifying the memory assigned to another process, thus maintaining
data integrity and security.

3. Memory Mapping:
The operating system facilitates memory mapping, allowing processes to access files and devices as if they
were part of the process's memory. By mapping files and devices into memory, the operating system simplifies
I/O operations and improves performance.

4. Virtual Memory Management:


Virtual memory is a technique that allows processes to use more memory than physically available by utilizing
secondary storage (e.g., hard disk) as an extension of the main memory. The operating system manages virtual
memory by mapping virtual addresses used by processes to physical addresses in the main memory or
secondary storage. It handles page faults (when a requested page is not in main memory) by swapping pages
in and out of memory.

5. Memory Sharing:
The operating system facilitates memory sharing among processes. It allows multiple processes to access the
same portion of memory, enabling efficient communication and data sharing between processes.

6. Memory Cleanup and Reclamation:


When a process terminates or releases memory, the operating system is responsible for reclaiming the memory
and making it available for other processes. It performs memory cleanup tasks such as releasing allocated
memory blocks and updating memory management data structures.

7. Memory Fragmentation Handling:


The operating system manages memory fragmentation, which occurs when the memory becomes divided into
small, non-contiguous blocks over time. It uses techniques like compaction or memory allocation algorithms
(e.g., best fit or worst fit) to minimize fragmentation and optimize memory utilization.

8. Swapping and Paging:


Swapping and paging are techniques used by the operating system to move pages or entire processes between
main memory and secondary storage. This allows efficient usage of limited physical memory and enables
processes to execute even when physical memory is insufficient.

Overall, the operating system's role in memory management is to ensure efficient utilization of memory
resources, provide memory protection and security, enable virtual memory techniques, facilitate memory
sharing and mapping, and handle memory cleanup and fragmentation. These functions are essential for the
smooth execution of processes and efficient utilization of computer memory.

B. Memory Hierarchy and Organization:


In memory management, the memory hierarchy refers to the organization of memory resources in a computer
system, arranged in different levels based on their proximity to the CPU and their access speeds. The memory
hierarchy consists of multiple levels, each with varying capacities, access times, and costs. Here are the
common levels of the memory hierarchy:

1. CPU Registers:
CPU registers are the fastest and smallest storage units located directly within the CPU. They are used to store
intermediate results, operands, and control information during the execution of instructions. Registers have
the fastest access times, measured in nanoseconds or even picoseconds.

2. Cache Memory:
Cache memory is a small but faster memory located closer to the CPU than main memory. It serves as a
temporary storage for frequently accessed data and instructions. Caches exploit the principle of locality, which
states that recently accessed data is likely to be accessed again in the near future. The cache memory is divided
into multiple levels, such as L1, L2, and L3 caches, with each level having larger capacity but slower access
times.

3. Main Memory:
Main memory, also known as RAM (Random Access Memory), is the primary memory where programs and
data are stored during execution. It is larger than cache memory but slower in terms of access times. Main
memory provides a direct interface with the CPU and is volatile, meaning its contents are lost when power is
turned off.

4. Secondary Storage:
Secondary storage devices, such as hard disk drives (HDDs) and solid-state drives (SSDs), provide long-term
storage for programs, data, and the operating system. They have larger capacities compared to main memory
but much slower access times. Secondary storage is non-volatile, meaning data remains stored even when
power is turned off.

The memory hierarchy is organized in a way that optimizes the tradeoff between speed, capacity, and cost.
The faster and smaller memory levels (registers and cache) are more expensive per unit of storage, while the
larger and slower levels (main memory and secondary storage) provide more storage capacity at a lower cost.

The operating system and hardware work together to manage the memory hierarchy efficiently. The CPU and
cache management hardware handle cache operations to reduce cache misses and improve performance. The
operating system is responsible for managing the allocation and deallocation of memory in main memory,
handling virtual memory techniques, and coordinating the movement of data between main memory and
secondary storage.

By utilizing the memory hierarchy effectively, the system can maximize performance by minimizing the time
spent on memory access, reducing data transfer bottlenecks, and optimizing the usage of different memory
levels based on the access patterns and requirements of running processes.

C. Memory Allocation Techniques (e.g., Partitioning, Paging, Segmentation):


Memory allocation techniques are used in memory management to allocate memory resources to processes
efficiently. Here are some common memory allocation techniques:

1. Fixed Partitioning:

Fixed partitioning, also known as static partitioning, divides the available memory into fixed-size partitions
or regions. Each partition is assigned to a process, and the size of the partition remains constant. Processes are
allocated to partitions based on their size, and only processes that fit within a partition can be loaded. Fixed
partitioning is simple to implement but can lead to internal fragmentation, where a partition may have unused
memory space.

2. Variable Partitioning:

Variable partitioning, also known as dynamic partitioning, allocates memory to processes based on their size.
The available memory is divided into variable-sized partitions, and each partition is allocated to a process
upon request. When a process is completed or terminated, the partition is deallocated and becomes available
for new processes. Variable partitioning reduces internal fragmentation but can lead to external fragmentation,
where free memory becomes scattered in small, non-contiguous blocks.
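Two classic placement policies for variable partitioning can be sketched as follows (the hole positions and sizes are invented):

```python
# Placement policies for variable partitioning. `holes` is a list of
# (start, size) pairs describing free memory regions.
def first_fit(holes, request):
    # Take the first hole large enough for the request.
    for i, (start, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Take the smallest hole that still fits, minimizing leftover space.
    candidates = [(size, i) for i, (start, size) in enumerate(holes)
                  if size >= request]
    return min(candidates)[1] if candidates else None

holes = [(0, 100), (200, 50), (300, 220)]
print(first_fit(holes, 40))  # 0 -- the 100-unit hole comes first
print(best_fit(holes, 40))   # 1 -- the 50-unit hole leaves the least waste
```

First fit is fast; best fit keeps leftover fragments small but must scan every hole, and neither eliminates external fragmentation.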

3. Paging:

Paging is a memory allocation technique that divides the logical memory of a process into fixed-size blocks
called pages, and the physical memory into blocks of the same size called page frames. Pages are
loaded into available page frames as needed. Paging allows for efficient memory utilization and enables the
use of virtual memory. It helps to overcome external fragmentation but can introduce overhead due to page
table management and page faults.

4. Segmentation:

Segmentation is a memory allocation technique that divides the logical memory of a process into variable-
sized segments. Each segment represents a logically related portion of the process, such as code, data, stack,
or heap. Segments are loaded into non-contiguous memory locations. Segmentation allows for flexible
memory allocation and sharing but can lead to external fragmentation and requires complex memory
management algorithms.
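Segment-table translation can be sketched as a base-plus-offset lookup with a limit check (the segment names, bases, and limits below are invented):

```python
# Segmentation: a logical address is a (segment, offset) pair. The
# offset is checked against the segment's limit, then added to its base.
segment_table = {
    "code":  {"base": 0x1000, "limit": 0x0400},
    "data":  {"base": 0x4000, "limit": 0x0800},
    "stack": {"base": 0x9000, "limit": 0x0200},
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError(f"segmentation fault: offset {offset:#x} "
                          f"exceeds limit of segment {segment!r}")
    return entry["base"] + offset

print(hex(translate("data", 0x10)))  # 0x4010
try:
    translate("stack", 0x300)        # past the 0x200-byte stack segment
except MemoryError as e:
    print(e)
```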

5. Combined Paging and Segmentation:

This technique combines the benefits of paging and segmentation. The logical memory of a process is divided
into segments, and each segment is further divided into fixed-size pages. This technique provides flexibility
in memory allocation, allows for sharing of segments, and helps in managing external fragmentation.

It's important to note that the choice of memory allocation technique depends on the system requirements, the
characteristics of processes, and the available hardware resources. Different techniques have their advantages
and trade-offs in terms of memory utilization, fragmentation, overhead, and ease of implementation. Modern
operating systems often use a combination of these techniques to optimize memory management and provide
efficient memory allocation for processes.

D. Virtual Memory Concepts and Techniques:


Virtual memory is a memory management technique that allows processes to use more memory than what is
physically available in the main memory (RAM). It provides an illusion of a large, contiguous address space
to processes while efficiently utilizing the available physical memory and secondary storage (e.g., hard disk).

Here are some key concepts and techniques associated with virtual memory:

1. Address Translation:
In virtual memory, processes use virtual addresses, which are translated to physical addresses by the memory
management unit (MMU) hardware. The MMU maintains a page table that maps virtual addresses to physical
addresses. This translation allows processes to access memory locations that may be located in either the main
memory or secondary storage.

2. Paging:
Paging is a virtual memory technique that divides the logical address space of a process into fixed-size blocks
called pages. The physical memory is divided into page frames of the same size. The page table maintains the
mapping between virtual pages and physical page frames. Pages are loaded into available page frames when
needed. Paging allows for efficient use of physical memory, as pages can be swapped in and out of secondary
storage as required.
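The page-number/offset split at the heart of paging can be sketched as follows (the 4 KiB page size and the page-table contents are invented):

```python
# Paging address translation: split the virtual address into a page
# number and an offset, look the page up, and join the frame number
# back with the offset.
PAGE_SIZE = 4096                      # 4 KiB pages

page_table = {0: 5, 1: 9, 2: 1}       # virtual page -> physical frame

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise KeyError(f"page fault on page {page}")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```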

3. Page Faults:
When a process tries to access a page that is not currently in the main memory (i.e., a page fault occurs), the
operating system is notified. The operating system then fetches the required page from secondary storage and
updates the page table to reflect the new mapping. This process is transparent to the process and allows it to
access a larger address space than what is available in physical memory.

4. Demand Paging:
Demand paging is a technique where pages are loaded into memory only when they are actually required by
a process. This reduces the initial memory requirements for processes and allows for more efficient memory
utilization. However, it can introduce latency when a page fault occurs and a required page needs to be fetched
from secondary storage.

5. Page Replacement:
When the main memory becomes full and a new page needs to be loaded, a page replacement algorithm is
used to select a victim page to be evicted from the memory. Common page replacement algorithms include
Optimal, Least Recently Used (LRU), First-In-First-Out (FIFO), and Clock algorithms. The goal of these
algorithms is to minimize the number of page faults and optimize the overall system performance.
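A sketch of FIFO replacement on a classic reference string, one that also exhibits Belady's anomaly, where giving FIFO more frames produces *more* faults:

```python
# FIFO page replacement: count page faults for a reference string with
# a fixed number of frames; on a fault with full frames, evict the
# oldest resident page.
from collections import deque

def fifo_faults(references, n_frames):
    frames, order, faults = set(), deque(), 0
    for page in references:
        if page in frames:
            continue                         # hit: page already resident
        faults += 1
        if len(frames) == n_frames:
            frames.discard(order.popleft())  # evict the oldest page
        frames.add(page)
        order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- Belady's anomaly: more frames, more faults
```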

6. Working Set:
The working set of a process refers to the set of pages that it actively uses during its execution. The operating
system monitors the working set of each process and adjusts the page allocation accordingly to improve
performance. The working set concept helps in reducing page faults and improving locality of reference.

7. Memory-Mapped Files:
Virtual memory allows files to be mapped directly into the address space of a process. This technique, known
as memory-mapped files, enables efficient file I/O operations by treating files as if they were portions of the
process's memory. It eliminates the need for explicit read and write operations and simplifies file access.

Virtual memory provides several benefits, including increased address space for processes, efficient memory
utilization, memory protection, and the ability to run larger programs. However, it also introduces additional
overhead due to address translation, page faults, and page swapping. The operating system's virtual memory
management subsystem is responsible for implementing these concepts and techniques to provide efficient
and transparent memory management for processes.

E. Memory Protection and Address Translation Mechanisms:
Memory protection and address translation mechanisms are essential components of modern computer
systems to ensure the security, integrity, and isolation of processes. These mechanisms work together to
enforce memory access permissions and translate virtual addresses to physical addresses. Here's an overview
of memory protection and address translation:

1. Memory Protection:
Memory protection refers to the mechanisms in a computer system that enforce access control and ensure the
security and integrity of memory. It involves assigning access permissions to different regions of memory,
such as read, write, and execute, to prevent unauthorized access or modification of memory. Memory
protection mechanisms are crucial for maintaining the isolation between processes and protecting sensitive
data.

Memory protection mechanisms enforce access control and prevent unauthorized access or modification of
memory. They include:

- Access Control: Memory protection mechanisms enforce access control by assigning access permissions
to different regions of memory. Common permissions include read, write, and execute. Each process has its
own memory protection settings to prevent unauthorized access or modification of memory.

- Segmentation: Segmentation divides the virtual address space of a process into logical segments such as
code, data, stack, and heap. Memory protection settings are associated with each segment, allowing fine-
grained control over memory access permissions for different parts of a process.

- Page-Level Protection: In virtual memory systems, memory protection is often implemented at the page
level. Access permissions are associated with each page, allowing the operating system to control access to
individual pages of memory. This provides more granular control over memory protection.
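Page-level protection can be sketched as a check of per-page permission bits on every access (the permission strings and page numbers are invented):

```python
# Page-level protection bits: each page carries read/write/execute
# permissions; any access in a mode the page does not allow faults.
PERMS = {0: "r-x",   # code page: readable and executable, not writable
         1: "rw-",   # data page: readable and writable
         2: "r--"}   # read-only page

def check_access(page, mode):
    # mode is one of "r", "w", "x"
    allowed = PERMS.get(page, "---")   # unmapped pages allow nothing
    if mode not in allowed:
        raise PermissionError(f"{mode!r} access to page {page} denied")
    return True

print(check_access(1, "w"))   # True: writing a data page is allowed
try:
    check_access(0, "w")      # writing a code page triggers a fault
except PermissionError as e:
    print("fault:", e)
```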

2. Address Translation:
Address translation is the process of converting virtual addresses used by processes into physical addresses
where the corresponding data is stored in the physical memory. It is a key component of virtual memory
systems. Address translation is performed by hardware components, such as the Memory Management Unit
(MMU) or software mechanisms in the operating system. The translation is based on a data structure called
the page table, which maps virtual pages to physical page frames. By performing address translation, the
system enables processes to use virtual addresses while efficiently managing physical memory resources.

Address translation mechanisms convert virtual addresses used by processes into physical addresses. They
include:

- Virtual Memory Translation: The virtual memory system translates virtual addresses used by processes
into physical addresses where the data is stored in the physical memory. This translation is performed by the
Memory Management Unit (MMU) hardware or by software mechanisms in the operating system.

- Page Table: The page table is a data structure used for address translation. It maintains the mapping between
virtual pages and physical page frames. Each entry in the page table contains the virtual page number and the
corresponding physical page frame number. The MMU uses the page table to perform the translation of virtual
addresses to physical addresses.

- Translation Lookaside Buffer (TLB): The TLB is a cache within the MMU that stores recently
used translations. It helps to speed up the address translation process by avoiding frequent accesses to the page
table. When a virtual address is encountered, the MMU first checks the TLB for a matching translation. If
found, the physical address is directly obtained from the TLB. Otherwise, the MMU consults the page table
to perform the translation and updates the TLB for future use.
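The hit/miss behavior described above can be sketched with a TLB sitting in front of a page table (the page size, table contents, and cache policy are invented, and a real TLB has bounded capacity):

```python
# A TLB in front of a page table: look translations up in a small
# cache first; only on a TLB miss do we walk the page table, then
# cache the result for next time.
PAGE_SIZE = 4096
page_table = {n: n + 100 for n in range(16)}   # virtual page -> frame
tlb = {}                                       # translation cache
stats = {"hits": 0, "misses": 0}

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:
        stats["hits"] += 1
        frame = tlb[page]
    else:
        stats["misses"] += 1
        frame = page_table[page]   # the (slow) page-table walk
        tlb[page] = frame          # cache the translation
    return frame * PAGE_SIZE + offset

for addr in [0, 8, 4096, 16, 4100]:   # pages 0, 0, 1, 0, 1
    translate(addr)
print(stats)  # {'hits': 3, 'misses': 2}
```

The repeated references to pages 0 and 1 hit the TLB after their first miss, which is exactly the locality the cache exploits.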

The combination of memory protection and address translation mechanisms ensures that processes can access
only the memory regions they are authorized to access. Memory protection prevents unauthorized access to
sensitive data and prevents processes from interfering with each other's memory. Address translation allows
processes to use virtual addresses, providing an abstraction that simplifies memory management and enables
efficient utilization of physical memory and secondary storage.

These mechanisms are implemented by the operating system and hardware working together to enforce
memory access permissions, manage page tables, and handle address translation efficiently. By providing
memory protection and address translation mechanisms, computer systems can provide a secure and isolated
execution environment for processes while maximizing memory utilization and system performance.

V. Devices Management
A. Role of an Operating System in Devices Management:
The role of an operating system in devices management is to facilitate the interaction between the computer
system and its various hardware devices. It provides a layer of abstraction and a set of services that allow
applications and users to access and control devices efficiently. Here are the key roles of an operating system
in devices management:

1. Device Recognition and Configuration:

- The operating system is responsible for recognizing and configuring the hardware devices connected to
the computer system. It identifies the devices present in the system, determines their characteristics (such as
device type, capabilities, and available resources), and sets them up for proper operation.

- Device drivers, which are software components specific to each device, are typically employed by the
operating system to interact with the devices. The drivers provide the necessary instructions for the operating
system to communicate with and control the devices effectively.

2. Device Allocation and Scheduling:

- The operating system manages the allocation and scheduling of devices to different processes and
applications. It ensures that multiple processes can access devices concurrently without conflicts or
interference.

- Devices may be shared among multiple processes through techniques such as time-sharing or priority-
based scheduling. The operating system regulates access to devices to prevent contention and ensure fair and
efficient utilization of device resources.


3. Device Abstraction:

- The operating system provides a layer of abstraction to shield applications and users from the low-level
details of device interaction. It presents a uniform interface, known as an Application Programming Interface
(API), that hides the complexities of different devices and enables applications to access devices using a
standardized set of commands or system calls.

- By offering device abstraction, the operating system simplifies the development of applications and
promotes portability. Applications can be written to interact with the operating system's API, making them
independent of the specific hardware devices installed in the system.
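
As an illustration of this idea, here is a minimal Python sketch of a uniform device interface. The `Device` and `LoopbackDevice` classes are hypothetical names invented for this example, not part of any real operating system API:

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Hypothetical uniform device interface, as an OS API might expose."""

    @abstractmethod
    def read(self, nbytes: int) -> bytes: ...

    @abstractmethod
    def write(self, data: bytes) -> int: ...

class LoopbackDevice(Device):
    """Toy device that returns whatever was written to it."""

    def __init__(self):
        self._buffer = b""

    def write(self, data: bytes) -> int:
        self._buffer += data
        return len(data)

    def read(self, nbytes: int) -> bytes:
        chunk, self._buffer = self._buffer[:nbytes], self._buffer[nbytes:]
        return chunk

# An application programs against the abstract interface only, so it works
# unchanged with any concrete device the OS provides.
def copy_through(dev: Device, payload: bytes) -> bytes:
    dev.write(payload)
    return dev.read(len(payload))
```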

4. Device Monitoring and Control:

- The operating system monitors the status and performance of devices. It tracks device availability, detects
errors or failures, and takes appropriate actions to handle device-related issues.

- The operating system also provides mechanisms for device control, allowing applications or users to
configure device settings, initiate device operations, and handle device-specific functionalities.

5. Device I/O Management:

- The operating system manages input and output (I/O) operations involving devices. It provides services
and interfaces for applications to perform I/O operations, such as reading from or writing to devices.

- The operating system handles device interrupts and manages data transfers between devices and memory.
It ensures the efficient flow of data between applications and devices by employing buffering, caching, and
I/O scheduling techniques.

In summary, the operating system plays a crucial role in device management by recognizing and configuring
hardware devices, allocating and scheduling device resources, providing device abstraction, monitoring and
controlling devices, and managing device input and output operations. These functions enable applications to
interact with devices in a consistent and efficient manner, abstracting away the complexities of device-specific
interactions and promoting system stability and usability.

B. Device Types and Their Characteristics:


In device management, various types of devices are used in computer systems to facilitate input, output, and
storage operations. Each device type has its own characteristics and functionalities. Here are some common
device types and their key characteristics:

1. Input Devices:
- Input devices are used to provide data and commands to the computer system. They allow users to interact
with the system and input information. Examples of input devices include:

- Keyboard: A keyboard is a common input device that allows users to enter text, numbers, and commands
by pressing keys.

- Mouse: A mouse is a pointing device that enables users to move a cursor on the screen and select objects
by clicking buttons.

- Touchscreen: A touchscreen is a display device that allows users to input commands or interact with the
system by touching the screen directly.

- Scanner: A scanner converts physical documents or images into digital format, allowing them to be stored
or processed by the computer.

- Microphone: A microphone captures audio input, enabling users to record sounds or provide voice
commands to the system.

2. Output Devices:
- Output devices are used to present information or results to users. They display or provide output generated
by the computer system. Examples of output devices include:

- Monitor: A monitor or display provides visual output by presenting text, images, and graphical user
interfaces to users.

- Printer: A printer produces hard copies of documents or images on paper or other media.

- Speakers: Speakers generate audio output, allowing users to hear sounds, music, or other audio content
produced by the system.

- Projector: A projector displays computer-generated content on a larger screen or projection surface.

3. Storage Devices:
- Storage devices are used to store and retrieve data persistently. They provide long-term storage for
programs, files, and other information. Examples of storage devices include:

- Hard Disk Drive (HDD): An HDD uses magnetic storage to store data on spinning disks. It provides high-
capacity storage for operating systems, applications, and user files.

- Solid-State Drive (SSD): An SSD uses flash memory and provides faster access times and improved
durability compared to HDDs.

- Optical Disc Drives: Optical drives, such as CD, DVD, or Blu-ray drives, read and write data on optical
discs for storing and retrieving information.

- USB Flash Drives: USB flash drives, also known as thumb drives or USB sticks, use flash memory to
provide portable and removable storage.

4. Communication Devices:
- Communication devices facilitate communication and data transfer between computer systems or
networks. They enable connectivity and exchange of information. Examples of communication devices
include:

- Network Interface Card (NIC): A NIC enables a computer system to connect to a network, such as
Ethernet or Wi-Fi, to transmit and receive data.

- Modem: A modem converts digital signals from a computer into analog signals suitable for transmission
over telephone lines or other communication channels.

- Router: A router connects multiple networks and directs data packets between them, enabling
communication between different devices or networks.

These are just a few examples of device types and their characteristics. Advancements in technology continue
to introduce new devices and functionalities to computer systems, expanding the possibilities for input, output,
storage, and communication. The operating system plays a vital role in managing these devices, providing the
necessary abstractions and services for applications and users to interact with them effectively.

C. Device Drivers and Their Functions:


Device drivers are software components that facilitate communication between the operating system and
specific hardware devices. They act as an interface between the higher-level software, such as applications or
the operating system kernel, and the low-level functionality of the devices. Device drivers play a crucial role
in device management. Here are the functions performed by device drivers:

1. Device Initialization and Configuration:


- Device drivers initialize and configure the hardware devices during system startup. They perform tasks
such as detecting the presence of devices, verifying their capabilities, and setting up the necessary resources
for their operation. This includes configuring device registers, allocating memory buffers, and establishing
communication channels.

2. Device Communication:
- Device drivers enable communication between the operating system and the hardware devices they are
designed for. They provide a set of functions, known as APIs (Application Programming Interfaces), that allow
higher-level software to send commands, retrieve data, and control device operations. These functions abstract
the complexities of device-specific protocols and provide a standardized interface for software to interact with
the devices.

3. Interrupt Handling:
- Device drivers handle interrupts generated by hardware devices. When a device requires attention or
completes an operation, it generates an interrupt signal to the operating system. Device drivers intercept these
interrupts, determine their source, and execute the appropriate actions. This may involve processing incoming
data, updating device status, or initiating further device operations.

4. Data Transfer and Buffering:


- Device drivers manage the transfer of data between the hardware devices and the computer's memory.
They handle data buffering, ensuring efficient and reliable transfer of data. Device drivers may employ
techniques such as double buffering or DMA (Direct Memory Access) to optimize data transfer and reduce
CPU overhead.

5. Error Handling and Recovery:


- Device drivers are responsible for detecting and handling errors that may occur during device operation.
They monitor the status of devices, detect errors or failures, and take appropriate actions to recover from or
mitigate these issues. This may involve error logging, resetting devices, reconfiguring settings, or notifying
the operating system and applications about the errors.

6. Power Management:
- Device drivers may also incorporate power management functionality to optimize energy usage and
increase battery life in portable devices. They implement power-saving features such as device sleep modes,
idle state management, and dynamic power scaling, allowing devices to conserve power when not in active
use.

7. Compatibility and Updates:


- Device drivers ensure compatibility between the hardware devices and the operating system. They are
often specific to a particular device model or manufacturer and need to be regularly updated to support new
devices, fix bugs, or enhance performance. Operating system updates may include new or updated device
drivers to improve compatibility and functionality.

In summary, device drivers are essential software components in device management. They enable the
operating system to communicate with hardware devices, perform device initialization and configuration,
manage data transfer and buffering, handle interrupts, detect and recover from errors, implement power
management features, and ensure compatibility between devices and the operating system. Device drivers
bridge the gap between the hardware and software layers, allowing applications and the operating system to
effectively utilize the capabilities of the devices.
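
The functions above can be sketched in miniature. The following Python class simulates a driver's initialization, interrupt handling, and buffered reads; the `KeyboardDriver` class and its methods are illustrative inventions, not a real driver framework:

```python
class KeyboardDriver:
    """Hypothetical driver sketch: initialization, interrupts, buffering."""

    def __init__(self):
        self.buffer = []          # input buffer filled by "interrupts"
        self.initialized = False

    def initialize(self):
        # A real driver would probe the hardware and program device
        # registers here; this sketch just records that setup happened.
        self.initialized = True

    def handle_interrupt(self, scancode: int):
        # Invoked when the (simulated) device raises an interrupt:
        # store the incoming data for later retrieval.
        if not self.initialized:
            raise RuntimeError("device not initialized")
        self.buffer.append(scancode)

    def read_key(self):
        # Drain one buffered scancode, or None if the buffer is empty.
        return self.buffer.pop(0) if self.buffer else None
```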

D. Techniques for Device Allocation and Scheduling:


Device allocation and scheduling techniques are used by the operating system to manage the allocation of
devices to different processes and applications. These techniques ensure that multiple processes can access
devices concurrently without conflicts or interference. Here are some common techniques for device
allocation and scheduling:

1. First-Come, First-Served (FCFS):


- FCFS is a simple device scheduling technique where devices are allocated to processes in the order in
which they request access. The operating system maintains a queue of pending requests, and devices are
assigned to processes based on their arrival time. However, FCFS can lead to poor device utilization if a
process with a long service time occupies a device for an extended period, causing other processes to wait.

2. Round-Robin (RR):
- Round-robin is a preemptive scheduling technique typically used for time-sharing of devices. Each process
is allocated a fixed time quantum to access the device. If a process does not complete its operation within the
time quantum, it is preempted, and the next process in the queue is granted access. This technique ensures fair
sharing of device resources among processes but may introduce some overhead due to frequent context
switching.

3. Priority-Based Scheduling:
- Priority-based scheduling assigns different priorities to processes based on their importance or urgency.
The device is allocated to the highest priority process that requests access. This technique allows critical or
time-sensitive processes to gain immediate access to devices. However, care must be taken to avoid starvation
of lower priority processes, as they may experience significantly delayed access.

4. Deadline-Based Scheduling:
- Deadline-based scheduling assigns deadlines to processes requesting device access. The device is allocated
to the process with the earliest deadline. This technique is commonly used in real-time systems where meeting
specific deadlines is crucial. The operating system ensures that the device is allocated in a manner that allows
processes to complete their operations within their specified deadlines.

5. Resource Reservation:
- Resource reservation involves allocating devices based on specific resource requirements specified by
processes or applications in advance. Processes make reservations for device access, and the operating system
guarantees the availability of the requested devices at the specified times. This technique is often used in
scenarios where strict guarantees and predictability are required, such as in multimedia streaming or real-time
systems.

6. Priority Inversion Avoidance:


- Priority inversion occurs when a low-priority process holds a resource required by a higher-priority
process, causing the higher-priority process to wait unnecessarily. Priority inversion avoidance techniques,
such as priority inheritance or priority ceiling protocols, are employed to prevent priority inversions in device
allocation. These techniques temporarily elevate the priority of the low-priority process to avoid blocking
higher-priority processes.

7. Device Reservation:
- Device reservation allows processes or applications to reserve exclusive access to a device for a specific
period. The operating system ensures that the device is not allocated to any other process during the reservation
period. This technique is useful when a process requires uninterrupted or dedicated access to a device for a
prolonged duration.

These are some common techniques employed by the operating system for device allocation and scheduling.
The choice of technique depends on the specific system requirements, the nature of the devices, and the
characteristics of the processes or applications utilizing the devices. The goal is to optimize device utilization,
ensure fairness, meet deadlines, and provide efficient access to devices for processes and applications.
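
The first and third techniques can be illustrated with a small simulation. This Python sketch (with made-up request names) shows how FCFS serves requests in arrival order while priority-based scheduling reorders them:

```python
import heapq
from collections import deque

def fcfs_order(requests):
    """Serve device requests strictly in arrival order."""
    queue = deque(requests)
    return [queue.popleft() for _ in range(len(queue))]

def priority_order(requests):
    """Serve the highest-priority request first (lower number = higher).

    The arrival index breaks ties so equal priorities stay in FCFS order.
    """
    heap = [(prio, i, name) for i, (name, prio) in enumerate(requests)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

reqs = [("editor", 3), ("backup", 5), ("audio", 1)]
fcfs_order([name for name, _ in reqs])   # → ['editor', 'backup', 'audio']
priority_order(reqs)                     # → ['audio', 'editor', 'backup']
```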

E. Input/Output (I/O) Operations and Handling:


Input/Output (I/O) operations involve the transfer of data between the computer system and its external
devices. These operations are essential for interacting with input devices, displaying output, and performing
data storage and retrieval. Here are some key aspects of I/O operations and their handling:

1. I/O Devices and Controllers:


- I/O devices, such as keyboards, mice, displays, printers, and storage devices, have their own specific
interfaces and protocols for data transfer. Each device is typically associated with a controller, which acts as
an intermediary between the device and the rest of the computer system. The controller manages the low-level
details of device communication, translating commands and data between the device and the system.

2. I/O Ports and Addresses:
- I/O devices are connected to the computer system through specific input/output ports or addresses. These
ports or addresses serve as communication endpoints for the devices. The operating system interacts with
devices by reading from or writing to these ports or memory addresses, using device-specific protocols and
commands.

3. I/O Operations:
- I/O operations involve data transfer between the computer system's memory and the I/O devices. There are
two types of I/O operations:

a. Input Operations:
Input operations involve transferring data from an external device to the computer system. For example, when
a user types on a keyboard or a scanner reads a document, the data is input to the system for processing. The
operating system handles input operations by using device drivers to read data from the device and transfer it
to the appropriate memory location.

b. Output Operations:
Output operations involve transferring data from the computer system to an external device. For instance,
when the system sends data to a printer or displays information on a monitor, it performs an output operation.
The operating system uses device drivers to send data from the memory to the device for output.

4. Synchronous and Asynchronous I/O:


- Synchronous I/O operations block the executing process until the operation completes. The process waits
for the I/O operation to finish before continuing its execution. This approach ensures that the process has the
required data or output available before proceeding further.

- Asynchronous I/O operations, also known as non-blocking I/O, allow the executing process to continue its
execution while the I/O operation proceeds in the background. The process initiates the I/O operation and
receives a notification or callback when the operation is complete. Asynchronous I/O can enhance system
responsiveness by enabling concurrent execution of multiple processes or allowing the execution of other
tasks while waiting for I/O completion.
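
A minimal Python sketch of the difference, using a thread to stand in for background I/O; `slow_device_read` and `on_complete` are placeholder names for this example, not real system calls:

```python
import threading
import time

def slow_device_read():
    time.sleep(0.1)          # stand-in for a slow device transfer
    return b"sensor data"

# Synchronous: the caller blocks until the read completes.
data = slow_device_read()

# Asynchronous: the read runs in the background; a callback fires on
# completion while the caller is free to do other work.
result = {}

def on_complete(payload):
    result["data"] = payload

t = threading.Thread(target=lambda: on_complete(slow_device_read()))
t.start()
# ... the caller could perform other tasks here while the I/O proceeds ...
t.join()                     # wait for the completion notification
```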

5. I/O Buffers and Caching:


- I/O buffers are temporary storage areas used to hold data during I/O operations. Buffers help decouple the
speed of data transfer between devices and the system's memory. They allow for efficient data transfer by
minimizing the number of small, frequent I/O operations.

- Caching is a technique used to improve I/O performance by storing frequently accessed data in a faster
intermediate storage, such as a cache memory. Caching reduces the need for accessing slower storage devices,
such as hard drives, by keeping frequently used data readily available in the faster cache.
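
A tiny illustration of caching: this Python sketch keeps recently read blocks in a least-recently-used (LRU) cache so repeated reads avoid the slow device. The `BlockCache` class is a teaching sketch, not a real OS buffer cache:

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU cache for device blocks (illustrative only)."""

    def __init__(self, capacity, read_block):
        self.capacity = capacity
        self.read_block = read_block   # fallback to the slow device
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def get(self, block_no):
        if block_no in self.cache:
            self.cache.move_to_end(block_no)   # mark as recently used
            self.hits += 1
            return self.cache[block_no]
        self.misses += 1
        data = self.read_block(block_no)       # slow path: hit the device
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data
```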

6. I/O Scheduling:
- I/O scheduling involves prioritizing and ordering I/O requests from different processes or applications. The
operating system determines the order in which pending I/O requests are serviced to optimize device
utilization and system performance. Scheduling algorithms consider factors such as fairness, throughput,
response time, and minimizing disk head movements in the case of storage devices.
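
One well-known algorithm for minimizing disk head movements is SCAN (the "elevator" algorithm), which services pending requests in one direction of head travel before reversing. The text above does not name a specific algorithm, so SCAN is offered here only as an example, sketched in Python:

```python
def scan_schedule(pending, head, direction="up"):
    """SCAN (elevator) disk scheduling: sweep one way, then reverse.

    Returns the order in which pending cylinder requests are serviced,
    reducing back-and-forth head movement compared with FCFS.
    """
    lower = sorted(c for c in pending if c < head)
    upper = sorted(c for c in pending if c >= head)
    if direction == "up":
        return upper + lower[::-1]      # sweep up, then back down
    return lower[::-1] + upper          # sweep down, then back up

scan_schedule([98, 183, 37, 122, 14, 124, 65, 67], head=53)
# → [65, 67, 98, 122, 124, 183, 37, 14]
```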

7. Error Handling:
- I/O operations can encounter errors due to various reasons, such as device failures, communication issues,
or data corruption. The operating system and device drivers are responsible for detecting and handling these
errors. Error handling mechanisms may include retrying the operation, notifying the user or application about
the error, logging error information, and taking appropriate recovery actions.

Efficient and effective handling of I/O operations is crucial for overall system performance and user
experience. The operating system provides the necessary abstractions, device drivers, and I/O management
techniques to facilitate seamless data transfer between the computer system and its external devices.

VI. File Management


File management is a critical function of an operating system that involves organizing, manipulating, and
controlling files and directories. It ensures efficient storage, access, and retrieval of data. Here are the key
aspects related to file management:

A. Role of an Operating System in File Management:


The operating system plays a crucial role in file management by providing a unified interface and set of
services for handling files. Its responsibilities include:

1. File System Creation and Management:


The operating system is responsible for creating and managing the file system, which is a logical structure that
organizes and stores files on storage devices such as hard drives. It defines the rules and data structures
necessary for file storage, access, and metadata management.

2. File Naming and Directory Structure:


The operating system provides mechanisms for naming files and organizing them into directories or folders.
It ensures that file names are unique and supports hierarchical directory structures to facilitate efficient
organization and retrieval of files.

3. File Access Control:


The operating system enforces access control mechanisms to protect files from unauthorized access. It
manages file permissions, determining who can read, write, or execute files, and ensures that only authorized
users or processes can access or modify files based on the defined permissions.

4. File Operations:
The operating system provides interfaces and services for performing various file operations, including file
creation, deletion, reading, writing, copying, moving, and renaming. It handles these operations and maintains
the integrity and consistency of the file system throughout the process.

5. File Metadata Management:


The operating system maintains metadata associated with files, such as file size, creation date, modification
date, file type, and ownership information. This metadata is used for file identification, organization, and
access control.

B. File System Organization and Structure:
The file system organization and structure determine how files are stored, managed, and accessed. Common
file system organization structures include:

1. Hierarchical File System: This structure organizes files in a tree-like hierarchy with directories or folders
containing files and subdirectories. The top-level directory is the root directory, and subsequent directories
branch out from it.

2. Flat File System: In a flat file system, files are stored in a single directory without any subdirectories. Each
file has a unique name to differentiate it from others.

3. Indexed File System: An indexed file system uses an index to maintain a separate data structure that maps
file names to their physical locations on the storage device. This allows for faster file access and retrieval.

4. Distributed File System: In a distributed file system, files are stored across multiple networked devices or
servers. The file system provides a unified view of the distributed files and handles the distribution, replication,
and synchronization of files across the network.

C. File Operations:
File operations involve various actions performed on files within the file system. Some common file
operations include:

1. File Creation: The process of creating a new file within the file system. It involves assigning a unique name
to the file and allocating storage space for its data.

2. File Deletion: The action of permanently removing a file from the file system. This typically involves
freeing up the allocated storage space and updating the file system metadata.

3. File Reading: The process of retrieving data from a file. It involves accessing the file's content and
transferring it to a program or user for processing or display.

4. File Writing: The process of storing data in a file. It involves appending or overwriting the content of a file
with new data.

5. File Copying: Creating a duplicate of a file, either within the same directory or in a different location. This
operation preserves the original file while creating an identical copy.

6. File Moving: Relocating a file from one directory to another within the file system. This operation updates
the file's metadata to reflect its new location.

7. File Renaming: Changing the name of a file while keeping its content and location intact. This operation
updates the file's metadata to reflect the new name.

These file operations are essential for managing files effectively, organizing data, and facilitating data
processing and retrieval within an operating system.
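
These operations map directly onto standard-library calls in most languages. A Python sketch using `os` and `shutil` in a temporary scratch directory:

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()                      # scratch directory

path = os.path.join(workdir, "notes.txt")
with open(path, "w") as f:                        # creation + writing
    f.write("hello")

with open(path) as f:                             # reading
    content = f.read()

copy_path = os.path.join(workdir, "notes_copy.txt")
shutil.copy(path, copy_path)                      # copying

new_path = os.path.join(workdir, "renamed.txt")
os.rename(path, new_path)                         # renaming (or moving)

os.remove(copy_path)                              # deletion
```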

D. File Attributes and Metadata:

File attributes and metadata provide additional information about files stored in a file system. They describe
various characteristics of files, such as their size, type, permissions, timestamps, and ownership.

Here are the key aspects of file attributes and metadata:


1. File Name: The name of the file that uniquely identifies it within the file system. The file name is used to
locate and access the file.

2. File Size: The size of the file in bytes, which indicates the amount of storage space occupied by the file on
the storage device.

3. File Type: The type or format of the file, such as text, image, audio, video, executable, or document. The
file type helps determine how the file can be processed or interpreted by applications.

4. File Location: The physical or logical location of the file within the file system. It includes information
about the directory path or the storage device and sector addresses.

5. File Creation Timestamp: The timestamp indicating when the file was created or initially saved to the file
system.

6. File Modification Timestamp: The timestamp indicating the last time the file's content or attributes were
modified.

7. File Access Timestamp: The timestamp indicating the last time the file was accessed or read.

8. File Ownership: The user or group that owns the file. Ownership information is used to determine access
control and permission settings.

9. File Permissions: The access rights or permissions that control who can perform specific operations on the
file. Common permissions include read, write, execute, and delete.

10. File Metadata: Additional descriptive information about the file, such as author, title, description, tags,
keywords, or version number. Metadata helps provide context and facilitate file organization and searchability.

File attributes and metadata are stored and managed by the file system, enabling efficient file identification,
organization, and access control. They help users and applications understand and interact with files
effectively.
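
Most of these attributes can be inspected programmatically. A Python sketch using `os.stat` on a temporary file:

```python
import os
import stat
import tempfile
import time

fd, path = tempfile.mkstemp(suffix=".txt")
os.write(fd, b"metadata demo")
os.close(fd)

info = os.stat(path)
size = info.st_size                          # file size in bytes
modified = time.ctime(info.st_mtime)         # last modification timestamp
accessed = time.ctime(info.st_atime)         # last access timestamp
mode = stat.filemode(info.st_mode)           # e.g. '-rw-------'
os.remove(path)
```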

E. File Access Control and Permissions:


File access control ensures that only authorized users or processes can access, modify, or execute files within
a file system. It involves setting permissions and enforcing access restrictions based on user identities, groups,
or roles. Here are the key aspects of file access control and permissions:

1. User-Based Access Control:


The file system associates each file with user accounts and assigns specific permissions to them. Users are
authenticated through their login credentials, and access to files is granted or denied based on their assigned
permissions.

2. Group-Based Access Control:
Users can be organized into groups, and permissions can be assigned to groups collectively. This simplifies
the management of permissions for multiple users who share similar access requirements.

3. Access Control Lists (ACLs):


ACLs are lists of permissions associated with each file. They define access rights for individual users or
groups, specifying whether they can read, write, execute, or delete the file.

4. Permission Levels:
Permissions are typically categorized into read, write, execute, and delete permissions. Read permission
allows users to view the file's content, write permission enables modification or creation of the file, execute
permission allows executing the file as a program, and delete permission permits the removal of the file.

5. Owner and Group Permissions:


Each file has permissions assigned to the owner (user) and the group associated with the file. These
permissions control the access rights of the owner and the group members.

6. Other (World) Permissions:


Permissions can also be set for other users who are not the owner or part of the group. These permissions
control access for all other users who do not fall into the owner or group categories.

7. Permission Representation:
Permissions are often represented using symbolic notation (e.g., rwx) or numeric notation (e.g., 755).
Symbolic notation uses letters (r for read, w for write, x for execute) to represent permissions for the owner,
group, and others. Numeric notation uses a three-digit octal number, one digit each for the owner, group, and
others, where each digit is the sum of read (4), write (2), and execute (1).

File access control and permissions ensure data security, privacy, and integrity within a file system. They
prevent unauthorized access, accidental modifications, and unauthorized execution of files, protecting
sensitive information and maintaining system stability.
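
As a small worked example of the two notations, a Python sketch (the helper name is invented for this illustration) that converts a symbolic permission string to its octal form:

```python
def symbolic_to_octal(sym):
    """Convert a 9-character symbolic string (e.g. 'rwxr-xr-x') to octal.

    Each triplet (owner, group, others) becomes one digit:
    read = 4, write = 2, execute = 1, summed per triplet.
    """
    assert len(sym) == 9
    digits = []
    for i in range(0, 9, 3):
        triplet = sym[i:i + 3]
        value = (4 if triplet[0] == "r" else 0) \
              + (2 if triplet[1] == "w" else 0) \
              + (1 if triplet[2] == "x" else 0)
        digits.append(str(value))
    return "".join(digits)

symbolic_to_octal("rwxr-xr-x")   # → '755'
symbolic_to_octal("rw-r--r--")   # → '644'
```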

VII. Maintenance of an Operating System


A. Importance of regular maintenance for an operating system:
Regular maintenance is crucial for an operating system to ensure its optimal performance, reliability, security,
and longevity. Here are some reasons why regular maintenance is important for an operating system:

1. Performance Optimization:
Over time, an operating system can accumulate temporary files, unnecessary software, and other clutter that
can impact system performance. Regular maintenance, such as disk cleanup, defragmentation, and optimizing
startup processes, helps remove these unnecessary elements and improve overall system performance. It
ensures that the operating system and applications run smoothly and efficiently.

2. Security Enhancements:
Operating systems are a primary target for malware and security threats. Regular maintenance, including
installing security updates, patches, and antivirus software, helps protect the system against emerging threats.
By keeping the operating system up to date and applying security fixes promptly, the risk of security breaches,
data loss, and unauthorized access is minimized.

3. Stability and Reliability:


Over time, an operating system can develop issues such as software conflicts, driver problems, or corrupted
system files. Regular maintenance, such as running system diagnostics, checking for hardware issues, and
repairing system files, helps maintain system stability and reliability. It reduces the chances of system crashes,
freezes, and unexpected errors, ensuring a smooth and uninterrupted user experience.

4. Compatibility with New Software and Hardware:


New software applications and hardware devices are continually released, and they often require specific
system requirements or updated drivers to function properly. Regular maintenance, including updating device
drivers, firmware, and the operating system itself, ensures compatibility with new software and hardware. This
allows users to take advantage of the latest features and improvements offered by new technologies.

5. Data Integrity and Backup:


Data loss can occur due to hardware failures, software errors, or accidental deletions. Regular maintenance
includes implementing data backup strategies to protect important files and ensure data integrity. Regularly
backing up files and verifying the backup integrity helps safeguard against data loss and provides a way to
recover files in case of emergencies or system failures.

6. Resource Management:
Operating systems manage system resources such as memory, CPU usage, and disk space. Regular
maintenance includes monitoring resource usage, optimizing resource allocation, and managing system logs.
This helps prevent resource bottlenecks, optimize system performance, and ensure efficient utilization of
system resources.

7. User Experience and Satisfaction:


Regular maintenance contributes to an overall positive user experience. It reduces system downtime, enhances
system responsiveness, and minimizes the occurrence of errors or unexpected behaviors. A well-maintained
operating system provides a stable and reliable platform for users to perform their tasks efficiently, leading to
increased user satisfaction and productivity.

B. Types of maintenance tasks (hardware, software, security, etc.):


System maintenance tasks can be categorized into different types, including:

1. Hardware Maintenance:
This involves monitoring and maintaining the physical components of the computer system, such as checking
for hardware failures, cleaning dust from components, ensuring proper ventilation, and replacing faulty
hardware when necessary.

2. Software Maintenance:
This includes updating the operating system and software applications with the latest patches, bug fixes, and
security updates. It also involves uninstalling unnecessary or outdated software, managing software licenses,
and addressing software compatibility issues.

3. Security Maintenance:
This focuses on implementing security measures to protect the operating system and data from unauthorized
access, malware, and other security threats. Tasks may include installing and updating antivirus software,
configuring firewalls, using strong passwords, implementing user access controls, and conducting security
audits.

4. Data Backup and Recovery:


This involves regularly backing up important files and data to prevent data loss in the event of hardware
failures, software errors, or accidental deletions. It includes selecting appropriate backup methods, scheduling
regular backups, verifying backup integrity, and testing the recovery process.
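
The backup-and-verify cycle described above can be sketched in Python: copy files into an archive, then compare checksums after restoring to confirm backup integrity. This is an illustrative sketch using only the standard library; the function names are invented for the example.

```python
import hashlib
import zipfile
from pathlib import Path

def backup_files(file_paths, archive_path):
    """Copy the given files into a zip archive (a simple full backup)."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in file_paths:
            zf.write(path, arcname=Path(path).name)

def sha256_of(path):
    """Checksum used to verify that a restored file matches the original."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_file(archive_path, name, dest_dir):
    """Extract one file from the archive, returning the restored path."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extract(name, dest_dir)
    return Path(dest_dir) / name
```

Verifying `sha256_of(original) == sha256_of(restored)` after a test restore is exactly the "testing the recovery process" step mentioned above.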

5. Performance Optimization:
This type of maintenance aims to enhance system performance by optimizing resource usage, managing
startup processes, cleaning up temporary files, defragmenting disks, and monitoring system performance
metrics.
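
Cleaning up temporary files, one of the optimization tasks above, can be sketched as follows. The example only reports stale files (a dry run) rather than deleting them; the age threshold is an arbitrary choice for illustration.

```python
import time
from pathlib import Path

def find_stale_files(directory, max_age_days=7):
    """List files not modified for `max_age_days`, as cleanup candidates.

    Paths are only reported (a dry run); deleting them is left to the
    caller so nothing is removed by accident.
    """
    cutoff = time.time() - max_age_days * 86400
    return [p for p in Path(directory).iterdir()
            if p.is_file() and p.stat().st_mtime < cutoff]
```

A real cleanup tool would typically log what it removes and skip files that are still in use.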

6. System Monitoring and Diagnostics:


This involves continuously monitoring system health, performance, and resource usage. It includes analyzing
system logs, monitoring hardware temperatures, checking for disk errors, and running diagnostic tools to
identify and resolve issues proactively.

C. Best practices for system maintenance:


To ensure effective system maintenance, consider the following best practices:

1. Regularly Update the Operating System and Software:


Install the latest updates, patches, and security fixes for your operating system and software applications to
benefit from bug fixes, performance improvements, and security enhancements.

2. Use Reliable Antivirus and Security Software:


Install reputable antivirus software and keep it up to date to protect your system against malware, viruses, and
other security threats. Configure firewalls and enable built-in security features of your operating system for
additional protection.

3. Implement Data Backup Strategies:


Regularly back up your important files and data to external storage or cloud services. Ensure that backups are
performed regularly, and periodically test the backup restoration process to verify data integrity.

4. Optimize System Performance:


Take steps to optimize system performance, such as cleaning up temporary files, uninstalling unnecessary
software, managing startup programs, and regularly defragmenting mechanical hard drives if applicable
(solid-state drives should not be defragmented).

5. Monitor System Health:


Keep an eye on system performance metrics, hardware temperatures, and system logs. This helps identify
potential issues early on and allows for timely troubleshooting and resolution.

6. Practice Safe Computing Habits:
Use strong and unique passwords, be cautious when downloading files or clicking on links from unknown
sources, and avoid visiting suspicious websites. Educate yourself and your users about common security threats
and best practices for staying safe online.
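
Generating the strong passwords recommended above can be done with Python's `secrets` module, which draws from a cryptographically secure random source. The character set and length below are illustrative choices.

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    # Retry until the password contains lowercase, uppercase, and a digit.
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)):
            return pwd
```

Note that `secrets` should be preferred over the `random` module for passwords, since `random` is not designed for security-sensitive use.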

7. Maintain Hardware:
Keep your computer hardware clean and free from dust. Ensure proper ventilation to prevent overheating.
Regularly check hardware components, such as hard drives and fans, for signs of wear or failure.

8. Remove Unnecessary Software:


Regularly review and uninstall software that is no longer needed. This helps declutter the system and improves
overall performance.

9. Educate Users:
Provide training and guidelines to users on system maintenance best practices, such as avoiding risky
behaviors, keeping software up to date, and reporting any unusual system behavior promptly.

10. Document Maintenance Procedures:


Maintain documentation of maintenance tasks, schedules, and procedures. This helps ensure consistency and
provides a reference for troubleshooting and future maintenance.

By following these best practices, you can ensure that your operating system remains secure, reliable, and
performs optimally over time. Regular maintenance will help extend the lifespan of your system and contribute
to a positive user experience.

D. Backup and Recovery Strategies:


Backup and recovery strategies are crucial for safeguarding data and ensuring business continuity in the face
of system failures, data loss, or disasters. Here are the definitions and best practices for implementing backup
and recovery strategies:

1. Backup:
Backup refers to the process of creating copies of data and storing them in a separate location or medium. This
is done to protect against data loss caused by hardware failures, accidental deletion, cyberattacks, or natural
disasters. Backups serve as a means of recovering and restoring data to its previous state.

2. Recovery:
Recovery refers to the process of restoring systems, applications, or data to a functional state following a
failure or disaster. It involves accessing and utilizing backups to rebuild or repair affected components and
recover lost or corrupted data.

E. Upgrades, Patches, and Updates:


Upgrades, patches, and updates are essential for keeping operating systems and software up to date, secure,
and equipped with the latest features and bug fixes.

1. Upgrades:
Upgrades involve transitioning to a newer version of an operating system or software. Upgrades often provide
significant enhancements, improved functionality, and new features compared to previous versions.

2. Patches:
Patches are small updates that fix specific issues, vulnerabilities, or bugs in software or operating systems.
Patches are typically released by software vendors to address security vulnerabilities, improve stability, or
provide minor enhancements.

3. Updates:
Updates encompass a broader range of changes and modifications to software or operating systems. Updates
can include bug fixes, performance improvements, compatibility enhancements, and new features.
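
The distinction between upgrades, updates, and patches maps naturally onto the common "major.minor.patch" versioning convention. The sketch below assumes that convention (it is not universal) and uses invented function names for illustration.

```python
def parse_version(version):
    """Split a 'major.minor.patch' string into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def classify_change(old, new):
    """Label a version change: a major bump is an upgrade, a minor
    bump an update, and a patch bump a patch."""
    o, n = parse_version(old), parse_version(new)
    if n[0] > o[0]:
        return "upgrade"
    if n[1] > o[1]:
        return "update"
    if n[2] > o[2]:
        return "patch"
    return "no change"
```

For example, moving from 1.2.3 to 2.0.0 would count as an upgrade, while 1.2.3 to 1.2.4 is a patch.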
