Operating Module - 1
The operating system manages the computer's resources and oversees the programs that use them. The main resources it manages include the CPU, memory, file storage, I/O devices, and network connections; three of these are described in more detail below.
1. Memory (RAM):
- Role: Memory is where the system stores data and program instructions temporarily while
they are being used. It allows for quick access to this data by the CPU.
- Management: The OS manages memory allocation by tracking which parts of memory are in use and which are available. Techniques like paging (dividing memory into fixed-size blocks) and segmentation (dividing memory into variable-size segments) are used to allocate and manage memory efficiently; a small paging sketch follows this list.
2. File Storage:
- Role: File storage holds data and programs on long-term storage devices. This includes
documents, applications, and system files that need to be preserved even when the computer
is turned off.
- Management: The OS organizes files into a file system, managing directories and file
permissions. It handles file operations such as opening, closing, reading, writing, and
deleting.
3. Network Connections:
- Role: Network connections enable communication between computers and access to remote
resources over networks.
- Management: The OS handles network protocols and communication tasks, managing network connections, data transmission, and access to network resources. It ensures secure and efficient network communication.
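To make the paging technique mentioned above concrete, here is a minimal sketch in Python of how a virtual address is split into a page number and an offset and then mapped to a physical frame. The 4 KB page size and the page-table contents are invented for illustration; real systems do this translation in hardware with help from the OS.

```python
# Minimal paging sketch: translate a virtual address to a physical address.
# The page size and page-table contents below are invented for illustration.

PAGE_SIZE = 4096  # 4 KB pages (a common size, assumed here)

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(virtual_address: int) -> int:
    page_number = virtual_address // PAGE_SIZE   # which page the address falls in
    offset = virtual_address % PAGE_SIZE         # position inside that page
    if page_number not in page_table:
        raise MemoryError("page fault: page not in memory")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset     # same offset inside the frame

print(translate(4100))  # page 1, offset 4 -> frame 2 -> physical address 8196
```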
Roles of the Operating System
1. Resource Allocator:
• What It Means: The OS manages and distributes the computer's resources, such as
the CPU, memory, storage, and I/O devices.
• How It Works:
◦ CPU Allocation: Decides which process gets to use the CPU and for how long. It
schedules tasks so that multiple programs can run without interfering with
each other.
◦ Memory Allocation: Manages memory usage by allocating space for running
programs and data. It keeps track of which parts of memory are in use and
which are free.
◦ Storage Allocation: Organizes how files are stored and accessed on storage
devices like hard drives or SSDs. It ensures that data is saved and retrieved
efficiently.
2. Control Program:
• What It Means: The OS oversees all the programs and processes running on the
computer.
• How It Works:
◦ Process Management: Keeps track of all active processes, handles their
execution, and ensures they run without conflicts. This includes starting,
pausing, and stopping programs as needed.
◦ Error Handling: Monitors and manages errors that occur during program
execution to prevent crashes and ensure stable operation.
3. Resource Sharing:
• What It Means: The OS allows multiple programs and users to share the computer's
resources without interfering with each other.
• How It Works:
◦ Multi-Tasking: Enables running several programs at once by sharing CPU
time among them. It makes sure that no single program monopolizes the
resources.
◦ User Access Control: Manages permissions and access rights to ensure that
different users or programs can share resources like files and printers safely
and fairly.
4. User Interface:
• What It Means: The OS provides various ways for users to interact with the computer.
• How It Works:
◦ Graphical User Interface (GUI): Offers visual elements like windows, icons,
and menus that make it easy to navigate and use the computer.
◦ Command-Line Interface (CLI): Provides a text-based way to interact with the
computer, often used by advanced users for precise control and automation.
5. Convenience and Performance:
• What It Means: The OS ensures that the computer is user-friendly and performs well
according to the user’s needs.
• How It Works:
◦ User Convenience: Simplifies tasks and operations, making it easier for users
to perform everyday functions without needing deep technical knowledge.
◦ System Optimization: Adjusts settings and manages resources to ensure
that the computer runs smoothly and efficiently, delivering the best possible
performance for the user.
Components of Operating System (Primary)
1. Kernel
• What It Is: The kernel is the core, central part of the operating system. Think of it as
the main controller or "brain" of the system.
• What It Does:
◦ Resource Management: It controls how the computer’s resources (like the CPU,
memory, and input/output devices) are used.
◦ Low-Level Services: It provides essential services that allow other parts of the
operating system and software applications to work. For instance, it handles
system calls from programs that request services or resources.
• Why It Matters: Without the kernel, the operating system couldn’t function. It
manages all interactions between hardware and software, making sure that
everything works together smoothly.
2. Process Scheduler
• What It Is: The process scheduler is like a time manager for the CPU, ensuring that
different programs or tasks get a fair chance to use the CPU.
• What It Does:
◦ CPU Time Allocation: It decides which process (or program) gets to use the CPU
and for how long. It’s like managing a queue where each task gets its turn.
◦ Fairness: It ensures that no single process hogs the CPU, which helps in
multitasking (running multiple programs at once) and maintains overall
system responsiveness.
• Why It Matters: Effective scheduling ensures that all running processes get a fair
share of CPU time, which is crucial for smooth performance and preventing any one
application from slowing down the system.
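To illustrate the fairness idea described above, the sketch below simulates a simple round-robin scheduler: each process runs for a fixed time slice (quantum) and then goes to the back of the queue until it finishes. The process names, burst times, and quantum are invented; real schedulers use more sophisticated policies.

```python
from collections import deque

# Invented workload: (process name, remaining CPU time needed in time units)
processes = deque([("A", 5), ("B", 2), ("C", 4)])
QUANTUM = 2  # time slice each process gets per turn

clock = 0
while processes:
    name, remaining = processes.popleft()
    run = min(QUANTUM, remaining)        # run for one quantum or until done
    clock += run
    remaining -= run
    print(f"t={clock:2d}: ran {name} for {run} unit(s)")
    if remaining > 0:
        processes.append((name, remaining))  # go to the back of the queue
    else:
        print(f"t={clock:2d}: {name} finished")
```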
3. Memory Manager
• What It Is: The memory manager is responsible for handling the computer’s RAM
(random access memory).
• What It Does:
◦ Memory Allocation: It assigns portions of memory to various programs and
processes. When a program needs to use memory, the memory manager
allocates it the space it needs.
◦ Protection: It ensures that one program doesn’t interfere with the memory of
another program, which helps in preventing crashes and data corruption.
• Why It Matters: Proper memory management ensures that each program gets the
memory it needs without causing conflicts, which is essential for system stability
and performance.
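As a rough model of the allocation task described above, the sketch below implements a first-fit allocator over a list of free blocks. This is only a conceptual illustration with invented block addresses and sizes, not how any particular operating system manages RAM.

```python
# Conceptual first-fit allocator over a list of free (start, size) blocks.
free_blocks = [(0, 100), (150, 50), (300, 200)]  # invented free memory regions

def allocate(size):
    """Return the start address of a block of `size` units, or None."""
    for i, (start, block_size) in enumerate(free_blocks):
        if block_size >= size:
            leftover = block_size - size
            if leftover > 0:
                free_blocks[i] = (start + size, leftover)  # shrink the free block
            else:
                free_blocks.pop(i)                         # block fully used
            return start
    return None  # no block large enough

print(allocate(60))   # -> 0   (taken from the first block)
print(allocate(80))   # -> 300 (first block is too small now, third block fits)
print(free_blocks)    # remaining free memory
```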
4. File System
• What It Is: The file system is like an organized filing cabinet that keeps track of all
the files and folders on your computer’s storage devices.
• What It Does:
◦ Data Organization: It arranges files and folders on storage devices like hard
drives and SSDs, making it easier to save, retrieve, and manage data.
◦ Access Control: It manages file permissions and security, ensuring that users
and programs have the appropriate access to files and directories.
• Why It Matters: Without a file system, you wouldn’t be able to organize or access
your data effectively. It’s crucial for storing and retrieving files in a structured
way.
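Programs reach these file-system services through system calls. In Python, the standard library wraps those calls, as in the short example below (the file name is arbitrary).

```python
import os

path = "example.txt"  # arbitrary file name used for this demo

# Create and write: the OS allocates space and records the file's location.
with open(path, "w") as f:
    f.write("hello, file system\n")

# Read: the OS looks the file up in the file system and returns its contents.
with open(path) as f:
    print(f.read(), end="")

# Metadata: size in bytes, as tracked by the file system.
print("size:", os.path.getsize(path), "bytes")

# Delete: the OS removes the directory entry and frees the space.
os.remove(path)
```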
5. Device Drivers
• What It Is: Device drivers are specialized software components that act as
intermediaries between the operating system and hardware devices.
• What It Does:
◦ Translation: Converts general commands from the operating system into the specific instructions a particular device understands.
◦ Device Communication: Handles the transfer of data and control signals between the OS and hardware such as printers, keyboards, and network cards.
• Why It Matters: Device drivers enable the operating system to communicate with and
control hardware devices, making sure they function correctly and efficiently.
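Conceptually, each driver exposes a small, uniform set of operations (such as read and write) while hiding the device-specific details. The Python sketch below is only an analogy for that idea; the classes and the device table are invented.

```python
# Conceptual sketch: the OS talks to very different devices through one interface.

class KeyboardDriver:
    def read(self):
        return "keystrokes from the keyboard"   # stands in for real hardware I/O
    def write(self, data):
        raise IOError("keyboard is an input-only device")

class PrinterDriver:
    def read(self):
        raise IOError("printer is an output-only device")
    def write(self, data):
        print(f"[printer] printing: {data}")    # stands in for device commands

# Hypothetical device table the kernel might consult.
devices = {"keyboard": KeyboardDriver(), "printer": PrinterDriver()}

devices["printer"].write("quarterly report")    # OS code never sees printer details
print(devices["keyboard"].read())
```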
6. User Interface
• What It Is: The user interface (UI) is the part of the operating system that allows
users to interact with the computer.
• What It Does:
◦ Graphical User Interface (GUI): Presents windows, icons, and menus that you can point at and click.
◦ Command-Line Interface (CLI): Accepts typed text commands for users who want precise control or automation.
• Why It Matters: The user interface is crucial for user experience. It provides a way for
people to control and interact with the computer in a user-friendly manner, making
complex operations simpler and more accessible.
Booting Process of Operating System
1. Power On
• What Happens: When you press the power button on your computer, it activates the
power supply. This power supply sends electricity to the motherboard and other
components.
• Purpose: The initial step is to wake up the computer and start the booting process.
2. BIOS/UEFI
• What It Is: Firmware stored on the motherboard (BIOS on older systems, UEFI on newer ones) that runs as soon as the computer is powered on.
• What It Does:
◦ Power-On Self-Test (POST): Checks that essential hardware such as the CPU, memory, and storage is present and working.
◦ Handing Off: Once the checks pass, it locates the boot loader and transfers control to it so the operating system can be loaded.
3. Boot Loader
• What It Is: The boot loader is a small program stored in non-volatile memory, like
ROM (Read-Only Memory) or EPROM (Erasable Programmable Read-Only
Memory), on the motherboard.
• What It Does:
◦ Loading the Boot Loader: The BIOS/UEFI locates and runs the boot loader
program from its memory.
◦ Finding the Operating System: The boot loader's job is to find the operating
system on your computer’s storage device (like a hard drive or solid-state
drive). It then loads the operating system into the computer’s RAM (memory).
4. Kernel
• What It Is: The kernel is the core part of the operating system, responsible for
managing hardware and system resources.
• What It Does:
◦ Loading the Kernel: The boot loader loads the kernel into the computer’s RAM.
◦ Initialization: Once in memory, the kernel initializes itself and sets up
important system components, such as managing memory and starting
essential system processes.
◦ Starting Services: It begins to load drivers and system services that are
needed for the operating system to interact with hardware (like printers,
network cards) and manage system tasks.
5. User Space
• What It Is: User space is the part of the operating system where you interact with the
computer through applications and the user interface.
• What It Does:
◦ Starting the User Interface: The kernel sets up the user interface (like the
desktop environment in Windows, macOS, or Linux). This interface includes
elements like windows, icons, and menus.
◦ User Interaction: This allows you to log in, start programs, and perform
tasks. The user interface is what you use to interact with the operating system
and run applications.
In short, the boot sequence is:
1. Power On: You turn on the computer, which powers up the hardware.
2. BIOS/UEFI: Performs a check (POST) to ensure all hardware is working and
prepares to load the operating system.
3. Boot Loader: Loads from ROM and finds the operating system on the hard drive or
SSD, then loads it into memory.
4. Kernel: Initializes the operating system, sets up necessary services, and manages
hardware.
5. User Space: Launches the graphical user interface (GUI) or command line, allowing
you to interact with your computer and use applications.
This step-by-step process gets your computer ready for you to use it by ensuring everything
is properly set up and functioning.
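The hand-off order can be pictured as a chain in which each stage finishes its job and then starts the next. The sketch below is purely illustrative, with ordinary Python functions standing in for the firmware, boot loader, and kernel; it is not real boot code.

```python
# Purely illustrative: each stage does its job, then starts the next stage.

def firmware_post():
    print("BIOS/UEFI: power-on self-test (POST), hardware OK")
    boot_loader()

def boot_loader():
    print("Boot loader: locating the OS on disk and loading it into RAM")
    kernel_init()

def kernel_init():
    print("Kernel: initializing memory management, drivers, and services")
    user_space()

def user_space():
    print("User space: starting the GUI/CLI so the user can log in")

firmware_post()  # pressing the power button kicks off the chain
```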
Services of Operating System (Primary)
1. User Interface
What It Is:
• The user interface is the part of the operating system you use to give the computer commands and see its responses.
Types:
• Command-Line Interface (CLI): You type commands to tell the computer what to do.
• Graphical User Interface (GUI): You use icons, buttons, and menus that you click on
with a mouse.
• Touch-Screen Interface: You interact with the computer by touching the screen.
Why It Matters:
• The interface is how you control the computer; a good interface makes the system usable for everyone, from beginners to advanced users.
2. Program Execution
What It Is:
• The OS loads programs into memory and runs them.
Tasks:
• Load and Run: Brings a program from storage into memory and starts its execution.
• End Execution: Stops the program when it finishes normally or when an error forces it to stop.
Why It Matters:
• Without this service you could not start or use any applications.
3. I/O Operations
What It Is:
• Input and output operations are how programs interact with devices like keyboards,
mice, and printers.
Tasks:
• Reading/Writing Files: Programs need to read from or write to files and devices.
• Managing Devices: The OS helps the program use these devices effectively.
Why It Matters:
• Programs cannot control hardware directly; the OS performs these operations on their behalf, which keeps device access safe and efficient.
4. File-System Manipulation
What It Is:
• The OS manages how files and folders are stored and organized on the computer.
Tasks:
• Create/Delete Files/Folders: Allows you to make and remove files and directories.
• Read/Write Files: Lets you access and modify the contents of files.
• Search/List Information: Helps you find and view files.
• Manage Permissions: Controls who can access or modify files.
Why It Matters:
• Keeps data organized and secure, and allows you to manage your files.
5. Communications
What It Is:
• On the Same Computer: Programs can share data using shared memory.
• Over a Network: Programs on different computers communicate via messages or
packets.
Why It Matters:
• Lets programs cooperate and exchange data, whether they are running on the same machine or on different computers.
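As a small illustration of communication on the same computer, the sketch below uses Python's multiprocessing pipe so one process can send a message to another; communication over a network works along similar lines but uses sockets and network protocols.

```python
from multiprocessing import Process, Pipe

def child(conn):
    conn.send("hello from the child process")  # message passed via the OS
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    print("parent received:", parent_end.recv())
    p.join()
```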
6. Error Detection
What It Is:
• The OS constantly watches for errors in the hardware, in I/O devices, and in running programs.
Tasks:
• Detect Problems: Notices issues such as memory faults, device failures, or programs that misbehave.
• Respond: Takes appropriate action, for example stopping the faulty program or reporting the error, so the rest of the system keeps running.
Why It Matters:
• Keeps the system stable and prevents one failure from crashing everything.
7. Debugging Facilities
What It Is:
• Tools that help programmers find and fix problems in their code.
Why It Matters:
• Makes it easier for developers to improve and debug their software, leading to better
programs.
8. Resource Allocation
What It Is:
• Manages the distribution of the computer’s resources like CPU time and memory.
Tasks:
• Allocate Resources: Ensures each program or user gets a fair share of resources.
• Track Usage: Keeps a log of how resources are used.
Why It Matters:
• Ensures the computer's resources are used efficiently and fairly when many programs or users are active at once.
9. Protection and Security
What It Is:
• Mechanisms that control access to the system and defend it against threats.
Tasks:
• Control Access: Makes sure only authorized users or programs can access certain
data or resources.
• User Authentication: Verifies the identity of users to prevent unauthorized access.
• Defend Against Attacks: Protects the system from external threats and unauthorized
attempts to access data.
Why It Matters:
• Protects your data and keeps the system secure from threats.
Functioning of Operating System (Primary)
1. Resource Management
Purpose: The operating system is the steward of a computer’s resources, ensuring that
various hardware components such as the CPU, memory, and I/O devices are allocated
efficiently and fairly among competing processes.
Key Functions:
• Fair Allocation: Ensures that all processes receive a fair share of resources.
• Deadlock Prevention: Implements algorithms to avoid situations where processes are stuck waiting indefinitely for resources held by each other.
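One classic deadlock-prevention technique (offered here only as an illustration, not as the method any particular OS uses) is to require that every process acquire resources in the same fixed order, so a circular wait can never form. The two resources below are invented.

```python
import threading

# Two invented resources; the rule is: always lock resource_a before resource_b.
resource_a = threading.Lock()
resource_b = threading.Lock()

def worker(name):
    with resource_a:          # every thread acquires in the same order,
        with resource_b:      # so no circular wait (and no deadlock) can form
            print(f"{name} is using both resources")

threads = [threading.Thread(target=worker, args=(f"process-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```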
2. Process Management
Purpose: The OS is responsible for handling the creation, execution, and termination of
processes, which are instances of running programs. It ensures that processes are managed
efficiently and that they can execute concurrently without interference.
Key Functions:
• Process Creation and Termination: The OS handles the instantiation and cleanup of
processes. This includes loading a process into memory and releasing resources when
it terminates.
• Process Scheduling: Determines the order in which processes are executed. It uses
scheduling algorithms to optimize CPU usage and system performance.
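Programs ask the OS to create and clean up processes through system calls. The sketch below shows this from user space with Python's standard subprocess module; the child simply runs a one-line Python command so the example stays self-contained.

```python
import subprocess, sys

# Ask the OS to create a new process running another Python interpreter.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from a child process')"],
    capture_output=True, text=True,
)

print("child output:", result.stdout.strip())
print("child exit code:", result.returncode)  # 0 means it terminated normally
```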
3. Memory Management
Purpose: The OS manages the computer’s memory, allocating space for processes and
ensuring that memory is used efficiently.
Key Functions:
• Memory Allocation: Assigns memory to processes using techniques such as:
◦ Paging: Divides memory into fixed-size pages and maps these pages to physical memory.
◦ Segmentation: Divides memory into variable-sized segments based on logical divisions of the program.
• Memory Protection: Ensures that processes do not interfere with each other’s memory,
preventing security breaches and system crashes.
4. Security
Purpose: The OS provides a secure environment to protect data and system integrity from
unauthorized access and malicious activities.
Key Functions:
• Access Control: Implements mechanisms to control which users and processes have access to system resources.
• Encryption: Protects data by encoding it so that only authorized users can decrypt and access it.
• Audit and Monitoring: Tracks system activities and access patterns to detect and
respond to security incidents.
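As one concrete example of the authentication idea above, systems generally store a salted hash of each password instead of the password itself. The sketch below uses the Python standard library's PBKDF2 function; the password, salt, and iteration count are illustrative choices.

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a hash from the password; only this hash is stored, never the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)                       # random salt, stored alongside the hash
stored_hash = hash_password("correct horse", salt)

def verify(attempt: str) -> bool:
    return hmac.compare_digest(hash_password(attempt, salt), stored_hash)

print(verify("correct horse"))   # True  -> user authenticated
print(verify("wrong password"))  # False -> access denied
```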
5. File Management
Purpose: The OS organizes and manages files on storage devices, providing a structured
and efficient way to store, retrieve, and manage data.
Key Functions:
• File System Organization: Manages the hierarchy of directories and files. Common
file systems include NTFS (Windows), ext4 (Linux), and APFS (macOS).
6. Device Management
Purpose: The OS manages communication between the computer system and peripheral
devices, ensuring smooth interaction and functionality.
Key Functions:
• I/O Operations: Manages input and output operations, including buffering (storing
data temporarily), spooling (queuing jobs for sequential processing), and device
control (sending commands to devices).
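Spooling can be pictured as a queue of jobs that the device works through one at a time while applications carry on with other work. The Python sketch below models that idea with an in-memory queue; the job names are invented and a real spooler would talk to an actual printer.

```python
from queue import Queue

print_queue = Queue()               # the spool: jobs wait here in order

# Applications submit jobs and continue immediately, without waiting for the printer.
for job in ["report.pdf", "photo.png", "invoice.txt"]:
    print_queue.put(job)
    print(f"spooled {job}")

# The OS (or a printer daemon) drains the queue sequentially.
while not print_queue.empty():
    job = print_queue.get()
    print(f"printing {job} ...")    # stands in for sending data to the device
```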
7. Networking
Purpose: The OS manages network communication, allowing the computer to exchange data with other systems and access remote resources.
Key Functions:
• Protocol Support: Implements network protocols (such as TCP/IP) so that applications can communicate over networks.
• Connection Management: Establishes, maintains, and closes network connections and moves data between applications and the network hardware.
8. User Interface
Purpose: The OS provides an interface that allows users to interact with the computer
system, manage files, and run applications.
Key Functions:
• Graphical User Interface (GUI): Provides visual elements like windows, icons, and
menus to facilitate user interaction. Examples include Windows Explorer, macOS
Finder, and GNOME or KDE on Linux.
• Command-Line Interface (CLI): Allows users to interact with the system via text-
based commands. Examples include the Windows Command Prompt, PowerShell,
and Unix/Linux shells like Bash.
• Accessibility Features: Includes tools and settings to assist users with disabilities,
such as screen readers, magnifiers, and speech recognition.
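At its core, a command-line interface is a loop that reads a command, runs it, and prints the result. The toy shell below illustrates that loop with two invented built-in commands; a real shell such as Bash also launches external programs, handles pipes, and much more.

```python
# Toy command-line interface: read a command, dispatch it, print the result.
import os

def cmd_pwd(args):
    print(os.getcwd())

def cmd_echo(args):
    print(" ".join(args))

commands = {"pwd": cmd_pwd, "echo": cmd_echo}   # invented built-ins for the demo

while True:
    line = input("toy-shell> ").strip()
    if line in ("exit", "quit"):
        break
    name, *args = line.split() if line else ("",)
    if name in commands:
        commands[name](args)
    elif name:
        print(f"unknown command: {name}")
```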
Design of Operating System
1. Design Goals
Hardware Choice:
• The design is shaped by the hardware the system will run on and by the type of system being built (for example, a desktop, a server, or a mobile device).
2. User Goals
Convenience:
• The system should be easy to use, requiring minimal effort from users to perform
tasks. For example, having a clean, intuitive interface helps users find what they
need quickly.
Reliability:
• The system should work consistently and correctly under normal usage conditions.
It should handle unexpected situations and errors without crashing or losing data.
Safety:
• The system should protect user data and ensure secure operation. This involves
implementing safeguards against unauthorized access and data breaches.
Speed:
• The system should be responsive, providing quick feedback and performing tasks
efficiently to avoid frustrating delays.
3. Developer Requirements
Flexibility:
• The system should be easy to design, implement, and maintain, and flexible enough to adapt to new hardware and changing requirements.
• Reliability: The system should consistently perform as expected and handle errors in
a controlled manner.
• Efficiency: The system should make optimal use of resources like memory and
processing power, minimizing waste and ensuring good performance.
4. Variety of Solutions Based on Requirements
There is no single correct design; different requirements lead to different solutions. One idea that gives designers flexibility is separating mechanisms from policies:
• Mechanisms: These are the methods or tools used to accomplish tasks. For example, a
timer mechanism determines how to keep track of time.
• Policies: These are the rules or decisions about how mechanisms should be used. For
example, a policy might specify that certain tasks should receive more CPU time than
others.
Example: A system uses a timer mechanism to manage CPU allocation. The policy might
decide that critical tasks get priority access to CPU time over less important ones. Changing
the policy (e.g., adjusting priorities) can be done without changing the underlying timer
mechanism.
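The same separation can be shown in code: below, the mechanism is a generic function that picks the next task, and the policy is just the ordering rule passed into it. Swapping the policy never requires touching the mechanism. The task names and priorities are invented.

```python
# Mechanism: pick the next task according to whatever ordering rule it is given.
def pick_next(tasks, policy):
    return min(tasks, key=policy)

tasks = [
    {"name": "backup",     "priority": 3, "arrival": 1},
    {"name": "video call", "priority": 1, "arrival": 2},
    {"name": "indexing",   "priority": 2, "arrival": 0},
]

# Policy 1: highest-priority task first (lower number = more important).
print(pick_next(tasks, policy=lambda t: t["priority"])["name"])   # video call

# Policy 2: first-come, first-served. The mechanism is unchanged.
print(pick_next(tasks, policy=lambda t: t["arrival"])["name"])    # indexing
```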
6. Implementation Challenges
Complexity:
• Operating systems and large systems are complex and consist of many interacting
components developed over a long time by different teams. This makes it
challenging to generalize about their implementation.
Programming Languages:
• C and C++: Commonly used for their performance and control over hardware. They
provide efficient low-level access but can be more difficult to manage and debug.
• Assembly Language: Provides direct hardware control but is low-level and less
portable.
• Higher-Level Languages: Languages such as Python or Java offer easier development and maintenance and make it simpler to adapt the system to new hardware, though they might not offer the same level of performance.
Portability:
• Writing most of the system in a higher-level language makes it easier to move the OS to new hardware, because only the small hardware-dependent portions need to be rewritten.
In Summary:
Designing a system involves making thoughtful choices about hardware and system type,
understanding user needs, and addressing developer concerns. It requires balancing
mechanisms and policies to achieve the desired outcomes while ensuring that the system
remains adaptable, reliable, and efficient. The complexity of implementation and the choice
of programming languages further influence the system’s effectiveness and adaptability.
Monolithic Structure
1. Definition
A monolithic operating system architecture means that the kernel—the core component of
the OS—is a single, large, static binary. This kernel handles all fundamental OS
functions and operates within a single, unified memory space known as kernel space. The
entire system’s functionality is encapsulated in this one large program, which interacts
directly with hardware and manages all system resources. Think of it as a single big
program that handles all the tasks of the operating system.
2. Key Characteristics
• Single Binary File: The kernel is bundled into one large executable file. This file
contains all the necessary code for managing system operations.
• Unified Kernel Space: The kernel and all system services operate within the same
address space, meaning there is no separation between the OS core functions and
user-level processes. This contrasts with other architectures where there is a clear
distinction between kernel and user spaces.
3. Example
Traditional UNIX and Linux kernels follow this design: a single kernel binary contains all of the following subsystems.
• File System: Handles the organization, storage, retrieval, and management of files
on disk drives. It provides system calls for file operations like reading, writing, and
deleting files.
• CPU Scheduling: Manages which processes are executed by the CPU and the order in
which they are executed. It optimizes the use of CPU resources by scheduling tasks
and switching between them efficiently.
• Memory Management: Allocates memory to processes, tracks used and free memory,
and manages virtual memory systems to ensure efficient and safe memory usage.
• Device Drivers: Interface with hardware devices (e.g., printers, disk drives, network
cards). Drivers translate general OS commands into device-specific operations.
4. Advantages
• High Performance: System calls are handled directly by the kernel without the
overhead of additional layers or context switching. This direct access to kernel
functions improves the speed of operations and reduces latency.
• Simplicity: The monolithic design consolidates all kernel functions into a single
program, simplifying the development and maintenance of the kernel. There’s no
need for complex inter-process communication or context switching between different
modules.
5. Disadvantages
• Security Risks: All code runs in kernel space, so a vulnerability or bug in any part
of the kernel can potentially lead to system-wide security breaches. If an attacker
exploits a kernel flaw, they can gain control over the entire system.
• Stability Issues: Since the kernel handles all system functions, a fault or crash in
any part of the kernel can destabilize or crash the entire system. There is no isolation
to protect the OS from a malfunction in kernel components.
Detailed Discussion
1. Performance Considerations
Because every service runs inside one kernel address space, a system call is handled without extra context switches or message passing between separate OS modules, which keeps overhead and latency low.
2. Design Simplicity
Since the kernel is a single, large program, there's less need for complex interaction between
different modules. This unified approach can make the OS simpler to design and
understand, especially in the early stages of development.
In summary, while a monolithic structure can offer high performance and simplicity, it
can also lead to potential security and stability issues due to its all-in-one design.
Layered Structure
In a layered approach, an operating system (OS) is divided into multiple layers or modules,
each responsible for a specific part of the system's functionality. Imagine a layered cake
where each layer has a distinct flavor and role. Similarly, each layer in the OS has a
specific function and interacts with layers directly above and below it.
• Layers Explained:
◦ Bottom Layer (Layer 0): This is the hardware, including the physical
components like the CPU, memory, and storage devices.
◦ Middle Layers: These layers handle various system functions:
▪ Memory Management Layer: Manages the computer’s memory.
▪ File System Layer: Manages files and directories.
▪ Process Management Layer: Handles the execution of programs and
processes.
◦ Top Layer (Layer N): This is the user interface where you interact with the
system, such as through windows, icons, and commands.
• Layer Interaction:
◦ Each layer can only communicate with the layers directly above and below it.
◦ For example, if a program needs to access a file, it makes a request to the file
system layer, which then interacts with the memory management layer to get
the data from storage.
Advantages
1. Modularity:
◦ Each layer is like a separate piece of software with a specific role. This makes it
easier to work on one part of the OS without affecting others. If you need to
update or fix a layer, it’s more straightforward and less risky.
2. Simplicity:
◦ Because each layer focuses on a specific function, it's easier to develop, test,
and debug. If something goes wrong, you know exactly where to look. For
instance, if there’s a problem with file access, you would check the file system
layer.
3. Maintenance:
◦ Because each layer is self-contained, updating or fixing one layer is more straightforward and less risky, since changes are contained within that layer.
Disadvantages
1. Defining the Layers:
◦ It can be challenging to decide exactly what each layer should do. If layers are not well-defined, there could be overlaps or gaps in functionality, making the system less efficient.
2. Performance Overhead:
◦ Every time a program requests a service, it might have to pass through several
layers. This can slow down performance because of the extra processing
required to move between layers.
Real-World Example
Suppose you ask a text editor to open a document stored on disk.
The request starts at the top (user interface) and works its way down to the hardware,
interacting with each layer along the way. Each layer does its part to fulfill the request
before passing it to the next layer.
"However, it can introduce performance overhead due to the multiple layers that requests
must navigate and can be complex to design effectively."
Performance Overhead
What It Means:
1. Multiple Layers: In a layered system, each request or operation (like opening a file or
executing a program) has to pass through several layers of the OS.
2. Navigation Through Layers: Each layer handles a specific part of the request. For
example, if you want to open a file, your request might pass through the user
interface layer, the file system layer, and the memory management layer before
finally reaching the hardware.
3. Processing Time: Because each layer has to process the request and pass it on to the
next layer, this adds extra steps and time. This can slow down the overall
performance of the system because the request has to be handled and translated
through multiple layers.
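The extra hops can be seen in a toy model in which each layer talks only to the layer directly beneath it, so a single user request becomes a chain of calls. The class names and the disk block number below are invented and greatly simplified.

```python
# Toy model of a layered OS: each layer only calls the layer directly below it.

class Hardware:                       # Layer 0
    def read_block(self, n):
        return f"<data from disk block {n}>"

class MemoryLayer:                    # middle layer
    def __init__(self, hw):
        self.hw = hw
    def fetch(self, block):
        print("memory layer: buffering the block")
        return self.hw.read_block(block)

class FileSystemLayer:                # middle layer
    def __init__(self, mem):
        self.mem = mem
    def open_file(self, name):
        print(f"file system layer: looking up '{name}'")
        return self.mem.fetch(block=42)   # invented block number

class UserInterface:                  # Layer N
    def __init__(self, fs):
        self.fs = fs
    def open(self, name):
        print("user interface: user asked to open a file")
        return self.fs.open_file(name)

ui = UserInterface(FileSystemLayer(MemoryLayer(Hardware())))
print(ui.open("notes.txt"))           # one request, four layers touched
```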
Microkernel Approach in Operating Systems
1. What is a Microkernel?
The microkernel approach is a way to design an operating system by keeping the core
functionality (the kernel) as small and minimal as possible. This means moving non-
essential functions out of the kernel and into separate programs that run in user space.
• Microkernel:
◦ Core Functionality: The microkernel itself handles only the most basic tasks
such as managing communication between programs and handling low-level
hardware interactions.
◦ Communication: It acts as a mediator, enabling different parts of the system
(such as device drivers, file systems, and other services) to communicate with
each other through message passing.
• User-Level Services:
◦ Separate Programs: All other services, like file systems, device drivers, and
network protocols, are moved out of the kernel and into separate programs that
run in user space.
◦ Address Spaces: These services run in their own address spaces, meaning they
operate independently of the kernel and other services.
• Message Passing:
◦ How They Talk: Services and applications communicate with one another by sending messages through the microkernel rather than by calling each other directly.
Advantages
1. Easier to Extend:
◦ Adding New Services: You can add new features or services to the operating
system without altering the core microkernel. This makes it easier to update or
expand the OS with new functionality.
2. Portability:
◦ Hardware Independence: Since the kernel only handles basic tasks and most
of the operating system’s functionality is in user space, it's easier to adapt the
OS to different hardware platforms. You only need to adjust the microkernel
and not the entire system.
3. Reliability and Security:
◦ Isolated Services: Since most services run in user space, they are isolated from
the core kernel. If a service crashes or has a bug, it’s less likely to affect the
entire system. This isolation helps improve security and stability because
faults are contained within user space rather than affecting the kernel.
Disadvantages
1. Performance Overhead:
◦ Requests that a monolithic kernel would handle with a direct function call instead require messages to be passed between user-space services and the microkernel, plus switches between user space and kernel space, which adds overhead.
Example to Illustrate:
• Microkernel: Manages basic tasks like hardware interactions and message passing
between processes.
• File System Service: Runs as a separate program in user space. When you want to
save a file, your application sends a message to the file system service via the
microkernel.
• Device Drivers: Also run in user space. When you print a document, the file system
service sends a message to the device driver for the printer, which then handles the
printing process.
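The same flow can be sketched as message passing: the toy "microkernel" below does nothing except deliver messages between registered services. The service names and message format are invented purely to show the structure.

```python
# Toy microkernel: its only job is to deliver messages between services.

class Microkernel:
    def __init__(self):
        self.services = {}
    def register(self, name, handler):
        self.services[name] = handler
    def send(self, to, message):
        print(f"kernel: delivering {message!r} to {to}")
        return self.services[to](message)

kernel = Microkernel()

# User-space services: they never call each other directly, only via the kernel.
def file_system_service(msg):
    print(f"file system: saving {msg['file']}")
    return kernel.send("printer_driver", {"print": msg["file"]})

def printer_driver(msg):
    print(f"printer driver: printing {msg['print']}")
    return "done"

kernel.register("file_system", file_system_service)
kernel.register("printer_driver", printer_driver)

# An application asks the file system (via the kernel) to save and print a file.
print(kernel.send("file_system", {"file": "report.txt"}))
```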
In Summary:
The microkernel approach focuses on keeping the operating system’s core minimal,
handling only essential functions like communication between services. Non-essential
services, such as file systems and device drivers, run as separate programs in user space.
This design makes the OS easier to extend, more portable, and potentially more secure and
reliable. However, it can introduce performance overhead due to the additional steps required
for message passing and process switching.
Loadable Kernel Modules
Loadable Kernel Modules (LKMs) are special pieces of code that can be added to or removed
from the operating system's core (the kernel) while the system is running. Think of LKMs
like adding or removing apps on your phone; you can do it anytime without restarting the
phone. When the functionality provided by an LKM is no longer required, it can be
unloaded in order to free memory and other resources.
• Dynamic Loading:
◦ Adding Modules: When you need new features (like a new device driver for a
printer), you load a module into the kernel. This means the kernel can
instantly start handling new tasks without needing to be rebuilt or restarted.
◦ Unloading Modules: When the feature is no longer needed, you can unload
the module to free up memory and other resources.
• Integration:
◦ Working Together: Once a module is loaded, it becomes part of the kernel and
functions just like the rest of the kernel components. It helps the kernel
manage new features or devices.
Advantages
1. Flexibility:
◦ On-Demand Features: You can add or remove features as needed. For example,
if you connect a new hardware device, you can load its driver module without
changing the entire system.
2. Modularity:
◦ Separate Parts: Different features are managed in separate modules. This
makes it easier to update or fix individual parts of the system without
affecting the whole kernel.
Disadvantages
1. Complexity:
◦ Extra Moving Parts: Keeping track of many separately loadable modules, their versions, and their dependencies makes the kernel harder to understand and maintain than a single fixed binary.
2. Security Risks:
◦ Increased Risk: Allowing modules to be loaded and unloaded can make the
system more vulnerable to security issues. Malicious or poorly designed
modules could compromise the system.
3. Potential Instability:
◦ System Crashes: If a module has bugs or errors, it can make the entire system
unstable. Because modules operate with high-level privileges, a mistake in a
module can affect the whole system.
4. Performance Overhead:
◦ Extra Work: The process of loading and unloading modules involves some
extra work, which might slow down the system a bit, especially if done
frequently.
Example to Illustrate
Imagine you’re using a computer and need to add support for a new printer:
• Loading a Module: You connect the printer, and the system loads the printer driver
module to make it work.
• Unloading a Module: If you disconnect the printer and no longer need the driver,
you unload the module to free up memory.
• Potential Issues: If the printer driver module is not compatible with your current
system version, it could cause problems. Also, if it has bugs, it could crash the
system.
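Real kernel modules are loaded with operating-system tools, but the general idea of adding and removing functionality at run time can be illustrated with Python's standard import machinery. The sketch below writes a tiny plug-in to disk, loads it, uses it, and then unloads it; the file name and function are invented for the demo.

```python
import importlib, pathlib, sys

# Write a tiny "module" to disk (it stands in for a driver we might need later).
pathlib.Path("printer_plugin.py").write_text(
    "def handle(doc):\n    return 'printing ' + doc\n"
)
sys.path.insert(0, ".")  # make sure the current directory is importable

# "Load the module": the running program gains new functionality without restarting.
plugin = importlib.import_module("printer_plugin")
print(plugin.handle("report.txt"))

# "Unload the module": drop it when no longer needed to free the resources it used.
del plugin
sys.modules.pop("printer_plugin", None)
pathlib.Path("printer_plugin.py").unlink()
```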
In Summary:
The Loadable Kernel Modules approach lets you add or remove parts of the operating
system’s core as needed, which makes the system flexible and easier to manage. However, it
can also make the system more complex and potentially less secure, and it might cause
performance issues if not handled carefully.