
Operating System Structure

Overview

An operating system is a construct that enables user application programs to interact with the hardware of the machine. Because the operating system is such a complex structure, it should be designed with the utmost care so that it is easy to use and modify. A straightforward way to achieve this is to develop the operating system in parts, where each part has distinct inputs, outputs, and functions.

This article defines operating system structure and discusses the main types of structures used to implement operating systems, listed below, along with how and why they work.

o Simple Structure

o Monolithic Structure

o Layered Approach Structure

o Micro-Kernel Structure

o Exo-Kernel Structure

o Virtual Machines

What is an operating System Structure?

Because operating systems have complex internals, a clear structure is needed so that an operating system can be adapted to particular needs. Just as larger problems are broken down into smaller, more manageable subproblems, it is easier to build an operating system in pieces, with each piece forming a well-defined component of the system. Operating system structure can therefore be thought of as the strategy for connecting and integrating the various components within the kernel. Operating systems are implemented using several types of structures, as discussed below:

SIMPLE STRUCTURE

This is the most straightforward operating system structure, but it is poorly defined and only suitable for small and limited systems. Because the interfaces and levels of functionality are not well separated in this structure, application programs can access basic I/O routines directly, which may result in unauthorized access to I/O procedures.

The following figure illustrates layering in simple structure:


Advantages of Simple Structure:

o Because there are only a few interfaces and levels, it is simple to develop.

o Because there are fewer layers between the hardware and the applications, it offers superior performance.

Disadvantages of Simple Structure:

o The entire operating system breaks if just one user program malfunctions.

o Since the layers are interconnected and communicate with one another, there is no abstraction or data hiding.

o The operating system's internal operations are accessible to the layers, which can result in data tampering and system failure.

The Kernel

The kernel is a computer program at the core of a computer's operating system and generally has complete control over
everything in the system. The kernel is also responsible for
preventing and mitigating conflicts between different processes.
[1] It is the portion of the operating system code that is always
resident in memory[2] and facilitates interactions between
hardware and software components. A full kernel controls all
hardware resources (e.g. I/O, memory, cryptography) via device
drivers, arbitrates conflicts between processes concerning such
resources, and optimizes the utilization of common resources e.g.
CPU & cache usage, file systems, and network sockets. On most
systems, the kernel is one of the first programs loaded on startup
(after the bootloader). It handles the rest of startup as well as
memory, peripherals, and input/output (I/O) requests from
software, translating them into data-processing instructions for
the central processing unit.

The critical code of the kernel is usually loaded into a separate area of memory, which is protected from access by application
software or other less critical parts of the operating system. The
kernel performs its tasks, such as running processes, managing
hardware devices such as the hard disk, and handling interrupts,
in this protected kernel space. In contrast, application programs
such as browsers, word processors, or audio or video players use
a separate area of memory, user space. This separation prevents
user data and kernel data from interfering with each other and
causing instability and slowness,[1] as well as preventing
malfunctioning applications from affecting other applications or
crashing the entire operating system. Even in systems where the
kernel is included in application address spaces, memory
protection is used to prevent unauthorized applications from
modifying the kernel.

The kernel's interface is a low-level abstraction layer. When a process requests a service from the kernel, it must invoke a
system call, usually through a wrapper function.

There are different kernel architecture designs. Monolithic kernels run entirely in a single address space with the CPU
executing in supervisor mode, mainly for speed. Microkernels run
most but not all of their services in user space,[3] like user
processes do, mainly for resilience and modularity.[4] MINIX 3 is a
notable example of microkernel design. The Linux kernel is both
monolithic and modular, since it can insert and remove loadable
kernel modules at runtime.

This central component of a computer system is responsible for executing programs. The kernel takes responsibility for deciding at
any time which of the many running programs should be allocated
to the processor or processors.

System calls

In computing, a system call is how a process requests a service from an operating system's kernel that it does not normally have permission to run. System calls provide the interface between a process and the operating system. Most operations that interact with the system require permissions not available to a user-level process; for example, I/O performed with a device present on the system, or any form of communication with other processes, requires the use of system calls.

A system call is a mechanism used by an application program to request a service from the operating system. System calls use a machine-code instruction that causes the processor to change mode, for example from user mode to supervisor (kernel) mode, where the operating system performs actions such as accessing hardware devices or the memory management unit. Generally the operating system provides a library that sits between normal user programs and the operating system, usually a C library such as glibc or the Windows API. This library handles the low-level details of passing information to the kernel and switching to supervisor mode. Examples of system calls include open, read, write, wait and close.
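As a small illustration, the following sketch (assuming a POSIX system such as Linux) reads a file through the open, read and close wrappers that the C library provides around the corresponding system calls; the file path is just an example:

    /* Minimal sketch (POSIX assumed): using the C library wrappers around
     * the open, read and close system calls to read part of a file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[128];
        int fd = open("/etc/hostname", O_RDONLY);    /* system call: open  */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* system call: read  */
        if (n >= 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s", n, buf);
        }
        close(fd);                                   /* system call: close */
        return 0;
    }

Each of these wrappers ultimately switches the processor into supervisor mode so that the kernel can perform the requested I/O on the process's behalf.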

To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented
differently by each kernel, but most provide a C library or an API,
which in turn invokes the related kernel functions.[7]

The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user
process to call the kernel directly, because that would be a
violation of the processor's access control rules. A few possibilities
are:
 Using a software-simulated interrupt. This method is available on
most hardware, and is therefore very common.

 Using a call gate. A call gate is a special address stored by the kernel in a list in kernel memory at a location known to the
processor. When the processor detects a call to that address, it
instead redirects to the target location without causing an access
violation. This requires hardware support, but the hardware for it
is quite common.

 Using a special system call instruction. This technique requires special hardware support, which common architectures
(notably, x86) may lack. System call instructions have been added
to recent models of x86 processors, however, and some operating
systems for PCs make use of them when available.

 Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of
each may add details of requests to an area of memory that the
kernel periodically scans to find requests.
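On Linux, glibc also exposes a generic syscall() wrapper, which can invoke a kernel service by number without a dedicated wrapper function; the sketch below is a Linux-specific assumption, not portable, and simply compares it with the ordinary getpid() wrapper:

    /* Sketch (Linux/glibc assumed): invoking a system call directly through
     * the generic syscall() wrapper versus the usual library wrapper. */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        long pid = syscall(SYS_getpid);   /* traps into the kernel by call number */
        printf("pid via raw system call: %ld\n", pid);
        printf("pid via library wrapper: %ld\n", (long)getpid());
        return 0;
    }

Which of the mechanisms listed above is used underneath (software interrupt, call gate, or a dedicated system call instruction) depends on the processor and the kernel.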

MONOLITHIC STRUCTURE

The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and process operations.

The core of a computer's operating system is called the kernel. The kernel provides fundamental services to all other system components and serves as the main interface between the operating system and the hardware. Because a monolithic operating system is built as a single piece of software, the kernel can directly access all of the system's resources, such as a keyboard or mouse.

The monolithic operating system is often referred to as the monolithic kernel. Programming techniques such as batch processing and time-sharing increase a processor's usability. Running directly on the hardware and in complete control of it, the monolithic kernel acts as a virtual machine on top of which the rest of the system works. This is an older style of operating system that was used, for example, in banks to carry out simple tasks like batch processing and time-sharing, which allows numerous users at different terminals to access the operating system.

The following diagram represents the monolithic structure:


Advantages of Monolithic Structure:

o Because layering is unnecessary and the kernel alone is responsible for managing all operations, it is easy to design and implement.

o Because functions such as memory management, file management, and process scheduling are implemented in the same address space, the monolithic kernel runs rather quickly compared to other designs. Sharing a single address space also reduces the time required for address-space allocation when new processes are created.

Disadvantages of Monolithic Structure:

o The monolithic kernel's services live in the same address space and affect one another, so if any of them malfunctions, the entire system fails as well.

o It is not adaptable. Therefore, launching a new service is difficult.

MICRO-KERNEL STRUCTURE

In the micro-kernel structure, the operating system is built around a kernel stripped of any non-essential parts. The optional components are instead implemented as system and user-level programs. Systems designed this way are called Micro-Kernels.

Each Micro-Kernel component is created separately and kept apart from the others. As a result, the system is more reliable and secure. If one component malfunctions, the remaining operating system is unaffected and continues to function normally.

The image below shows Micro-Kernel Operating System Structure:

Advantages of Micro-Kernel Structure:

o It enables portability of the operating system across platforms.

o Due to the isolation of each Micro-Kernel, it is reliable and secure.

o The reduced size of Micro-Kernels allows for successful testing.

o The remaining operating system remains unaffected and keeps running properly even if a component or Micro-Kernel fails.

Disadvantages of Micro-Kernel Structure:

o The performance of the system is decreased by increased inter-module communication.

o The construction of the system is complicated.

Process Management in OS (Operating Systems)

Introduction:

In an operating system, process management refers to the set of activities involved in creating, scheduling, and terminating processes. A process can be thought of as an instance of a program that is currently executing on a computer system. Managing processes is an important aspect of any modern operating system, as it allows multiple programs to run concurrently and share system resources efficiently.
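For example, on a POSIX system a program can ask the operating system to create and then reap a process with the fork() and wait() family of calls; the sketch below illustrates only that interface, not how the kernel implements it internally:

    /* Sketch (POSIX assumed): creating a child process, letting it run a
     * command, and waiting for it to terminate. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();                  /* create a new process */
        if (child == 0) {
            execlp("echo", "echo", "hello from the child process", (char *)NULL);
            _exit(127);                        /* reached only if exec fails */
        }
        int status;
        waitpid(child, &status, 0);            /* wait for the child to terminate */
        printf("child %ld exited with status %d\n",
               (long)child, WEXITSTATUS(status));
        return 0;
    }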

The process management subsystem is responsible for allocating system resources, such as CPU time, memory, and input/output devices, to running processes. It is also responsible for scheduling the execution of processes in a way that maximizes system throughput and minimizes response times.

Without proper process management, a computer system can become unresponsive or even crash if one or more processes consume all available system resources. Process management is therefore a critical component of any modern operating system, and its design and implementation have a significant impact on the overall performance and usability of the system.

Process management is a crucial aspect of operating systems that allows multiple processes to run efficiently on a single system. The operating system kernel manages the processes and resources of the system. This section looks at process management in more detail, including the process control block (PCB), the states of a process, and the different types of processes.

The process control block (PCB) is a data structure used by the operating system to manage each process. It contains all the
information needed to manage the process, including the process
ID, program counter, register values, memory information, and
process state.

The process state refers to the current status of the process in the
system. A process can be in one of the following states:

 Running: The process is currently executing and using the CPU.

 Ready: The process is waiting to be executed and is currently waiting in the queue.

 Blocked: The process is waiting for an event to occur, such as a user input or the completion of an I/O operation.
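A heavily simplified sketch of what a PCB might look like is shown below; the field names are hypothetical, and real kernels (for example, Linux's struct task_struct) keep far more information:

    /* Illustrative sketch only: a drastically simplified process control block.
     * Field names are hypothetical, not taken from any particular kernel. */
    enum proc_state { PROC_RUNNING, PROC_READY, PROC_BLOCKED };

    struct pcb {
        int             pid;              /* process ID                      */
        enum proc_state state;            /* Running, Ready or Blocked       */
        unsigned long   program_counter;  /* where execution resumes         */
        unsigned long   registers[16];    /* saved general-purpose registers */
        void           *memory_info;      /* memory-management information   */
        struct pcb     *next;             /* link in a scheduler queue       */
    };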

The different types of processes include foreground, background, and system processes.

Foreground processes are those that are directly initiated by the
user and require user interaction. They usually have higher
priority than background processes and are executed first.

Background processes are those that are initiated by the system and do not require user interaction. They usually have lower
priority than foreground processes and are executed after
foreground processes.

System processes are those that are initiated by the operating system kernel and are essential for the proper functioning of the
system.

Process Scheduling

Process scheduling is an essential part of process management. It
refers to the mechanism that the operating system uses to determine
which process to execute next. The goal of process scheduling is to
improve the overall system performance by maximizing the processor
utilization, minimizing the turnaround time, and improving the
response time of the system.

There are different scheduling algorithms that an operating system can use to schedule processes. Here are some of the commonly used
scheduling algorithms:

1. First-Come, First-Served (FCFS): This is the simplest scheduling algorithm, where the process that arrives first is executed
first. FCFS is non-preemptive, which means that once a process
starts executing, it will continue to run until it completes or waits
for an I/O operation.

2. Shortest Job First (SJF): SJF is a scheduling algorithm that selects the process with the shortest burst time; it can be non-preemptive or preemptive (shortest remaining time first). The burst time is the amount of time a process takes to complete its execution. SJF minimizes the average waiting time of the processes.

3. Round Robin (RR): RR is a preemptive scheduling algorithm that allocates a fixed time slice to each process in a circular queue. If a process does not complete its execution within the allocated time slice, it is preempted and added to the end of the queue. RR provides fair allocation of CPU time to all processes and avoids starvation (a small simulation is sketched after this list).
4. Priority Scheduling: This scheduling algorithm assigns a priority to
each process, and the process with the highest priority is executed
first. Priority can be determined based on the process type,
importance, or resource requirements.

5. Multilevel Queue Scheduling: This scheduling algorithm divides the ready queue into several separate queues, with each queue
having a different priority. Processes are assigned to the queue
based on their priority, and each queue uses its own scheduling
algorithm. This scheduling algorithm is useful in scenarios where
different types of processes have different priorities.
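The following sketch is a small user-space simulation of Round Robin scheduling with a fixed time slice; the burst times are made-up example values, and a real scheduler would of course run inside the kernel and preempt processes with timer interrupts:

    /* Sketch: simulating Round Robin scheduling with a 2-unit time slice. */
    #include <stdio.h>

    int main(void)
    {
        int burst[] = {5, 3, 8};          /* remaining CPU time per process    */
        const int n = 3, slice = 2;       /* number of processes, time quantum */
        int time = 0, finished = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0)
                    continue;             /* this process has already finished */
                int run = burst[i] < slice ? burst[i] : slice;
                time     += run;
                burst[i] -= run;
                printf("P%d ran for %d unit(s), t=%d\n", i, run, time);
                if (burst[i] == 0) {
                    finished++;
                    printf("P%d finished at t=%d\n", i, time);
                }
            }
        }
        return 0;
    }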

What is an interrupt?
An interrupt is a signal emitted by a device attached to a computer or from a program within the computer. It requires the operating system (OS) to stop and figure out what to do next. An interrupt temporarily stops or terminates a service or a current process. Most I/O devices have a bus control line, the interrupt request line, for this purpose; the code the OS runs in response is called the interrupt service routine (ISR).

Types of interrupts

Interrupts are classified into two types:

I. Hardware interrupt

A hardware interrupt is an electronic signal from an external hardware device that indicates it needs
attention from the OS. One example of this is moving a mouse or pressing a keyboard key. In these
examples of interrupts, the processor must stop to read the mouse position or keystroke at that instant.

In this type of interrupt, all devices are connected to an interrupt request line (IRQ). Typically, a hardware IRQ has a value that associates it with a particular device. This makes it possible for the processor to determine which device is requesting service when the IRQ is raised, and then provide service accordingly.

An interrupt signal might be planned (i.e., specifically requested by a program) or it may be unplanned
(i.e., caused by an event that may not be related to a program that's currently running on the system).
There are three types of hardware interrupts:

Maskable interrupts

In a processor, an internal interrupt mask register selectively enables and disables hardware requests.
When the mask bit is set, the interrupt is enabled. When it is clear, the interrupt is disabled. Signals that
are affected by the mask are maskable interrupts.
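As a purely hypothetical sketch (the bit position below is invented, and on real hardware the mask normally lives in a CPU or device register rather than a C variable), masking and unmasking an interrupt amounts to setting or clearing a bit:

    /* Hypothetical sketch: maintaining an interrupt mask word in software. */
    #include <stdint.h>
    #include <stdio.h>

    #define TIMER_IRQ_BIT (1u << 3)   /* hypothetical bit assignment */

    int main(void)
    {
        uint32_t irq_mask = 0;

        irq_mask |= TIMER_IRQ_BIT;    /* bit set: timer interrupt enabled    */
        printf("timer irq enabled: %d\n", (irq_mask & TIMER_IRQ_BIT) != 0);

        irq_mask &= ~TIMER_IRQ_BIT;   /* bit clear: timer interrupt disabled */
        printf("timer irq enabled: %d\n", (irq_mask & TIMER_IRQ_BIT) != 0);
        return 0;
    }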

Non-maskable interrupts

Some interrupt signals cannot be disabled by the interrupt mask. These are non-maskable interrupts and are usually high-priority events that cannot be ignored.

Spurious interrupts

Also known as a phantom interrupt or ghost interrupt, a spurious interrupt is a type of hardware
interrupt for which no source can be found. These interrupts are difficult to identify if a system
misbehaves. If the ISR does not account for the possibility of such interrupts, it may result in a
system deadlock.

II. Software interrupts

A software interrupt occurs when an application program terminates or requests certain services from
the OS. Usually, the processor requests a software interrupt when certain conditions are met by
executing a special instruction. This instruction invokes the interrupt and functions like a subroutine call.
Software interrupts are commonly used when the system interacts with device drivers or when a
program requests OS services.

In some cases, software interrupts may be triggered unexpectedly by program execution errors rather
than by design. These interrupts are known as exceptions or traps.
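On Unix-like systems, many of these traps are surfaced to the offending process as signals, which the program can choose to handle. The sketch below (POSIX assumed) installs a handler for SIGSEGV and then raises the signal artificially for demonstration; in a real fault the kernel would deliver the same signal when the program performed an invalid memory access:

    /* Sketch (POSIX assumed): handling a trap-style signal in user space. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        /* only async-signal-safe calls belong in a handler */
        write(STDERR_FILENO, "caught SIGSEGV\n", 15);
        _exit(1);
    }

    int main(void)
    {
        signal(SIGSEGV, on_segv);   /* install the handler for the trap signal */
        raise(SIGSEGV);             /* simulate the trap for demonstration     */
        printf("not reached\n");
        return 0;
    }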

Types of interrupt methods

There are two types of interrupt triggering methods or modules:

Level-triggered interrupts

This interrupt module generates an interrupt by holding the interrupt signal at a particular active logic level. The signal is negated once the processor commands it to be, after the device has been serviced. The processor samples the interrupt signal during each instruction cycle and recognizes it when it is asserted during the sampling process. After servicing a device, the processor may service other devices before exiting the ISR.

Level-triggered interrupts allow multiple devices to share a common signal using wired-OR connections. However, they can be cumbersome for firmware to deal with.

Edge-triggered interrupts
This module generates an interrupt only when it detects an asserting edge of the interrupt source. Here,
the device that wants to signal an interrupt creates a pulse which changes the interrupt source level.

Edge-triggered interrupts provide a more flexible way to handle interrupts. They can be serviced
immediately, regardless of how the interrupt source behaves. Plus, they reduce the complexity
of firmware code and reduce the number of conditions required for the firmware. The drawback is that
if the pulse is too short to be detected, special hardware may be required to detect and service the
interruption.

Context swapping
What is Swapping?

Swapping is a memory management method in which processes are moved from RAM to secondary memory. It is done so that RAM can be freed up to execute other tasks, and it is used to increase the utilization of main memory. The location in secondary storage where swapped-out processes are kept is called the swap space.

Swapping is only used when the required data is not in main memory. Although the swapping technique affects the system's performance, it also helps run multiple and larger processes. Swapping is sometimes also described as a technique for memory compaction.

Swapping is divided into two parts which are;

1. Swap-in

2. Swap-out

Swap-in

It refers to bringing a process back from the hard disk and placing it in RAM.

Swap-out

It refers to removing a process from the RAM and putting it on the hard disk.

Advantages and disadvantages of swapping

There are various advantages and disadvantages of swapping. Some of the advantages and
disadvantages of swapping are as follows:

Advantages

1. It is economical.

2. The swapping technique is primarily used to help the CPU manage several tasks within a single
main memory.
3. It may be easily applied to priority-based scheduling to increase its performance.

4. With the swapping technique, the CPU can work on multiple tasks concurrently, so processes do not have to wait as long before their execution.

Disadvantages

1. Inefficiency may arise if a resource or variable is frequently used by the processes involved in swapping.

2. If the computer system loses power during a period of high swapping activity, the user may lose
all of the data related to the program.

3. If the swapping algorithm is not good, it may increase the number of page faults and reduce overall processing performance.

Interprocess Communication (IPC)


Process management involves managing multiple processes and enabling them to communicate with each other efficiently. Interprocess Communication (IPC) is a mechanism that allows processes to communicate and share resources with each other in an operating system. In this section, we will discuss the different types of IPC mechanisms and their importance in process management.

Definition

Interprocess Communication (IPC) is the process of exchanging data and messages between two or more processes in a computer system. IPC is essential for the coordination and synchronization of processes that may be executing on different CPUs or cores, and it enables these processes to communicate and share resources with each other.

Types of IPC Mechanisms

There are three main types of IPC mechanisms: message passing, shared memory, and synchronization.

 Message Passing: This mechanism involves the exchange of messages between processes using
a message queue. The sender process writes a message to the queue, and the receiver process
reads the message from the queue.

 Shared Memory: In this mechanism, multiple processes share the same memory space, allowing
them to exchange data without having to copy it between processes. This mechanism is more
efficient than message passing but requires careful synchronization to avoid race conditions.
 Synchronization: This mechanism is used to manage access to shared resources, such as shared
memory or files, to avoid conflicts between processes. Synchronization mechanisms include
locks, semaphores, and monitors.

Explanation of how each mechanism works

In message passing, a sender process writes a message to a message queue, and the receiver process
reads the message from the queue. The message queue acts as a buffer, allowing the sender and
receiver to operate independently of each other.
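A minimal sketch of message passing using POSIX message queues is shown below (this assumes a system that provides mq_open and related calls, such as Linux, where the program is linked with -lrt; the queue name "/demo_queue" is arbitrary):

    /* Sketch (POSIX message queues assumed): one message written to a queue
     * and read back. Sender and receiver are shown in one process for brevity;
     * normally they would be separate processes sharing the queue name. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 64 };
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) {
            perror("mq_open");
            return 1;
        }

        const char *msg = "hello via message passing";
        mq_send(q, msg, strlen(msg) + 1, 0);        /* sender writes to the queue    */

        char buf[64];
        mq_receive(q, buf, sizeof(buf), NULL);      /* receiver reads from the queue */
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }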

In shared memory, multiple processes share the same memory space, which allows them to exchange
data without having to copy it between processes. To avoid conflicts between processes,
synchronization mechanisms are used to manage access to shared memory.

Synchronization mechanisms, such as locks and semaphores, are used to manage access to shared resources. Locks ensure that only one process can access a shared resource at a time, while semaphores manage access to a limited number of resources.
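As an illustration of a lock, the sketch below (POSIX threads assumed; threads are used instead of separate processes purely to keep the example self-contained) uses a mutex so that only one thread at a time updates a shared counter:

    /* Sketch (POSIX threads assumed): a mutex protecting a critical section. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter the critical section */
            counter++;
            pthread_mutex_unlock(&lock);  /* leave the critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }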

Importance of IPC in process management

IPC is crucial for process management, as it allows processes to communicate and share resources
with each other, enabling more efficient and effective execution of tasks. Without IPC, processes would
have to be isolated from each other, making it more difficult to coordinate and synchronize their
activities. IPC allows operating systems to manage resources and allocate them to processes effectively,
improving system performance and reliability.

Semaphores in Process Synchronization

Semaphores are simple integer variables used to coordinate the activities of multiple processes in a computer system. They are used to enforce mutual exclusion, avoid race conditions, and implement synchronization between processes.

Semaphores support two operations: wait (P) and signal (V). The wait operation decrements the value of the semaphore, and the signal operation increments it. When the value of the semaphore is zero, any process that performs a wait operation will be blocked until another process performs a signal operation.
Semaphores are used to implement critical sections, which are regions of code that must be executed by
only one process at a time. By using semaphores, processes can coordinate access to shared resources,
such as shared memory or I/O devices.

A semaphore is a special kind of synchronization data that can be used only through specific
synchronization primitives. When a process performs a wait operation on a semaphore, the operation
checks whether the value of the semaphore is >0. If so, it decrements the value of the semaphore and
lets the process continue its execution; otherwise, it blocks the process on the semaphore. A signal
operation on a semaphore activates a process blocked on the semaphore if any, or increments the value
of the semaphore by 1. Due to these semantics, semaphores are also called counting semaphores. The
initial value of a semaphore determines how many processes can get past the wait operation.

Semaphores are of two types:

1. Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and 1. Its value is initialized to
1. It is used to implement the solution of critical section problems with multiple processes.

2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a resource that
has multiple instances.
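A minimal sketch of these operations using POSIX semaphores is shown below (sem_init and related calls are assumed to be available, as on Linux; threads rather than separate processes are used only to keep the example self-contained). The semaphore is initialised to 1, so it behaves as a binary semaphore protecting a critical section:

    /* Sketch (POSIX semaphores and threads assumed): sem_wait is the wait (P)
     * operation and sem_post is the signal (V) operation. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t sem;
    static int shared = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&sem);   /* P: decrement, block while the value is 0      */
            shared++;
            sem_post(&sem);   /* V: increment, waking a blocked waiter if any  */
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&sem, 0, 1);             /* initial value 1: binary semaphore */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d (expected 200000)\n", shared);
        sem_destroy(&sem);
        return 0;
    }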
