Basic Environment 2
Overview
o Simple Structure
o Monolithic Structure
o Micro-Kernel Structure
o Exo-Kernel Structure
o Virtual Machines
SIMPLE STRUCTURE
o Because there are fewer layers between the hardware and the
applications, a simple structure offers superior performance.
MONOLITHIC STRUCTURE
The kernel is the core of a computer's operating system (OS). It provides
fundamental services to all other system components and serves as the
main interface between the operating system and the hardware. In a
monolithic structure, the entire operating system runs as a single
program in kernel mode, so the kernel can directly access all of the
system's resources.
MICRO-KERNEL STRUCTURE
In a micro-kernel structure, only the most essential services, such as
address-space management, CPU scheduling, and interprocess
communication, run inside the kernel; other operating system services
run as separate user-space processes.
Process States
The process state refers to the current status of a process in the
system. A process can be in one of several states, such as new, ready,
running, waiting, or terminated.
Process Scheduling
Process scheduling is an essential part of process management in an
operating system. It refers to the mechanism the operating system uses
to determine which process to execute next. The goal of process
scheduling is to improve overall system performance by maximizing
processor utilization, minimizing turnaround time, and improving the
response time of the system.
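As an illustration, one common scheduling policy, round robin, can be sketched in a few lines of Python. The process names, burst times, and time quantum below are made-up example values, not part of the notes above:

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate round-robin scheduling.

    processes: list of (name, burst_time) pairs in arrival order.
    Returns the order in which processes finish.
    """
    ready = deque(processes)   # the ready queue
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            # Time slice expires: the process goes to the back of the queue.
            ready.append((name, remaining - quantum))
        else:
            # Process completes within its time slice.
            finished.append(name)
    return finished

print(round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2))  # → ['B', 'C', 'A']
```

Each process runs for at most one quantum before the scheduler moves on, which bounds the response time of every process at the cost of extra context switches.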
What is an interrupt?
An interrupt is a signal emitted by a device attached to a computer or by a program within the
computer. It requires the operating system (OS) to stop and figure out what to do next. An interrupt
temporarily suspends or terminates a service or a current process. Most I/O devices signal interrupts
over a bus control line called the interrupt request line; the routine the OS runs in response is called
the Interrupt Service Routine (ISR).
Types of interrupts
I. Hardware interrupt
A hardware interrupt is an electronic signal from an external hardware device that indicates it needs
attention from the OS. One example of this is moving a mouse or pressing a keyboard key. In these
examples of interrupts, the processor must stop to read the mouse position or keystroke at that instant.
In this type of interrupt, all devices are connected to the interrupt request line. Typically, a
hardware interrupt request (IRQ) has a value that associates it with a particular device. This makes it
possible for the processor to determine which device is requesting service by raising the IRQ, and then
provide service accordingly.
An interrupt signal might be planned (i.e., specifically requested by a program) or it may be unplanned
(i.e., caused by an event that may not be related to a program that's currently running on the system).
There are three types of hardware interrupts:
Maskable interrupts
In a processor, an internal interrupt mask register selectively enables and disables hardware requests.
When the mask bit is set, the interrupt is enabled. When it is clear, the interrupt is disabled. Signals that
are affected by the mask are maskable interrupts.
Non-maskable interrupts
In some cases, an interrupt signal cannot be disabled by the interrupt mask. These are
non-maskable interrupts; they usually signal high-priority events that cannot be ignored.
Spurious interrupts
Also known as a phantom interrupt or ghost interrupt, a spurious interrupt is a type of hardware
interrupt for which no source can be found. These interrupts make it difficult to diagnose a system
that misbehaves. If the ISR does not account for the possibility of such interrupts, the result may be
a system deadlock.
Software interrupts
A software interrupt occurs when an application program terminates or requests certain services from
the OS. Usually, the processor requests a software interrupt when certain conditions are met by
executing a special instruction. This instruction invokes the interrupt and functions like a subroutine call.
Software interrupts are commonly used when the system interacts with device drivers or when a
program requests OS services.
In some cases, software interrupts may be triggered unexpectedly by program execution errors rather
than by design. These interrupts are known as exceptions or traps.
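One way to see this mechanism from user space is through POSIX signals, which behave much like software interrupts. A minimal sketch using Python's signal module (POSIX-only; SIGUSR1 is chosen arbitrarily for the example):

```python
import signal

events = []

def handler(signum, frame):
    # Plays the role of the interrupt service routine (ISR):
    # control transfers here when the signal is delivered.
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # install the handler
signal.raise_signal(signal.SIGUSR1)      # software-triggered "interrupt"
print(events == [signal.SIGUSR1])  # → True
```

Raising the signal from within the program is analogous to executing a trap instruction: normal execution is suspended, the registered handler runs, and then execution resumes after the point of the interrupt.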
Level-triggered interrupts
This interrupt module generates an interrupt by holding the interrupt signal at a particular active logic
level. The signal gets negated once the processor commands it after the device has been serviced. The
processor samples the interrupt signal during each instruction cycle and recognizes the interrupt
when the signal is asserted at the time of sampling. After servicing a device, the processor may
service other devices before exiting the ISR.
Level-triggered interrupts allow multiple devices to share a common signal using wired-OR connections.
However, they can be cumbersome for firmware to deal with.
Edge-triggered interrupts
This module generates an interrupt only when it detects an asserting edge of the interrupt source. Here,
the device that wants to signal an interrupt creates a pulse which changes the interrupt source level.
Edge-triggered interrupts provide a more flexible way to handle interrupts. They can be serviced
immediately, regardless of how the interrupt source behaves. Plus, they reduce the complexity
of firmware code and reduce the number of conditions required for the firmware. The drawback is that
if the pulse is too short to be detected, special hardware may be required to detect and service the
interrupt.
Swapping
What is Swapping?
Swapping is a memory management method in which processes are moved from RAM to
secondary storage and back. It is done so that RAM can be freed up to execute other tasks, and it
increases the utilization of main memory. The area of secondary storage where swapped-out
processes are kept is called swap space.
Swapping is used only when the needed data is not in main memory. Although the swapping
technique affects the system's performance, it also makes it possible to run more, and larger,
processes. Swapping consists of two operations:
1. Swap-in
2. Swap-out
Swap-in
It refers to moving a process from the hard disk back into RAM.
Swap-out
It refers to removing a process from the RAM and putting it on the hard disk.
There are various advantages and disadvantages of swapping. Some of the advantages and
disadvantages of swapping are as follows:
Advantages
1. It is economical.
2. The swapping technique is primarily used to help the CPU manage several tasks within a single
main memory.
3. It may be easily applied to priority-based scheduling to increase its performance.
4. Using the swapping technique, the CPU can keep several tasks in progress by switching among
them, so processes do not have to wait as long before their execution.
Disadvantages
1. Inefficiency may arise if a resource or variable is used frequently by the processes involved
in swapping.
2. If the computer system loses power during a period of high swapping activity, the user may lose
all of the data related to the program.
3. If the swapping technique is not good, the overall method may increase the number of page
faults and reduce processing performance.
Definition
Interprocess Communication (IPC) is the process of exchanging data and messages between two or
more processes in a computer system. IPC is essential for the coordination and synchronization of
processes that may be executing on different CPUs or cores, and enables these processes to
communicate and share resources with each other.
There are three main types of IPC mechanisms: message passing, shared memory, and synchronization.
Message Passing: This mechanism involves the exchange of messages between processes using
a message queue. The sender process writes a message to the queue, and the receiver process
reads the message from the queue.
Shared Memory: In this mechanism, multiple processes share the same memory space, allowing
them to exchange data without having to copy it between processes. This mechanism is more
efficient than message passing but requires careful synchronization to avoid race conditions.
Synchronization: This mechanism is used to manage access to shared resources, such as shared
memory or files, to avoid conflicts between processes. Synchronization mechanisms include
locks, semaphores, and monitors.
In message passing, a sender process writes a message to a message queue, and the receiver process
reads the message from the queue. The message queue acts as a buffer, allowing the sender and
receiver to operate independently of each other.
In shared memory, multiple processes share the same memory space, which allows them to exchange
data without having to copy it between processes. To avoid conflicts between processes,
synchronization mechanisms are used to manage access to shared memory.
Synchronization mechanisms, such as locks and semaphores, are used to manage access to shared
resources. Locks ensure that only one process can access a shared resource at a time, while
semaphores manage access to a limited number of resources.
IPC is crucial for process management in an operating system, as it allows processes to communicate and share resources
with each other, enabling more efficient and effective execution of tasks. Without IPC, processes would
have to be isolated from each other, making it more difficult to coordinate and synchronize their
activities. IPC allows operating systems to manage resources and allocate them to processes effectively,
improving system performance and reliability.
Semaphores are integer variables used to coordinate the activities of multiple processes in a
computer system. They are used to enforce mutual exclusion, avoid race conditions, and implement
synchronization between processes.
Semaphores provide two operations: wait (P) and signal (V). The wait operation
decrements the value of the semaphore, and the signal operation increments the value of the
semaphore. When the value of the semaphore is zero, any process that performs a wait operation will
be blocked until another process performs a signal operation.
Semaphores are used to implement critical sections, which are regions of code that must be executed by
only one process at a time. By using semaphores, processes can coordinate access to shared resources,
such as shared memory or I/O devices.
A semaphore is a special kind of synchronization data that can be used only through specific
synchronization primitives. When a process performs a wait operation on a semaphore, the operation
checks whether the value of the semaphore is >0. If so, it decrements the value of the semaphore and
lets the process continue its execution; otherwise, it blocks the process on the semaphore. A signal
operation on a semaphore activates a process blocked on the semaphore if any, or increments the value
of the semaphore by 1. Due to these semantics, semaphores are also called counting semaphores. The
initial value of a semaphore determines how many processes can get past the wait operation.
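The wait/signal semantics described above can be written out directly. This is a teaching sketch built on Python's threading.Condition, not a production implementation:

```python
import threading

class CountingSemaphore:
    def __init__(self, initial):
        self._value = initial
        self._cond = threading.Condition()

    def wait(self):  # the P operation
        with self._cond:
            while self._value == 0:   # value is zero: block the caller
                self._cond.wait()
            self._value -= 1          # value > 0: decrement and continue

    def signal(self):  # the V operation
        with self._cond:
            self._value += 1          # increment the value...
            self._cond.notify()       # ...and wake one blocked process, if any

s = CountingSemaphore(2)  # initial value 2: two callers can get past wait
s.wait()
s.wait()                  # both succeed immediately; value is now 0
s.signal()                # value back to 1
s.wait()                  # succeeds only because of the preceding signal
print("done")  # → done
```

Note how the initial value determines how many callers pass wait before anyone blocks, matching the counting-semaphore semantics above.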
1. Binary Semaphore –
This is also known as a mutex lock. It can have only two values – 0 and 1. Its value is initialized to
1. It is used to implement the solution of critical section problems with multiple processes.
2. Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a resource that
has multiple instances.
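Both kinds have ready-made counterparts in Python's threading module, where acquire corresponds to wait and release to signal. In this sketch a binary semaphore (initial value 1) protects a critical section; the thread count and iteration count are arbitrary:

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore acting as a mutex lock
counter = 0

def increment():
    global counter
    for _ in range(100_000):
        with mutex:              # wait (P) on entry, signal (V) on exit
            counter += 1         # critical section: one thread at a time

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 400000
```

Constructing threading.Semaphore(n) with n > 1 instead gives a counting semaphore, suitable for guarding a resource with n identical instances.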