Operating System Structure
Because operating systems are complex, we want a structure that is easy to
understand, so that we can adapt the operating system to meet our specific needs. Just
as we break larger problems into smaller, more manageable subproblems, it is simpler
to build an operating system in pieces, each piece being a well-defined component of
the system. The way these components are designed and integrated within the kernel
can be thought of as the operating system structure. As discussed below, various types
of structures are used to implement operating systems:
1. Monolithic
2. Layered
3. Microkernels
4. Client-server model
5. Virtual machine
MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the system's operation, including
file management, memory management, device management, and process management.
The core of a computer's operating system (OS) is called the kernel. The kernel provides
fundamental services to all other system components and serves as the main interface
between the operating system and the hardware. Because the entire operating system runs
as a single program in one address space, the kernel can directly access all of the machine's
resources, such as a keyboard or mouse.
The monolithic operating system is often referred to as the monolithic kernel. Multiprogramming
techniques such as batch processing and time-sharing increase the processor's utilization.
Running directly on top of the hardware and in complete command of it, the monolithic kernel
performs the role of a virtual computer for the programs above it. This is an older style of
operating system that was used in banks to carry out simple tasks like batch processing and
time-sharing, which allows numerous users at different terminals to access the operating system.
Advantages of Monolithic structure
o Because layering is unnecessary and the kernel alone is responsible for managing all operations, it
is easy to design and implement.
o Because functions such as memory management, file management, and process scheduling are
implemented in the same address space, the monolithic kernel runs rather quickly compared to other
structures. Using one shared address space also reduces the time required to set up addressing for
new processes (see the sketch below).
Disadvantages of Monolithic structure
o The monolithic kernel's services share one address space and affect one another, so if any one of
them malfunctions, the entire system does as well.
o It is not adaptable, so launching a new service is difficult.
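To make the same-address-space point concrete, here is a toy sketch (purely illustrative: KernelState, allocate_page, and file_read are hypothetical names, not real kernel APIs) of how services in a monolithic kernel reach each other through ordinary function calls on shared state:

```python
# A toy sketch of the monolithic idea (not real kernel code): every
# service lives in one address space and calls the others directly.
# KernelState, allocate_page, and file_read are hypothetical names.

class KernelState:
    """One shared state: fast to access, but a bug anywhere can corrupt it."""
    def __init__(self):
        self.page_table = {}   # memory-management data
        self.open_files = {}   # file-management data
        self.run_queue = []    # process-scheduling data

kernel = KernelState()

def allocate_page(pid):
    # Memory manager: reached by a plain function call, no message
    # passing, which is why monolithic kernels are fast.
    frame = len(kernel.page_table)
    kernel.page_table[(pid, frame)] = "allocated"
    return frame

def file_read(pid, path):
    # File manager calls the memory manager directly; a bug here could
    # just as easily corrupt kernel.page_table or kernel.run_queue.
    frame = allocate_page(pid)
    return f"read {path} into frame {frame}"

print(file_read(pid=1, path="/etc/passwd"))
```

The speed comes from the direct calls; the fragility comes from the shared state, since a fault in one service can damage the data of every other.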
LAYERED STRUCTURE
Breaking an OS into pieces this way retains much more control over the system. In this
structure, the OS is broken into a number of layers (levels). The bottom layer (layer 0) is
the hardware, and the topmost layer (layer N) is the user interface. The layers are
designed so that each layer uses only the functions of the layers below it. This simplifies
debugging: if the lower-level layers have already been debugged and an error occurs,
the error must be in the layer currently being tested, since the layers beneath it are
known to work.
The main disadvantage of this structure is that at each layer the data needs to be
modified and passed on, which adds overhead to the system. Moreover, careful planning
of the layers is necessary, as a layer can use only lower-level layers. UNIX is an
example of this structure.
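The layering discipline can be sketched as follows. The four layers and their functions are invented for illustration, but the rule they obey, each layer calling only the layer directly beneath it, is the one described above:

```python
# A toy layered-OS sketch. The layer names and functions are invented;
# the rule they follow is that each layer calls only the layer below it.

def layer0_hardware_write(block, data):
    # Layer 0: the "hardware" at the bottom of the stack.
    print(f"hardware: wrote {len(data)} bytes to block {block}")

def layer1_driver_write(block, data):
    # Layer 1 may use only layer 0.
    layer0_hardware_write(block, data)

def layer2_filesystem_write(path, data):
    # Layer 2 may use only layer 1; it never touches layer 0 directly.
    block = hash(path) % 1024   # stand-in for block allocation
    layer1_driver_write(block, data)

def layer3_save_file(path, text):
    # Layer 3 (topmost): what the user-facing interface calls.
    layer2_filesystem_write(path, text.encode())

layer3_save_file("/home/user/notes.txt", "hello")
```

If layers 0 through 2 have already been debugged, a failure in layer3_save_file must lie in layer 3 itself, which is exactly the debugging benefit noted earlier.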
MICRO-KERNEL STRUCTURE
This structure designs the operating system by removing all non-essential components
from the kernel and implementing them as system and user programs. The result is
a smaller kernel called the micro-kernel. An advantage of this structure is that new
services are added in user space, so the kernel does not need to be modified.
It is therefore more secure and reliable: if a service fails, the rest of the operating system
remains untouched. Mac OS is an example of this type of OS.
Advantages of Micro-kernel structure
1. It makes the operating system portable to various platforms.
2. As microkernels are small, they can be tested effectively.
Disadvantages of Micro-kernel structure
1. The increased level of inter-module communication degrades system performance.
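The following toy sketch, with hypothetical names throughout, shows the message-passing style that produces both properties: a service reached only through messages can fail without taking the kernel down, but every request pays the cost of a message round trip:

```python
# A toy micro-kernel sketch with hypothetical names. The file service
# is reached only through messages, never through direct function calls.
import queue

request_q, reply_q = queue.Queue(), queue.Queue()

def file_service():
    # User-space server: if it crashed, only this service would die;
    # the rest of the system would keep running.
    msg = request_q.get()
    if msg["op"] == "read":
        reply_q.put({"status": "ok", "data": f"contents of {msg['path']}"})

def kernel_send(op, path):
    # The micro-kernel only ferries messages between address spaces.
    # Every request pays this round trip, which is the performance cost.
    request_q.put({"op": op, "path": path})
    file_service()   # stand-in for scheduling the server process
    return reply_q.get()

print(kernel_send("read", "/etc/hosts"))
```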
CLIENT-SERVER MODEL
The client-server model is a fundamental concept in operating systems and networking that describes the
interaction between two types of entities: clients and servers. This model is commonly used in distributed
computing and networked systems. Here is an overview of the client-server model in operating systems:
Client: The client is a program or device that requests services or resources from a server. Clients initiate
communication by sending requests to servers. Clients are typically end-user devices like computers,
smartphones, or IoT devices, but they can also be software applications or processes running on these
devices.
Server: The server is a program or device that provides services or resources to clients. Servers are
designed to wait for incoming requests from clients, process those requests, and respond accordingly.
Examples of servers include web servers, file servers, email servers, and database servers.
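Here is a minimal runnable sketch of the request/response pattern, using Python sockets on localhost (the port 50007 and the message contents are arbitrary choices, not part of any real protocol):

```python
# A minimal client-server sketch over TCP on localhost. The port
# number (50007) and the message contents are arbitrary choices.
import socket
import threading

ready = threading.Event()

def server():
    # Server: binds, waits for an incoming request, and responds.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 50007))
        s.listen(1)
        ready.set()              # signal that we are listening
        conn, _ = s.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"response to: " + request)

threading.Thread(target=server, daemon=True).start()
ready.wait()

# Client: initiates communication by sending a request.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", 50007))
    c.sendall(b"GET /index.html")
    print(c.recv(1024).decode())
```

The server passively waits on accept() while the client actively initiates the exchange; that asymmetry is the defining feature of the model.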
VIRTUAL MACHINE
Based on our needs, a virtual machine abstracts the hardware of our personal computer, including
the CPU, disk drives, RAM, and NIC (Network Interface Card), into a variety of different
execution contexts, giving us the impression that each execution environment is a separate
computer. VirtualBox is one example of such software.
An operating system already enables us to run multiple processes concurrently while making it
appear as though each one has its own processor and its own virtual memory, by using CPU
scheduling and virtual memory techniques.
The fundamental issue with the virtual machine technique is disk systems. Suppose the physical
machine has only three disk drives but needs to host seven virtual machines. The program
that creates the virtual machines would itself need a significant amount of disk space to provide
virtual memory and spooling, so it is clearly impossible to assign a physical disk drive to
every virtual machine. The answer is to make virtual disks available.
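One way to picture the virtual-disk solution is the following sketch, in which each virtual machine is handed a private slice of a shared physical disk; the class names, block size, and slice sizes are all invented for illustration:

```python
# A toy "virtual disk" sketch: each virtual machine gets a private
# slice of a shared physical disk. Class names, block size, and slice
# sizes are all invented for illustration.
BLOCK = 512

class PhysicalDisk:
    def __init__(self, blocks):
        self.data = bytearray(blocks * BLOCK)

class VirtualDisk:
    """Gives one VM the illusion of a private disk by translating its
    block numbers onto a fixed slice of the physical disk."""
    def __init__(self, physical, first_block, blocks):
        self.physical, self.first, self.blocks = physical, first_block, blocks

    def write(self, block, payload):
        assert 0 <= block < self.blocks, "outside this VM's disk"
        off = (self.first + block) * BLOCK
        self.physical.data[off:off + len(payload)] = payload

    def read(self, block, n):
        off = (self.first + block) * BLOCK
        return bytes(self.physical.data[off:off + n])

disk = PhysicalDisk(blocks=1024)
vm1_disk = VirtualDisk(disk, first_block=0, blocks=128)
vm2_disk = VirtualDisk(disk, first_block=128, blocks=128)
vm1_disk.write(0, b"vm1 boot sector")
print(vm2_disk.read(0, 15))   # vm2's block 0 is untouched by vm1
```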
We can develop a virtual machine for a variety of reasons, all of which are fundamentally
connected to the capacity to share the same underlying hardware while concurrently supporting
various execution environments, i.e., various operating systems.
The result is that users get their own virtual machines, on which they can run any of the
operating systems or software packages available for the underlying hardware. The virtual
machine software is concerned with multiplexing numerous virtual machines onto a single
physical machine; it does not need to take any user-support software into account. With this
configuration, the challenge of building an interactive system for several users can be broken
into two manageable pieces.
Advantages of Virtual machine structure
o Because each virtual machine is completely isolated from every other virtual machine, there
are no security issues between them.
o A virtual machine may offer an instruction-set architecture that is different from that of the
actual computer.
o Virtual machines provide simple availability, accessibility, and convenient recovery.