Operating System Components, Types, and Structure
An Operating System consists of various components that perform well-defined tasks. Although most Operating Systems differ in structure, they are logically composed of similar components. Each component must be a well-defined portion of the system, with clearly described functions, inputs, and outputs.
An Operating System has the following eight components:
1. Process Management
2. I/O Device Management
3. File Management
4. Network Management
5. Main Memory Management
6. Secondary Storage Management
7. Security Management
8. Command Interpreter System
Process Management
A process is a program, or a part of a program, that is loaded into main memory. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The process management component manages the multiple processes running simultaneously on the Operating System.
A program in running state is called a process.
The operating system is responsible for the following activities in connection with process management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling
File Management
File management is one of the most visible services of an operating system. Computers
can store information in several different physical forms; magnetic tape, disk, and drum
are the most common forms.
A file is a collection of related information defined by its creator. Commonly, files represent programs (in both source and object form) and data. Data files may be numeric, alphabetic, or alphanumeric.
A file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator and user.
The operating system implements the abstract concept of the file by managing mass storage devices, such as tapes and disks. Files are normally organized into directories to ease their use; these directories may contain files and other directories, and so on.
The operating system is responsible for the following activities in connection with file management:
• Creating and deleting files
• Creating and deleting directories
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on stable (non-volatile) storage media
Network Management
The definition of network management is often broad, as network management involves
several different components. Network management is the process of managing and
administering a computer network. A computer network is a collection of various types
of computers connected with each other.
Network management comprises fault analysis, maintaining the quality of service,
provisioning of networks, and performance management.
Network management is the process of keeping the network healthy so that different computers can communicate efficiently. It includes activities such as:
• Network administration
• Network maintenance
• Network operation
• Network provisioning
• Network security
Main Memory Management
The operating system is responsible for the following activities in connection with memory management:
• Keep track of which parts of memory are currently being used and by whom.
• Decide which processes to load when memory space becomes available.
• Allocate and deallocate memory space as needed.
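To make the last activity concrete, the following minimal C sketch (an illustration only, assuming a Unix-like system that provides MAP_ANONYMOUS) asks the operating system for one page of memory with mmap() and returns it with munmap():

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page = sysconf(_SC_PAGESIZE);           /* size of one memory page */

    /* Ask the OS to allocate one page of anonymous, private memory. */
    char *buf = mmap(NULL, (size_t)page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(buf, "memory obtained from the operating system");
    printf("%s\n", buf);

    munmap(buf, (size_t)page);                   /* return the page to the OS */
    return 0;
}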
Secondary Storage Management
The operating system is responsible for the following activities in connection with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
Security Management
The operating system is primarily responsible for all tasks and activities that happen in the computer system. The various processes in an operating system must be protected from each other's activities. For that purpose, the operating system provides mechanisms to ensure that files, memory segments, the CPU, and other resources can be operated on only by those processes that have gained proper authorization from the operating system.
Security management refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system, together with some means of specifying the controls to be imposed and of enforcing them.
For example, memory-addressing hardware ensures that a process can only execute within its own address space. The timer ensures that no process can retain control of the CPU without relinquishing it. Finally, no process is allowed to do its own I/O directly, which protects the integrity of the various peripheral devices.
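As a small illustration of resource-level access control on a Unix-like system, the C sketch below (the file name secret.txt is only an example) creates a file and then uses the chmod() system call so that only its owner may read or write it; the OS will refuse access attempts by unauthorized processes:

#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* Create an example file (the name is hypothetical). */
    int fd = open("secret.txt", O_CREAT | O_WRONLY, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    write(fd, "confidential data\n", 18);
    close(fd);

    /* Restrict access: the owner may read/write, everyone else is denied. */
    if (chmod("secret.txt", S_IRUSR | S_IWUSR) == -1) {
        perror("chmod");
        return 1;
    }
    return 0;
}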
Command Interpreter System
Many commands are given to the operating system through control statements. A program that reads and interprets control statements is executed automatically. This program is called the shell; examples include the Windows DOS command window, Bash in Unix/Linux, and the C shell in Unix/Linux.
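A minimal sketch of such a command interpreter, assuming a Unix-like system, is shown below: it repeatedly reads a command line, splits it into arguments, and runs the command with fork() and execvp(), waiting for it to finish.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];

    for (;;) {
        printf("mysh> ");                  /* prompt (name is illustrative) */
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                         /* end of input: leave the shell */

        /* Split the command line into whitespace-separated arguments. */
        char *argv[32];
        int argc = 0;
        char *tok = strtok(line, " \t\n");
        while (tok != NULL && argc < 31) {
            argv[argc++] = tok;
            tok = strtok(NULL, " \t\n");
        }
        argv[argc] = NULL;
        if (argc == 0)
            continue;                      /* empty line */

        pid_t pid = fork();                /* create a child process */
        if (pid == 0) {
            execvp(argv[0], argv);         /* replace the child with the command */
            perror("execvp");              /* reached only if exec fails */
            _exit(1);
        }
        waitpid(pid, NULL, 0);             /* parent waits for the command */
    }
    return 0;
}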
Types of Operating Systems
Operating systems have existed since the very first computer generation, and they keep evolving with time. Some of the important types that are most commonly used are batch, time-sharing, distributed, network, and real-time operating systems.
Disadvantages of time-sharing operating systems:
• Problem of reliability.
• Questions of security and integrity of user programs and data.
• Problem of data communication.
Advantages of distributed operating systems:
• With resource sharing facilities, a user at one site may be able to use the resources available at another site.
• Faster exchange of data between users, for example via electronic mail.
• If one site fails in a distributed system, the remaining sites can potentially continue operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
Operating System Structure
The operating system structure describes how the various components of the operating system are organized and interconnected. This section discusses the main structures used to implement operating systems, listed below, as well as how and why they work:
o Simple Structure
o Monolithic Structure
o Layered Approach Structure
o Micro-Kernel Structure
o Exo-Kernel Structure
o Virtual Machines
SIMPLE STRUCTURE
This is the most straightforward operating system structure, but it lacks definition and is only appropriate for small, restricted systems. Because the interfaces and levels of functionality in this structure are not well separated, application programs are able to access I/O routines directly, which may result in unauthorized access to I/O procedures. MS-DOS is an example of such an operating system:
o There are four layers that make up the MS-DOS operating system, and each has its own
set of features.
o These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
o The MS-DOS operating system benefits from layering because each level can be defined
independently and, when necessary, can interact with one another.
o If the system is built in layers, it will be simpler to design, manage, and update. Because of
this, simple structures can be used to build constrained systems that are less complex.
o When a user program fails, the operating system as a whole crashes.
o Because MS-DOS systems have a low level of abstraction, programs and I/O procedures are visible to end users, giving them the potential for unwanted access.
Advantages of Simple Structure:
o Because there are only a few interfaces and levels, it is simple to develop.
o Because there are fewer layers between the hardware and the applications, it offers
superior performance.
Disadvantages of Simple Structure:
o The entire operating system breaks if just one user program malfunctions.
o Since the layers are interconnected, and in communication with one another, there is no
abstraction or data hiding.
o The operating system's operations are accessible to layers, which can result in data
tampering and system failure.
MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and process management.
The core of a computer's operating system (OS) is called the kernel. The kernel provides fundamental services to all other system components and serves as the main interface between the operating system and the hardware. Because a monolithic operating system is built as a single piece of software, its kernel can directly access all of the system's resources.
The monolithic operating system is often referred to as the monolithic kernel. Multiprogramming techniques such as batch processing and time-sharing increase the processor's utilization. The monolithic kernel, which is in complete command of all hardware, acts as a virtual machine on top of which the rest of the system runs.
This is an old operating system that was used in banks to carry out simple tasks like batch
processing and time-sharing, which allows numerous users at different terminals to access
the Operating System.
Advantages of Monolithic Structure:
o Because layering is unnecessary and the kernel alone manages all operations, it is easy to design and implement.
o Because functions such as memory management, file management, and process scheduling are implemented in the same address space, the monolithic kernel runs rather quickly compared to other structures. Sharing the same address space also reduces the time required for address allocation for new processes.
Disadvantages of Monolithic Structure:
o The monolithic kernel's services are interconnected within one address space and affect one another, so if any of them malfunctions, the entire system fails.
o It is not adaptable, so launching a new service is difficult.
LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) contains the hardware, and layer N (the highest layer) contains the user interface. These layers are organized hierarchically, with the top-level layers making use of the capabilities of the lower-level ones.
The functionalities of each layer are kept separate in this approach, and abstraction is also possible. Because layered structures are hierarchical, debugging is simpler: all lower-level layers are debugged before the upper layer is examined. As a result, only the current layer has to be reviewed, since all the lower layers have already been verified.
Advantages of Layered Structure:
o Work duties are separated, since each layer has its own functionality, and there is some degree of abstraction.
o Debugging is simpler, because the lower layers are examined first, followed by the upper layers.
MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of all non-essential components. These optional components are implemented as system and user-level programs instead, so the resulting systems are called micro-kernels.
Each Micro-Kernel is created separately and is kept apart from the others. As a result, the
system is now more trustworthy and secure. If one Micro-Kernel malfunctions, the
remaining operating system is unaffected and continues to function normally.
EXOKERNEL
An operating system called Exokernel was created at MIT with the goal of offering
application-level management of hardware resources. The exokernel architecture's goal
is to enable application-specific customization by separating resource management from
protection. Exokernel size tends to be minimal due to its limited operability.
Because the OS sits between the programs and the actual hardware, it always affects the functionality, performance, and scope of the applications built on it. The exokernel operating system attempts to solve this issue by rejecting the idea that an operating system must offer abstractions on which to base applications. The goal is to impose as few restrictions on developers' use of abstractions as possible, while still allowing them to use abstractions when necessary. In the exokernel architecture, a single small kernel moves all hardware abstractions into untrusted libraries known as library operating systems. Exokernels differ from micro-kernels and monolithic kernels in that their primary objective is to avoid forcing abstractions on applications.
Disadvantages of Exokernel:
o A decline in consistency.
o Exokernel interfaces have a complex architecture.
VIRTUAL MACHINES (VMs)
A virtual machine abstracts the hardware of our personal computer, including the CPU, disk drives, memory, and NIC (Network Interface Card), into several different execution environments, giving us the impression that each execution environment is a separate computer. VirtualBox is an example of this.
Using CPU scheduling and virtual memory techniques, an operating system allows us to
execute multiple processes simultaneously while giving the impression that each one is
using a separate processor and virtual memory. System calls and a file system are
examples of extra functionalities that a process can have that the hardware is unable to
give. Instead of offering these extra features, the virtual machine method just offers an
interface that is similar to that of the most fundamental hardware. A virtual duplicate of
the computer system underneath is made available to each process.
We can develop a virtual machine for a variety of reasons, all of which are fundamentally
connected to the capacity to share the same underlying hardware while concurrently
supporting various execution environments, i.e., various operating systems.
Disk systems are a fundamental problem with the virtual machine technique. Imagine, for example, that the physical machine has only three disk drives but needs to host seven virtual machines. Clearly, a dedicated disk drive cannot be assigned to every virtual machine, especially since the virtual machine software itself requires a sizable amount of disk space to provide virtual memory and spooling. The solution is to provide virtual disks.
The result is that users get their own virtual machines. They can then use any of the
operating systems or software programs that are installed on the machine below. Virtual
machine software is concerned with programming numerous virtual machines
simultaneously into a physical machine; it is not required to take into account any user-
support software. With this configuration, it may be possible to break the challenge of
building an interactive system for several users into two manageable chunks.
Advantages of Virtual Machines:
o Because each virtual machine is completely isolated from every other virtual machine, there are no security issues between them.
o A virtual machine may offer an architecture for the instruction set that is different from
that of actual computers.
o Simple availability, accessibility, and recovery convenience.
Disadvantages of Virtual Machines:
o Because a virtual machine accesses the hardware indirectly, it is less efficient than a real machine.
CONCLUSION
o The operating system makes it possible for the user to communicate with the hardware of
the computer. The operating system is used as the foundation for installing and using
system software.
o The interconnections between the various operating system components can be defined
as the operating system structure.
o Operating systems can be implemented with various structural types: simple structure, monolithic approach, layered approach, micro-kernels, exokernels, and virtual machines.
o Each time one of these methods or structures changed, the OS became progressively
better.
System Call
o A system call is a mechanism that provides the interface between a
process and the operating system. It is a programmatic method in
which a computer program requests a service from the kernel of the
OS.
o System calls offer the services of the operating system to user programs via the API (Application Programming Interface). System calls are the only entry points into the kernel.
Step 1) The process executes in user mode until a system call interrupts it.
Step 2) The system call is then executed in kernel mode on a priority basis.
Step 3) Once system call execution is over, control returns to user mode.
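As a concrete illustration, the short C program below (a sketch assuming a Unix-like system) runs in user mode until it invokes the write() and getpid() system calls; each call traps into the kernel, which performs the requested work and then returns control to the program:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* write() traps into the kernel, which copies the bytes to file descriptor 1. */
    write(STDOUT_FILENO, "hello via a system call\n", 24);

    /* getpid() is another system call: the kernel returns this process's ID. */
    printf("my process id is %d\n", (int)getpid());
    return 0;
}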
There are mainly five types of system calls:
• Process Control
• File Management
• Device Management
• Information Maintenance
• Communications
Process Control
Process control system calls perform tasks such as process creation and process termination.
Functions:
• End and abort
• Load and execute
• Create and terminate a process
• Wait and signal event
• Allocate and free memory
File Management
File management system calls handle file manipulation jobs such as creating a file, reading from a file, and writing to a file.
Functions:
• Create a file
• Delete file
• Open and close file
• Read, write, and reposition
• Get and set file attributes
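The C sketch below shows these calls on a Unix-like system (the file name example.txt is just for illustration): it creates a file, writes to it, repositions, reads the data back, and closes the file.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[32];

    /* Create (or truncate) a file and open it for reading and writing. */
    int fd = open("example.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    write(fd, "hello, file\n", 12);        /* write 12 bytes */
    lseek(fd, 0, SEEK_SET);                /* reposition to the beginning */

    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read back: %s", buf);
    }

    close(fd);                             /* release the file descriptor */
    return 0;
}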
Device Management
Device management does the job of device manipulation like reading from
device buffers, writing into device buffers, etc.
Functions:
• Request and release devices
• Logically attach or detach devices
• Get and set device attributes
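For example, on a Unix-like system the terminal is a character device, and the device-control system call ioctl() can query it. The sketch below (assuming the TIOCGWINSZ request is available, as on Linux) asks the terminal driver for the current window size:

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    struct winsize ws;

    /* Ask the terminal driver for the window size via the ioctl() device-control call. */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        perror("ioctl");
        return 1;
    }
    printf("terminal is %d rows x %d columns\n", ws.ws_row, ws.ws_col);
    return 0;
}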
Information Maintenance
Information maintenance system calls handle the transfer of information between the OS and the user program.
Functions:
• Get or set the time and date
• Get process and device attributes
Communication
These system calls are used specifically for interprocess communication.
Functions:
• Create and delete communication connections
• Send and receive messages
• Transfer status information
• Attach or detach remote devices
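A minimal sketch of interprocess communication on a Unix-like system: the parent creates a pipe, the child writes a message into it, and the parent reads the message out.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) {                 /* fds[0] = read end, fds[1] = write end */
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                     /* child: writes into the pipe */
        close(fds[0]);
        const char *msg = "message through the pipe\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                         /* parent: reads from the pipe */
    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent received: %s", buf);
    }
    close(fds[0]);
    wait(NULL);
    return 0;
}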
fork()
Processes use this system call to create a new process that is a copy of themselves. With this system call, the parent process creates a child process, and the parent can suspend its own execution (using wait()) until the child process finishes executing.
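A minimal sketch of fork() on a Unix-like system: the call returns 0 in the child and the child's PID in the parent, and here the parent uses wait() so that it pauses until the child finishes.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                    /* duplicate the calling process */

    if (pid == 0) {
        /* child: a copy of the parent with its own PID */
        printf("child: pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
    } else if (pid > 0) {
        wait(NULL);                        /* parent waits until the child exits */
        printf("parent: pid=%d created child %d\n", (int)getpid(), (int)pid);
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}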
exec()
This system call runs an executable file in the context of an already running process, replacing the previous executable image. The original process identifier remains the same because a new process is not created; instead, the stack, heap, data, and code segments are replaced by those of the new program.
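The sketch below (assuming a Unix-like system where the ls command is available) combines fork() and exec(): the child replaces its executable image with ls while keeping the same process ID, and the parent simply waits.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        /* The child's code, data, heap, and stack are replaced by those of "ls";
           its process ID stays the same. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* reached only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);                 /* parent waits for the command to finish */
    return 0;
}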
kill():
The kill() system call is used by the OS (and by processes) to send a signal to a process, typically a termination signal that urges the process to exit. However, a kill system call does not necessarily terminate the process; the signal it sends can have various meanings.
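A minimal sketch on a Unix-like system: the parent creates a child that waits indefinitely, then uses kill() to send it SIGTERM; the same call could send other signals instead of terminating the process.

#include <stdio.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        for (;;)                           /* child: wait until a signal arrives */
            pause();
    }

    sleep(1);                              /* give the child time to start */
    kill(pid, SIGTERM);                    /* ask the child to terminate */
    waitpid(pid, NULL, 0);                 /* reap the terminated child */
    printf("sent SIGTERM to child %d\n", (int)pid);
    return 0;
}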
exit():
The exit() system call is used to terminate program execution. Especially in a multi-threaded environment, this call indicates that the thread's execution is complete. The OS reclaims the resources that were used by the process after the exit() system call.
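The sketch below (Unix-like system assumed) shows exit() handing a status code back to the OS: the child terminates with exit(42), and the parent retrieves that value with waitpid() after the kernel has reclaimed the child's resources.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0)
        exit(42);                          /* child terminates with status 42 */

    int status = 0;
    waitpid(pid, &status, 0);              /* parent collects the exit status */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}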
Summary:
Categories                Windows                                                                            Unix
Process control           CreateProcess(), ExitProcess(), WaitForSingleObject()                              fork(), exit(), wait()
Device manipulation       SetConsoleMode(), ReadConsole(), WriteConsole()                                    ioctl(), read(), write()
File manipulation         CreateFile(), ReadFile(), WriteFile(), CloseHandle()                               open(), read(), write(), close()
Information maintenance   GetCurrentProcessID(), SetTimer(), Sleep()                                         getpid(), alarm(), sleep()
Communication             CreatePipe(), CreateFileMapping(), MapViewOfFile()                                 pipe(), shm_open(), mmap()
Protection                SetFileSecurity(), InitializeSecurityDescriptor(), SetSecurityDescriptorGroup()    chmod(), umask(), chown()