
UNIT-4

Distributed Operating System


A distributed operating system is one in which several computer systems are connected through a
single communication channel. These systems have their own processors and memory, and the
processors communicate through high-speed buses or telephone lines. The individual systems that
connect through a single channel are treated as a single unit. We can also call them loosely
coupled systems. The individual components or systems of the network are called nodes.
The diagram below illustrates the structure of a distributed operating system:

Types of Distributed Operating System

There are five types of Distributed Operating System.


1. Client/Server Systems
In this system, the client requests a resource and the server provides it. One client contacts
only a single server at a time, whereas a single server can deal with multiple clients
simultaneously. The clients and servers connect through a computer network.
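As a concrete illustration of this request/response pattern, here is a minimal sketch in C, assuming a POSIX system: the parent process plays the server on a loopback TCP socket while a forked child plays the client. The port number and the message strings are made up for the example.

/* Minimal client/server sketch: parent = server, forked child = client.
 * The port and messages are illustrative only. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define PORT 5050                                       /* hypothetical port */

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    int on = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);   /* server side */
    listen(srv, 1);

    if (fork() == 0) {                                  /* child: the client */
        int cli = socket(AF_INET, SOCK_STREAM, 0);
        connect(cli, (struct sockaddr *)&addr, sizeof addr);
        write(cli, "resource request", 16);             /* client asks */
        char buf[64] = {0};
        read(cli, buf, sizeof buf - 1);                 /* server's reply */
        printf("client received: %s\n", buf);
        close(cli);
        return 0;
    }

    int conn = accept(srv, NULL, NULL);                 /* server accepts one client */
    char buf[64] = {0};
    read(conn, buf, sizeof buf - 1);                    /* read the request */
    write(conn, "resource granted", 16);                /* provide the resource */
    close(conn);
    close(srv);
    wait(NULL);                                         /* reap the client process */
    return 0;
}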
2. Peer to Peer Systems
In this system, the nodes play an important role. All the work is divided equally among the
nodes. Furthermore, these nodes can share data or resources as per the requirement. Again,
they require a network to connect.
3. Middleware
Middleware facilitates interoperability among applications running on different operating
systems. By employing these services, applications can exchange data with each other,
ensuring distribution transparency.

4. Three-Tier
Development is made easier because client data is saved in the intermediate tier rather than
the client itself. Online applications are where this kind of architecture is most frequently
found.

5. N-Tier
N-tier systems are utilized when a server or application has to send requests to other
corporate services over a network.

Applications of Distributed Operating System


There are various applications of the distributed operating system. Some of them are as
follows:

Network Applications
DOS is used by many network applications, including the Web, peer-to-peer networks,
multiplayer web-based games, and virtual communities.

Telecommunication Networks
DOS is useful in phones and cellular networks. A DOS can be found in networks like the
Internet, wireless sensor networks, and routing algorithms.

Parallel Computation
DOS is the basis of scientific computing, which includes cluster computing, grid computing,
and a variety of volunteer computing projects.

Real-Time Process Control


The real-time process control system operates with a deadline, and such examples include
aircraft control systems.
Examples of Distributed Operating System

 AIX operating system for IBM RS/6000 computers.


 Solaris operating system for SUN multiprocessor workstations.
 Mach/OS is a multitasking and multithreading UNIX-compatible operating system.
 OSF/1 operating system
Advantages of Distributed OS
1. It may share all resources (CPU, disk, network interface, nodes, computers, and so on)
from one site to another, increasing data availability across the entire system.
2. It reduces the probability of data corruption because all data is replicated across all
sites; if one site fails, the user can access data from another operational site.
3. The sites operate independently of one another, and as a result, if one site crashes,
the entire system does not halt.
4. It increases the speed of data exchange from one site to another site.
5. It is an open system since it may be accessed from both local and remote locations.
6. It helps in the reduction of data processing time.
7. Most distributed systems are made up of several nodes that interact to make them
fault-tolerant. If a single machine fails, the system remains operational.

Disadvantages of Distributed OS
There are various disadvantages of the distributed operating system. Some of them are as
follows:
1. The system must decide which jobs must be executed when they must be executed,
and where they must be executed. A scheduler has limitations, which can lead to
underutilized hardware and unpredictable runtimes.
2. It is hard to implement adequate security in DOS since the nodes and connections
must be secured.
3. The database connected to a DOS is relatively complicated and hard to manage in
contrast to a single-user system.
4. The underlying software is extremely complex and is not understood very well
compared to other systems.
5. The more widely distributed a system is, the more communication latency can be
expected. As a result, teams and developers must choose between availability,
consistency, and latency.
6. These systems aren't widely available because they're thought to be too expensive.
7. Gathering, processing, presenting, and monitoring hardware use metrics for big
clusters can be a real issue.

Features/Characteristics of Distributed Operating System


The features are as follows:
1. Resource Sharing
The most important feature of this system is that it allows users to share resources. Moreover,
they can share resources in a secure and controlled manner. Resources can be of any type. For
example, some common resources which are shared can be printers, files, data, storage, web
pages, etc.
2. Openness
This means that the services which the system provides are openly displayed through
interfaces. Moreover, these interfaces provide only the syntax of the services. For example,
the type of functions, their return types, parameters, etc. These interfaces use Interface
Definition Languages (IDL).
3. Concurrency
It means that several tasks take place at different nodes of the system simultaneously.
Moreover, these tasks can also interact with each other. It results in increasing the efficiency
of the system.
4. Scalability
It refers to the fact that the efficiency of the system should not change when more nodes are
added to the system. Moreover, the performance for the system with 100 nodes should be
equal to the system with 1000 nodes.
5. Fault Tolerance
It means that the user can still work with the system in case the hardware or software fails.
6. Transparency
It is the most important feature of the system. The main goal of a distributed OS is to hide the
fact that the resources are being shared. Furthermore, transparency means that the user should
not know that the resources he is using are shared. Moreover, for the user, the system should
be a separate individual unit.
Communication and Synchronization
Distributed System is a collection of computers connected via a high-speed communication
network. In the distributed system, the hardware and software components communicate and
coordinate their actions by message passing. Each node in distributed systems can share its
resources with other nodes. So, there is a need for proper allocation of resources to preserve
the state of resources and help coordinate between the several processes. To resolve such
conflicts, synchronization is used. Synchronization in distributed systems is achieved via
clocks. The physical clocks are used to adjust the time of nodes. Each node in the system can
share its local time with other nodes in the system. The time is set based on UTC
(Coordinated Universal Time). UTC is used as a reference time clock for the nodes in the system.
Clock synchronization can be achieved in two ways: external and internal clock synchronization.
1. External clock synchronization is the one in which an external reference clock is
present. It is used as a reference and the nodes in the system can set and adjust
their time accordingly.
2. Internal clock synchronization is the one in which each node shares its time
with other nodes and all the nodes set and adjust their times accordingly.
There are 2 types of clock synchronization algorithms: Centralized and Distributed.
1. Centralized is the one in which a time server is used as a reference. The single
time server propagates its time to the nodes, and all the nodes adjust their time
accordingly. It is dependent on a single time server, so if that node fails, the whole
system will lose synchronization. Examples of centralized algorithms are the Berkeley
Algorithm, Passive Time Server, Active Time Server, etc.
2. Distributed is the one in which there is no centralized time server present.
Instead, the nodes adjust their time by using their local time and then taking the
average of the differences in time with other nodes. Distributed algorithms
overcome the issues of centralized algorithms, such as scalability and single-point
failure. Examples of distributed algorithms are the Global Averaging Algorithm,
Localized Averaging Algorithm, NTP (Network Time Protocol), etc.
Centralized clock synchronization algorithms suffer from two major drawbacks:
1. They are subject to a single-point failure. If the time-server node fails, the clock
synchronization operation cannot be performed. This makes the system unreliable.
Ideally, a distributed system should be more reliable than its individual nodes. If
one goes down, the rest should continue to function correctly.
2. From a scalability point of view, it is generally not acceptable to get all the time
requests serviced by a single-time server. In a large system, such a solution puts a
heavy burden on that one process.
Distributed algorithms overcome these drawbacks as there is no centralized time-server
present. Instead, a simple method for clock synchronization may be to equip each node of the
system with a real-time receiver so that each node’s clock can be independently synchronized
in real-time. Multiple real-time clocks (one for each node) are normally used for this purpose.
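As a rough sketch of the averaging idea behind internal synchronization (the approach used by the Berkeley Algorithm mentioned above), the C example below averages a set of reported clock offsets and prints the adjustment each node would be told to apply. The node count and the offset values are hypothetical.

/* A minimal sketch of internal (Berkeley-style) clock synchronization:
 * the coordinator averages the reported clock offsets and tells each
 * node how much to adjust. The node count and offsets are made up. */
#include <stdio.h>

#define NODES 4

int main(void)
{
    /* Offsets (in ms) of each node's clock relative to the coordinator,
     * as measured by polling the nodes. Illustrative values only. */
    double offset[NODES] = {0.0, +25.0, -10.0, +40.0};

    /* The coordinator averages all offsets (its own offset is 0). */
    double avg = 0.0;
    for (int i = 0; i < NODES; i++)
        avg += offset[i];
    avg /= NODES;

    /* Each node is told to adjust by (avg - its own offset), so that
     * all clocks converge on the group average rather than on any
     * single node's clock. */
    for (int i = 0; i < NODES; i++)
        printf("node %d: adjust clock by %+.1f ms\n", i, avg - offset[i]);

    return 0;
}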
Architecture of Distributed Systems
Cloud-based software, the backbone of distributed systems, is a complicated network of
servers that anyone with an internet connection can access. In a distributed system,
components and connectors arrange themselves in a way that eases communication.
Components are modules with well-defined interfaces that can be replaced or reused.
Similarly, connectors are communication links between modules that mediate coordination or
cooperation among components.
A distributed system is broadly divided into two essential concepts — software architecture
(further divided into layered architecture, object-based architecture, data-centered
architecture, and event-based architecture) and system architecture (further divided into
client-server architecture and peer-to-peer architecture).
Let’s understand each of these architecture systems in detail:
1. Software architecture
Software architecture is the logical organization of software components and their interaction
with other structures. It is at a lower level than system architecture and focuses entirely on
components; e.g., the web front end of an ecommerce system is a component. The four main
architectural styles of distributed systems in software components entail:
i) Layered architecture
Layered architecture provides a modular approach to software. By separating components into
layers, each part becomes easier to manage and change. For example, the open systems
interconnection (OSI) model uses a layered architecture: a request passes through the layers
in sequence until it reaches its goal. In some instances, layered architecture is implemented
with cross-layer coordination, where interactions can skip adjacent layers to fulfil a request
and deliver better performance.
Layered Architecture
Layered architecture is a style of software that separates components into units. A request
goes from the top down, and the response goes from the bottom up. The advantage of layered
architecture is that it keeps things orderly and allows each layer to be modified independently
without affecting the rest of the system.
ii) Object-based architecture
Object-based architecture centers around an arrangement of loosely coupled objects with no
specific architecture like layers. Unlike layered architecture, object-based architecture doesn’t
have to follow any steps in a sequence. Each component is an object, and all the objects can
interact through an interface (or connector). Under object-based architecture, such
interactions between components can happen through a direct method call.
Object-based Architecture
At its core, communication between objects happens through method invocations, often
called remote procedure calls (RPC). Popular examples include Java RMI, Web Services, and
REST API calls. The primary design consideration of these architectures is that
they are less structured. Here, component equals object, and connector equals RPC or RMI.
iii) Data-centered architecture
Data-centered architecture works on a central data repository, either active or passive. Like
most producer-consumer scenarios, the producer (business) produces items to the common
data store, and the consumer (individual) can request data from it. Sometimes, this central
repository can be just a simple database.
Data-centered Architecture
All communication between objects happens through a data storage system in a data-centered
system. The components are backed by a persistent storage space, such as an SQL database,
and the system stores the data shared by all nodes in this data store.
iv) Event-based architecture

In event-based architecture, all communication is through events. When an event occurs, the
system gets a notification. This means that anyone who receives this event is notified and
has access to the information. Sometimes, these events are data, and at other times they are
URLs to resources. As such, the receiver can process what information they receive and act
accordingly.
Event-Based Architecture
One significant advantage of event-based architecture is that the components are loosely
coupled, which means that it is easy to add, remove, and modify them. To better understand
this, think of publisher-subscriber systems, enterprise service buses, or akka.io. Another
advantage of event-based architecture is that it allows heterogeneous components to
communicate with the bus, regardless of their communication protocols.
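A minimal sketch of the publisher-subscriber style that event-based architecture relies on is shown below in C; the topic name and the billing/shipping handler functions are purely illustrative.

/* A minimal sketch of publish/subscribe: components register callbacks
 * for a topic and are notified when an event on that topic is published.
 * Topic names and handlers are illustrative only. */
#include <stdio.h>
#include <string.h>

#define MAX_SUBS 8

typedef void (*handler_t)(const char *event_data);

struct subscription {
    const char *topic;
    handler_t handler;
};

static struct subscription subs[MAX_SUBS];
static int nsubs;

/* A component subscribes to a topic without knowing who publishes it. */
static void subscribe(const char *topic, handler_t handler)
{
    if (nsubs < MAX_SUBS)
        subs[nsubs++] = (struct subscription){topic, handler};
}

/* The publisher emits an event without knowing who is subscribed. */
static void publish(const char *topic, const char *event_data)
{
    for (int i = 0; i < nsubs; i++)
        if (strcmp(subs[i].topic, topic) == 0)
            subs[i].handler(event_data);
}

static void billing_component(const char *data)  { printf("billing saw: %s\n", data); }
static void shipping_component(const char *data) { printf("shipping saw: %s\n", data); }

int main(void)
{
    subscribe("order.created", billing_component);
    subscribe("order.created", shipping_component);
    publish("order.created", "order #42");   /* both components are notified */
    return 0;
}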
2. System architecture
System-level architecture focuses on the entire system and the placement of components of a
distributed system across multiple machines. The client-server architecture and peer-to-peer
architecture are the two major system-level architectures that hold significance today. An
example would be an ecommerce system that contains a service layer, a database, and a web
front end.
i) Client-server architecture

As the name suggests, client-server architecture consists of a client and a server. The server is
where all the work processes are, while the client is where the user interacts with the service
and other resources (remote server). The client can then request from the server, and the
server will respond accordingly. Typically, only one server handles the remote side; however,
multiple servers can be used to improve reliability.
Client-server Architecture
Client-server architecture has one standard design feature: centralized security. Data such as
usernames and passwords are stored in a secure database, and the server controls access to this
information. This makes it more stable and secure than peer-to-peer: the centralized security
database allows resource usage to be controlled in a more meaningful way. The trade-off is that
the system, while stable and secure, is not as fast, and its main disadvantages are a single
point of failure and weaker scalability compared with a peer-to-peer system.
ii) Peer-to-peer (P2P) architecture
A peer-to-peer network, also called a P2P network, works on the concept of no central
control in a distributed system. A node can either act as a client or server at any given time
once it joins the network. A node that requests something is called a client, and one that
provides something is called a server. In general, each node is called a peer.
Peer-to-Peer Architecture

If a new node wishes to provide services, it can do so in two ways. One way is to register
with a centralized lookup server, which will then direct the node to the service provider. The
other way is for the node to broadcast its service request to every other node in the network,
and whichever node responds will provide the requested service.
Today's P2P networks fall into three categories:
 Structured P2P: The nodes in structured P2P follow a predefined distributed
data structure.
 Unstructured P2P: The nodes in unstructured P2P randomly select their
neighbors.
 Hybrid P2P: In a hybrid P2P, some nodes have unique functions appointed to
them in an orderly manner.

Multiprocessing Operating System


A multiprocessor operating system coordinates multiple CPUs within a single computer system to
boost performance.
Multiple CPUs are linked together so that a job can be divided and executed more quickly.
When a job is completed, the results from all CPUs are compiled to provide the final output.
Jobs may need to share main memory, and they often share other system resources as well.
Multiple CPUs can also be used to run multiple tasks at the same time, as in UNIX.
One of the most extensively used operating systems is the multiprocessing operating system.
The following diagram depicts the basic organisation of a typical multiprocessing system.

To use a multiprocessing operating system efficiently, the computer system should have the
following features: a motherboard capable of handling multiple processors, and processors
that can operate as part of a multiprocessing system.
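As a minimal illustration of dividing a job among processors and compiling the results, the sketch below (C with POSIX threads, built with -pthread) splits an array sum across several worker threads, which the scheduler can run on different CPUs; the worker count and the summation job are made up for the example.

/* A minimal sketch of the multiprocessing idea: a job (summing an array)
 * is divided among several workers, and the partial results are combined
 * at the end. Thread count and array size are illustrative only. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define N 1000000

static long data[N];

struct slice { int start, end; long partial; };

static void *worker(void *arg)
{
    struct slice *s = arg;
    s->partial = 0;
    for (int i = s->start; i < s->end; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1;

    pthread_t tid[NWORKERS];
    struct slice slices[NWORKERS];
    int chunk = N / NWORKERS;

    /* Divide the job among the workers. */
    for (int w = 0; w < NWORKERS; w++) {
        slices[w].start = w * chunk;
        slices[w].end = (w == NWORKERS - 1) ? N : (w + 1) * chunk;
        pthread_create(&tid[w], NULL, worker, &slices[w]);
    }

    /* Compile the results from all workers into the final output. */
    long total = 0;
    for (int w = 0; w < NWORKERS; w++) {
        pthread_join(tid[w], NULL);
        total += slices[w].partial;
    }
    printf("total = %ld\n", total);   /* prints 1000000 */
    return 0;
}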
Pros of Multiprocessing OS
Increased reliability: Processing tasks can be spread among numerous processors in the
multiprocessing system. This promotes reliability because if one processor fails, the task can
be passed on to another.
Increased throughput: More work can be done in less time as the number of processors
increases.
The economy of scale: Multiprocessor systems are less expensive than single-processor
computers because they share peripherals, additional storage devices, and power sources.
Cons of Multiprocessing OS
Multiprocessing operating systems are more complex and advanced since they manage many
CPUs at the same time.
Types of Multiprocessing OS
Symmetrical
Each processor in a symmetrical multiprocessing system runs the same copy of the OS,
makes its own decisions, and collaborates with other processes to keep the system running
smoothly. CPU scheduling policies are straightforward. Any new job that is submitted by a
user could be assigned to the least burdened processor. It also means that at any given time,
all processors are equally taxed.
Since the processors share memory along with the I/O bus or data channel, the symmetric
multiprocessing OS is sometimes known as a “shared everything” system. The number of
processors in this system is normally limited to 16.
Characteristics
 Any processor in this system can run any process or job.
 Any CPU can start an Input and Output operation in this way.

Pros
These are fault-tolerant systems. A few processors failing does not bring the whole system to
a standstill.
Cons
 It is quite difficult to rationally balance the workload among processors.
 For handling many processors, specialised synchronisation algorithms are required.

Asymmetric
The processors in an asymmetric system have a master-slave relationship. One processor serves
as the master or supervisor processor, while the rest are treated as slave processors, as
illustrated below.
In the asymmetric processing system represented above, CPU n1 serves as a supervisor,
controlling the subsequent CPUs. Each processor in such a system is assigned a specific task,
and the actions of the other processors are overseen by a master processor.
We have a maths coprocessor, for example, that can handle mathematical tasks better than the
main CPU. We also have an MMX processor, which is designed to handle multimedia-related
tasks. We also have a graphics processor to handle graphics-related tasks more efficiently
than the main processor. Whenever a user submits a new job, the operating system must
choose which processor is most suited for the task, and that processor is subsequently
assigned to the newly arriving job. The master processor acts as the system's controller: all
other processors look to the master for instructions or have predetermined jobs. The
master is responsible for allocating work to other processors.
Pros
Because several processors are available for a single job, the execution of an I/O operation or
application software in this type of system may be faster in some instances.
Cons
The processors are burdened unequally in this form of multiprocessing operating system. One
CPU may have a large job queue while another is idle. If a process handling a specific task
fails in this system, the entire system will fail.
Real-Time operating system
A real-time operating system (RTOS) is a special-purpose operating system used in
computers that has strict time constraints for any job to be performed. It is employed mostly
in those systems in which the results of the computations are used to influence a process
while it is executing. Whenever an event external to the computer occurs, it is communicated
to the computer with the help of some sensor used to monitor the event. The sensor produces
the signal that is interpreted by the operating system as an interrupt. On receiving an
interrupt, the operating system invokes a specific process or a set of processes to serve the
interrupt.
This process is completely uninterrupted unless a higher priority interrupt occurs during its
execution. Therefore, there must be a strict hierarchy of priority among the interrupts. The
interrupt with the highest priority must be allowed to initiate the process, while lower-
priority interrupts should be kept in a buffer that will be handled later. Interrupt management
is important in such an operating system.
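On Linux, this kind of strict priority-driven execution can be requested with the POSIX SCHED_FIFO policy; the sketch below shows the call, with an illustrative priority value (running it normally requires root privileges or the CAP_SYS_NICE capability).

/* A minimal sketch of assigning a strict real-time priority to a task
 * using the POSIX SCHED_FIFO policy available on Linux. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param;
    param.sched_priority = 80;   /* hypothetical high priority (valid range 1-99) */

    /* Under SCHED_FIFO, a runnable higher-priority task preempts lower-
     * priority tasks and runs until it blocks or yields, mirroring the
     * strict priority hierarchy described above. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    printf("running with SCHED_FIFO priority %d\n", param.sched_priority);
    /* ... time-critical work would go here ... */
    return 0;
}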
Real-time applications employ special-purpose operating systems because conventional
operating systems do not provide such performance.
The various examples of Real-time operating systems are:

o MTS
o Lynx
o QNX
o VxWorks etc.

Applications of Real-time operating system (RTOS):


An RTOS is used in real-time applications that must work within specific deadlines. The
common areas of application of real-time operating systems are given below.

o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock trading.
o Real-time operating systems are used in cell phone switching systems.
o Real-time operating systems are used in air traffic control systems.
o Real-time operating systems are used in medical imaging systems.
o Real-time operating systems are used in fuel injection systems.
o Real-time operating systems are used in traffic control systems.
o Real-time operating systems are used in autopilots and travel simulators.

Types of Real-time operating system


The three types of RTOS are:
Hard Real-Time operating system:
In Hard RTOS, all critical tasks must be completed within the specified time duration, i.e.,
within the given deadline. Not meeting the deadline would result in critical failures such as
damage to equipment or even loss of human life.
For Example,
Take the example of the airbags provided by carmakers along with the steering wheel. When a
crash occurs, the airbags must inflate within the deadline and prevent the driver's head from
hitting the steering wheel. A delay of even a few milliseconds would result in injury.
Similarly, consider online stock-trading software. If someone wants to sell a particular share,
the system must ensure that command is performed within a given critical time. Otherwise, if
the market falls abruptly, it may cause a huge loss to the trader.
Soft Real-Time operating system:
A soft RTOS accepts a few delays from the operating system. In this kind of RTOS, there is a
deadline assigned for a particular job, but a delay for a small amount of time is acceptable.
So, deadlines are handled softly by this kind of RTOS.

For Example,
This type of system is used in online transaction systems and live stock-price quotation
systems.
Firm Real-Time operating system:
A firm RTOS also needs to meet its deadlines. However, missing a deadline may not have a big
impact, but it can cause undesired effects, such as a significant reduction in the quality of
a product.
For Example, this system is used in various forms of Multimedia applications.
Advantages of Real-time operating system:
The benefits of real-time operating systems are as follows:
o It is easy to design, develop, and execute real-time applications under a real-time
operating system.
o Real-time operating systems are compact, so they require less memory space.
o A real-time operating system makes maximum utilization of devices and system resources.
o It focuses on running applications and gives less importance to applications waiting in
the queue.
o Since the size of programs is small, an RTOS can also be used in embedded systems, as in
transport and other domains.
o These types of systems are considered error-free.
o Memory allocation is best managed in these types of systems.

Disadvantages of Real-time operating system:


The disadvantages of real-time operating systems are as follows:

o Real-time operating systems have complicated design principles and are very costly to
develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.

Characteristics of Real-time Operating System:


Following are some of the characteristics of a Real-time Operating System:
1. Time Constraints: The time constraints of a real-time system refer to the time interval
allotted for the response of the ongoing program. The task should be completed within
this deadline. The real-time system is responsible for completing all tasks within
their time intervals.
2. Correctness: Correctness is one of the prominent parts of real-time systems. A real-
time system must produce a correct result within the given time interval; even a
correct result is not accepted if it is obtained after the deadline. In real-time
systems, correctness means obtaining the correct result within the time constraint.
3. Embedded: Almost all real-time systems are embedded nowadays. An embedded
system is a combination of hardware and software designed for a specific
purpose. Real-time systems collect data from the environment and pass it to
other components of the system for processing.
4. Safety: Safety is necessary for any system, but real-time systems provide critical
safety. Real-time systems can also run for a long time without failures and recover
quickly when a failure occurs, without causing any harm to data and information.
5. Concurrency: Real-time systems are concurrent, which means they can respond to
several processes at a time. There are several different tasks going on
within the system, and it responds to every task in short intervals. This
makes real-time systems concurrent systems.
6. Distributed: In various real-time systems, the components of the system are
connected in a distributed way, with different components at different
geographical locations. Thus, the operations of such real-time systems are
performed in a distributed manner.
7. Stability: Even when the load is very heavy, real-time systems respond within the
time constraint, i.e., real-time systems do not delay the results of tasks even when
several tasks are going on at the same time. This brings stability to real-time
systems.
8. Fault tolerance: Real-time systems must be designed to tolerate and recover from
faults or errors. The system should be able to detect errors and recover from them
without affecting the system’s performance or output.
9. Determinism: Real-time systems must exhibit deterministic behavior, which
means that the system’s behavior must be predictable and repeatable for a given
input. The system must always produce the same output for a given input,
regardless of the load or other factors.
10. Real-time communication: Real-time systems often require real-time
communication between different components or devices. The system must ensure
that communication is reliable, fast, and secure.
11. Resource management: Real-time systems must manage their resources
efficiently, including processing power, memory, and input/output devices. The
system must ensure that resources are used optimally to meet the time constraints
and produce correct results.
12. Heterogeneous environment: Real-time systems may operate in a heterogeneous
environment, where different components or devices have different characteristics
or capabilities. The system must be designed to handle these differences and
ensure that all components work together seamlessly.
13. Scalability: Real-time systems must be scalable, which means that the system
must be able to handle varying workloads and increase or decrease its resources as
needed.
14. Security: Real-time systems may handle sensitive data or operate in critical
environments, which makes security a crucial aspect. The system must ensure that
data is protected and access is restricted to authorized users only.

Multiprocessor scheduling
Multiprocessor scheduling allows various processes to run simultaneously on more than one
processor or core. The complexity of multiprocessor scheduling as compared to single
processor scheduling is mainly caused by the requirements to balance the execution load over
multiple processors. These processors can be tightly coupled (multicore systems), i.e., they
are capable of communicating through a common bus, memory, I/O channels. They also can
be loosely coupled (e.g., multiprocessor systems, distributed multiprocessor, and clustered
systems), where each processor has its own main memory and I/O channels. The processors
can be also identical (symmetric multiprocessing), or heterogeneous (asymmetric
multiprocessing). The design issues of multiprocessor scheduling include some basic
concepts: processor affinity, load balancing, synchronization granularity.
Processor affinity is a design concept that associates processes with the processors to run on.
Based on the processor affinity concept, the system tends to avoid migrating processes to
other processors than the one where they are already executing. The processor affinity concept
aims to mitigate the overhead of invalidating the process's cached data on the first processor
(where the process ran) and repopulating it in the new processor's cache. There are two types
of processor affinity: soft and hard. In soft affinity, the operating system tries, but does
not guarantee, to keep a process running on the same processor (dynamic scheduling). In hard
affinity, a process can specify a subset of processors on which it may run (dedicated
processor assignment).
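On Linux, hard affinity can be expressed with sched_setaffinity(); the minimal sketch below pins the calling process to CPU 0 (the CPU number is arbitrary and chosen only for illustration).

/* A minimal sketch of hard processor affinity on Linux: the process pins
 * itself to CPU 0, so the scheduler will not migrate it elsewhere. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);           /* allow this process to run only on CPU 0 */

    if (sched_setaffinity(0, sizeof mask, &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("now pinned to CPU %d\n", sched_getcpu());
    return 0;
}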
Load Balancing is a design concept that distributes the workload evenly across all processors.
In general, systems immediately extract a runnable process from the common run queue once
a processor becomes idle. If a processor has its private queue of processes eligible to execute,
based on the processor affinity concept, the load balancing concept will be necessary to
assure equal distribution of execution load. There are two approaches for load balancing:
push and pull migration. In push migration, a system process routinely checks the load on
each processor and moves processes from overloaded to idle or less busy processors. In pull
migration, an idle processor pulls a waiting task from a busy processor and executes it.
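The sketch below illustrates the pull-migration idea in C: an idle CPU scans the other run queues, picks the busiest one, and pulls a waiting task from it. The two-CPU setup, the queue layout, and the integer task IDs are simplifications for the example.

/* A minimal sketch of pull migration: an idle processor pulls a task
 * from the most heavily loaded run queue. */
#include <pthread.h>
#include <stdio.h>

#define NCPU  2
#define QSIZE 8

struct run_queue {
    int tasks[QSIZE];
    int count;
    pthread_mutex_t lock;
};

static struct run_queue rq[NCPU] = {
    { .lock = PTHREAD_MUTEX_INITIALIZER },
    { .lock = PTHREAD_MUTEX_INITIALIZER },
};

/* Called by an idle CPU 'self'; returns a pulled task or -1 if none. */
static int pull_task(int self)
{
    int victim = -1, busiest = 0;

    /* Find the busiest other queue. */
    for (int i = 0; i < NCPU; i++) {
        if (i == self)
            continue;
        pthread_mutex_lock(&rq[i].lock);
        if (rq[i].count > busiest) {
            busiest = rq[i].count;
            victim = i;
        }
        pthread_mutex_unlock(&rq[i].lock);
    }
    if (victim < 0)
        return -1;                         /* nothing to pull anywhere */

    /* Pull one waiting task from the victim's queue. */
    pthread_mutex_lock(&rq[victim].lock);
    int task = -1;
    if (rq[victim].count > 0)
        task = rq[victim].tasks[--rq[victim].count];
    pthread_mutex_unlock(&rq[victim].lock);
    return task;
}

int main(void)
{
    /* CPU 1 is busy with three tasks; CPU 0 is idle and pulls one. */
    rq[1].tasks[0] = 101; rq[1].tasks[1] = 102; rq[1].tasks[2] = 103;
    rq[1].count = 3;
    printf("CPU 0 pulled task %d\n", pull_task(0));
    return 0;
}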
Synchronization granularity is a measure that indicates the frequency of communication
between multiple process threads in a multithreading system relative to the execution time of
these threads.
Depending on the synchronization granularity, executing a multithreaded process on multiple
processors (i.e., parallelism) can be classified into three categories:

1. fine-grained,
2. medium-grained, and
3. coarse-grained parallelism.

1. Fine-Grained Multithreading: The switching between threads may take place at an
instruction level. The system may execute up to 20 instructions before it needs to
communicate with other threads.
2. Medium-Grained Multithreading: Thread execution on a processor takes longer (roughly 20
to 2000 instructions) before communicating with other threads.
3. Coarse-Grained Multithreading: Thread execution on a processor takes much longer (more
than about 2000 instructions) before communicating with other threads.
Linux
Linux is a powerful and flexible family of operating systems that are free to use and share. It
was created by a person named Linus Torvalds in 1991. What’s cool is that anyone can see
how the system works because its source code is open for everyone to explore and modify.
This openness encourages people from all over the world to work together and make Linux
better and better. Since its beginning, Linux has grown into a stable and safe system used in
many different things, like computers, smartphones, and big supercomputers. It’s known for
being efficient, meaning it can do a lot of tasks quickly, and it’s also cost-effective, which
means it doesn’t cost a lot to use. Lots of people love Linux, and they’re part of a big
community where they share ideas and help each other out. As technology keeps moving
forward, Linux will keep evolving and staying important in the world of computers.
Architecture of Linux
Linux architecture has the following components:
1. Kernel: Kernel is the core of the Linux based operating system. It virtualizes the
common hardware resources of the computer to provide each process with its
virtual resources. This makes the process seem as if it is the sole process running
on the machine. The kernel is also responsible for preventing and mitigating
conflicts between different processes. Different types of the kernel are:
 Monolithic Kernel
 Hybrid kernels
 Exo kernels
 Micro kernels
2. System Library: Linux uses system libraries, also known as shared libraries, to
implement various functionalities of the operating system. These libraries contain
pre-written code that applications can use to perform specific tasks. By using
these libraries, developers can save time and effort, as they don’t need to write the
same code repeatedly. System libraries act as an interface between applications
and the kernel, providing a standardized and efficient way for applications to
interact with the underlying system.
3. Shell: The shell is the user interface of the Linux Operating System. It allows users
to interact with the system by entering commands, which the shell interprets and
executes. The shell serves as a bridge between the user and the kernel, forwarding
the user’s requests to the kernel for processing. It provides a convenient way for
users to perform various tasks, such as running programs, managing files, and
configuring the system; a minimal sketch of how a shell runs a command appears after
this list.
4. Hardware Layer: The hardware layer encompasses all the physical components
of the computer, such as RAM (Random Access Memory), HDD (Hard Disk
Drive), CPU (Central Processing Unit), and input/output devices. This layer is
responsible for interacting with the Linux Operating System and providing the
necessary resources for the system and applications to function properly. The
Linux kernel and system libraries enable communication and control over these
hardware components, ensuring that they work harmoniously together.
5. System Utility: System utilities are essential tools and programs provided by the
Linux Operating System to manage and configure various aspects of the system.
These utilities perform tasks such as installing software, configuring network
settings, monitoring system performance, managing users and permissions, and
much more. System utilities simplify system administration tasks, making it easier
for users to maintain their Linux systems efficiently.
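As referenced in the Shell item above, the C sketch below shows roughly what a shell does with a command: it forks a child, the child execs the requested program, and the shell waits for it to finish before printing the next prompt. The command /bin/ls is just an example.

/* A minimal sketch of what a shell does with a command. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = {"ls", "-l", NULL};   /* the command the user typed */

    pid_t pid = fork();                  /* shell creates a child process */
    if (pid == 0) {
        execv("/bin/ls", argv);          /* child becomes the requested program */
        perror("execv");                 /* only reached if exec fails */
        return 1;
    }

    int status;
    waitpid(pid, &status, 0);            /* shell waits, then prints a new prompt */
    printf("command exited with status %d\n", WEXITSTATUS(status));
    return 0;
}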

Advantages of Linux

 The main advantage of Linux is that it is an open-source operating system. This means
the source code is easily available for everyone, and you are allowed to contribute to,
modify, and distribute the code to anyone without any permission.
 In terms of security, Linux is more secure than many other operating systems. This does
not mean that Linux is 100 percent secure; some malware exists for it, but it is less
vulnerable than most other operating systems, so it generally does not require anti-virus
software.
 The software updates in Linux are easy and frequent.
 Various Linux distributions are available so that you can use them according to
your requirements or according to your taste.
 Linux is freely available to use on the internet.
 It has large community support.
 It provides high stability. It rarely slows down or freezes and there is no need to
reboot it after a short time.
 It maintains the privacy of the user.
 The performance of the Linux system is much higher than other operating
systems. It allows a large number of people to work at the same time and it
handles them efficiently.
 It is network friendly.
 The flexibility of Linux is high. There is no need to install a complete Linux suite;
you are allowed to install only the required components.
 Linux is compatible with a large number of file formats.
 It is fast and easy to install from the web. You can also install it on almost any
hardware, even on an old computer system.
 It performs all tasks properly even if it has limited space on the hard disk.

Disadvantages of Linux

 It is not very user-friendly. So, it may be confusing for beginners.


 It has fewer peripheral hardware drivers compared to Windows.

CASE STUDY-LINUX

An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs. The Linux open
source operating system, or Linux OS, is a freely distributable, cross-platform operating
system based on UNIX.
A Linux system consists of a kernel and some system programs. There are also some application
programs for doing work. The kernel is the heart of the operating system; it provides a
set of tools that are accessed through system calls.

The defining component of Linux is the Linux kernel, an operating system kernel first
released on 5 October 1991 by Linus Torvalds.

A Linux-based system is a modular Unix-like operating system. It derives much of its basic
design from principles established in UNIX. Such a system uses a monolithic kernel which
handles process control, networking, and peripheral and file system access.

Important features of Linux Operating System


1 Portable - Portability means software can work on different types of hardware in the
same way. The Linux kernel and application programs support installation on any kind of
hardware platform.

2 Open Source - Linux source code is freely available, and it is a community-based
development project.

3 Multi-User & Multiprogramming - Linux is a multiuser system where multiple users can
access system resources like memory, RAM, and application programs at the same time. Linux
is also a multiprogramming system, which means multiple applications can run at the same time.

4 Hierarchical File System - Linux provides a standard file structure in which system files/
user files are arranged.

5 Shell - Linux provides a special interpreter program which can be used to execute
commands of the operating system.

6 Security - Linux provides user security using authentication features like password
protection, controlled access to specific files, and encryption of data.

Components of Linux System: -


Linux Operating System has primarily three components

Kernel - Kernel is the core part of Linux. It is responsible for all major activities of this
operating system. It consists of various modules and interacts directly with the underlying
hardware. The kernel provides the required abstraction to hide low-level hardware details
from system or application programs.

System Library - System libraries are special functions or programs using which application
programs or system utilities access the Kernel's features. These libraries implement most of
the functionalities of the operating system and do not require kernel-module code access
rights.

System Utility - System Utility programs are responsible for specialized, individual-level
tasks.
Installed components of a Linux system include the following:

A bootloader is a program that loads the Linux kernel into the computer's main
memory, by being executed by the computer when it is turned on and after the
firmware initialization is performed.

An init program is the first process launched by the Linux kernel, and is at the root of the
process tree.

Software libraries, which contain code that can be used by running processes. The most
commonly used software library on Linux systems is the GNU C Library (glibc), which
implements the C standard library; widget toolkits are another example.

User interface programs such as command shells or windowing environments. The user
interface, also known as the shell, is either a command-line interface (CLI), a graphical
user interface (GUI), or controls attached to the associated hardware.

Architecture
Linux system architecture consists of the following layers:
1. Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).

2. Kernel - Core component of the Operating System; interacts directly with hardware and
provides low-level services to upper-layer components.

3. Shell - An interface to the kernel, hiding the complexity of the kernel's functions from
users. Takes commands from the user and executes the kernel's functions.

4. Utilities - Utility programs that give the user most of the functionalities of an operating
system.

Modes of operation

Kernel Mode:

Kernel component code executes in a special privileged mode called kernel mode
with full access to all resources of the computer.
This code represents a single process, executes in a single address space, and does not
require any context switch; hence it is very efficient and fast.

Kernel runs each process and provides system services to processes, provides protected
access to hardware to processes.

User Mode:

The system programs use the tools provided by the kernel to implement the various
services required from an operating system. System programs, and all other programs, run
"on top of the kernel", in what is called user mode.

Support code which is not required to run in kernel mode is in System Library.

User programs and other system programs work in User Mode which has no access
to system hardware and kernel code.

User programs and utilities use system libraries to access kernel functions to get the
system's low-level tasks done.

Major Services provided by a LINUX System

Initialization (init)

The single most important service in a LINUX system is provided by the init program. The
init is started as the first process of every LINUX system, as the last thing the kernel does
when it boots. When init starts, it continues the boot process by doing various startup
chores (checking and mounting file systems, starting daemons, etc).

Logins from terminals (getty)


Logins from terminals (via serial lines) and the console are provided by the getty program.
init starts a separate instance of getty for each terminal upon which logins are to be
allowed. Getty reads the username and runs the login program, which reads the password.
If the username and password are correct, login runs the shell.

Logging and Auditing (syslog)

The kernel and many system programs produce error, warning, and other messages. It is
often important that these messages can be viewed later, so they should be written to a file.
The program doing this logging operation is known as syslog.
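For illustration, a program can hand a message to the syslog service through the standard syslog(3) library interface, as in the sketch below; the identifier "mydaemon" and the message text are made up.

/* A minimal sketch of logging a message through syslog. */
#include <syslog.h>

int main(void)
{
    openlog("mydaemon", LOG_PID, LOG_DAEMON);  /* tag messages with our name and PID */
    syslog(LOG_WARNING, "disk space is running low on /tmp");
    closelog();
    return 0;
}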

Periodic command execution (cron & at)

Both users and system administrators often need to run commands periodically. For
example, the system administrator might want to run a command to remove old files from the
directories for temporary files, to keep the disks from filling up, since not all programs
clean up after themselves correctly.
o The cron service is set up to do this. Each user can have a crontab file, in which they
list the commands they wish to execute and the times at which they should be executed.

o The at service is similar to cron, but it runs once only: the command is executed at the
given time and is not repeated.

Graphical user interface


o UNIX and Linux don't incorporate the user interface into the kernel; instead, they let
it be implemented by user level programs. This applies for both text mode and graphical
environments. This arrangement makes the system more flexible.

o The graphical environment primarily used with Linux is called the X Window
System (X for short) that provides tools with which a GUI can be implemented. Some
popular window managers are blackbox and windowmaker. There are also two popular
desktop environments, KDE and GNOME.

Network logins (telnet, rlogin & ssh)

Network logins work a little differently than normal logins. For each person logging in via
the network there is a separate virtual network connection. It is therefore not possible to
run a separate getty for each virtual connection. There are several different ways to log in
via a network, telnet and ssh being the major ones in TCP/IP networks.

Most Linux system administrators consider telnet and rlogin to be insecure and prefer ssh,
the "secure shell", which encrypts traffic going over the network, thereby making it far less
likely that a malicious party can "sniff" the connection and gain sensitive data like
usernames and passwords.

Network File System (NFS & CIFS)

One of the more useful things that can be done with networking services is sharing files via
a network file system. Depending on your network this could be done over the Network
File System (NFS), or over the Common Internet File System (CIFS).

NFS is typically a 'UNIX' based service. In Linux, NFS is supported by the kernel. CIFS
however is not. In Linux, CIFS is supported by Samba. With a network file system any file
operations done by a program on one machine are sent over the network to another
computer.
