Os Unit 4
4. Three-Tier
Development is made easier because client data is stored in the intermediate tier rather than
on the client itself. This kind of architecture is most frequently found in web applications.
5. N-Tier
N-tier systems are utilized when a server or application has to send requests to other
corporate services over a network.
Network Applications
DOS is used by many network applications, including the Web, peer-to-peer networks,
multiplayer web-based games, and virtual communities.
Telecommunication Networks
DOS is useful in phones and cellular networks. A DOS can be found in networks like the
Internet, wireless sensor networks, and routing algorithms.
Parallel Computation
DOS forms the basis of large-scale parallel computing, including cluster computing, grid
computing, and a variety of volunteer computing projects.
Disadvantages of Distributed OS
There are various disadvantages of the distributed operating system. Some of them are as
follows:
1. The system must decide which jobs must be executed when they must be executed,
and where they must be executed. A scheduler has limitations, which can lead to
underutilized hardware and unpredictable runtimes.
2. It is hard to implement adequate security in DOS since the nodes and connections
must be secured.
3. The database connected to a DOS is relatively complicated and hard to manage in
contrast to a single-user system.
4. The underlying software is extremely complex and is not understood very well
compared to other systems.
5. The more widely distributed a system is, the more communication latency can be
expected. As a result, teams and developers must choose between availability,
consistency, and latency.
6. These systems aren't widely available because they're thought to be too expensive.
7. Gathering, processing, presenting, and monitoring hardware use metrics for big
clusters can be a real issue.
As the name suggests, client-server architecture consists of a client and a server. The server is
where all the processing happens, while the client is where the user interacts with the service
and other resources (the remote server). The client sends requests to the server, and the
server responds accordingly. Typically, a single server handles the remote side; however,
using multiple servers improves reliability.
Client-server Architecture
Client-server architecture has one standard design feature: centralized security. Data such as
usernames and passwords are stored in a secure database, and the server controls which users
have access to this information. The central security database allows resource usage to be
managed in a controlled way, which makes the system more stable and secure than
peer-to-peer, even though it is not as fast. The disadvantages of this architecture are its
single point of failure and that it does not scale as easily as a peer-to-peer system.
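The request/response flow described above can be sketched with a minimal TCP service in Python (a hypothetical echo server; the port is chosen by the OS and the `echo:` prefix is purely illustrative):

```python
# Minimal client-server sketch: the server holds the processing logic,
# the client sends a request and receives the server's response.
import socket
import threading

def run_server(listener):
    conn, _ = listener.accept()              # wait for one client
    with conn:
        request = conn.recv(1024)            # read the client's request
        conn.sendall(b"echo: " + request)    # process it and respond

def request_from_server(port, message):
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(message)              # client sends its request
        return client.recv(1024)             # and waits for the response

# Bind to an ephemeral port so the sketch is self-contained.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

server_thread = threading.Thread(target=run_server, args=(listener,))
server_thread.start()
reply = request_from_server(port, b"hello")
server_thread.join()
listener.close()
print(reply.decode())  # echo: hello
```

Note that all processing lives in the server function; the client only formulates requests and consumes responses, which is the defining split of this architecture.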
ii) Peer-to-peer (P2P) architecture
A peer-to-peer (P2P) network works on the concept of no central control in a distributed
system. A node can act as either a client or a server at any given time
once it joins the network. A node that requests something is called a client, and one that
provides something is called a server. In general, each node is called a peer.
Peer-to-Peer Architecture
If a new node wishes to provide services, it can do so in two ways. One way is to register
with a centralized lookup server, which will then direct the node to the service provider. The
other way is for the node to broadcast its service request to every other node in the network,
and whichever node responds will provide the requested service.
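The first discovery method, registration with a centralized lookup server, can be illustrated with a toy in-memory registry (the class names and the "file-share" service are invented for illustration, not a real protocol):

```python
# Sketch of centralized lookup in a P2P network: peers register the
# services they provide, and a requesting peer asks the lookup server
# which peer provides a given service.
class Peer:
    def __init__(self, name):
        self.name = name

class LookupServer:
    def __init__(self):
        self.registry = {}                 # service name -> providing peer

    def register(self, peer, service):
        self.registry[service] = peer      # a peer announces its service

    def locate(self, service):
        return self.registry.get(service)  # direct the client to the provider

lookup = LookupServer()
lookup.register(Peer("node-A"), "file-share")

provider = lookup.locate("file-share")
print(provider.name)  # node-A
```

In the broadcast alternative, no such registry exists; the request is flooded to every peer instead, trading the single point of failure for extra network traffic.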
P2P networks of today have three separate sections:
Structured P2P: The nodes in structured P2P follow a predefined distributed
data structure.
Unstructured P2P: The nodes in unstructured P2P randomly select their
neighbors.
Hybrid P2P: In a hybrid P2P, some nodes have unique functions appointed to
them in an orderly manner.
The computer system should have the following features to efficiently use a multiprocessing
operating system:
The motherboard must be able to hold multiple processors.
The processors must be able to work together as part of a single multiprocessing system.
Pros of Multiprocessing OS
Increased reliability: Processing tasks can be spread among numerous processors in the
multiprocessing system. This promotes reliability because if one processor fails, the task can
be passed on to another.
Increased throughput: More work can be done in less time as the number of processors
increases.
The economy of scale: Multiprocessor systems are less expensive than single-processor
computers because they share peripherals, additional storage devices, and power sources.
Cons of Multiprocessing OS
Multiprocessing operating systems are more complex and advanced since they manage many
CPUs at the same time.
Types of Multiprocessing OS
Symmetrical
Each processor in a symmetrical multiprocessing system runs the same copy of the OS,
makes its own decisions, and collaborates with other processes to keep the system running
smoothly. CPU scheduling policies are straightforward. Any new job that is submitted by a
user could be assigned to the least burdened processor. It also means that at any given time,
all processors are equally taxed.
Since the processors share memory along with the I/O bus or data channel, the symmetric
multiprocessing OS is sometimes known as a “shared everything” system. The number of
processors in this system is normally limited to 16.
Characteristics
Any processor in this system can run any process or job.
Any CPU can start an Input and Output operation in this way.
Pros
These are fault-tolerant systems. A few processors failing does not bring the whole system to
a standstill.
Cons
It is quite difficult to rationally balance the workload among processors.
For handling many processors, specialised synchronisation algorithms are required.
Asymmetric
The processors in an asymmetric system have a master-slave relationship: one processor
serves as the master (or supervisor) processor, while the rest are treated as slave processors.
In such a system, the master CPU controls the other CPUs. Each processor is assigned a
specific task, and the actions of the slave processors are overseen by the master processor.
We have a maths coprocessor, for example, that can handle mathematical tasks better than the
main CPU. We also have an MMX processor, which is designed to handle multimedia-related
tasks. We also have a graphics processor to handle graphics-related tasks more efficiently
than the main processor. Whenever a user submits a new job, the operating system must
choose which processor is most suited for the task, and that processor is subsequently
assigned to the newly arriving job. This processor is the system's master and controller. All
other processors either look to the master for instructions or have predetermined tasks. The
master is responsible for allocating work to the other processors.
Pros
Because several processors are available for a single job, the execution of an I/O operation or
application software in this type of system may be faster in some instances.
Cons
The processors are burdened unequally in this form of multiprocessing operating system. One
CPU may have a large job queue while another is idle. If a process handling a specific task
fails in this system, the entire system will fail.
Real-Time operating system
A real-time operating system (RTOS) is a special-purpose operating system used in
computers that has strict time constraints for any job to be performed. It is employed mostly
in those systems in which the results of the computations are used to influence a process
while it is executing. Whenever an event external to the computer occurs, it is communicated
to the computer with the help of some sensor used to monitor the event. The sensor produces
the signal that is interpreted by the operating system as an interrupt. On receiving an
interrupt, the operating system invokes a specific process or a set of processes to serve the
interrupt.
This process is completely uninterrupted unless a higher priority interrupt occurs during its
execution. Therefore, there must be a strict hierarchy of priority among the interrupts. The
interrupt with the highest priority must be allowed to initiate the process, while lower-priority
interrupts should be kept in a buffer and handled later. Interrupt management
is important in such an operating system.
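The buffering rule can be sketched with a priority queue: pending interrupts wait in a buffer and the highest-priority one is always serviced first (the priority numbers and device names below are illustrative; lower number means higher priority):

```python
# Sketch of prioritized interrupt handling: interrupts accumulate in a
# buffer and are serviced strictly in priority order.
import heapq

pending = []  # buffer of (priority, source) pairs

def raise_interrupt(priority, source):
    heapq.heappush(pending, (priority, source))

def service_next():
    priority, source = heapq.heappop(pending)  # highest priority first
    return source

raise_interrupt(3, "disk")
raise_interrupt(1, "sensor-alarm")   # the most urgent external event
raise_interrupt(2, "timer")

order = [service_next() for _ in range(3)]
print(order)  # ['sensor-alarm', 'timer', 'disk']
```
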
Real-time systems employ special-purpose operating systems because conventional
operating systems do not provide such performance.
The various examples of Real-time operating systems are:
o MTS
o Lynx
o QNX
o VxWorks etc.
For example, this type of system is used in online transaction systems and stock price
quotation systems.
Firm Real-Time operating system:
A firm RTOS must also observe deadlines. However, missing a deadline may not have a
major impact, but it can cause undesired effects, such as a significant reduction in the
quality of a product.
For Example, this system is used in various forms of Multimedia applications.
Advantages of Real-time operating system:
The benefits of real-time operating systems are as follows:
o It is easy to design, develop, and execute real-time applications under a real-time
operating system.
o Real-time operating systems are compact, so they require less memory space.
o A real-time operating system makes maximum utilization of devices and the system.
o Focus is placed on running applications, with less importance given to applications
waiting in the queue.
o Since the size of programs is small, an RTOS can also be used in embedded systems,
such as those in transport and others.
o These types of systems are designed to be error-free.
o Memory allocation is best managed in these types of systems.
Disadvantages of Real-time operating system:
o Real-time operating systems have complicated design principles and are very costly to
develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.
Multiprocessor scheduling
Multiprocess scheduling allows various processes to run simultaneously over more than one
processor or core. The complexity of multiprocessor scheduling as compared to single
processor scheduling is mainly caused by the requirements to balance the execution load over
multiple processors. These processors can be tightly coupled (multicore systems), i.e., they
are capable of communicating through a common bus, memory, I/O channels. They also can
be loosely coupled (e.g., multiprocessor systems, distributed multiprocessor, and clustered
systems), where each processor has its own main memory and I/O channels. The processors
can be also identical (symmetric multiprocessing), or heterogeneous (asymmetric
multiprocessing). The design issues of multiprocessor scheduling include some basic
concepts: processor affinity, load balancing, synchronization granularity.
Processor affinity is a design concept that associates processes with the processors to run on.
Based on this concept, the system tends to avoid migrating processes away from the
processors on which they are executing. Processor affinity aims to mitigate the overhead of
invalidating a process's cached data on the first processor (where the process ran) and
repopulating it in the new processor's cache. There are two types
of processor affinity: Soft and hard. In soft affinity, the operating systems choose but do not
guarantee to keep a process running on the same processor (dynamic scheduling). In hard
affinity, a process can specify a subset of processors on which it may run (dedicated
processor assignment).
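On Linux, hard affinity is visible from user space: a process can restrict itself to a subset of processors. A sketch using Python's Linux-specific `os.sched_getaffinity`/`os.sched_setaffinity` calls (this will not run on non-Linux hosts):

```python
# Hard processor affinity on Linux: pin this process to a single CPU,
# then restore the original mask.
import os

allowed = os.sched_getaffinity(0)       # CPUs this process may run on
print("initially allowed:", sorted(allowed))

first_cpu = min(allowed)
os.sched_setaffinity(0, {first_cpu})    # hard affinity: pin to one CPU
print("now pinned to:", sorted(os.sched_getaffinity(0)))

os.sched_setaffinity(0, allowed)        # restore the original mask
```

Soft affinity, by contrast, is the scheduler's internal preference and is not something a process requests through this interface.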
Load Balancing is a design concept that distributes the workload evenly across all processors.
In general, systems immediately extract a runnable process from the common run queue once
a processor becomes idle. If a processor has its private queue of processes eligible to execute,
based on the processor affinity concept, the load balancing concept will be necessary to
assure equal distribution of execution load. There are two approaches for load balancing:
push and pull migration. In push migration, a system process routinely checks the load on
each processor and moves processes from overloaded to idle or less busy processors. In pull
migration, an idle processor pulls a waiting task from a busy processor and executes it.
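Pull migration can be illustrated with two toy run queues (plain Python deques stand in for per-processor queues; a real scheduler would need locking and a load metric):

```python
# Toy pull migration: an idle processor takes a waiting task from a
# busy processor's queue.
from collections import deque

busy_queue = deque(["task-1", "task-2", "task-3"])  # overloaded processor
idle_queue = deque()                                 # idle processor

def pull_migrate(src, dst):
    if src and not dst:
        dst.append(src.pop())   # the idle CPU pulls a waiting task

pull_migrate(busy_queue, idle_queue)
print(list(idle_queue))  # ['task-3']
print(list(busy_queue))  # ['task-1', 'task-2']
```

Push migration would invert the initiative: a periodic system process would inspect both queues and move work out of the overloaded one.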
Synchronization granularity is a measure that indicates the frequency of communication
between multiple process threads in a multithreading system relative to the execution time of
these threads.
Depending on the synchronization granularity, executing a multithreaded process on multiple
processors (i.e., parallelism) can be classified into three categories:
1. fine-grained,
2. medium-grained, and
3. coarse-grained parallelism.
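The difference in granularity can be sketched with two thread workers: a coarse-grained worker computes independently and synchronizes once at the end, while a fine-grained worker synchronizes on every element (the workloads are illustrative):

```python
# Sketch of synchronization granularity: both workers add the same sum
# to a shared total, but they communicate at very different frequencies.
import threading

lock = threading.Lock()
total = 0

def coarse_worker(items):
    # Coarse-grained: compute independently, synchronize once.
    global total
    local = sum(items)
    with lock:
        total += local

def fine_worker(items):
    # Fine-grained: synchronize on every element (much more communication).
    global total
    for x in items:
        with lock:
            total += x

threads = [threading.Thread(target=coarse_worker, args=(range(100),)),
           threading.Thread(target=fine_worker, args=(range(100),))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 9900  (two sums of 0..99)
```

Both workers produce the same result, but the fine-grained one pays lock overhead on every step, which is why granularity matters for multiprocessor performance.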
CASE STUDY-LINUX
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs. The Linux open
source operating system, or Linux OS, is a freely distributable, cross-platform operating
system based on UNIX.
A Linux system consists of a kernel and some system programs. There are also some
application programs for doing work. The kernel is the heart of the operating system; it
provides a set of tools that are used via system calls.
The defining component of Linux is the Linux kernel, an operating system kernel first
released on 5 October 1991 by Linus Torvalds.
A Linux-based system is a modular Unix-like operating system. It derives much of its basic
design from principles established in UNIX. Such a system uses a monolithic kernel which
handles process control, networking, and peripheral and file system access.
4 Hierarchical File System - Linux provides a standard file structure in which system files and
user files are arranged.
5 Shell - Linux provides a special interpreter program that can be used to execute
commands of the operating system.
Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this
operating system. It consists of various modules and interacts directly with the underlying
hardware. The kernel provides the required abstraction to hide low-level hardware details
from system or application programs.
System Library - System libraries are special functions or programs through which
application programs or system utilities access the kernel's features. These libraries
implement most of the functionality of the operating system and do not require the kernel
module's code access rights.
System Utility - System utility programs are responsible for specialized, individual-level
tasks.
Installed components of a Linux system include the following:
A bootloader is a program that loads the Linux kernel into the computer's main
memory, by being executed by the computer when it is turned on and after the
firmware initialization is performed.
An init program is the first process launched by the Linux kernel, and is at the root of the
process tree.
Software libraries, which contain code that can be used by running processes. The
most commonly used software library on Linux systems is the GNU C Library (glibc),
which implements the C standard library; widget toolkits are another common kind of library.
User interface programs such as command shells or windowing environments. The user
interface, also known as the shell, is either a command-line interface (CLI), a graphical
user interface (GUI), or controls attached to the associated hardware.
Architecture
Linux System Architecture consists of the following layers:
1. Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
2. Kernel - The core component of the operating system; it interacts directly with the
hardware and provides low-level services to the upper layers.
3. Shell - An interface to the kernel that takes commands from the user and executes the
kernel's functions.
4. Utilities - Utility programs give the user most of the functionalities of an operating system.
Modes of operation
Kernel Mode:
Kernel component code executes in a special privileged mode called kernel mode
with full access to all resources of the computer.
This code represents a single process, executes in a single address space, and does not
require any context switch; hence it is very efficient and fast.
The kernel runs each process, provides system services to processes, and provides
protected access to hardware for processes.
User Mode:
The system programs use the tools provided by the kernel to implement the various
services required from an operating system. System programs, and all other programs, run
`on top of the kernel', in what is called the user mode.
Support code which is not required to run in kernel mode is in System Library.
User programs and other system programs work in User Mode which has no access
to system hardware and kernel code.
User programs and utilities use system libraries to access kernel functions to perform the
system's low-level tasks.
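This layering is visible from any user-mode program: Python's `os` module, for example, reaches kernel services through C-library wrappers around system calls rather than by touching hardware directly (a POSIX-only sketch; `os.uname` is not available on Windows):

```python
# A user-mode program obtaining kernel services through library wrappers.
import os

pid = os.getpid()       # wraps the getpid() system call
uname = os.uname()      # wraps the uname() system call (POSIX systems)
print(pid, uname.sysname)
```
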
The single most important service in a Linux system is provided by the init program. init
is started as the first process of every Linux system, as the last thing the kernel does
when it boots. When init starts, it continues the boot process by doing various startup
chores (checking and mounting file systems, starting daemons, etc.).
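init's position at the root of the process tree can be observed through procfs: it always runs as PID 1 (a Linux-specific sketch; the process name depends on the distribution, e.g. systemd on modern systems):

```python
# Read the root of the process tree from procfs (Linux-specific).
with open("/proc/1/stat") as f:
    fields = f.read().split()

pid = int(fields[0])           # PID of the first process: always 1
name = fields[1].strip("()")   # e.g. "systemd" or "init"
print(pid, name)
```
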
The kernel and many system programs produce error, warning, and other messages. It is
often important that these messages can be viewed later, so they should be written to a file.
The program doing this logging operation is known as syslog.
Both users and system administrators often need to run commands periodically. For
example, the system administrator might want to run a command to clean the directories
with temporary files from old files, to keep the disks from filling up, since not all programs
clean up after themselves correctly.
o The cron service is set up to do this. Each user can have a crontab file, which lists the
commands they wish to execute and the times at which they should be executed.
o The graphical environment primarily used with Linux is called the X Window
System (X for short) that provides tools with which a GUI can be implemented. Some
popular window managers are blackbox and windowmaker. There are also two popular
desktop managers, KDE and Gnome.
Network logins work a little differently than normal logins. For each person logging in via
the network there is a separate virtual network connection. It is therefore not possible to
run a separate getty for each virtual connection. There are several different ways to log in
via a network, telnet and ssh being the major ones in TCP/IP networks.
Most Linux system administrators consider telnet and rlogin insecure and prefer ssh, the
``secure shell'', which encrypts traffic going over the network, thereby making it far less
likely that a malicious party can ``sniff'' the connection and gain sensitive data like
usernames and passwords.
One of the more useful things that can be done with networking services is sharing files via
a network file system. Depending on your network this could be done over the Network
File System (NFS), or over the Common Internet File System (CIFS).
NFS is typically a 'UNIX' based service. In Linux, NFS is supported by the kernel. CIFS
however is not. In Linux, CIFS is supported by Samba. With a network file system any file
operations done by a program on one machine are sent over the network to another
computer.