CH 1 Introduction


OPERATING SYSTEM

Introduction
Operating System
An operating system is a system program that acts as an interface between the user
and the computer hardware and controls the execution of application programs.
In other words, it is a program that acts as an intermediary between a user of a
computer and the computer hardware.
Operating System goals
Execute user programs and make solving user problems easier.
Make the computer system convenient to use.
Use the computer hardware in an efficient manner.
1.1 Evolution of Operating Systems (Generations)
A) First Generation (1940-1956) Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory,
and were often enormous, taking up entire rooms.
They were very expensive to operate and in addition to using a great deal of electricity,
generated a lot of heat, which was often the cause of malfunctions.
First generation computers relied on machine language, the lowest-level programming
language understood by computers, to perform operations, and they could only solve
one problem at a time.
Input was based on punched cards and paper tape, and output was displayed on
printouts.
Examples: The UNIVAC and ENIAC computers are examples of first-generation
computing devices. The UNIVAC was the first commercial computer; it was delivered
to its first client, the U.S. Census Bureau, in 1951.
B) Second Generation (1956-1963) Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers.
The transistor was invented in 1947 but did not see widespread use in computers until
the late 1950s.
The transistor was far superior to the vacuum tube, allowing computers to become
smaller, faster, cheaper, more energy-efficient and more reliable than their first-
generation predecessors.
Though the transistor still generated a great deal of heat that could damage the
computer, it was a vast improvement over the vacuum tube. Second-generation
computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to
symbolic, or assembly, languages, which allowed programmers to specify instructions
in words.
High-level programming languages were also being developed at this time, such as
early versions of COBOL and FORTRAN. These were also the first computers that stored
their instructions in their memory, which moved from a magnetic drum to magnetic
core technology.
The first computers of this generation were developed for the atomic energy industry.



C) Third Generation (1964-1971) Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of
computers.
Transistors were miniaturized and placed on silicon chips (semiconductors),
which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation
computers through keyboards and monitors and interfaced with an operating system,
which allowed the device to run many different applications at one time with a central
program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were
smaller and cheaper than their predecessors.
D) Fourth Generation (1971-Present) Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of
integrated circuits were built onto a single silicon chip.
What in the first generation filled an entire room could now fit in the palm of the hand.
The Intel 4004 chip, developed in 1971, located all the components of the computer,
from the central processing unit and memory to input/output controls, on a single
chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple
introduced the Macintosh.
Microprocessors also moved out of the realm of desktop computers and into many
areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet.
Fourth generation computers also saw the development of GUIs, the mouse and
handheld devices.
E) Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in
development, though there are some applications, such as voice recognition, that are
being used today.
The use of parallel processing and superconductors is helping to make artificial
intelligence a reality.
Quantum computation and molecular and nanotechnology will radically change the face
of computers in years to come.
The goal of fifth-generation computing is to develop devices that respond to natural
language input and are capable of learning and self-organization.
Operating systems have existed since the very first computer generation and have
kept evolving over time. The following are a few of the most commonly used
types of operating system.
1.2 TYPES OF OPERATING SYSTEM
A) Batch Operating System
The users of a batch operating system do not interact with the computer directly.
Each user prepares his job on an off-line device such as punched cards and submits
it to the computer operator.
To speed up processing, jobs with similar needs are batched together and run as a
group. Thus, the programmers left their programs with the operator. The operator then
sorts programs into batches with similar requirements.
The problems with batch systems are the following:
1. Lack of interaction between the user and the job.
2. The CPU is often idle, because the mechanical I/O devices are slower than the
CPU.
3. It is difficult to provide the desired priority.
B) Multiprogrammed Operating System
A multiprogramming operating system is one that allows end-users to run more than
one program at a time.
The development of such a system, the first type to allow this functionality, was a major
step in the development of sophisticated computers.
The technology works by allowing the central processing unit (CPU) of a computer to
switch between two or more running tasks when the CPU is idle.
Early computers were largely dedicated to executing one program or, more
accurately, one task initiated by a program at a time.
Understanding the concept of tasks is key to understanding how a multiprogramming
operating system functions. A "task" is a small sequence of commands that, when
combined, comprises the execution of a running program.
For example, if the program is a calculator, one task of the program would be recording
the numbers being input by the end-user.
A multiprogramming operating system acts by analyzing the current CPU activity in the
computer.
When the CPU is idle, that is, when it is between tasks, it has the opportunity to
use that downtime to run tasks for another program. In this way, the functions of
several programs may be executed sequentially.
For example, when the CPU is waiting for the end-user to enter numbers to be
calculated, instead of being entirely idle, it may load the components of a web page
the user is accessing.
The main benefit of this functionality is that it can reduce wasted time in the system's
operations. As in a business, efficiency is the key to generating the most profit from an
enterprise.
Using this type of operating system eliminates waste by ensuring that the
computer's CPU is running at maximum capacity more of the time. This results in a
smoother computing experience from the end-user's point of view, as program
commands are constantly being executed in the background, helping to speed the
execution of programs.
The multiprogramming operating system has been largely supplanted by a new
generation of operating system known as multitasking operating systems.
In a multitasking operating system, the system does not have to wait for the completion
of a task before moving to work on an active program.
Instead, it can interrupt a running program at any time in order to shift its CPU
resources to a different active program. This provides for a more dynamic approach to
handling concurrent programs.
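The switching behavior described above can be sketched with a toy cooperative scheduler. This is a minimal illustration, not an actual OS mechanism: Python generators stand in for programs, and each yield marks a point where a program would otherwise wait, letting the "CPU" switch to another program. The program names and task steps are invented for the example.

```python
from collections import deque

def calculator():
    # Each yield marks a point where this program waits (e.g. for I/O),
    # giving the CPU a chance to run another program.
    yield "calculator: reading user input"
    yield "calculator: computing result"

def browser():
    yield "browser: fetching page"
    yield "browser: rendering page"

def run_multiprogrammed(programs):
    """Whenever one program yields (is 'idle'), switch the CPU
    to the next runnable program, round-robin style."""
    ready = deque(programs)
    trace = []
    while ready:
        prog = ready.popleft()
        try:
            trace.append(next(prog))   # run the program until it waits
            ready.append(prog)         # still has work; requeue it
        except StopIteration:
            pass                       # program finished; drop it
    return trace

trace = run_multiprogrammed([calculator(), browser()])
```

The resulting trace interleaves the two programs' steps, mirroring how idle time in one program is filled with work from another.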

C) Multitasking Operating System
A multitasking operating system is any type of system that is capable of running more
than one program at a time.
Most modern operating systems are configured to handle multiple programs
simultaneously, with the exception of some privately developed systems that are
designed for use in specific business settings.
With older examples of the multitasking operating system, managing two or more tasks
normally involved switching system resources back and forth between the two running
processes.
The system would execute tasks for one, freeze that program for a few seconds, and
then execute tasks for the other program.
While this approach did create a short time lag for the operator, this lag was
usually no more than a few seconds, and it still offered considerably more
efficiency than the older single-task operating systems.
Over time, popular incarnations of the multitasking operating system were developed
that used a different approach to allocating resources for each active program.
This created a situation where virtually no time lag occurred at all, assuming that the
equipment driving the system had adequate resources.
For the end user, this meant the ability to perform several tasks simultaneously without
any waiting for the system to release or redirect resources as each task completed in
turn.
The typical multitasking operating system requires more resources than the simple
operating systems that were common for desktop computers in the late 1970s and
early 1980s.
Newer systems require platforms with a considerable amount of random access
memory (RAM) as well as some form of virtual memory.
If the resources are not available to drive the various applications that are open and
being executed, the system may slow to a crawl, or possibly even shut down an
application or two if that is the way the system is configured to prevent overload.
Today, most desktop, laptop, and netbook operating systems function as some type
of multitasking operating system.
Even equipment such as automatic teller machines (ATMs) makes use of some type
of multitasking system, using a series of programs to check balances and execute
the requests made by users.
There are also examples of movie ticket stub systems that are able to perform several
tasks at once, including posting receipts for tickets purchased, even as the system
generates and dispenses the purchased tickets.
D) Time-Sharing operating systems
Time sharing is a technique which enables many people, located at various terminals, to
use a particular computer system at the same time.
Time-sharing or multitasking is a logical extension of multiprogramming. Processor
time that is shared among multiple users simultaneously is termed time-sharing.
The main difference between multiprogrammed batch systems and time-sharing
systems is that the objective of multiprogrammed batch systems is to maximize
processor use, whereas the objective of time-sharing systems is to minimize
response time.
Multiple jobs are executed by the CPU by switching between them, but the switches
occur so frequently that each user receives an immediate response.
For example, in transaction processing, the processor executes each user program
in a short burst, or quantum, of computation. That is, if n users are present, each
user gets a time quantum in turn. When a user submits a command, the response
time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each
user with a small portion of time. Computer systems that were designed primarily
as batch systems have been modified to time-sharing systems.
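The time-quantum idea above can be illustrated with a minimal round-robin scheduler. This is a sketch only: the job names and burst lengths are invented for the example, and real schedulers account for I/O, priorities, and context-switch cost.

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: dict mapping job name -> remaining time units.
    Run each job for at most `quantum` units, then switch to the next,
    so every user gets a slice of processor time in turn."""
    queue = deque(jobs.items())
    order = []                         # (job, units run) in execution order
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        order.append((name, slice_))
        remaining -= slice_
        if remaining > 0:
            queue.append((name, remaining))   # not finished; requeue
    return order

# Three users with bursts of 5, 3 and 4 time units, quantum of 2.
schedule = round_robin({"user1": 5, "user2": 3, "user3": 4}, quantum=2)
```

Because the quantum is small relative to human reaction time, each user perceives an immediate response even though the processor is shared.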
Advantages
Provides the advantage of quick response.
Avoids duplication of software.
Reduces CPU idle time.
Disadvantages
Problem of reliability.
Question of security and integrity of user programs and data.
Problem of data communication.
E) Multiprocessor Operating System
A multiprocessor operating system refers to the use of two or more central
processing units (CPUs) within a single computer system.
These multiple CPUs are in close communication, sharing the computer bus, memory
and other peripheral devices. Such systems are referred to as tightly coupled systems.
These types of systems are used when very high speed is required to process a large
volume of data. They are generally used in environments like satellite control,
weather forecasting, etc.
Some multiprocessing systems are based on the symmetric multiprocessing model,
in which each processor runs an identical copy of the operating system and these
copies communicate with each other as needed.
Others use asymmetric multiprocessing, in which each processor is assigned a
specific task and a master processor controls the system. This scheme defines a
master-slave relationship.
These systems can save money compared with multiple single-processor systems,
because the processors can share peripherals, power supplies and other devices.
The main advantage of a multiprocessor system is to get more work done in a
shorter period of time.
Moreover, multiprocessor systems prove more reliable when one processor fails:
the failure will not halt the system, but only slow it down.
In order to employ a multiprocessing operating system effectively, the computer
system must have the following:
Motherboard support: a motherboard capable of handling multiple processors. This
means additional sockets or slots for the extra chips and a chipset capable of
handling the multiprocessing arrangement.
Processor support: processors that are capable of being used in a multiprocessing
system.
The whole task of multiprocessing is managed by the operating system, which allocates
different tasks to be performed by the various processors in the system.
Applications designed for use in multiprocessing are said to be threaded, which
means that they are broken into smaller routines that can be run independently.
This allows the operating system to let these threads run on more than one
processor simultaneously, which results in improved performance.
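A threaded application of this kind can be sketched in Python: the work of summing a list is broken into two independent routines, each run in its own thread. This shows the programming model only; the data and thread count are invented for the example, and in CPython the global interpreter lock limits true parallelism for CPU-bound threads, so real multiprocessor speedups there typically use processes instead.

```python
import threading

def partial_sum(numbers, results, index):
    # Each thread independently sums its own slice of the data.
    results[index] = sum(numbers)

data = list(range(1, 101))          # the numbers 1..100
mid = len(data) // 2
results = [0, 0]                    # one slot per thread

threads = [
    threading.Thread(target=partial_sum, args=(data[:mid], results, 0)),
    threading.Thread(target=partial_sum, args=(data[mid:], results, 1)),
]
for t in threads:
    t.start()                       # run both routines concurrently
for t in threads:
    t.join()                        # wait for both to finish

total = results[0] + results[1]     # combine the independent results
```

The two routines share no intermediate state, which is what lets the operating system place them on different processors.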
A multiprocessor system supports running processes in parallel. Parallel processing
is the ability to process incoming jobs simultaneously: the work is divided and
conquered among the processors.
Generally, parallel processing is used in fields like artificial intelligence and
expert systems, image processing, weather forecasting, etc.
In a multiprocessor system, the dynamic sharing of resources among the various
processors can become a potential bottleneck. There are three main sources of
contention in a multiprocessor operating system:
Locking system: In order to provide safe access to the resources shared among
multiple processors, they need to be protected by a locking scheme.
The purpose of locking is to serialize accesses to the protected resource by multiple
processors. Undisciplined use of locking can severely degrade the performance of
the system.
This form of contention can be reduced by using fine-grained locking, avoiding long
critical sections, replacing locks with lock-free algorithms or, whenever possible,
avoiding sharing altogether.
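The serializing role of a lock can be shown with a shared counter: each increment is a read-modify-write on a shared resource, and holding the lock around it ensures only one thread updates the counter at a time. A minimal sketch with invented thread and iteration counts:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # serialize access to the shared resource;
            counter += 1         # only one thread updates it at a time

# Four threads each perform 10,000 increments concurrently.
threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around every update, no increment is lost.
```

The same sketch also shows the cost side of the trade-off described above: because the critical section is taken on every iteration, the threads spend much of their time contending for the lock rather than doing useful work.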
Shared data: Continual accesses to shared data items by multiple processors (with
one or more of them writing the data) are serialized by the cache coherence
protocol.
Even in a moderate-scale system, serialization delays can have a significant impact
on system performance.
In addition, bursts of cache coherence traffic can saturate the memory bus or the
interconnection network, which also slows down the entire system.
This form of contention can be eliminated by avoiding sharing or, when this is
not possible, by using replication techniques to reduce the rate of write accesses
to the shared data.
False sharing: This form of contention arises when unrelated data items used by
different processors are located next to each other in memory and therefore share
a single cache line.
The effect of false sharing is the same as that of regular sharing: bouncing of the
cache line among several processors. Fortunately, once it is identified, false sharing
can be easily eliminated by adjusting the memory layout of the non-shared data.
Apart from eliminating bottlenecks in the system, a multiprocessor operating system
developer should provide support for efficiently running user applications on the
multiprocessor.
Some aspects of such support include mechanisms for task placement and
migration across processors, physical memory placement ensuring that most of the
memory pages used by an application are located in local memory, and scalable
multiprocessor synchronization primitives.

F) Distributed operating System
Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users.
Data processing jobs are distributed among the processors according to which one
can perform each job most efficiently.
The processors communicate with one another through various communication
lines (such as high-speed buses or telephone lines).
These are referred to as loosely coupled systems or distributed systems. Processors
in a distributed system may vary in size and function, and are referred to as sites,
nodes, computers and so on.
Advantages
With resource sharing facility user at one site may be able to use the resources
available at another.
Speeds up the exchange of data between sites, for example via electronic mail.
If one site fails in a distributed system, the remaining sites can potentially continue
operating.
Better service to the customers.
Reduction of the load on the host computer.
Reduction of delays in data processing.



G) Cluster Operating System
"Cluster" is an ambiguous term in the computer industry. Depending on the vendor
and the specific context, a cluster may refer to a wide variety of environments.
In computers, clustering is the use of multiple computers, typically PCs or UNIX
workstations, multiple storage devices, and redundant interconnections, to form what
appears to users as a single highly available system.
Cluster computing can be used for load balancing as well as for high availability.
Advocates of clustering suggest that the approach can help an enterprise achieve
99.999% ("five nines") availability in some cases.
One of the main ideas of cluster computing is that, to the outside world, the cluster
appears to be a single system.
Clustering is the use of multiple computers to provide a single service. Load
balancing is a technique for distributing work across the computers in a cluster;
in a nutshell, load balancing is one common way a computer cluster is put to use.
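A round-robin load balancer that presents several cluster nodes as a single service can be sketched as follows. The node names and request labels are hypothetical, and real balancers also track node health and load:

```python
import itertools

class RoundRobinBalancer:
    """Distributes incoming requests across cluster nodes in turn,
    so the cluster appears as a single system to clients."""

    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)   # endless rotation over nodes

    def route(self, request):
        node = next(self._cycle)               # pick the next node in turn
        return node, request                   # (where it goes, what it is)

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
assignments = [lb.route(f"req-{i}")[0] for i in range(6)]
```

Each node receives an equal share of requests, and the client only ever talks to the balancer, matching the "single system" view described above.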
OS clustering (also known as hardware clustering) is designed to manage hardware
and OS-level failures. These systems typically work by starting a backup server
when a primary fails, in such a way that the backup fully assumes the role of the
primary.
Failover generally involves re-assigning the failed server's IP address to the backup
(IP takeover), re-permissioning file system access to the backup (if using a shared
file system instead of replication), and then running a script that you set up
yourself to start all your applications.
This technology is older, takes more time to perform a failover, and is less able to
fully utilize all of your hardware resources.

Advantages
If you have a group of applications which must run on the same machine, OS
clustering can ensure that they all run together on the primary node in the cluster.
If your applications depend on a local file system, as databases that need to manage
their files locally do, the OS cluster can ensure that this file system fails over with
the primary node of the cluster.
If you don't have a NAS or a file server and you need a shared file store, you can
create a shared file store on the OS cluster for use by machines outside the cluster.
Of course, if you already have a file server or are storing information in a database,
this advantage does not apply.
Application Server Clustering, or, more generally, software clustering, is far more
capable and dynamic.
First of all, the backup server is usually in at least a warm-standby mode, and
ideally a hot one, meaning that it can immediately assume the primary's
responsibilities with very little delay.
Second, advanced software-level clustering also supports load balancing, so you
never have "backup" hardware sitting idle. Instead of re-assigning IP addresses,
applications that connect to your clustered environment must already be designed
to check for service failure/availability on more than one destination host.
Alternatively, you can use some kind of load balancer or traffic router that exposes
the cluster as a single IP address. A final difference is the dynamic nature of newer
software clustering techniques.
You can generally add or lose capacity on the fly with little or no visible impact on
dependent applications. Hardware-level clustering is quite difficult to set up and
modify correctly, and requires fairly painful and regular testing.
H) Real Time Operating System
A real-time system is defined as a data processing system in which the time interval
required to process and respond to inputs is so small that it controls the environment.
Real-time processing is always online, whereas an online system need not be
real-time.
The time taken by the system to respond to an input and display the required
updated information is termed the response time. In real-time systems, the
response time is much smaller than in online processing.
Real-time systems are used when there are rigid time requirements on the
operation of a processor or the flow of data, and they can be used as a control
device in a dedicated application.
A real-time operating system has well-defined, fixed time constraints; otherwise
the system will fail.
Examples include scientific experiments, medical imaging systems, industrial
control systems, weapon systems, robots, home-appliance controllers, air traffic
control systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard
real-time systems, secondary storage is limited or absent, with data instead stored
in ROM. Virtual memory is almost never found in these systems.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority
over other tasks and retains that priority until it completes.
Soft real-time systems have more limited utility than hard real-time systems.
Examples include multimedia, virtual reality, and advanced scientific projects like
undersea exploration and planetary rovers.
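The hard/soft distinction can be sketched as a deadline check: a hard real-time system treats a missed deadline as outright failure, while a soft one merely degrades service. The times and classification labels below are illustrative, not a real OS API:

```python
def deadline_status(finish_ms, deadline_ms, hard):
    """Classify a task that finished at `finish_ms` against an
    absolute deadline of `deadline_ms` (both in milliseconds)."""
    if finish_ms <= deadline_ms:
        return "ok"
    # Deadline missed: a hard real-time system counts this as failure,
    # a soft real-time system only as degraded service.
    return "failure" if hard else "degraded"

# Illustrative values: the task takes 12 ms against a 10 ms deadline.
hard_status = deadline_status(finish_ms=12, deadline_ms=10, hard=True)
soft_status = deadline_status(finish_ms=12, deadline_ms=10, hard=False)
```

A missed frame in a video player (soft) drops quality for an instant; a missed deadline in an industrial controller (hard) means the system has failed to control its environment.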
