Notes-Operating System Concepts-Lect1
This module discusses what an operating system is, the different definitions of
operating systems, and the features of the operating systems that have evolved
over time.
A computer has a number of resources, such as the CPU, I/O devices and memory.
These resources have to be used properly by the different applications, belonging to
one or more users, that run on the computer. Many applications may run at once,
and each application may need different resources of the computer. Hence the
resources have to be managed by allocating them appropriately to the different
applications.
An operating system acts as a control program that controls the execution of user
programs and the operation of I/O devices.
Convenience
Efficiency
One goal of an operating system is to make it easier to execute users' programs;
convenience is therefore one criterion in the design of an operating system, and
efficiency is the other. For example, the Windows operating system was designed to
be easy for users to use, whereas in the earlier UNIX systems efficiency was the
main design concern. These days, however, UNIX systems also provide a user
interface that is easy and convenient to use. So, depending on the kind of
application for which a system is used, either convenience or efficiency can be
taken as the main criterion when designing the operating system.
Different operating systems have evolved over time. Right from when operating
systems came into existence, different functionalities have been gradually added to
operating systems.
Initially, computers were mainframe systems. These computers were very large,
sometimes occupying one full floor of a building, and very expensive, and only one
user could use a machine at a time. Users had to punch their programs onto cards
and hand them over to the operator in charge of the computer, and could then
collect the results the next day. So, users had to queue one after the other to
submit their input and wait until the next day to collect the results.
Hence the operating system in this case had only to transfer control from one job
to the next in a sequential manner. This sequential execution wastes time, some of
which the operating system can recover by batching jobs with similar set-up
requirements together. Clearly, the operating system in this type of system has very
limited functionality. As seen in Figure 1.1, a resident monitor (operating
system) resides in memory. Initially, control is with the monitor; control is then
transferred to a job, and on completion of the job, control returns to the monitor.
So, there is a possibility of the CPU remaining idle for a lot of the time.
When disk technology was introduced, jobs could be placed on a disk and read
from the disk rather than from the input card reader. Hence, it was possible to
place multiple jobs on the disk at the same time. Thus, multi-programmed batch
systems came into existence.
The operating systems like Windows, macOS and Linux that we see today are all
time-sharing systems. A time-sharing system is a logical extension of a
multiprogramming system. In these systems, the CPU is multiplexed among several
jobs that are kept in memory and on disk, and hence job scheduling and CPU
scheduling are included among the functions of the operating system. Here, direct
communication between the user and the system is provided, and many users can
use the system at the same time. Since users interact much more slowly than the
CPU executes, CPU time can be shared among the different users' programs. Since
many jobs can be kept in main memory at once, memory management is needed.
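The multiplexing of the CPU among jobs can be illustrated with a toy round-robin scheduler. The sketch below is a simulation only (Python is an assumed choice, and the job names and time units are hypothetical): each job in turn receives one time slice (quantum) of CPU time until it completes.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Toy simulation of round-robin CPU scheduling.

    jobs: dict mapping job name -> CPU time it still needs.
    quantum: time slice given to a job on each turn.
    Returns the order in which jobs finish.
    """
    ready = deque(jobs.items())          # the ready queue of (name, remaining)
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            # Job used its whole slice; put it back at the tail of the queue.
            ready.append((name, remaining - quantum))
        else:
            finished.append(name)        # job completes within this slice
    return finished

print(round_robin({"A": 5, "B": 2, "C": 9}, quantum=3))  # → ['B', 'A', 'C']
```

Short jobs such as B finish quickly even though a long job (C) was submitted at the same time, which is exactly the responsiveness a time-sharing system aims for.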
It may be necessary for a process to wait for I/O to happen (say, to read input
from the user). This process may then be moved (swapped) from main memory to the
disk, and another partially executed process that was moved to the disk earlier can
be brought into main memory. Thus partially executed jobs can be moved in and out
of memory to the disk; this leads to the concept of virtual memory. Virtual
memory also allows programs to be larger than the physical memory.
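The swapping step described above can be sketched as a tiny model (Python assumed; the process names and list-based "memory" and "disk" are purely illustrative, not a real memory manager): a process blocked on I/O is swapped out, and a previously swapped-out process is swapped back in.

```python
def swap(memory, disk, blocked):
    """Toy model of medium-term scheduling (swapping).

    memory: names of processes currently in main memory.
    disk:   names of partially executed processes swapped out to disk.
    blocked: a process in memory that is now waiting for I/O.
    """
    memory.remove(blocked)            # swap out: free the space it occupied
    if disk:
        memory.append(disk.pop(0))    # swap in the oldest swapped-out process
    disk.append(blocked)              # the blocked process now waits on disk
    return memory, disk

mem, dsk = swap(["P1", "P2"], ["P3"], blocked="P2")
print(mem, dsk)  # ['P1', 'P3'] ['P2']
```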
Since many users use the computer, each user will have his or her own files,
and these files have to be placed on secondary storage devices (disks). When there
are many files, they have to be arranged in a logical manner. Hence, file systems
were included as part of operating systems. Since files reside on disks, disk
management was also needed.
Many processes, of a single user or of different users, can run concurrently. At
a particular instant, only one process can use the CPU; while one process is
waiting for I/O, another process may use the CPU, and vice versa. Users of the
computer may not even notice this switching. Thus concurrent execution of
processes is possible.
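The benefit of letting one activity run while another waits for I/O can be seen even at the user level with threads. In this sketch (Python assumed; the function name and the 0.2-second sleep standing in for an I/O wait are illustrative), two simulated I/O waits overlap, so the total elapsed time is close to one wait, not two.

```python
import threading, time

def io_task(name, results):
    time.sleep(0.2)              # pretend this is a slow I/O operation
    results[name] = "done"

results = {}
start = time.time()
threads = [threading.Thread(target=io_task, args=(n, results)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for both tasks to finish
elapsed = time.time() - start
# The two 0.2 s waits overlap, so elapsed is roughly 0.2 s, not 0.4 s.
print(results, round(elapsed, 1))
```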
When many processes execute concurrently, they may have to communicate among
themselves or share common variables or data structures. Hence process
synchronization and communication are needed. Similarly, when processes execute
concurrently and share the resources of the computer, deadlocks may
occur. Hence the operating system must have the capability to handle deadlocks.
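One common way to avoid deadlock when two concurrent activities need the same pair of shared resources is to acquire the locks in a single agreed-upon order, which rules out the circular wait that deadlock requires. The sketch below (Python assumed; the lock names and the shared list are illustrative) shows two threads safely updating shared data under this discipline.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
shared = []                      # data structure shared by both threads

def worker(item):
    # Both threads acquire lock_a first, then lock_b: a fixed global
    # ordering, so neither can hold one lock while waiting for the other
    # in the reverse order -- no circular wait, hence no deadlock.
    with lock_a:
        with lock_b:
            shared.append(item)

t1 = threading.Thread(target=worker, args=(1,))
t2 = threading.Thread(target=worker, args=(2,))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(shared))  # [1, 2]
```

If one thread instead took lock_b before lock_a, the two threads could each grab one lock and wait forever for the other, which is precisely the deadlock the operating system (or the programmer) must guard against.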
Initially, the CPUs in PCs lacked the features needed to protect an operating
system from user programs, so PC operating systems were neither multiuser nor
multitasking. The goals of these operating systems have also changed with time:
instead of maximizing CPU and peripheral utilization, the systems opt for maximizing
user convenience and responsiveness. Initially, file protection was not needed on a
personal machine, but since computers are now connected to networks, other
computers and other users can access the files on a PC, and file protection has
again become a necessary feature of the operating system. The lack of such
protection made it easy for malicious programs to destroy data on systems such as
MS-DOS and the early Macintosh operating system. These programs may be
self-replicating, and may spread rapidly via worm or virus mechanisms and disrupt
entire companies or even worldwide networks.
The systems discussed in the earlier sections had only one CPU. Multiprocessor
systems, in contrast, have more than one processor in close communication, sharing
the computer bus, the clock, and sometimes memory and peripheral devices. These
systems are also called tightly coupled systems.
The advantages of multiprocessor systems are increased throughput, economy
of scale and increased reliability. With more processors, more work is done in less
time. Multiprocessor systems can also cost less than an equivalent set of
single-processor systems, because the processors can share peripherals, storage
and power supplies. If several programs operate on the same set of data, it is
cheaper to store those data on one disk and have all the processors share them
than to have many computers with local disks and many copies of the data. Finally,
if functions can be distributed properly among several processors, then the failure
of one processor will not halt the system but only slow it down; hence the system
becomes more reliable.
There are two types of multiprocessing systems: symmetric and asymmetric
multiprocessing. In symmetric multiprocessing (SMP), each processor runs an identical
copy of the operating system, and these copies communicate with one another as
needed. In asymmetric multiprocessing, each processor is assigned a specific task.
Each processor can have different capabilities as well. A master processor controls the
system; the other processors either look to the master for instructions or have
predefined tasks. This scheme defines a master-slave relationship. The master
processor schedules and allocates work to the slave processors.
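The master-slave scheme described above can be sketched with threads standing in for processors (Python assumed; the task of squaring numbers, the queue names and the None shutdown sentinel are all illustrative choices, not part of any real multiprocessor design): the master places work on a queue, and the slaves repeatedly take tasks from it.

```python
import queue, threading

def slave(tasks, results):
    # A "slave processor": repeatedly take work assigned by the master.
    while True:
        n = tasks.get()
        if n is None:            # sentinel from the master: stop working
            break
        results.put(n * n)       # do the assigned work

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=slave, args=(tasks, results)) for _ in range(2)]
for w in workers:
    w.start()

# The "master processor": schedule work, then signal shutdown.
for n in range(5):
    tasks.put(n)
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()

squares = [results.get() for _ in range(5)]
print(sorted(squares))  # [0, 1, 4, 9, 16]
```

The slaves here have no scheduling responsibility of their own; all work allocation comes from the master, mirroring the asymmetric arrangement described above.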
Client-Server Systems:
Earlier, terminals connected to a centralized system provided the user
interface; now, personal computers (PCs) have replaced the terminals.
In client-server systems, multiple clients connect to a server, and the server
provides the necessary services to the clients. Client-server systems can be either
compute-server systems or file-server systems. In compute-server systems, the
server provides computational services to the users. In file-server systems, the
server provides a file-system interface through which clients can create, update,
read and delete files.
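A compute-server exchange can be sketched with sockets (Python assumed; running the server and client in one script, the squaring service, and all names are illustrative): the client sends a request, and the server performs the computation and returns the result.

```python
import socket, threading

def serve_once(sock):
    # A minimal "compute server": read a number, reply with its square.
    conn, _ = sock.accept()
    with conn:
        n = int(conn.recv(64).decode())
        conn.sendall(str(n * n).encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=serve_once, args=(server,))
t.start()

# The client sends a computation request and reads the server's reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"7")
reply = client.recv(64).decode()
client.close()
t.join()
server.close()
print(reply)  # 49
```

In a real deployment the client and server would of course run on different machines, with the server address known to the client.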
Peer-to-peer Systems:
In this type of system, all computers are peers. Each computer can
communicate with the other computers over communication lines or a network,
and different processes on different computers can exchange messages. A network
operating system, which is an operating system that provides features such as file
sharing and communication across the network, supports this kind of
communication. A computer running a network operating system acts autonomously
from all other computers on the network, although it is aware of the network and is
able to communicate with other networked computers. In a distributed operating
system, by contrast, the operating systems on the different computers communicate
closely enough to provide the illusion that a single operating system controls the
network.
References
1. Abraham Silberschatz, Peter B. Galvin, Greg Gagne, “Operating System
Concepts”, Ninth Edition, John Wiley & Sons Inc., 2012.
2. Thomas Anderson, Michael Dahlin, “Operating Systems: Principles and
Practice”, Recursive Books, 2012.