MODULE V1
CHAPTER 1
INTRODUCTION TO OPERATING SYSTEMS
LEARNING OBJECTIVES
To know about Operating Systems and their main functions,
To have an idea about measuring system performance,
To understand process management,
To learn about multiprogramming and its requirements,
To have an overview of multitasking and multithreading,
To discuss multiprocessing and its advantages and limitations,
To know about time-sharing systems and its advantages,
To discuss various concepts in File Management,
To understand various features in Operating System Structure and other related concepts, and
To know about some popular operating systems.
1.6 MULTITASKING
Multitasking is a method in which multiple tasks, also known as processes, share common processing resources such
as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point
in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves
the problem by scheduling which task may be the one running at any given time, and when another
waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context
switch. When context switches occur frequently enough, the illusion of parallelism is achieved. Even
on computers with more than one CPU (called multiprocessor machines), multitasking allows many
more tasks to be run than there are CPUs.
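The idea of giving each task a turn on the CPU can be made concrete with a short program. The listing below is a minimal sketch in C of a round-robin style switcher, not an actual operating system scheduler; the task names, the amounts of work, and the one-unit time quantum are hypothetical values chosen only for illustration.

    /* Minimal sketch of round-robin task switching on one CPU.
       The task names and the time quantum are illustrative only. */
    #include <stdio.h>

    struct task {
        const char *name;   /* task identifier */
        int remaining;      /* work left, in time units */
    };

    int main(void)
    {
        struct task tasks[] = { {"editor", 3}, {"printer", 5}, {"player", 2} };
        int n = 3, quantum = 1, finished = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (tasks[i].remaining == 0)
                    continue;                      /* task already terminated */
                /* "context switch": the CPU is reassigned to tasks[i] */
                int slice = tasks[i].remaining < quantum ? tasks[i].remaining : quantum;
                tasks[i].remaining -= slice;
                printf("running %s for %d unit(s), %d left\n",
                       tasks[i].name, slice, tasks[i].remaining);
                if (tasks[i].remaining == 0)
                    finished++;                    /* task completes */
            }
        }
        return 0;
    }

Each pass of the inner loop corresponds to a context switch: the CPU is taken away from one task and given to the next waiting one, and because the turns come around quickly the tasks appear to run in parallel.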
Many people do not distinguish between multiprogramming and multitasking because both terms
refer to the same concept. However, some prefer to use the term multiprogramming
for multi-user systems (systems that are simultaneously used by many users such as mainframe and
server class systems), and multitasking for single-user systems (systems that are used by only one
user at a time such as a personal computer or a notebook computer). Note that even in a single-user
system, it is not necessary that the system works on only one job at a time. In fact, a user of a single-user system often has multiple tasks processed by it concurrently, for example, editing a document in the foreground while another file prints in the background.
1.7 MULTITHREADING
Threads are a popular way to improve application performance. In traditional operating systems,
the basic unit of CPU utilization is a process. Each process has its own program counter, its own
register states, its own stack, and its own address space (memory area allocated to it). On the other
hand, in operating systems with threads facility, the basic unit of CPU utilization is a thread. In
these operating systems, a process consists of an address space and one or more threads of control
as shown in Fig 1.7.1 (a). Each thread of a process has its own program counter, its own register
states, and its own stack. But all the threads of a process share the same address space. Hence, they
also share the same global variables. In addition, all threads of a process also share the same set of
operating system resources, such as open files, signals, accounting information, and so on. Due to
the sharing of address space, there is no protection between the threads of a process. However, this
is not a problem. Protection between processes is needed because different processes may belong to
different users. But a process (and hence, all its threads) is always owned by a single user. Therefore,
protection between multiple threads of a process is not necessary. If protection is required between
two threads of a process, it is preferable to put them in different processes, instead of putting them
in a single process.
Figure 1.7.1: (a) Single-threaded and (b) multithreaded processes
A single-threaded process corresponds to a process of a traditional operating system. Threads share
a CPU in the same way as processes do. At a particular instant of time, a thread can be in any one
of several states, namely, running, blocked, ready, or terminated. Due to these similarities, threads
are often viewed as miniprocesses. In fact, in operating systems with threads facility, a process having only a single thread behaves just like a process of a traditional operating system.
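The sharing of one address space by the threads of a process can be demonstrated with a small program. The following is a minimal sketch assuming a POSIX system with the pthreads library (compile with the -pthread option); the function and variable names are chosen only for illustration. Both threads update the same global counter, so a mutex is used to keep the shared data consistent, and pthread_join waits for each thread to terminate.

    /* Minimal sketch: two threads of one process share the same
       global variable (same address space). Assumes POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                          /* shared by all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);                /* protect the shared data */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;                                  /* thread terminates */
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);      /* both threads run worker() */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);                       /* wait until t1 terminates */
        pthread_join(t2, NULL);
        printf("final counter = %ld\n", counter);     /* 200000: one shared variable */
        return 0;
    }

If counter were duplicated per thread, as an address space is duplicated per process, each thread would see only its own copy; because threads share the address space, both sets of increments land in the same variable.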
1.8 MULTIPROCESSING
Multiprocessing is the use of two or more Central Processing Units (CPUs) within a single computer
system. The term also refers to the ability of a system to support more than one processor and/or
the ability to allocate tasks between them. There are many variations on this basic theme, and the
definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined
(multiple cores on one die, multiple dies in one package, multiple packages in one system unit,
etc.). Multiprocessing sometimes refers to the execution of multiple concurrent software processes
in a system as opposed to a single process at any one instant. However, the terms multitasking or
multiprogramming are more appropriate to describe this concept, which is implemented mostly in
software, whereas multiprocessing is more appropriate to describe the use of multiple hardware
CPUs. A system can be both multiprocessing and multiprogramming, only one of the two, or neither
of the two.
In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes.
A combination of hardware and operating-system software design considerations determine the
symmetry (or lack thereof) in a given system. For example, hardware or software considerations may
require that only one CPU respond to all hardware interrupts, whereas all other work in the system
may be distributed equally among CPUs; or execution of kernel-mode code may be restricted to
only one processor (either a specific processor, or only one processor at a time), whereas user-mode
code may be executed in any combination of processors. Multiprocessing systems are often easier
to design if such restrictions are imposed, but they tend to be less efficient than systems in which all
CPUs are utilized. Systems that treat all CPUs equally are called Symmetric Multiprocessing (SMP)
systems. In systems where all CPUs are not equal, system resources may be divided in a number
of ways, including Asymmetric Multiprocessing (ASMP), Non-Uniform Memory Access (NUMA)
multiprocessing, and clustered multiprocessing.
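A program normally cannot choose which CPU it runs on, but it can ask the operating system how many processors are online and create enough work to keep them busy. The following is a minimal sketch assuming Linux or a similar POSIX system; the sysconf name _SC_NPROCESSORS_ONLN is widely available but not guaranteed everywhere, and the scheduler, not the program, decides how the forked worker processes are distributed among the CPUs.

    /* Minimal sketch: start one worker process per online CPU.
       Assumes a POSIX system; placement on CPUs is up to the OS. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);   /* CPUs currently online */
        printf("online CPUs: %ld\n", ncpus);
        fflush(stdout);                               /* avoid duplicated buffered output after fork */

        for (long i = 0; i < ncpus; i++) {
            pid_t pid = fork();                       /* create an independent process */
            if (pid == 0) {
                /* child: do some independent work, then exit */
                printf("worker %ld running as process %d\n", i, (int)getpid());
                exit(0);
            }
        }
        while (wait(NULL) > 0)                        /* parent waits for all children */
            ;
        return 0;
    }

On a symmetric multiprocessing system any of these processes may be dispatched to any CPU; on an asymmetric or NUMA system the operating system may place them according to its own policy.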
Multiprocessing systems are basically of two types, namely, tightly-coupled systems and loosely-coupled systems:
Tightly-Coupled Multiprocessing Systems: Tightly-coupled multiprocessor
systems contain multiple CPUs that are connected at the bus level. These CPUs may have access
to a central shared memory (SMP or UMA), or may participate in a memory hierarchy with both
local and shared memory (NUMA). The IBM p690 Regatta is an example of a high end SMP
system. Intel Xeon processors dominated the multiprocessor market for business PCs and were
the only x86 option until the release of AMD’s Opteron range of processors in 2004. Both ranges
of processors had their own onboard cache but provided access to shared memory; the Xeon
processors via a common pipe and the Opteron processors via independent pathways to the
system RAM. Chip multiprocessing, also known as multi-core computing, involves more than
one processor placed on a single chip and can be thought of as the most extreme form of tightly-
coupled multiprocessing. Mainframe systems with multiple processors are often tightly-coupled.
Loosely-Coupled Multiprocessing Systems: Loosely-coupled multiprocessor systems (often
referred to as clusters) are based on multiple standalone single or dual processor commodity
computers interconnected via a high speed communication system (Gigabit Ethernet is common).
A Linux Beowulf cluster is an example of a loosely-coupled system.
1.9 TIME-SHARING
Time-sharing is the sharing of a computing resource among many users by means of
multiprogramming and multitasking. This concept, introduced in the 1960s and emerging as
the prominent model of computing in the 1970s, represents a major technological shift in the history
of computing. By allowing a large number of users to interact concurrently with a single computer,
time-sharing dramatically lowered the cost of providing computing capability, made it possible for
individuals and organizations to use a computer without owning one, and promoted the interactive
use of computers and the development of new interactive applications. Time-sharing is a mechanism to provide simultaneous, interactive use of a computer system by many users in such a way that each user gets the impression of having his or her own computer.