
1 Operating System Overview


• An operating system acts as an intermediary between the user of a
computer and the computer hardware.
• The purpose of an operating system is to provide an environment
in which a user can execute programs in a convenient and
efficient manner.
• An operating system is a program that manages the computer hardware.
• Mainframe operating systems are designed primarily to optimize
utilization of hardware.
• Personal computer (PC) operating systems support complex
games, business applications, etc.
• Operating systems for mobile devices provide an environment in which a
user can easily interface with the device to execute programs.
• Some operating systems are designed to be convenient, others to
be efficient.

1.1 What Operating Systems Do


• A computer system can be divided roughly into four components: the
hardware, the operating system, the application programs, and the users
(Figure 1.1).
• The hardware - the central processing unit (CPU), the memory, and the
input/output (I/O) devices - provides the basic computing resources for the
system.
• The application programs - such as word processors, spreadsheets,
compilers, and Web browsers - define the ways in which these resources are
used to solve users’ computing problems.
• The operating system controls the hardware and coordinates its use among
the various application programs for the various users.
• We can view a computer system as consisting of hardware, software, and
data.
• The operating system provides the means for proper use of these resources
in the operation of the computer system.
• An operating system is similar to a government.
• Like a government, it performs no useful function by itself.
• It simply provides an environment within which other programs can do
useful work.

1.1.1 User View


• The user’s view of the computer varies according to the interface being
used.
• Most computer users sit in front of a PC, consisting of a monitor, keyboard,
mouse, and system unit.
• Such a system is designed for one user to monopolize its resources.
• The goal is to maximize the work that the user is performing.
• In this case, the operating system is designed mostly for ease of use, with
some attention paid to performance and none paid to resource utilization —
how various hardware and software resources are shared.
• Performance is important to the user; but such systems are optimized for
the single-user experience rather than the requirements of multiple users.
• In other cases, a user sits at a terminal connected to a mainframe or a
minicomputer.
• Other users are accessing the same computer through other terminals.

• These users share resources and may exchange information.


• The operating system in such cases is designed to maximize resource
utilization— to assure that all available CPU time, memory, and I/O are used
efficiently and that no individual user takes more than her fair share.
• Many users sit at workstations connected to networks of other
workstations and servers.
• These users have dedicated resources at their disposal, but they also share
resources such as networking and servers, including file, compute, and print
servers.
• Recently, many varieties of mobile computers, such as smartphones and
tablets, have come into fashion.
• Most mobile computers are standalone units for individual users.
• Often, they are connected to networks through cellular or other wireless
technologies.
• These mobile devices are replacing desktop and laptop computers for
people who are primarily interested in using computers for e-mail and web
browsing.
• The user interface for mobile computers generally features a touch screen,
where the user interacts with the system by pressing and swiping fingers
across the screen rather than using a physical keyboard and mouse.
• Some computers have little or no user view.
• For example, embedded computers in home devices and automobiles may
have numeric keypads and may turn indicator lights on or off to show status,
but they and their operating systems are designed primarily to run without
user intervention.

1.1.2 System View


• From the computer’s point of view, the operating system is the program
most closely involved with the hardware.
• We can view an operating system as a resource allocator.
• A computer system has many resources that may be required to solve a
problem: CPU time, memory space, file-storage space, I/O devices, and so
on.
• The operating system acts as the manager of these resources.
• Faced with numerous and possibly conflicting requests for resources, the
operating system must decide how to allocate them to specific programs and
users so that it can operate the computer system efficiently and fairly.
• Resource allocation is important where many users access the same
mainframe or minicomputer.
• A slightly different view of an operating system emphasizes the need to
control the various I/O devices and user programs.
• An operating system is a control program.
• A control program manages the execution of user programs to prevent
errors and improper use of the computer.

• Control program is concerned with the operation and control of I/O devices.

1.1.3 Defining Operating Systems


• Operating systems exist because they offer a reasonable way to solve the
problem of creating a usable computing system.
• The fundamental goal of computer systems is to execute user programs and
to make solving user problems easier.
• Computer hardware is constructed toward this goal.
• Because hardware alone is not particularly easy to use, application
programs are developed.
• Application programs require certain common operations, such as those
controlling the I/O devices.
• The common functions of controlling and allocating resources are then
brought together into one piece of software: the operating system.
• A simple viewpoint is that it includes everything a vendor ships when you
order “the operating system.”
• The operating system is the program running at all times on the computer —
usually called the kernel.
• Along with the kernel, there are two other types of programs:
i. System programs, which are associated with the operating system but are
not necessarily part of the kernel.
ii. Application programs, which include all programs not associated with
the operation of the system.
• Operating systems became more important as personal computers grew more
widespread, and their feature sets expanded accordingly.
• If we look at operating systems for mobile devices, we see that once again
the number of features constituting the operating system is increasing.
• Mobile operating systems often include not only a core kernel but also
middleware — a set of software frameworks that provide additional services
to application developers.
• For example, each of the two most prominent mobile operating systems—
Apple’s iOS and Google’s Android — features a core kernel along with
middleware that supports databases, multimedia, and graphics.

1.3 Computer-System Architecture


1.3.1 Single-Processor Systems
• Most computer systems use a single processor.
• On a single-processor system, there is one main CPU capable of executing a
general-purpose instruction set, including instructions from user processes.
• Almost all single-processor systems have other special-purpose processors
as well.
• They may come in the form of device-specific processors, such as disk,
keyboard, and graphics controllers.


• On mainframes they may come in the form of more general-purpose
processors, such as I/O processors.
• All of these special-purpose processors run a limited instruction set and do
not run user processes.
• Sometimes, they are managed by the operating system, in that the operating
system sends them information about their next task and monitors their
status.
• For example, a disk-controller microprocessor receives a sequence of
requests from the main CPU and implements its own disk queue and
scheduling algorithm.
• This arrangement relieves the main CPU of the overhead of disk scheduling.
• PCs contain a microprocessor in the keyboard to convert the keystrokes into
codes to be sent to the CPU.
• In other systems, special-purpose processors are low-level components built
into the hardware.
• The operating system cannot communicate with these processors; they do
their jobs autonomously.
• The use of special-purpose microprocessors is common and does not turn a
single-processor system into a multiprocessor.
• If there is only one general-purpose CPU, then the system is a single-
processor system.

1.3.2 Multiprocessor Systems


• Over the past several years, multiprocessor systems (also known as
parallel systems or multicore systems) have come to dominate the landscape
of computing.
• Such systems have two or more processors in close communication, sharing
the computer bus and sometimes the clock, memory, and peripheral devices.
• Multiprocessor systems first appeared prominently in servers and have
since migrated to desktop and laptop systems.
• Recently, multiple processors have appeared on mobile devices
such as smartphones and tablet computers.
• Multiprocessor systems have three main advantages:

1. Increased throughput. By increasing the number of processors, we expect
to get more work done in less time. When multiple processors cooperate on
a task, a certain amount of overhead is incurred in keeping all the parts
working correctly. This overhead, plus contention for shared resources,
lowers the expected gain from additional processors.
2. Economy of scale. Multiprocessor systems can cost less than equivalent
multiple single-processor systems, because they can share peripherals, mass
storage, and power supplies. If several programs operate on the same set of
data, it is cheaper to store those data on one disk and to have all the
processors share them than to have many computers with local disks and
many copies of the data.
3. Increased reliability. If functions can be distributed properly among several
processors, then the failure of one processor will not halt the system, only
slow it down. If we have ten processors and one fails, then each of the
remaining nine processors can pick up a share of the work of the failed
processor. Thus, the entire system runs only 10 percent slower, rather than
failing altogether.

• Increased reliability of a computer system is crucial in many applications.
The ability to continue providing service proportional to the level of
surviving hardware is called graceful degradation.
• Some systems go beyond graceful degradation and are called fault tolerant,
because they can suffer a failure of any single component and still continue
operation.
• Fault tolerance requires a mechanism to allow the failure to be detected,
diagnosed, and, if possible, corrected.
• The multiple-processor systems in use today are of two types.
• Some systems use asymmetric multiprocessing, in which each processor is
assigned a specific task. A master (boss) processor controls the system; the
other processors either look to the master (boss) for instruction or have
predefined tasks. This scheme defines a master–slave (boss-worker)
relationship. The master (boss) processor schedules and allocates work to the
slave (worker) processors.
• The most common systems use symmetric multiprocessing (SMP), in
which each processor performs all tasks within the operating system. SMP
means that all processors are peers; no master-slave (boss–worker)
relationship exists between processors.
• Figure 1.6 illustrates a typical SMP architecture.
• Notice that each processor has its own set of registers, as well as a private—
or local — cache.
• However, all processors share physical memory.

• An example of an SMP system is Solaris, a commercial version of UNIX
designed by Sun Microsystems.
• A Solaris system can be configured to employ dozens of processors.
• The benefit of this model is that many processes can run simultaneously— N
processes can run if there are N CPUs — without causing significant
deterioration (decline) of performance.
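
• As a small programmer's-eye illustration of an SMP machine, the C sketch
below (ours, not from the text) asks how many processors are online and
forks one worker per CPU; sysconf(_SC_NPROCESSORS_ONLN) is widely supported
on Linux, BSD, and macOS, though not strictly standard.

    /* Sketch: one worker process per online CPU on an SMP system. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* CPUs online now  */
        if (ncpus < 1)
            ncpus = 1;                               /* fall back safely */

        for (long i = 0; i < ncpus; i++) {
            pid_t pid = fork();
            if (pid == 0) {                          /* child process    */
                printf("worker %ld (pid %d) running\n", i, (int)getpid());
                _exit(0);                            /* placeholder work */
            }
        }
        while (wait(NULL) > 0)                       /* reap all workers */
            ;
        printf("%ld CPUs, all workers done\n", ncpus);
        return 0;
    }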

1.3.3 Clustered Systems


• Another type of multiprocessor system is a clustered system, which gathers
together multiple CPUs.
• Clustered systems differ from multiprocessor systems in that they are
composed of two or more individual systems— or nodes— joined together.
• Such systems are considered loosely coupled. Each node may be a single
processor system or a multicore system.
• Clustered computers share storage and are closely linked via a local-area
network (LAN) or a faster interconnect, such as InfiniBand.
• InfiniBand (IB) is a computer networking communications standard used in
high-performance computing that features very high throughput and very
low latency. It is used for data interconnect both among and within
computers.

• Clustering is usually used to provide high-availability service— that is,
service will continue even if one or more systems in the cluster fail.
• Clustering can be structured asymmetrically or symmetrically.
• In asymmetric clustering, one machine is in hot-standby mode while the
other is running the applications. The hot-standby host machine does nothing
but monitor the active server. If that server fails, the hot-standby host
becomes the active server.
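• A minimal sketch of the hot-standby monitoring loop just described; the
two helper routines below are stubs standing in for real cluster-software
mechanisms, not an actual API.

    /* Hedged sketch of an asymmetric-cluster standby monitor. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int check_heartbeat(void) {   /* stub: real code would probe */
        return rand() % 10 != 0;         /* the active server over the  */
    }                                    /* network                     */

    static void become_active_server(void) {      /* stub: real code    */
        printf("standby takes over as active\n"); /* would restart apps */
    }

    int main(void) {
        int missed = 0;
        for (;;) {
            if (check_heartbeat())
                missed = 0;               /* active server is alive   */
            else if (++missed >= 3) {     /* tolerate brief hiccups   */
                become_active_server();   /* failover after 3 misses  */
                return 0;
            }
            sleep(1);                     /* poll once per second     */
        }
    }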
• In symmetric clustering, two or more hosts are running applications and are
monitoring each other. This structure is obviously more efficient, as it uses
all of the available hardware. However, it does require that more than one
application be available to run.
• Other forms of clusters include parallel clusters and clustering over a wide-
area network (WAN).
• Parallel clusters allow multiple hosts to access the same data on shared
storage.
• Cluster technology is changing rapidly. Some cluster products support
dozens of systems in a cluster, as well as clustered nodes that are separated
by miles.
• Many of these improvements are made possible by storage-area
networks (SANs), which allow many systems to attach to a pool of
storage.
• If the applications and their data are stored on the SAN, then the cluster
software can assign the application to run on any host that is attached to the
SAN.
• If the host fails, then any other host can take over.
• In a database cluster, dozens of hosts can share the same database, greatly
increasing performance and reliability.
• Figure 1.8 depicts the general structure of a clustered system.

1.4 Operating-System Structure


• An operating system provides the environment within which programs are
executed.
• One of the most important aspects of operating systems is the ability to
multiprogram.
• A single program cannot, in general, keep either the CPU or the I/O devices
busy at all times.
• Single users frequently have multiple programs running.
• Multiprogramming increases CPU utilization by organizing jobs (code and
data) so that the CPU always has one to execute.
• The operating system keeps several jobs in memory simultaneously (Figure
1.9).

• Since main memory is too small to accommodate all jobs, the jobs are kept
initially on the disk in the job pool.
• This pool consists of all processes residing on disk awaiting allocation of
main memory.
• The set of jobs in memory can be a subset of the jobs kept in the job pool.
• The operating system picks and begins to execute one of the jobs in memory.
• Eventually, the job may have to wait for some task, such as an I/O
operation, to complete.
• In a non-multiprogrammed system, the CPU would sit idle.
• In a multiprogrammed system, the operating system simply switches to, and
executes, another job.
• When that job needs to wait, the CPU switches to another job, and so on.
• Eventually, the first job finishes waiting and gets the CPU back. As long
as at least one job needs to execute, the CPU is never idle.
• This idea is common in other life situations.
• A lawyer does not work for only one client at a time.
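• The same switching idea can be sketched as a simplified dispatch loop
over a toy in-memory job table; the structure and helper names below are
illustrative, not any real kernel's API.

    /* Simplified multiprogramming dispatch loop (illustrative). */
    #include <stdio.h>

    struct job { const char *name; int bursts_left; };

    static struct job jobs[] = { {"A", 2}, {"B", 3}, {"C", 1} };
    enum { NJOBS = 3 };
    static int last = -1;                      /* round-robin position */

    static struct job *next_ready_job(void) {
        for (int k = 1; k <= NJOBS; k++) {     /* scan from last + 1   */
            int i = (last + k) % NJOBS;
            if (jobs[i].bursts_left > 0) { last = i; return &jobs[i]; }
        }
        return NULL;                           /* nothing left to run  */
    }

    static void dispatch(struct job *j) {      /* run one CPU burst,   */
        printf("CPU runs job %s\n", j->name);  /* then the job "waits" */
        j->bursts_left--;                      /* for I/O              */
    }

    int main(void) {
        struct job *j;
        while ((j = next_ready_job()) != NULL)
            dispatch(j);    /* CPU is never idle while a job is ready */
        printf("all jobs done; CPU idle\n");
        return 0;
    }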

• Multiprogrammed systems provide an environment in which the various
system resources (for example, CPU, memory, and peripheral devices) are
utilized effectively, but they do not provide for user interaction with the
computer system.
• Time sharing (or multitasking) is a logical extension of
multiprogramming.
• In time-sharing systems, the CPU executes multiple jobs by switching
among them, but the switches occur so frequently that the users can interact
with each program while it is running.
• Time sharing requires an interactive computer system, which provides
direct communication between the user and the system.
• The user gives instructions to the operating system or to a program directly,
using an input device such as a keyboard, mouse, touch pad, or touch screen,
and waits for immediate results on an output device.
• Accordingly, the response time should be short — typically less than one
second.
• A time-shared operating system allows many users to share the computer
simultaneously.
• Since each action or command in a time-shared system tends to be short,
only a little CPU time is needed for each user.
• As the system switches rapidly from one user to the next, each user is given
the impression that the entire computer system is dedicated to his use, even
though it is being shared among many users.
• A time-shared operating system uses CPU scheduling and
multiprogramming to provide each user with a small portion of a time-
shared computer.
• Each user has at least one separate program in memory.
• A program loaded into memory and executing is called a process.
• When a process executes, it typically executes for only a short time before it
either finishes or needs to perform I/O.
• I/O may be interactive; that is, output goes to a display for the user, and
input comes from a user keyboard, mouse, or other device.
• Since interactive I/O typically runs at “people speeds,” it may take a long
time to complete.
• For example, input may be bounded by the user’s typing speed; seven
characters per second is fast for people but incredibly slow for computers.
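• To put this in perspective, under the rough assumption of a processor
executing one billion (10^9) instructions per second, the CPU could execute
about 10^9 / 7 ≈ 140 million instructions in the gap between two keystrokes.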
• Rather than let the CPU sit idle as this interactive input takes place, the
operating system will rapidly switch the CPU to the program of some other
user.

• Time sharing and multiprogramming require that several jobs be kept
simultaneously in memory.
• If several jobs are ready to be brought into memory, and if there is not
enough room for all of them, then the system must choose among them.
• Making this decision involves job scheduling.
• When the operating system selects a job from the job pool, it loads that job
into memory for execution.
• Having several programs in memory at the same time requires some form of
memory management.
• If several jobs are ready to run at the same time, the system must choose
which job will run first.
• Making this decision is CPU scheduling.
• Running multiple jobs concurrently requires that their ability to affect one
another be limited in all phases of the operating system, including process
scheduling, disk storage, and memory management.
• In a time-sharing system, the operating system must ensure reasonable
response time.
• This goal is sometimes accomplished through swapping, whereby processes
are swapped in and out of main memory to the disk.
• A more common method for ensuring reasonable response time is virtual
memory, a technique that allows the execution of a process that is not
completely in memory.
• The main advantage of the virtual-memory scheme is that it enables users to
run programs that are larger than actual physical memory.
• It abstracts main memory into a large, uniform array of storage, separating
logical memory as viewed by the user from physical memory.
• This arrangement frees programmers from concern over memory-storage
limitations.
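• A small POSIX illustration of this idea (assuming a 64-bit, Linux-like
system with a permissive overcommit policy): the sketch below reserves an
address range far larger than most machines' physical memory; with virtual
memory the reservation can succeed, and physical frames are allocated only
for the pages actually touched.

    /* Sketch: 64 GiB of virtual address space, two pages touched. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = (size_t)64 << 30;      /* 64 GiB of address space */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        p[0] = 'a';                         /* only the two touched    */
        p[len - 1] = 'z';                   /* pages get real frames   */
        printf("%c %c\n", p[0], p[len - 1]);
        munmap(p, len);
        return 0;
    }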
• A time-sharing system must also provide a file system.
• The file system resides on a collection of disks; hence, disk management
must be provided.
• In addition, a time-sharing system provides a mechanism for protecting
resources from inappropriate use.
• To ensure orderly execution, the system must provide mechanisms for job
synchronization and communication, and it may ensure that jobs do not get
stuck in a deadlock, forever waiting for one another.

1.5 Operating-System Operations


• Modern operating systems are interrupt driven.
• If there are no processes to execute, no I/O devices to service, and no users
to whom to respond, an operating system will sit quietly, waiting for
something to happen.
• Events are almost always signaled by the occurrence of an interrupt or a
trap.
• A trap (or an exception) is a software-generated interrupt caused either by
an error (for example, division by zero or invalid memory access) or by a
specific request from a user program that an operating-system service be
performed.
• The interrupt-driven nature of an operating system defines that system’s
general structure.
• For each type of interrupt, separate segments of code in the operating system
determine what action should be taken.
• An interrupt service routine is provided to deal with the interrupt.
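• As a user-space analogy of trap handling (not the kernel's actual
interrupt service routine), the sketch below installs a handler for SIGFPE,
the signal a POSIX kernel delivers to a process whose division by zero the
CPU has trapped.

    /* The kernel converts the CPU's divide-by-zero trap into SIGFPE. */
    #include <signal.h>
    #include <unistd.h>

    static void fpe_handler(int sig) {
        (void)sig;
        write(2, "trap: arithmetic exception\n", 27); /* signal-safe I/O */
        _exit(1);
    }

    int main(void) {
        signal(SIGFPE, fpe_handler);   /* register the "trap handler"    */
        volatile int zero = 0;
        volatile int x = 1 / zero;     /* CPU traps; kernel sends SIGFPE */
        (void)x;
        return 0;                      /* never reached                  */
    }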
• Since the operating system and the users share the hardware and software
resources of the computer system, we need to make sure that an error in a
user program can cause problems only for the one program running.
• With sharing, many processes could be adversely affected by a bug in one
program.
• For example, if a process gets stuck in an infinite loop, this loop could
prevent the correct operation of many other processes.
• More subtle errors can occur in a multiprogramming system, where one
erroneous program might modify another program, the data of another
program, or even the operating system itself.
• Without protection against these sorts of errors, either the computer must
execute only one process at a time or all output must be suspect.
• A properly designed operating system must ensure that an incorrect (or
malicious) program cannot cause other programs to execute incorrectly.

1.5.1 Dual-Mode Operation


• In order to ensure the proper execution of the operating system, we must be
able to distinguish between the execution of operating-system code and user-
defined code.
• The approach taken by most computer systems is to provide hardware
support that allows us to differentiate among various modes of execution.

• At the very least, we need two separate modes of operation: user mode and
kernel mode (also called supervisor mode, system mode, or privileged
mode).
• A bit, called the mode bit, is added to the hardware of the computer to
indicate the current mode: kernel (0) or user (1).
• With the mode bit, we can distinguish between a task that is executed on
behalf of the operating system and one that is executed on behalf of the user.
• When the computer system is executing on behalf of a user application, the
system is in user mode.
• However, when a user application requests a service from the operating
system (via a system call), the system must transition from user to kernel
mode to fulfill the request.
• This is shown in Figure 1.10.
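• On Linux, for example, this transition can be made explicit by issuing a
system call directly; a minimal, Linux-specific sketch using the raw
syscall interface:

    /* syscall() crosses into kernel mode; the kernel runs its write
       handler, then returns the CPU to user mode. */
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello from user mode\n";
        syscall(SYS_write, 1, msg, sizeof msg - 1);  /* fd 1 = stdout */
        return 0;
    }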

• At system boot time, the hardware starts in kernel mode.


• The operating system is then loaded and starts user applications in user
mode.
• Whenever a trap or interrupt occurs, the hardware switches from user mode
to kernel mode (that is, changes the state of the mode bit to 0).
• Thus, whenever the operating system gains control of the computer, it is in
kernel mode.
• The system always switches to user mode (by setting the mode bit to 1)
before passing control to a user program.
• The dual mode of operation provides us with the means for protecting the
operating system from errant (misbehaving) users — and errant users from one
another.
• We accomplish this protection by designating some of the machine
instructions that may cause harm as privileged instructions.

• The hardware allows privileged instructions to be executed only in kernel
mode.
• If an attempt is made to execute a privileged instruction in user mode, the
hardware does not execute the instruction but rather treats it as illegal and
traps it to the operating system.
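• For example, on x86 the CLI instruction (disable interrupts) is
privileged. A minimal x86/Linux sketch: executed in user mode, the hardware
refuses to run it and traps to the kernel, which typically terminates the
process with SIGSEGV.

    /* x86/Linux only: attempt a privileged instruction in user mode. */
    #include <stdio.h>

    int main(void) {
        printf("trying a privileged instruction in user mode...\n");
        fflush(stdout);
        __asm__ volatile ("cli");    /* traps; kernel delivers SIGSEGV */
        printf("never reached\n");
        return 0;
    }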

1.5.2 Timer
• We must ensure that the operating system maintains control over the CPU.
• We cannot allow a user program to get stuck in an infinite loop or to fail to
call system services and never return control to the operating system. To
accomplish this goal, we can use a timer.
• A timer can be set to interrupt the computer after a specified period.
• The period may be fixed (for example, 1/60 second) or variable (for
example, from 1 millisecond to 1 second).
• A variable timer is generally implemented by a fixed-rate clock and a
counter.
• The operating system sets the counter.
• Every time the clock ticks, the counter is decremented.
• When the counter reaches 0, an interrupt occurs.
• Before turning over control to the user, the operating system ensures that the
timer is set to interrupt.
• If the timer interrupts, control transfers automatically to the operating
system, which may treat the interrupt as a fatal error or may give the
program more time.
• Clearly, instructions that modify the content of the timer are privileged.
• We can use the timer to prevent a user program from running too long.
• A simple technique is to initialize a counter with the amount of time that a
program is allowed to run.
• A program with a 7-minute time limit, for example, would have its counter
initialized to 420.
• Every second, the timer interrupts, and the counter is decremented by 1.
• As long as the counter is positive, control is returned to the user program.
• When the counter becomes negative, the operating system terminates the
program for exceeding the assigned time limit.
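• A small simulation of the counting logic just described, assuming a
once-per-second timer interrupt; the routine names are hypothetical, not a
real kernel's.

    /* Simulation: 7-minute limit, one "timer interrupt" per second. */
    #include <stdio.h>
    #include <stdlib.h>

    static int counter = 7 * 60;          /* 420 ticks = 7 minutes      */

    static void terminate_program(void) { /* stand-in for the OS action */
        printf("time limit exceeded: program terminated\n");
        exit(1);
    }

    static void timer_interrupt(void) {   /* fires once per second      */
        counter--;
        if (counter < 0)
            terminate_program();
        /* otherwise control returns to the user program */
    }

    int main(void) {
        for (;;)                  /* the "user program" runs forever;   */
            timer_interrupt();    /* each iteration stands for 1 second */
    }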
