Module 1 - Chapter - 1

The document discusses operating systems and computer system structure. It defines what an operating system is and its goals. It describes the four main components of a computer system: hardware, operating system, application programs, and users. It then discusses computer system organization including storage structure, I/O structure, and single and multi-processor system architectures. The document provides an overview of key operating system concepts.

DAYANANDA SAGAR UNIVERSITY

DEPARTMENT OF CSE

OPERATING SYSTEM

MODULE-1
OS OVERVIEW AND SYSTEM STRUCTURE
CHAPTER 1

INTRODUCTION TO OPERATING SYSTEMS
1. WHAT IS AN OPERATING SYSTEM?

■ A program that acts as an intermediary between a user of a computer and the computer hardware.
■ An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system.
1. WHAT IS AN OPERATING SYSTEM?

• An operating system is software that performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
■ Operating system goals:
● Execute user programs and make solving user problems easier.
● Make the computer system convenient to use.
● Use the computer hardware in an efficient manner.
1.1 WHY STUDY OS?
• OS is a key part of a computer system
– it is “magic” and we want to understand how
– it has “power” and we want to have the power
• OS is complex
– What happens when you run an application program?
– A real OS is huge and very expensive to build
– Windows NT: 8 years, 1000s of people
OVERVIEW OF OS

2. COMPUTER SYSTEM STRUCTURE

■ A computer system can be divided into four components:
● Hardware – provides basic computing resources
Ex: CPU, memory, I/O devices
● Operating system – controls and coordinates use of hardware among various applications and users
● Application programs – define the ways in which the system resources are used to solve the computing problems of the users.
Ex: word processors, web browsers, database systems, video games
● Users
Ex: people, machines, other computers

FOUR COMPONENTS OF A COMPUTER SYSTEM
3. WHAT OPERATING SYSTEMS DO

3.1 VIEWS
3.2 OPERATING SYSTEM DEFINITION

• No universally accepted definition
• “Everything a vendor ships when you order an operating system” is a good approximation
• But varies wildly
• “The one program running at all times on the computer” is the kernel.
• Everything else is either
• a system program (ships with the operating system), or
• an application program.
3.3 NEED OF OS

■ OS as a platform for Application programs

■ Managing Input Output Unit

■ Consistent User Interface

■ Multi-Tasking

4. COMPUTER SYSTEM ORGANIZATION

4.1. COMPUTER SYSTEM OPERATION
4.2. STORAGE STRUCTURE
4.3. I/O STRUCTURE
4.1 COMPUTER SYSTEM OPERATION
• One or more CPUs and device controllers connect through a common bus providing access to shared memory
• Concurrent execution of CPUs and devices competing for memory cycles
4.1 Computer System Operation
• I/O devices and the CPU can execute concurrently.
• Each device controller is in charge of a particular
device type.
• Each device controller has a local buffer.
• CPU moves data from/to main memory to/from local
buffers.
• I/O is from the device to local buffer of controller.
• Device controller informs CPU that it has finished its
operation by causing an interrupt.
4.1 Computer System Operation
Common functions of interrupts
• An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.
• The interrupt architecture must save the address of the interrupted instruction.
• A trap or exception is a software-generated interrupt caused either by an error or a user request.
• An operating system is interrupt driven.

4.1 Computer System Operation
Interrupt handling
• The operating system preserves the state of the CPU by storing registers and the program counter.
• It determines which type of interrupt has occurred:
• Polling
• Vectored interrupt system
• Separate segments of code determine what action should be taken for each type of interrupt.
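The vectored scheme above can be sketched as a toy Python model (the handler names and CPU-state dictionary are illustrative, not any real OS's structures):

```python
# Toy model of vectored interrupt dispatch: the interrupt number indexes
# a vector of service routines, and the interrupted instruction's
# address (program counter) is saved before the routine runs.

def keyboard_isr(state):
    state["log"].append("keyboard ISR ran")

def disk_isr(state):
    state["log"].append("disk ISR ran")

# The interrupt vector: interrupt number -> service routine.
interrupt_vector = {0: keyboard_isr, 1: disk_isr}

def handle_interrupt(irq, cpu_state):
    saved_pc = cpu_state["pc"]        # save address of interrupted instruction
    interrupt_vector[irq](cpu_state)  # transfer control via the vector
    cpu_state["pc"] = saved_pc        # resume the interrupted program

cpu = {"pc": 100, "log": []}
handle_interrupt(1, cpu)
print(cpu["log"])   # ['disk ISR ran']
```

Polling would instead ask every device in turn whether it raised the interrupt; the vector lets the hardware jump straight to the right routine.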
4.1 Computer System Operation

INTERRUPT TIMELINE

4.2 Storage Structure
• Main memory – the only large storage medium that the CPU can access directly
• Random access
• Typically volatile
• Secondary storage – extension of main memory that provides large nonvolatile storage capacity
• Hard disks – rigid metal or glass platters covered with magnetic recording material
• Disk surface is logically divided into tracks, which are subdivided into sectors
• The disk controller determines the logical interaction between the device and the computer
• Solid-state disks – faster than hard disks, nonvolatile
• Various technologies
• Becoming more popular
4.2 Storage Structure
Storage Hierarchy
• Storage systems are organized in a hierarchy by
• Speed
• Cost
• Volatility
• Caching – copying information into a faster storage system; main memory can be viewed as a cache for secondary storage
• Device driver for each device controller to manage I/O
• Provides a uniform interface between controller and kernel
4.2 Storage Structure
STORAGE DEVICE HIERARCHY

4.2 Storage Structure

Storage Device Hierarchy

• The wide variety of storage systems in a computer system can be organized in a hierarchy according to speed and cost.
• The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.
• The storage systems above the electronic disk are volatile, whereas those below are nonvolatile.
4.3 I/O STRUCTURE

How A Modern Computer Works

4.3 I/O STRUCTURE
• A device controller maintains some local buffer storage and a set of special-purpose registers. The device controller is responsible for moving the data between the peripheral devices that it controls and its local buffer storage. Typically, operating systems have a device driver for each device controller.
• The driver understands the device controller and presents a uniform interface to the device to the rest of the operating system.
4.3 I/O STRUCTURE

• To start an I/O operation, the device driver loads the appropriate registers within the device controller.
• The device controller, in turn, examines the contents of these registers to determine what action to take (such as "read a character from the keyboard"); the controller starts the transfer of data from the device to its local buffer.
• Once the transfer of data is complete, the device controller informs the device driver via an interrupt that it has finished its operation.
• The device driver then returns control to the operating system, possibly returning the data or a pointer to the data if the operation was a read. For other operations, the device driver returns status information.
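The steps above can be traced with a small sketch (the class, register names, and data are illustrative assumptions, not a real driver interface):

```python
# Toy model of the I/O sequence: the driver loads the controller's
# registers, the controller moves data into its local buffer, and an
# "interrupt" return value tells the driver the operation finished.

class DeviceController:
    def __init__(self, device_data):
        self.registers = {}            # special-purpose registers
        self.buffer = None             # local buffer storage
        self.device_data = device_data

    def start(self):
        # The controller examines its registers to decide what to do.
        if self.registers.get("command") == "read":
            self.buffer = self.device_data   # device -> local buffer
            return "interrupt"               # completion signal

def driver_read(controller):
    controller.registers["command"] = "read"  # step 1: load the registers
    signal = controller.start()               # steps 2-3: transfer, interrupt
    if signal == "interrupt":
        return controller.buffer              # step 4: return data to the OS

keyboard = DeviceController("x")
print(driver_read(keyboard))   # x
```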
4.3 I/O STRUCTURE

Direct Memory Access Structure

• Used for high-speed I/O devices able to transmit information at close to memory speeds.
• The device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.
• Only one interrupt is generated per block, rather than one interrupt per byte.
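The interrupt saving is easy to quantify; a minimal sketch (byte-per-interrupt programmed I/O versus block-per-interrupt DMA, with illustrative sizes):

```python
import math

def interrupts_needed(total_bytes, block_size, use_dma):
    # Programmed I/O raises one interrupt per byte transferred;
    # DMA raises one interrupt per transferred block.
    if use_dma:
        return math.ceil(total_bytes / block_size)
    return total_bytes

print(interrupts_needed(4096, 512, use_dma=False))  # 4096
print(interrupts_needed(4096, 512, use_dma=True))   # 8
```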
5. COMPUTER SYSTEM ARCHITECTURE

5.1. SINGLE PROCESSOR SYSTEM
5.2. MULTI PROCESSOR SYSTEM
5.3. CLUSTERED SYSTEM
5.1 SINGLE PROCESSOR SYSTEM

• On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes.
• Only one process can be executed at a time; that process is selected from the ready queue. Most general-purpose computers are single-processor systems, as these are the most common in use.
5.2 MULTI-PROCESSOR SYSTEM

Multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in importance.
Such systems have two or more processors in close communication, sharing the computer bus and sometimes the clock, memory, and peripheral devices.

Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability

Two types:
1. Asymmetric Multiprocessing – each processor is assigned a specific task.
2. Symmetric Multiprocessing – each processor performs all tasks.
5.2 MULTI -PROCESSOR SYSTEM

5.3 CLUSTERED SYSTEMS

• Like multiprocessor systems, but multiple systems working together
• Usually sharing storage via a storage-area network (SAN)
• Provides a high-availability service which survives failures
• Asymmetric clustering has one machine in hot-standby mode
• Symmetric clustering has multiple nodes running applications, monitoring each other
• Some clusters are for high-performance computing (HPC)
• Applications must be written to use parallelization
• Some have a distributed lock manager (DLM) to avoid conflicting operations
5.3 CLUSTERED SYSTEMS

6. OPERATING SYSTEM STRUCTURE

6.1. Multiprogramming (batch system)
6.2. Timesharing (multitasking)
6. OPERATING SYSTEM STRUCTURE

6.1. Multiprogramming (batch system) is needed for efficiency

• A single user cannot keep the CPU and I/O devices busy at all times
• Multiprogramming organizes jobs (code and data) so the CPU always has one to execute
• A subset of total jobs in the system is kept in memory
• One job is selected and run via job scheduling
• When it has to wait (for I/O, for example), the OS switches to another job
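The switch-on-wait behaviour can be sketched as a toy trace (job names and step lists are illustrative):

```python
# Toy multiprogramming trace: several jobs are kept "in memory"; when
# the running job hits an I/O step, the OS switches to another job so
# the CPU always has something to execute.

def run_jobs(jobs):
    trace = []
    while any(jobs.values()):
        for name, steps in jobs.items():
            while steps:
                step = steps.pop(0)
                trace.append((name, step))
                if step == "io":   # job must wait: switch to another job
                    break
    return trace

print(run_jobs({"J1": ["cpu", "io", "cpu"], "J2": ["cpu"]}))
# [('J1', 'cpu'), ('J1', 'io'), ('J2', 'cpu'), ('J1', 'cpu')]
```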
Memory layout for multiprogrammed system

Multi Programming System

6. OPERATING SYSTEM STRUCTURE
6.2. Timesharing (multitasking) is a logical extension in which the CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing.

• Response time should be < 1 second
• Each user has at least one program executing in memory → a process
• If several jobs are ready to run at the same time → CPU scheduling
• If processes don’t fit in memory, swapping moves them in and out to run
• Virtual memory allows execution of processes not completely in memory
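The frequent job switching is commonly modelled as round-robin scheduling; a minimal sketch with an assumed time quantum:

```python
from collections import deque

def round_robin(jobs, quantum):
    # jobs: list of (name, cpu_time_needed). The CPU switches jobs every
    # `quantum` ticks, so every user gets frequent turns at the CPU.
    ready = deque(jobs)
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)
        timeline.append((name, ran))
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # back of the ready queue
    return timeline

print(round_robin([("A", 3), ("B", 2)], quantum=2))
# [('A', 2), ('B', 2), ('A', 1)]
```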
TIME SHARING SYSTEM

7. OPERATING-SYSTEM OPERATIONS
• Interrupt driven (hardware and software)
• Hardware interrupt by one of the devices
• Software interrupt (exception or trap):
• Software error (e.g., division by zero)
• Request for operating system service
• Other process problems include infinite loops, processes modifying each other or the operating system

OPERATIONS:
1. DUAL MODE
2. TIMER
7. OPERATING-SYSTEM OPERATIONS

7.1 DUAL MODE

• Dual-mode operation allows the OS to protect itself and other system components
• User mode and kernel mode
• A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user.
• When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system (via a system call), it must transition from user to kernel mode to fulfill the request.

7. OPERATING-SYSTEM OPERATIONS

7.1 DUAL MODE

• At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts user applications in user mode.
• Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode (that is, changes the state of the mode bit to 0). Thus, whenever the operating system gains control of the computer, it is in kernel mode.
• The system always switches to user mode (by setting the mode bit to 1) before passing control to a user program.
• The dual mode of operation provides us with the means for protecting the operating system from errant users, and errant users from one another.
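The mode-bit transitions described above can be sketched as a toy model (the class and method names are illustrative; real hardware does this, not software):

```python
KERNEL, USER = 0, 1   # mode-bit values from the text: kernel (0), user (1)

class ModeBit:
    def __init__(self):
        self.mode = KERNEL            # hardware starts in kernel mode at boot

    def start_user_program(self):
        self.mode = USER              # set to 1 before passing control to user code

    def system_call(self, service):
        self.mode = KERNEL            # the trap switches the mode bit to 0
        result = service()            # OS fulfills the request in kernel mode
        self.mode = USER              # switch back before resuming the user
        return result

cpu = ModeBit()
cpu.start_user_program()
print(cpu.mode)                           # 1
print(cpu.system_call(lambda: "opened"))  # opened
print(cpu.mode)                           # 1
```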
7. OPERATING-SYSTEM OPERATIONS

7.1 DUAL MODE
7. OPERATING-SYSTEM OPERATIONS

7.2 TIMER

• A timer prevents infinite loops and processes hogging resources
• The timer is set to interrupt the computer after some time period
• Keep a counter that is decremented by the physical clock
• The operating system sets the counter (a privileged instruction)
• When the counter reaches zero, generate an interrupt
• Set up before scheduling a process to regain control or terminate a program that exceeds its allotted time
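The counter mechanism can be sketched as follows (tick counts are illustrative):

```python
def run_with_timer(process_steps, timer_ticks):
    # The OS sets a counter; the physical clock decrements it on each
    # tick, and an interrupt fires at zero so a runaway process cannot
    # hog the CPU forever.
    counter = timer_ticks
    executed = 0
    for _ in range(process_steps):
        if counter == 0:
            return ("timer interrupt", executed)  # OS regains control
        executed += 1
        counter -= 1
    return ("completed", executed)

print(run_with_timer(10, timer_ticks=4))  # ('timer interrupt', 4)
print(run_with_timer(3, timer_ticks=4))   # ('completed', 3)
```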
8. PROCESS MANAGEMENT
• A process is a program in execution. It is a unit of work within the system. A program is a passive entity; a process is an active entity.
• A process needs resources to accomplish its task
• CPU, memory, I/O, files
• Initialization data
• Process termination requires reclaiming any reusable resources.
• A single-threaded process has one program counter specifying the location of the next instruction to execute
• The process executes instructions sequentially, one at a time, until completion
• A multi-threaded process has one program counter per thread
• Typically a system has many processes, some user, some operating system, running concurrently on one or more CPUs.
• Concurrency is achieved by multiplexing the CPUs among the processes / threads


8. PROCESS MANAGEMENT

The operating system is responsible for the following activities in connection with process management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling
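The create/delete activities can be sketched with a toy process table (field names are illustrative, not a real kernel's structures):

```python
import itertools

class ProcessTable:
    # Toy process table: creating a process allocates a PID and an
    # entry; terminating it deletes the entry, standing in for the
    # reclaiming of the process's resources.
    def __init__(self):
        self._pids = itertools.count(1)
        self.table = {}

    def create(self, program):
        pid = next(self._pids)
        self.table[pid] = {"program": program, "state": "ready"}
        return pid

    def terminate(self, pid):
        del self.table[pid]

pt = ProcessTable()
pid = pt.create("editor")
print(pid, pt.table[pid]["state"])   # 1 ready
pt.terminate(pid)
print(pt.table)                      # {}
```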
9. MEMORY MANAGEMENT
• To execute a program, all (or part) of the instructions must be in memory
• All (or part) of the data that is needed by the program must be in memory
• Memory management determines what is in memory and when
• Optimizing CPU utilization and computer response to users
• Memory management activities
• Keeping track of which parts of memory are currently being used and by whom
• Deciding which processes (or parts thereof) and data to move into and out of memory
• Allocating and deallocating memory space as needed
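Keeping track of free memory and allocating from it can be sketched with a simple first-fit allocator (addresses and sizes are illustrative):

```python
def first_fit(free_list, size):
    # free_list: list of (start, length) free holes. Allocation scans
    # for the first hole large enough, carves the request out of it,
    # and returns the start address (or None if nothing fits).
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                free_list.pop(i)                       # hole fully consumed
            else:
                free_list[i] = (start + size, length - size)
            return start
    return None

holes = [(0, 100), (200, 50)]
print(first_fit(holes, 60))   # 0
print(holes)                  # [(60, 40), (200, 50)]
```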


10. STORAGE MANAGEMENT

10.1 FILE SYSTEM MANAGEMENT
10.2 MASS STORAGE MANAGEMENT
10.3 CACHING
10.4 I/O SYSTEMS
10. STORAGE MANAGEMENT

10.1. File-system management

• Files are usually organized into directories
• Access control on most systems to determine who can access what
• OS activities include
• Creating and deleting files and directories
• Primitives to manipulate files and directories
• Mapping files onto secondary storage
• Backing up files onto stable (non-volatile) storage media
10. STORAGE MANAGEMENT
10.2. MASS STORAGE MANAGEMENT
• Disks are usually used to store data that does not fit in main memory or data that must be kept for a “long” period of time
• Proper management is of central importance
• The entire speed of computer operation hinges on the disk subsystem and its algorithms
• OS activities
• Free-space management
• Storage allocation
• Disk scheduling
• Some storage need not be fast
• Tertiary storage includes optical storage, magnetic tape
• Still must be managed – by the OS or applications
• Varies between WORM (write-once, read-many-times) and RW (read-write)
10. STORAGE MANAGEMENT
10.3. Caching is an important principle of computer systems. Information is normally kept in some storage system (such as main memory).
• As it is used, it is copied into a faster storage system (the cache) on a temporary basis.
• When we need a particular piece of information, we first check whether it is in the cache.
• If it is, we use the information directly from the cache; if it is not, we use the information from the source, putting a copy in the cache under the assumption that we will need it again soon.
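The check-then-copy rule can be sketched in a few lines (the key and value are illustrative):

```python
backing_store = {"A": 41}   # the slower source, e.g. main memory
cache = {}                  # the faster storage system

def read(key):
    # Check the cache first; on a miss, fetch from the source and keep
    # a copy on the assumption that we will need it again soon.
    if key in cache:
        return cache[key], "hit"
    value = backing_store[key]
    cache[key] = value
    return value, "miss"

print(read("A"))   # (41, 'miss')
print(read("A"))   # (41, 'hit')
```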
10. STORAGE MANAGEMENT

10.3. Caching
Performance of Various Levels of Storage

52
10. STORAGE MANAGEMENT

10.3. Caching
In a hierarchical storage structure, the same data may appear in different levels of the storage system.
• For example, suppose that an integer A that is to be incremented by 1 is located in file B, and file B resides on magnetic disk. The increment operation proceeds by first issuing an I/O operation to copy the disk block on which A resides to main memory.
• This operation is followed by copying A to the cache and to an internal register. Thus, the copy of A appears in several places: on the magnetic disk, in main memory, in the cache, and in an internal register.
• Once the increment takes place in the internal register, the value of A differs in the various storage systems. The value of A becomes the same only after the new value of A is written from the internal register back to the magnetic disk.
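The integer-A example above can be walked through step by step (the starting value 5 is illustrative):

```python
def increment_a(a_on_disk):
    # Follow integer A up the hierarchy, increment it in the register,
    # and observe that the copies disagree until the write-back.
    memory = a_on_disk      # I/O copies the disk block to main memory
    cache = memory          # then A is copied into the cache
    register = cache        # then into an internal register
    register += 1           # the increment happens in the register
    copies_before_writeback = {
        "disk": a_on_disk, "memory": memory,
        "cache": cache, "register": register,
    }
    a_on_disk = register    # write-back makes the values agree again
    return copies_before_writeback, a_on_disk

print(increment_a(5))
# ({'disk': 5, 'memory': 5, 'cache': 5, 'register': 6}, 6)
```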
10. STORAGE MANAGEMENT

10.3. Caching

In a computing environment where only one process executes at a time, this arrangement poses no difficulties, since an access to integer A will always be to the copy at the highest level of the hierarchy.

However, in a multitasking environment, where the CPU is switched back and forth among various processes, extreme care must be taken to ensure that, if several processes wish to access A, then each of these processes will obtain the most recently updated value of A.
10. STORAGE MANAGEMENT
10.3. Caching
• The situation becomes more complicated in a multiprocessor environment where, in addition to maintaining internal registers, each of the CPUs also contains a local cache.
• In such an environment, a copy of A may exist simultaneously in several caches. Since the various CPUs can all execute concurrently, we must make sure that an update to the value of A in one cache is immediately reflected in all other caches where A resides.
• This situation is called cache coherency, and it is usually a hardware problem (handled below the operating-system level).
• In a distributed environment, the situation becomes even more complex.
• In this environment, several copies (or replicas) of the same file can be kept on different computers that are distributed in space.
• Since the various replicas may be accessed and updated concurrently, some distributed systems ensure that, when a replica is updated in one place, all other replicas are brought up to date as soon as possible.
10. STORAGE MANAGEMENT

10.4 I/O SYSTEMS:

• One purpose of the OS is to hide the peculiarities of hardware devices from the user
• The I/O subsystem is responsible for
• Memory management of I/O, including buffering (storing data temporarily while it is being transferred), caching (storing parts of data in faster storage for performance), and spooling (the overlapping of output of one job with input of other jobs)
• General device-driver interface
• Drivers for specific hardware devices
11. PROTECTION AND SECURITY
• Protection – any mechanism for controlling access of processes or users to resources defined by the OS
• Security – defense of the system against internal and external attacks
• Huge range, including denial-of-service, worms, viruses, identity theft, theft of service
• Systems generally first distinguish among users, to determine who can do what
• User identities (user IDs, security IDs) include a name and associated number, one per user
• The user ID is then associated with all files and processes of that user to determine access control
• A group identifier (group ID) allows a set of users to be defined and controls managed, and is then also associated with each process and file
• Privilege escalation allows a user to change to an effective ID with more rights
12. DISTRIBUTED SYSTEMS
• A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide the users with access to the various resources that the system maintains.
• Access to a shared resource increases computation speed, functionality, data availability, and reliability.
• A network, in the simplest terms, is a communication path between two or more systems.
• Distributed systems depend on networking for their functionality. Networks vary by the protocols used, the distances between nodes, and the transport media. TCP/IP is the most common network protocol, although ATM and other protocols are in widespread use.
• Likewise, operating system support of protocols varies. Most operating systems support TCP/IP, including the Windows and UNIX operating systems. Some systems support proprietary protocols to suit their needs.
12. DISTRIBUTED SYSTEMS
Networks are characterized based on the distances between their nodes.
▪ A local-area network (LAN) connects computers within a room, a floor, or a building.
▪ A wide-area network (WAN) usually links buildings, cities, or countries. A global company may have a WAN to connect its offices worldwide. These networks may run one protocol or several protocols.
▪ A metropolitan-area network (MAN) could link buildings within a city. Bluetooth and 802.11 devices use wireless technology to communicate over a distance of several feet, in essence creating a small-area network such as might be found in a home.

A network operating system is an operating system that provides features such as file sharing across the network and that includes a communication scheme that allows different processes on different computers to exchange messages.
13. COMPUTING ENVIRONMENT

13.1 TRADITIONAL COMPUTING
13.2 CLIENT SERVER COMPUTING
13.3 PEER TO PEER COMPUTING
13.4 WEB BASED COMPUTING
13. COMPUTING ENVIRONMENT

13.1 Traditional Computing

• Stand-alone general-purpose machines
• But blurred as most systems interconnect with others (i.e., the Internet)
• Portals provide web access to internal systems
• Network computers (thin clients) are like web terminals
• Mobile computers interconnect via wireless networks
• Networking is becoming ubiquitous – even home systems use firewalls to protect home computers from Internet attacks
13. COMPUTING ENVIRONMENT

13.2. CLIENT-SERVER COMPUTING

In client-server computing, the client requests a resource and the server provides that resource.
A server may serve multiple clients at the same time, while a client is in contact with only one server.
The client and server usually communicate via a computer network, but sometimes they may reside in the same system.
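The request/provide pattern can be sketched without any real networking (the "network" here is a direct function call, and the resource names are illustrative):

```python
# Toy client-server exchange: the client sends a request naming a
# resource; the server looks it up and provides it (or an error).

def file_server(request):
    files = {"/index.html": "<h1>hello</h1>"}   # resources the server holds
    return files.get(request, "404 not found")

def client(send_to_server, resource):
    return send_to_server(resource)   # client requests, server provides

print(client(file_server, "/index.html"))   # <h1>hello</h1>
print(client(file_server, "/missing"))      # 404 not found
```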
13. COMPUTING ENVIRONMENT

13.2 Client-Server Computing

Server systems can be broadly categorized as:
1. Compute-server system
provides an interface to which clients can send requests for services (e.g., a database)
Ex: a server running a database that responds to client requests for data
2. File-server system
provides an interface for clients to store and retrieve files.
Ex: a web server that delivers files to clients running web browsers
13. COMPUTING ENVIRONMENT

13.3 Peer-to-Peer Computing

• Another model of distributed system
• P2P does not distinguish clients and servers
• Instead, all nodes are considered peers
• Each may act as client, server, or both
• A node must join the P2P network
• Registers its service with a central lookup service on the network, or
• Broadcasts a request for service and responds to requests for service via a discovery protocol
• Examples include Napster and Gnutella, and Voice over IP (VoIP) such as Skype
13. COMPUTING ENVIRONMENT

13.4. Web-Based Computing

• Web-based computing is an environment that consists of ultra-thin clients networked over the Internet or an intranet.
• Applications in this environment consist of code on the servers distributed to thin clients containing a browser, such as Netscape Communicator or Internet Explorer; the browser completely defines the user interface.
• The implementation of web-based computing has given rise to new categories of devices, such as load balancers, which distribute network connections among a pool of similar servers.
• Operating systems like Windows 95, which acted as web clients, have evolved into Linux and Windows XP, which can act as web servers as well as clients.