UNIT 1 Material

This document provides an overview of basic computer structure and computer arithmetic. It discusses the main components of a computer including the central processing unit, memory, input/output devices, and control unit. It also covers computer types, generations of computers, data representation methods, and algorithms for basic arithmetic operations such as addition, subtraction, multiplication, and division. Computer architecture involves the hardware, instruction set, and organization of the system. The document provides definitions and examples to explain these fundamental computer science concepts in the first unit of a course.

CO & OS MATERIAL

Unit – I – Basic structure of computers and Computer arithmetic

BASIC STRUCTURE OF COMPUTERS: Computer Types, Functional units, Basic operational


concepts, Bus structures, Software, Performance, multiprocessors and multicomputers. Data types,
Complements, Data Representation. Fixed Point Representation. Floating – Point Representation.

COMPUTER ARITHMETIC: Addition and subtraction, multiplication Algorithms, Division


Algorithms, Floating point Arithmetic operations. Decimal Arithmetic unit, Decimal Arithmetic
operations.

UNIT I

Basic Structure of Computers

Computer Architecture in general covers three aspects of computer design namely: Computer
Hardware, Instruction set Architecture and Computer Organization.
Computer hardware consists of electronic circuits, displays, magnetic and optical storage media
and communication facilities.
Instruction set architecture is the programmer-visible machine interface, such as the instruction set,
registers, memory organization and exception handling. The two main approaches are CISC
(Complex Instruction Set Computer) and RISC (Reduced Instruction Set Computer).
Computer Organization includes the high level aspects of a design, such as memory system,
the bus structure and the design of the internal CPU.

Computer Types
Computer is a fast electronic calculating machine which accepts digital input, processes it
according to the internally stored instructions (programs) and produces the result on the output
device. The internal operation of the computer is depicted in the figure below:

Figure 1: Fetch, Decode and Execute steps in a Computer System


The computers can be classified into various categories as given below:

 Micro Computer
 Laptop Computer
 Work Station
 Super Computer
 Main Frame
 Hand Held
 Multi core

Micro Computer: A personal computer; designed to meet the computer needs of an individual.
Provides access to a wide variety of computing applications, such as word processing, photo
editing, e-mail, and internet.

Laptop Computer: A portable, compact computer that can run on power supply or a battery unit.
All components are integrated as one compact unit. It is generally more expensive than a
comparable desktop. It is also called a Notebook.

Work Station: Powerful desktop computer designed for specialized tasks. Generally used for tasks
that require a lot of processing speed. Can also be an ordinary personal computer attached to a
LAN (local area network).

Super Computer: A computer that is considered to be the fastest in the world. Used to execute tasks
that would take a lot of time for other computers. Ex: modeling weather systems, genome
sequencing, etc. (Refer site: http://www.top500.org/)

Main Frame: Large expensive computer capable of simultaneously processing data for hundreds
or thousands of users. Used to store, manage, and process large amounts of data that need to be
reliable, secure, and centralized.

Hand Held: It is also called a PDA (Personal Digital Assistant). A computer that fits into a pocket,
runs on batteries, and is used while holding the unit in your hand. Typically used as an
appointment book, address book, calculator and notepad.

Multi Core: Have Multiple Cores – parallel computing platforms. Many Cores or computing
elements in a single chip. Typical Examples: Sony Play station, Core 2 Duo, i3, i7 etc.

GENERATION OF COMPUTERS
Development of technologies used to fabricate the processors, memories and I/O units of the
computers has been divided into various generations as given below:
 First generation
 Second generation
 Third generation


 Fourth generation
 Beyond the fourth generation

First generation:
1946 to 1955: Computers of this generation used vacuum tubes. The computers were built using
the stored program concept. Ex: ENIAC, EDSAC, IBM 701. Computers of this age typically used about
ten thousand vacuum tubes. They were bulky in size and had slow operating speed, short lifetime and
limited programming facilities.

Second generation:
1955 to 1965: Computers of this generation used germanium transistors as the active
switching electronic device. Ex: IBM 7000, B5000, IBM 1401. They were comparatively smaller in size and about
ten times faster in operating speed as compared to first generation vacuum tube based computers.
They consumed less power and had fairly good reliability. Availability of large memory was an added
advantage.

Third generation:
1965 to 1975: The computers of this generation used Integrated Circuits as the active
electronic components. Ex: IBM System 360, PDP minicomputers, etc. They were still smaller in size.
They had powerful CPUs with the capacity of executing 1 million instructions per second (MIPS).
They consumed very little power.

Fourth generation:
1976 to 1990: The computers of this generation used LSI chips such as the microprocessor as their
active electronic element. Ex: HCL Horizon III, WIPRO's Uniplus+, HCL's Busybee PC, etc. They used a
high speed microprocessor as the CPU. They were more user friendly and highly reliable systems. They
had large storage capacity disk memories.

Beyond Fourth Generation:


1990 onwards: Specialized and dedicated VLSI chips are used to control specific functions of these
computers. Ex: modern desktop PCs, laptops or notebook computers.

Functional Unit

A computer in its simplest form comprises five functional units, namely the input unit, output unit,
memory unit, arithmetic & logic unit and control unit. Figure 2 depicts the functional units of a
computer system.


Figure 2: Basic functional units of a computer

Let us discuss about each of them in brief:


1. Input Unit: Computer accepts encoded information through input unit. The
standard input device is a keyboard. Whenever a key is pressed, keyboard
controller sends the code to CPU/Memory.
Examples include Mouse, Joystick, Tracker ball, Light pen, Digitizer, Scanner etc.

2. Memory Unit: Memory unit stores the program instructions (Code), data and
results of computations etc. Memory unit is classified as:
 Primary /Main Memory
 Secondary /Auxiliary Memory

Primary memory is a semiconductor memory that provides access at high speed. Run
time program instructions and operands are stored in the main memory. Main memory is
classified again as ROM and RAM. ROM holds system programs and firmware routines
such as BIOS, POST, I/O Drivers that are essential to manage the hardware of a computer.
RAM is termed as Read/Write memory or user memory that holds run time program
instruction and data. While primary storage is essential, it is volatile in nature and
expensive. Additional requirement of memory could be supplied as auxiliary memory at
cheaper cost. Secondary memories are non volatile in nature.

3. Arithmetic and logic unit: The ALU consists of the necessary logic circuits, such as an adder and
comparator, to perform operations of addition, multiplication, comparison of
two numbers, etc.

4. Output Unit: Computer after computation returns the computed results, error
messages, etc. via output unit. The standard output device is a video monitor,
LCD/TFT monitor. Other output devices are printers, plotters etc.

5. Control Unit: Control unit co-ordinates activities of all units by issuing control
signals. Control signals issued by control unit govern the data transfers and then
appropriate operations take place. Control unit interprets or decides the
operation/action to be performed.
The operations of a computer can be summarized as follows:

1. A set of instructions called a program resides in the main memory of the computer.

2. The CPU fetches those instructions sequentially one-by-one from the main
memory, decodes them and performs the specified operation on associated data
operands in ALU.
3. Processed data and results will be displayed on an output unit.

4. All activities pertaining to processing and data movement inside the computer
machine are governed by control unit.


Basic Operational Concepts


An Instruction consists of two parts, an operation code and operand/s as shown below:

OPCODE OPERAND/s

Let us see a typical instruction


ADD LOCA, R0
This instruction is an addition operation. The following are the steps to execute the instruction:
Step 1: Fetch the instruction from main memory into the processor
Step 2: Fetch the operand at location LOCA from main memory into the processor
Step 3: Add the memory operand (i.e. fetched contents of LOCA) to the contents of register R0
Step 4: Store the result (sum) in R0.
The same instruction can be realized using two instructions as:
Load LOCA, R1
Add R1, R0

The steps to execute the instructions can be enumerated as below:

Step 1: Fetch the instruction from main memory into the processor
Step 2: Fetch the operand at location LOCA from main memory into processor register R1
Step 3: Add the contents of register R1 and the contents of register R0
Step 4: Store the result (sum) in R0.
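To make these steps concrete, the following is a minimal sketch in Python that mimics the Load/Add pair; the memory contents, register names and instruction format are assumptions made only for illustration, not the actual machine interface.

# Minimal sketch of executing "Load LOCA, R1" followed by "Add R1, R0".
# The memory layout, register file and instruction form are illustrative assumptions.

memory = {"LOCA": 25}            # operand stored at symbolic address LOCA
registers = {"R0": 10, "R1": 0}

def load(addr, reg):
    """Fetch the operand at 'addr' from memory into register 'reg'."""
    registers[reg] = memory[addr]

def add(src, dst):
    """Add the contents of register 'src' to register 'dst'."""
    registers[dst] = registers[dst] + registers[src]

# Program: the two-instruction realization of ADD LOCA, R0
load("LOCA", "R1")   # Step 2: LOCA -> R1
add("R1", "R0")      # Step 3: R0 <- R0 + R1

print(registers["R0"])   # Step 4: result (10 + 25 = 35) left in R0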

Figure 3 below shows how the memory and the processor are connected. As shown in the diagram,
in addition to the ALU and the control circuitry, the processor contains a number of registers used
for several different purposes. The instruction register holds the instruction that is currently being
executed. The program counter keeps track of the execution of the program. It contains the
memory address of the next instruction to be fetched and executed. There are n general purpose
registers R0 to Rn-1 which can be used by the programmers during writing programs.


Figure 3: Connections between the processor and the memory

The interaction between the processor and the memory and the direction of flow of
information is as shown in the diagram below:

Figure 4: Interaction between the memory and the ALU

BUS STRUCTURES
Group of lines that serve as connecting path for several devices is called a bus (one bit per line).
Individual parts must communicate over a communication line or path for exchanging data,
address and control information as shown in the diagram below. Printer example – processor to
printer. A common approach is to use the concept of buffer registers to hold the content during
the transfer.


Figure 5: Single bus structure

SOFTWARE:
If a user wants to enter and run an application program, he/she needs system software. System
software is a collection of programs that are executed as needed to perform functions such as:

 Receiving and interpreting user commands


 Entering and editing application programs and storing them as files in
secondary storage devices
 Running standard application programs such as word processors, spread
sheets, games etc…
Operating system – a key system software component which helps the user to exploit the
underlying hardware with the programs.

USER PROGRAM and OS ROUTINE INTERACTION

Let us assume a computer with 1 processor, 1 disk and 1 printer, and an application program in
machine code on disk. The various tasks are performed in a coordinated fashion, which is called
multitasking. t0, t1, ..., t5 are instances of time, and the interaction during the various instances is as
given below:

t0: the OS loads the program from the disk to memory


t1: program executes
t2: program accesses disk
t3: program executes some more
t4: program accesses printer
t5: program terminates


Figure 6: User program and OS routine sharing of the processor

PERFORMANCE
The most important measure of the performance of a computer is how quickly it can
execute programs. The speed with which a computer executes programs is affected by
the design of its hardware. For best performance, it is necessary to design the compilers, the
machine instruction set, and the hardware in a coordinated way. The total time required to
execute a program is called the elapsed time; it is a measure of the performance of the entire
computer system. It is affected by the speed of the processor, the disk and the printer. The
time needed to execute an instruction is called the processor time.

Just as the elapsed time for the execution of a program depends on all units in a computer
system, the processor time depends on the hardware involved in the execution of individual
machine instructions. This hardware comprises the processor and the memory which are
usually connected by the bus. The pertinent parts of fig. c are repeated in fig. d, which
includes the cache memory as part of the processor unit.

Let us examine the flow of program instructions and data between the memory and the
processor. At the start of execution, all program instructions and the required data are
stored in the main memory. As the execution proceeds, instructions are fetched one by one
over the bus into the processor, and a copy is placed in the cache. Later, if the same
instruction or data item is needed a second time, it is read directly from the cache.

The processor and relatively small cache memory can be fabricated on a single IC chip. The
internal speed of performing the basic steps of instruction processing on chip is very high
and is considerably faster than the speed at which the instruction and data can be fetched
from the main memory. A program will be executed faster if the movement of instructions
and data between the main memory and the processor is minimized, which is achieved by
using the cache.
For example:- Suppose a number of instructions are executed repeatedly over a short period
of time as happens in a program loop. If these instructions are available in the cache, they
can be fetched quickly during the period of repeated use. The same applies to the data
that are used repeatedly.

Processor clock:

Processor circuits are controlled by a timing signal called the clock. The clock defines
regular time intervals called clock cycles. To execute a machine instruction, the processor
divides the action to be performed into a sequence of basic steps such that each step can be
completed in one clock cycle. The length P of one clock cycle is an important parameter that
affects the processor performance.
Processors used in today's personal computers and workstations have clock rates that range
from a few hundred million to over a billion cycles per second.

Basic performance equation:

We now focus our attention on the processor time component of the total elapsed time.


Let T be the processor time required to execute a program that has been prepared
in some high-level language. The compiler generates a machine language object program
that corresponds to the source program. Assume that complete execution of the program
requires the execution of N machine language instructions. The number N is the actual
number of instruction executions and is not necessarily equal to the number of machine
instructions in the object program. Some instructions may be executed more than once,
as is the case for instructions inside a program loop; others may not be executed at all,
depending on the input data used.

Suppose that the average number of basic steps needed to execute one machine
instruction is S, where each basic step is completed in one clock cycle. If the clock rate is R
cycles per second, the program execution time is given by

T = (N x S) / R

This is often referred to as the basic performance equation.


We must emphasize that N, S and R are not independent parameters; changing one may affect
another. Introducing a new feature in the design of a processor will lead to improved
performance only if the overall result is to reduce the value of T.
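As a quick illustration of the basic performance equation, the following sketch plugs assumed values of N, S and R into T = (N x S) / R; the numbers are made up for the example.

# T = (N * S) / R  -- basic performance equation
# Assumed values, purely for illustration:
N = 200_000_000     # dynamic instruction count (instructions actually executed)
S = 4               # average basic steps (clock cycles) per instruction
R = 2_000_000_000   # clock rate in cycles per second (2 GHz)

T = (N * S) / R
print(f"Processor time T = {T:.2f} seconds")   # 0.40 seconds

# Halving S (e.g. via a better pipeline) halves T only if N and R stay unchanged.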

Performance measurements:

It is very important to be able to assess the performance of a computer; computer designers use
performance estimates to evaluate the effectiveness of new features.
The previous argument suggests that the performance of a computer is given by the
execution time T for the program of interest.
In spite of the performance equation being so simple, the evaluation of T is highly complex.
Moreover, parameters like the clock speed and various architectural features are not
reliable indicators of the expected performance.
Hence measurement of computer performance is done using benchmark programs. To make
comparisons possible, standardized programs must be used.
The performance measure is the time taken by the computer to execute a given benchmark.
Initially some attempts were made to create artificial programs that could be used as
benchmark programs. But synthetic programs do not properly predict the performance obtained
when real application programs are run.
A non-profit organization called SPEC (System Performance Evaluation Corporation) selects
and publishes benchmarks.
The programs selected range from game playing, compiler, and database applications to
numerically intensive programs in astrophysics and quantum chemistry. In each case, the
program is compiled and run on the computer under test, and the running time is measured. The
same program is also compiled and run on a computer selected as the reference.
The SPEC rating is computed as follows:

SPEC rating = (Running time on the reference computer) / (Running time on the computer under test)

If the SPEC rating is 50, it means that the computer under test runs the program 50 times faster than the reference computer.
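The sketch below shows how a SPEC-style rating could be computed from measured running times; the times used and the geometric-mean summary over several benchmarks are illustrative assumptions.

from math import prod

# Running times in seconds: reference machine vs. machine under test (assumed values).
reference_times = [500.0, 320.0, 1200.0]
test_times      = [ 10.0,   8.0,   20.0]

# SPEC rating for one benchmark = reference time / test time
ratings = [ref / test for ref, test in zip(reference_times, test_times)]

# The overall rating is usually summarized as the geometric mean of the individual ratings.
overall = prod(ratings) ** (1 / len(ratings))
print(ratings, round(overall, 1))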

Multiprocessors & multicomputers:


Large computers that contain a number of processor units are called multiprocessor systems.
These systems either execute a number of different application tasks in parallel or execute
subtasks of a single large task in parallel. All processors usually have access to all memory
locations in such systems and hence they are called shared-memory multiprocessor systems.
The high performance of these systems comes with much increased complexity and cost. In
contrast to multiprocessor systems, it is also possible to use an interconnected group of
complete computers to achieve high total computational power. These computers
normally have access only to their own memory units; when the tasks they are executing need to
communicate data, they do so by exchanging messages over a communication network. This
property distinguishes them from shared-memory multiprocessors, leading to the name
message-passing multicomputers.

Data Representation:

Information that a computer deals with:

 Data
   - Numeric data: numbers (integer, real)
   - Non-numeric data: letters, symbols
 Relationship between data elements
   - Data structures: linear lists, trees, rings, etc.
 Program (instructions)

Numeric Data Representation

Decimal   Binary   Octal   Hexadecimal
00        0000     00      0
01        0001     01      1
02        0010     02      2
03        0011     03      3
04        0100     04      4
05        0101     05      5
06        0110     06      6
07        0111     07      7
08        1000     10      8
09        1001     11      9
10        1010     12      A
11        1011     13      B
12        1100     14      C
13        1101     15      D
14        1110     16      E
15        1111     17      F
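The table above can be reproduced with a short sketch that prints each decimal value in binary, octal and hexadecimal using Python's built-in format specifiers:

# Print decimal 0..15 in binary, octal and hexadecimal,
# matching the columns of the table above.
for n in range(16):
    print(f"{n:02d}  {n:04b}  {n:02o}  {n:X}")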

Fixed Point Representation:


It is the representation for integers only, where the binary point is always fixed at the
rightmost end. A signed fixed-point number can be represented in the following ways.
1. Sign and Magnitude Representation
In this system, the most significant (leftmost) bit in the word is used as a sign bit. If the sign bit is 0,
the number is positive; if the sign bit is 1, the number is negative.
The simplest form of representing the sign bit is the sign-magnitude representation.
One of the drawbacks of sign-magnitude numbers is that addition and subtraction need to consider
both the signs of the numbers and their relative magnitudes.
Another drawback is that there are two representations for 0 (zero), i.e. +0 and -0.
2. One's Complement (1's) Representation
In this representation negative values are obtained by complementing each bit of the
corresponding positive number.
For example, the 1's complement of 0101 is 1010. The process of forming the 1's complement of
a given number is equivalent to subtracting that number from 2^n - 1, i.e. from 1111 for a 4-bit
number.
3. Two's Complement (2's) Representation
Forming the 2's complement of a number is done by subtracting that number from 2^n. So the
2's complement of a number is obtained by adding 1 to the 1's complement of that number.
Ex: the 2's complement of 0101 is 1010 + 1 = 1011
NB: In all systems, the leftmost bit is 0 for a positive number and 1 for a negative number.
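A small sketch, assuming a fixed 4-bit word length, showing how the 1's and 2's complements follow directly from the definitions above:

BITS = 4                      # assumed word length for the example

def ones_complement(x, bits=BITS):
    """1's complement: subtract from 2^n - 1 (i.e. flip every bit)."""
    return (2**bits - 1) - x

def twos_complement(x, bits=BITS):
    """2's complement: subtract from 2^n (i.e. 1's complement plus 1)."""
    return (2**bits - x) % 2**bits

x = 0b0101                                   # +5
print(f"{ones_complement(x):04b}")           # 1010
print(f"{twos_complement(x):04b}")           # 1011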

Floating-point representation
Floating-point numbers are so called because the decimal or binary point floats over the base
depending on the exponent value. A floating-point number consists of two components:
• Exponent
• Mantissa
Example: Avogadro's number can be written as 6.02 x 10^23 in base 10, where the mantissa and
exponent are 6.02 and 23 respectively. But computer floating-point numbers are usually
based on base two. So 6.02 x 10^23 is approximately (1 + 63/64) x 2^78, or 1.111111 (base
two) x 2^1001110 (base two).
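The split into a mantissa and a power-of-two exponent can be observed with Python's math.frexp, which returns a fraction in [0.5, 1) and an exponent; the value used is the Avogadro approximation from the text.

import math

x = 6.02e23                           # Avogadro's number, approximately
mantissa, exponent = math.frexp(x)    # x == mantissa * 2**exponent, 0.5 <= mantissa < 1
print(mantissa, exponent)             # about 0.996 and 79

# Normalized to the 1.xxx form used in the text: roughly 1.99 * 2**78
print(mantissa * 2, exponent - 1)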


COMPUTER ARITHMETIC:

Addition, subtraction, multiplication and division are the four basic arithmetic operations. Using these operations,
other arithmetic functions can be formulated and scientific problems can be solved by numerical
analysis methods.

Arithmetic Processor:

It is the part of a processor unit that executes arithmetic operations. The arithmetic instruction
definitions specify the data type that should be present in the registers used. An arithmetic instruction
may specify binary or decimal data and in each case the data may be in fixed-point or floating point
form.

Fixed point numbers may represent integers or fractions. The negative numbers may be in signed-
magnitude or signed- complement representation. The arithmetic processor is very simple if only a
binary fixed point add instruction is included. It would be more complicated if it includes all four
arithmetic operations for binary and decimal data in fixed and floating point representations.

Algorithm:

Algorithm can be defined as a finite number of well defined procedural steps to solve a problem.
Usually, an algorithm will contain a number of procedural steps which are dependent on results of
previous steps. A convenient method for presenting an algorithm is a flowchart, which consists of
rectangular and diamond-shaped boxes. The computational steps are specified in the rectangular
boxes and the decision steps are indicated inside diamond-shaped boxes from which two or more
alternate paths emerge.

Addition and Subtraction:


3 ways of representing negative fixed point binary numbers:

1. Signed-magnitude representation---- used for the representation of mantissa for floating point
operations by most computers.
2. Signed-1’s complement
3. Signed -2’s complement—Most computers use this form for performing arithmetic operation
with integers
Addition and subtraction algorithm for signed-magnitude data

Let the magnitudes of the two numbers be A and B. When signed numbers are added or subtracted,
there are four different conditions to be considered for each addition and subtraction, depending
on the signs of the numbers. The conditions are listed in the table below. The table shows the
operations to be performed on the magnitudes (addition or subtraction) for the different
conditions.

Sl.No   Operation       Add magnitudes   Subtract magnitudes
                                         When A > B   When A < B   When A = B
1       (+A) + (+B)     +(A+B)
2       (+A) + (-B)                      +(A-B)       -(B-A)       +(A-B)
3       (-A) + (+B)                      -(A-B)       +(B-A)       +(A-B)
4       (-A) + (-B)     -(A+B)
5       (+A) - (+B)                      +(A-B)       -(B-A)       +(A-B)
6       (+A) - (-B)     +(A+B)
7       (-A) - (+B)     -(A+B)
8       (-A) - (-B)                      -(A-B)       +(B-A)       +(A-B)

The last column is needed to prevent a negative zero. In other words, when two equal numbers
are subtracted, the result should be +0, not -0.

The algorithm for addition and subtraction (from the table above):


Addition Algorithm:
When the signs of A and B are identical, add the two magnitudes and attach the sign of A to the
result. When the signs of A and B are different, compare the magnitudes and subtract the
smaller number from the larger. Choose the sign of the result to be the same as A if A > B, or the
complement of the sign of A if A < B. If the two magnitudes are equal, subtract B from A and make
the sign of the result positive.
Subtraction algorithm:
When the signs of A and B are different, add the two magnitudes and attach the sign of A to the
result. When the signs of A and B are identical, compare the magnitudes and subtract the
smaller number from the larger. Choose the sign of the result to be the same as A if A > B, or the
complement of the sign of A if A < B. If the two magnitudes are equal, subtract B from A and make
the sign of the result positive.
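The two algorithms can be captured in a short functional sketch that works on (sign, magnitude) pairs; this models the rules above, not the register-level hardware.

# Signed-magnitude addition/subtraction as (sign, magnitude) pairs.
# sign = 0 for +, 1 for -.

def sm_add(a_sign, a_mag, b_sign, b_mag):
    if a_sign == b_sign:                      # identical signs: add magnitudes
        return a_sign, a_mag + b_mag          # result takes the sign of A
    # different signs: subtract the smaller magnitude from the larger
    if a_mag > b_mag:
        return a_sign, a_mag - b_mag
    if a_mag < b_mag:
        return b_sign, b_mag - a_mag          # complement of A's sign
    return 0, 0                               # equal magnitudes: result is +0

def sm_subtract(a_sign, a_mag, b_sign, b_mag):
    # A - B is A + (-B): complement the sign of B and add
    return sm_add(a_sign, a_mag, 1 - b_sign, b_mag)

print(sm_add(0, 7, 1, 9))        # (+7) + (-9)  -> (1, 2), i.e. -2
print(sm_subtract(1, 5, 1, 5))   # (-5) - (-5)  -> (0, 0), i.e. +0, not -0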
Hardware Implementation:
Let A and B be two registers that hold the magnitudes of the numbers.
As and Bs are two flip-flops that hold the signs of the corresponding numbers. The result is stored in A and
As, and thus they form the accumulator register.
We need to perform the micro-operation A + B, and hence a parallel adder.
A comparator is needed to establish whether A > B, A = B, or A < B.
We need to perform the micro-operations A - B and B - A, and hence two parallel subtractors.
An exclusive-OR gate can be used to determine the sign relationship, that is, equal or not.
Thus the hardware components required are a magnitude comparator, an adder, and two
subtractors.
Reduction of hardware by using a different procedure:
1. We know subtraction can be done by complementing and adding.
2. The result of the comparison can be determined from the end carry after the subtraction.
We find that an adder and a complementer can do subtraction and comparison if the 2's
complement is used for subtraction.

Hardware for signed-magnitude addition and subtraction:


AVF: the add-overflow flip-flop. It holds the overflow bit when A and B are added.
Flip-flop E: the output carry is transferred to E. It can be checked to see the relative magnitudes of the two
numbers.

A - B = A + (-B), i.e. adding A and the 2's complement of B.

The A register provides other micro-operations that may be needed when the sequence of steps in the
algorithm is specified.

The complementer passes the contents of B or the complement of B to the parallel adder depending on
the state of the mode control M. It consists of EX-OR gates, and the parallel adder consists of full adder
circuits. The M signal is also applied to the input carry of the adder.

When M = 0, the output of the adder is A + B. When M = 1, S = A + B' + 1 = A - B.

Hardware algorithm:

Flow Chart for Add and Subtract operations:

The EX-OR gate provides 0 as output when the signs are identical. It is 1 when the signs are different.

A + B is computed for the following and the sum is stored in EA:

1. When the signs are same and addition operation is required.


2. When the signs are different and subtract operation is required.

The carry in E after the addition indicates an overflow if it is 1, and it is transferred to AVF, the
add-overflow flag.

A - B = A + B' + 1 is computed for the following:


1. When the signs are different and addition operation is required.
2. When the signs are same and subtract operation is required.
No overflow can occur if the numbers are subtracted and hence AVF is cleared to Zero.

[The subtraction of two n-digit unsigned numbers M - N (N ≠ 0) in base r can be done as follows:
1. Add the minuend M to the r's complement of the subtrahend N. This performs M - N + r^n.
2. If M ≥ N, the sum will produce an end carry r^n, which is discarded, and what is left is the result M - N.
3. If M < N, the sum does not produce an end carry and is equal to r^n - (N - M), which is the r's complement of (N - M). To obtain the answer in a familiar form, take the r's complement of the sum and place a negative
sign in front.]
A 1 in E indicates that A ≥ B and the number in A is the correct result.
If this number in A is zero, the sign AS must be made positive to avoid a negative zero.
A 0 in E indicates that A< B. For this case it is necessary to take the 2’s complement of
the value in A.
In the algorithm shown in the flow chart, it is assumed that the A register has circuits for the micro-
operations complement and increment. Hence the 2's complement of the value in A is
obtained in two micro-operations. In the other paths of the flow chart, the sign of the result is
the same as the sign of A, so no change in As is required.


However, when A < B, the sign of the result is the complement of the original sign of A.
Hence the complement of As is stored in As.
Final result: As A
Flow chart for Add and Subtract operations:

Addition and Subtraction with Signed-2's Complement Data:


Arithmetic Addition:
This method does not need a comparison or subtraction but only addition and
complementation. The procedure is as below:
1. Represent the negative numbers in 2’s complement form.
2. Add the two numbers including the sign bits and discard any carry out of sign
bit position.
3. The overflow bit V is set to 1 if there is a carry into the sign bit and no carry out of the sign
bit, or if there is no carry into the sign bit and a carry out of the sign bit. Otherwise it is
set to zero.
4. If the result is negative, take the 2’s complement of the result to get a correct
negative result.

Arithmetic Subtraction:

1. Represent the negative numbers in 2’s complement form.


2. Take the 2’s complement of the subtrahend including the sign bit and add it to the
minuend including the sign bit.
3. The overflow bit V is set to 1 if there is a carry into the sign bit and no carry out of the sign
bit, or if there is no carry into the sign bit and a carry out of the sign bit. Otherwise it is
set to zero.


4. Discard the carry out of the sign bit position.

Note: A subtraction operation can be changed to an addition operation if the sign of the subtrahend is
changed.

Fig: Hardware for signed-2's complement addition/subtraction (BR register, complementer & parallel adder, overflow flip-flop V, AC register).
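A minimal sketch of the signed-2's complement addition and subtraction procedure, including the overflow test based on the carries into and out of the sign bit; the 8-bit word length is an assumption made for the example.

BITS = 8                                     # assumed word length for the sketch

def add_2s(a, b, bits=BITS):
    """Add two n-bit 2's complement words; return (result, overflow flag V)."""
    mask = (1 << bits) - 1
    low_mask = mask >> 1                     # all bits except the sign bit
    carry_into_sign = ((a & low_mask) + (b & low_mask)) >> (bits - 1)
    total = (a & mask) + (b & mask)
    carry_out_of_sign = total >> bits
    result = total & mask                    # carry out of the sign bit is discarded
    v = carry_into_sign ^ carry_out_of_sign  # overflow iff the two carries differ
    return result, v

def sub_2s(a, b, bits=BITS):
    """A - B: add the 2's complement of the subtrahend, including its sign bit."""
    mask = (1 << bits) - 1
    return add_2s(a, ((~b) + 1) & mask, bits)

print(add_2s(0b01111111, 0b00000001))   # 127 + 1 overflows: (0b10000000, V = 1)
print(sub_2s(0b00000101, 0b00001001))   # 5 - 9 = -4: (0b11111100, V = 0)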

Multiplication Algorithm:

Hardware implementation of multiplication of numbers in signed-magnitude form:

1. An adder is provided to add two binary numbers, and the partial product is accumulated in a register.
2. Instead of shifting the multiplicand to the left, the partial product is shifted to the right, which results
in leaving the partial product and the multiplicand in the required relative positions.
3. When the corresponding bit of the multiplier is zero, there is no need to add all zeros to the partial
product, since it will not alter its value.

The hardware consists of 4 flip-flops, 3 registers, one sequence counter, an adder and a complementer.


Q register & Qs flip-flop : contain the multiplier and its sign
Sequence counter (SC) : set to a value equal to the number of bits in the multiplier
B register & Bs flip-flop : contain the multiplicand and its sign
A register, E flip-flop : initialized to 0; As denotes the sign of the partial product
EA register : holds the partial product, with the carry generated in addition being shifted to E
Qn : rightmost bit of the multiplier; AQ will contain the final product. As AQ represents the product
register, As and Qs together represent the sign of the partial product or product. The numbers to be multiplied are
stored in memory as n-bit sign-magnitude numbers; when transferred to the registers, the MSB goes to the sign
flip-flop and the remaining n-1 bits go to the register. Hence SC is initially set to n-1. The low-order bit of
the multiplier in Qn is tested.

If it is 1, the multiplicand in B is added to the present partial product in A. If it is 0, nothing is done.
Register EAQ is then shifted once to the right to form the new partial product. The sequence counter is
decremented by 1 and its new value is checked. If it is not equal to zero, the process is repeated and a
new partial product is formed. The process stops when SC = 0.

The final product is available in both A and Q, with A holding the most significant bits and Q holding the
least significant bits.
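A behavioral sketch of the shift-and-add algorithm described above; registers E, A and Q are modelled as Python integers, and the 5-bit word length is chosen to match the numerical example that follows.

def multiply_sign_magnitude(b_mag, q_mag, n=5):
    """Shift-and-add multiplication of two n-bit magnitudes.
    Models registers E, A, Q and the sequence counter SC from the text."""
    A, Q, E = 0, q_mag, 0
    mask = (1 << n) - 1
    for _ in range(n):                    # SC counts down from n to 0
        if Q & 1:                         # Qn = 1: add multiplicand B to A
            total = A + b_mag
            E, A = total >> n, total & mask
        # shift EAQ one place to the right
        Q = ((A & 1) << (n - 1)) | (Q >> 1)
        A = (E << (n - 1)) | (A >> 1)
        E = 0
    return (A << n) | Q                   # product in the double register AQ

# Reproduces the numerical example below: B = 10111 (23), Q = 10011 (19)
print(bin(multiply_sign_magnitude(0b10111, 0b10011)))   # 0b110110101 = 437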

Flowchart for multiply operation:


Numerical Example for the above algorithm:

Multiplicand B = 10111            E   A       Q       SC
Multiplier in Q                   0   00000   10011   101
Qn = 1; add B                         10111
First partial product             0   10111
Shift right EAQ                   0   01011   11001   100
Qn = 1; add B                         10111
Second partial product            1   00010
Shift right EAQ                   0   10001   01100   011
Qn = 0; shift right EAQ           0   01000   10110   010
Qn = 0; shift right EAQ           0   00100   01011   001
Qn = 1; add B                         10111
Fifth partial product             0   11011
Shift right EAQ                   0   01101   10101   000

Final product in AQ = 0110110101

Booth Multiplication Algorithm:


Multiplication of signed- 2’s complement integers:

This algorithm uses the following facts.

1. A string of 0's in the multiplier requires no addition, just shifting.

2. A string of 1's in the multiplier from bit weight 2^k down to weight 2^m can be treated as 2^(k+1) - 2^m.

Example: Consider the binary number 001110 (+14).

The number has a string of 1's from 2^3 to 2^1. Hence k = 3 and m = 1. As the other bits are 0's, the
number can be represented as 2^(k+1) - 2^m = 2^4 - 2^1 = 16 - 2 = 14. Therefore the multiplication M x 14,
where M is the multiplicand and 14 the multiplier, can be done as M x 2^4 - M x 2^1.

This can be achieved by shifting the binary multiplicand M four times to the left and subtracting M
shifted left once, which is equal to M x 2^4 - M x 2^1.

Shifting and addition/subtraction rules for multiplicand in Booth’s Algorithm:

1. The multiplicand is subtracted from the partial product upon encountering the first least
significant 1 in a string of 1's in the multiplier.
2. The multiplicand is added to the partial product upon encountering the first 0 (provided that
there was a previous 1) in a string of 0's in the multiplier.
3. The partial product does not change when the multiplier bit is identical to the previous
multiplier bit.
Hardware Implementation of Booth Algorithm:


Note: The sign bit is not separated from the register. The QR register contains the multiplier, and
Qn represents the least significant bit of the multiplier in QR. Qn+1 is an extra flip-flop appended to
QR to facilitate a double-bit inspection of the multiplier.
The AC register and the appended Qn+1 are initially cleared to 0.
The sequence counter SC is set to the number n, which is equal to the number of bits in the
multiplier.
Qn Qn+1 are two successive bits in the multiplier.

Example for multiplication using Booth's algorithm (BR = 10111, BR' + 1 = 01001):

Qn Qn+1   Action         AC      QR      Qn+1   SC
          Initial        00000   10011   0      101
1 0       Subtract BR    01001
                         01001
          ashr           00100   11001   1      100
1 1       ashr           00010   01100   1      011
0 1       Add BR         10111
                         11001
          ashr           11100   10110   0      010
0 0       ashr           11110   01011   0      001
1 0       Subtract BR    01001
                         00111
          ashr           00011   10101   1      000
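The trace above can be reproduced with a behavioral sketch of Booth's algorithm; AC, QR and Qn+1 are modelled as Python integers and the arithmetic shift right (ashr) keeps the sign bit of AC.

def booth_multiply(br, qr, n=5):
    """Booth multiplication of two n-bit 2's complement numbers.
    Models AC, QR, Qn+1 and the arithmetic shift right from the text."""
    mask = (1 << n) - 1
    ac, q_extra = 0, 0                        # AC and the appended flip-flop Qn+1
    for _ in range(n):                        # SC counts down from n to 0
        qn = qr & 1
        if (qn, q_extra) == (1, 0):           # first 1 of a string: subtract BR
            ac = (ac + (((~br) + 1) & mask)) & mask
        elif (qn, q_extra) == (0, 1):         # first 0 after a string of 1's: add BR
            ac = (ac + br) & mask
        # arithmetic shift right of AC, QR, Qn+1 (sign bit of AC is replicated)
        sign = ac >> (n - 1)
        q_extra = qn
        qr = ((ac & 1) << (n - 1)) | (qr >> 1)
        ac = (sign << (n - 1)) | (ac >> 1)
    return (ac << n) | qr                     # 2n-bit product in AC,QR

# Reproduces the example: BR = 10111 (-9), QR = 10011 (-13) -> product 117
print(booth_multiply(0b10111, 0b10011))       # 117 (= 0001110101 in binary)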

Algorithm in flowchart for multiplication of signed 2’s complement numbers.


Array Multiplier:
2 -bit by 2- bit Array Multiplier:

The multiplicand bits are b1 and b0, and the multiplier bits are a1 and a0. The first partial product is obtained by
multiplying a0 by b1 b0. Each bit multiplication is implemented by an AND gate. The first partial product is
formed by two AND gates, and the second partial product is formed by two AND gates. The two partial
products are added with two half adder circuits.


Combinational circuit binary multiplier:

A bit of the multiplier is ANDed with each bit of the multiplicand in as many levels as there are bits in the
multiplier. The binary output of each level of AND gates is added in parallel with the partial product
of the previous level to form a new partial product. The last level produces the product. For j multiplier
and k multiplicand bits, we need j x k AND gates and (j-1) k-bit adders to produce a product of j+k bits.
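A gate-level sketch of the 2-bit by 2-bit array multiplier described above, built from four AND gates and two half adders; the signal names are chosen only for the example.

def half_adder(x, y):
    """Half adder: sum = x XOR y, carry = x AND y."""
    return x ^ y, x & y

def array_multiply_2x2(a1, a0, b1, b0):
    """2-bit by 2-bit array multiplier built from 4 AND gates and 2 half adders."""
    # partial products (one AND gate each)
    p0 = a0 & b0
    p1 = a0 & b1
    p2 = a1 & b0
    p3 = a1 & b1
    c0 = p0
    c1, carry = half_adder(p2, p1)       # middle column
    c2, c3 = half_adder(p3, carry)       # next column plus the carry
    return c3, c2, c1, c0                # product bits, most significant first

print(array_multiply_2x2(1, 1, 1, 1))    # 3 x 3 = 9 -> (1, 0, 0, 1)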

4- bit by 3-bit Array Multiplier:


Division Algorithms:

Division Process for division of fixed point binary number in signed –magnitude representation:

Let the dividend A consist of 10 bits and the divisor B consist of 5 bits.

1. Compare the 5 most significant bits of the dividend with the divisor.
2. If this 5-bit number is smaller than the divisor B, then take 6 bits of the dividend and compare them with the 5-bit divisor.
3. The 6-bit number is now greater than the divisor B, so place a 1 for the quotient bit in the sixth position above
the dividend. Shift the divisor once to the right and subtract it from the dividend. The difference is called the
partial remainder.
4. Repeat the process with the partial remainder and divisor. If the partial remainder is greater than or equal
to the divisor, the quotient bit is equal to 1. The divisor is then shifted right and subtracted from the partial
remainder. If the partial remainder is smaller than the divisor, then the quotient bit is zero and no subtraction is
needed. The divisor is shifted once to the right in any case.

Hardware Implementation of division for signed magnitude fixed point numbers:

To implement division using a digital computer, the process is changed slightly for convenience.
1. Instead of shifting the divisor to the right, the dividend or the partial remainder is shifted to the left, so as to
leave the two numbers in the required relative position.
2. Subtraction may be achieved by adding A (the dividend) to the 2's complement of B (the divisor). The information
about the relative magnitude is then available from the end carry.
3. Register EAQ is now shifted to the left with 0 inserted into Qn, and the previous value of E is lost.
4. The divisor is stored in the B register and the double-length dividend is stored in registers A and Q.
5. The dividend is shifted to the left and the divisor is subtracted by adding its 2's complement value.
6. If E = 1, it signifies that A ≥ B. A quotient bit 1 is inserted into Qn and the partial remainder is shifted to the left to
repeat the process.
7. If E = 0, it signifies that A < B, so the quotient bit Qn remains 0 (inserted during the shift). The value of B is then
added to restore the partial remainder in A to its previous value. The partial remainder is shifted to the left and
the process is repeated again until all 5 quotient bits are formed.
8. At the end, Q contains the quotient and A the remainder. If the signs of the dividend and divisor are alike, the
quotient is positive; if unalike, it is negative. The sign of the remainder is the same as that of the dividend.
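A behavioral sketch of this restoring-division procedure for an n-bit divisor and a 2n-bit dividend; it models E, A, Q and the restore step, while sign handling and the divide-overflow check are omitted for brevity.

def restoring_divide(dividend, divisor, n=5):
    """Restoring division of a 2n-bit dividend by an n-bit divisor.
    Models the E, A, Q registers and the 'restore remainder' step from the text.
    Returns (quotient, remainder)."""
    mask = (1 << n) - 1
    A = (dividend >> n) & mask            # high half of the dividend
    Q = dividend & mask                   # low half of the dividend
    b_comp = ((~divisor) + 1) & mask      # 2's complement of the divisor
    for _ in range(n):
        # shift EAQ left, 0 inserted into Qn
        E = (A >> (n - 1)) & 1
        A = ((A << 1) | (Q >> (n - 1))) & mask
        Q = (Q << 1) & mask
        # subtract the divisor by adding its 2's complement; the carry goes into E
        total = A + b_comp
        E = E | (total >> n)
        A = total & mask
        if E:                             # A >= B: quotient bit is 1
            Q |= 1
        else:                             # A < B: restore by adding B back
            A = (A + divisor) & mask
    return Q, A

# Reproduces the example below: dividend = 01110 00000, divisor = 10001
q, r = restoring_divide(0b0111000000, 0b10001)
print(bin(q), bin(r))                     # 0b11010 (quotient), 0b110 (remainder)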


Figure: Hardware for implementing division of fixed-point signed-magnitude numbers (B register, complementer and parallel adder, sequence counter SC, E flip-flop, A and Q registers with sign flip-flops As and Qs).

Example of binary division with digital hardware: Divisor B = 10001, B' + 1 = 01111

                         E   A       Q       SC
Dividend:                    01110   00000   5
shl EAQ                      11100   00000
Add B' + 1                   01111
E = 1                    1   01011
Set Qn = 1               1   01011   00001   4
shl EAQ                  0   10110   00010
Add B' + 1                   01111
E = 1                    1   00101
Set Qn = 1               1   00101   00011   3
shl EAQ                  0   01010   00110
Add B' + 1                   01111
E = 0; leave Qn = 0      0   11001   00110
Add B                        10001
Restore remainder        1   01010           2
shl EAQ                  0   10100   01100
Add B' + 1                   01111
E = 1                    1   00011
Set Qn = 1               1   00011   01101   1
shl EAQ                  0   00110   11010
Add B' + 1                   01111
E = 0; leave Qn = 0      0   10101   11010
Add B                        10001
Restore remainder        1   00110   11010   0
Neglect E
Remainder in A:              00110
Quotient in Q:                       11010

Divide overflow:

When the dividend is twice as long as the divisor, the condition for overflow can be stated as follows:

A divide-overflow condition occurs if the higher-order half of the bits of the dividend constitutes a number
greater than or equal to the divisor. If the divisor is zero, then the dividend will definitely be greater
than or equal to the divisor, so a divide-overflow condition occurs and the divide-overflow flip-flop
will be set. Let this flip-flop be called DVF.

Handling DVF:

1. Check if DVF is set after each divide instruction. If DVF is set, then the program branches to a
subroutine that takes corrective measures such as rescaling the data to avoid overflow.
2. An interrupt is generated if DVF is set. The interrupt causes the processor to suspend the
current program and branch to an interrupt service routine to take corrective measures. The most
common corrective measure is to remove the program and type an error message that explains
the reason.
3. The divide overflow can be handled very simply if the numbers are represented in floating point
representation.

Flow chart for divide operation:

Assumption:

Operands are transferred from memory to registers as n-bit words; n-1 bits form the magnitude and 1 bit
shows the sign.


A divide-overflow condition is tested by subtracting the divisor in B from the upper half of the bits of the dividend
stored in A. If A ≥ B, DVF is set and the operation is terminated prematurely. If A < B, no DVF occurs
and so the value of the dividend is restored by adding B to A.

The division of the magnitudes starts by shifting the dividend in AQ to the left, with the higher order bit
shifted into E. If the bit shifted into E is 1, we know that EA is greater than B, because EA consists of a 1
followed by n-1 bits while B consists of only n-1 bits. In this case, B must be subtracted from EA and 1
inserted into Qn for the quotient bit. Since register A is missing the higher order bit of the dividend
(which is in E), its value is EA - 2^(n-1). Adding to this value the 2's complement of B results in

(EA - 2^(n-1)) + (2^(n-1) - B) = EA - B. The carry from the addition is not transferred to E if we want E to remain a 1.

If the shift-left operation inserts a zero into E, the divisor is subtracted by adding its 2's complement
value and the carry is transferred into E. If E = 1, it signifies that A ≥ B and hence Qn is set to 1. If E = 0, it
signifies that A < B and the original number is restored by adding B to A. In the latter case we leave a 0 in
Qn (0 was inserted during the shift).

This process is repeated again with register A holding the partial remainder. After n-1 times, the
quotient magnitude is formed in the register Q and the remainder is found in register A.

UNIT -1
COMPUTER SYSTEM AND OPERATING SYSTEM OVERVIEW

OVERVIEW OF OPERATING SYSTEM


What is an Operating System?
A program that acts as an intermediary between a user of a computer and the computer hardware.
Operating system goals:
Execute user programs and make solving user problems easier
Make the computer system convenient to use
Use the computer hardware in an efficient manner
Computer System Structure
A computer system can be divided into four components:
Hardware – provides basic computing resources: CPU, memory, I/O devices
Operating system – controls and coordinates use of hardware among various applications and users
Application programs – define the ways in which the system resources are used to solve the computing problems of the users: word processors, compilers, web browsers, database systems, video games
Users – people, machines, other computers
Four Components of a Computer System

Operating System Definition


OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair resource use
OS is a control program
Controls execution of programs to prevent errors and improper use of the computer
No universally accepted definition
"Everything a vendor ships when you order an operating system" is a good approximation
But varies wildly

“The one program running at all times on the computer” is the kernel. Everything else is either a system
program (ships with the operating system) or an application program
Computer Startup
bootstrap program is loaded at power-up or reboot
Typically stored in ROM or EPROM, generally known as firmware
Initializes all aspects of system
Loads operating system kernel and starts execution
Computer System Organization
Computer-system operation
One or more CPUs, device controllers connect through common bus providing access to shared memory
Concurrent execution of CPUs and devices competing for memory cycles

Computer-System Operation
I/O devices and the CPU can execute concurrently
Each device controller is in charge of a particular device type
Each device controller has a local buffer
CPU moves data from/to main memory to/from local buffers
I/O is from the device to the local buffer of the controller
Device controller informs the CPU that it has finished its operation by causing an interrupt

Common Functions of Interrupts


Interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which
contains the addresses of all the service routines
Interrupt architecture must save the address of the interrupted instruction
Incoming interrupts are disabled while another interrupt is being processed to prevent a lost interrupt
A trap is a software-generated interrupt caused either by an error or a user request
An operating system is interrupt driven
Interrupt Handling
The operating system preserves the state of the CPU by storing registers and the program counter
Determines which type of interrupt has occurred:
polling
vectored interrupt system
Separate segments of code determine what action should be taken for each type of interrupt

Interrupt Timeline

I/O Structure
After I/O starts, control returns to the user program only upon I/O completion
Wait instruction idles the CPU until the next interrupt
Wait loop (contention for memory access)
At most one I/O request is outstanding at a time, no simultaneous I/O processing
After I/O starts, control returns to the user program without waiting for I/O completion
System call – request to the operating system to allow the user to wait for I/O completion
Device-status table contains an entry for each I/O device indicating its type, address, and state
Operating system indexes into the I/O device table to determine device status and to modify the table entry to include the interrupt

Direct Memory Access Structure


Used for high-speed I/O devices able to transmit information at close to memory speeds
Device controller transfers blocks of data from buffer storage directly to main memory without CPU
intervention
Only one interrupt is generated per block, rather than the one interrupt per byte
Storage Structure
Main memory – only large storage media that the CPU can access directly
Secondary storage – extension of main memory that provides large nonvolatile storage capacity
Magnetic disks – rigid metal or glass platters covered with magnetic recording material
Disk surface is logically divided into tracks, which are subdivided into sectors
The disk controller determines the logical interaction between the device and the computer
Storage Hierarchy
Storage systems are organized in a hierarchy according to:
Speed
Cost
Volatility
Caching – copying information into a faster storage system; main memory can be viewed as a last cache for
secondary storage

Caching
Important principle, performed at many levels in a computer (in hardware, operating system, software)
Information in use copied from slower to faster storage temporarily
Faster storage (cache) checked first to determine if information is
there If it is, information used directly from the cache (fast)
If not, data copied to cache and used there
Cache smaller than storage being cached
Cache management important design
problem Cache size and replacement
policy
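A toy sketch of the check-the-cache-first rule: a small dictionary stands in for the cache, a slower table for the backing store, and a simple FIFO eviction keeps the cache smaller than the storage being cached. The data and the eviction policy are illustrative assumptions.

from collections import OrderedDict

CACHE_SIZE = 4                       # cache is smaller than the storage being cached
cache = OrderedDict()                # fast storage
backing_store = {addr: addr * 10 for addr in range(100)}   # slow storage (assumed data)

def read(addr):
    if addr in cache:                # hit: use the copy in fast storage directly
        return cache[addr]
    value = backing_store[addr]      # miss: fetch from the slower storage...
    cache[addr] = value              # ...and keep a copy in the cache
    if len(cache) > CACHE_SIZE:      # replacement policy: evict the oldest entry (FIFO)
        cache.popitem(last=False)
    return value

for a in [1, 2, 1, 3, 4, 5, 1]:
    read(a)
print(list(cache))                   # cache contents after the accesses above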

Computer-System Architecture
Most systems use a single general-purpose processor (PDAs through mainframes)
Most systems have special-purpose processors as well
Multiprocessor systems are growing in use and importance
Also known as parallel systems, tightly-coupled systems
Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
Two types:
1. Asymmetric Multiprocessing
2. Symmetric Multiprocessing

How a Modern Computer Works
Symmetric Multiprocessing Architecture

A Dual-Core Design

Clustered Systems

Like multiprocessor systems, but multiple systems working together
Usually sharing storage via a storage-area network (SAN)
Provides a high-availability service which survives failures
Asymmetric clustering has one machine in hot-standby mode
Symmetric clustering has multiple nodes running applications, monitoring each other
Some clusters are for high-performance computing (HPC)
Applications must be written to use parallelization
Operating System Structure
Multiprogramming is needed for efficiency
Single user cannot keep CPU and I/O devices busy at all times
Multiprogramming organizes jobs (code and data) so the CPU always has one to execute
A subset of total jobs in the system is kept in memory
One job is selected and run via job scheduling
When it has to wait (for I/O for example), the OS switches to another job
Timesharing (multitasking) is a logical extension in which the CPU switches jobs so frequently that
users can interact with each job while it is running, creating interactive computing
Response time should be < 1 second
Each user has at least one program executing in memory (a process)
If several jobs are ready to run at the same time: CPU scheduling
If processes don't fit in memory, swapping moves them in and out to run
Virtual memory allows execution of processes not completely in memory
Memory Layout for Multiprogrammed System

Operating-System Operations
Interrupt driven by hardware
Software error or request creates an exception or trap
Division by zero, request for operating system service
Other process problems include infinite loops, processes modifying each other or the operating system
Dual-mode operation allows the OS to protect itself and other system components
User mode and kernel mode
Mode bit provided by hardware
Provides ability to distinguish when the system is running user code or kernel code
Some instructions designated as privileged, only executable in kernel mode
System call changes mode to kernel, return from call resets it to user
Transition from User to Kernel Mode
Timer to prevent infinite loop / process hogging resources
Set interrupt after a specific period
Operating system decrements a counter
When the counter reaches zero, generate an interrupt
Set up before scheduling a process, to regain control or terminate a program that exceeds its allotted time
UNIT - 1

OPERATING SYSTEM FUNCTIONS

Process Management
A process is a program in execution. It is a unit of work within the system. Program is a passive
entity, process is an active entity.
Process needs resources to accomplish its task:
CPU, memory, I/O, files
Initialization data
Process termination requires reclaim of any reusable resources
Single-threaded process has one program counter specifying location of next instruction to execute
Process executes instructions sequentially, one at a time, until completion
Multi-threaded process has one program counter per thread
Typically system has many processes, some user, some operating system running concurrently on one or
more CPUs
Concurrency by multiplexing the CPUs among the processes / threads
Process Management Activities
The operating system is responsible for the following activities in connection with process
management:
Creating and deleting both user and system processes
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Providing mechanisms for deadlock handling
Memory Management
All data must be in memory before and after processing
All instructions must be in memory in order to execute
Memory management determines what is in memory when,
optimizing CPU utilization and computer response to users

Memory management activities


Keeping track of which parts of memory are currently being used and by whom
Deciding which processes (or parts thereof) and data to move into and out of memory
Allocating and deallocating memory space as needed
Storage Management
The OS provides a uniform, logical view of information storage
Abstracts physical properties to a logical storage unit – the file
Each medium is controlled by a device (e.g., disk drive, tape drive)
Varying properties include access speed, capacity, data-transfer rate, access method (sequential or
random)
File-System management
Files usually organized into directories
Access control on most systems to determine who can access what
OS activities include:
Creating and deleting files and directories
Primitives to manipulate files and directories
Mapping files onto secondary storage
Backing up files onto stable (non-volatile) storage media
Mass-Storage Management

Usually disks used to store data that does not fit in main memory or data that must be kept for a “long”
period of time
Proper management is of central importance
Entire speed of computer operation hinges on disk subsystem and its algorithms
Mass-storage activities:
Free-space management
Storage allocation
Disk scheduling
Some storage need not be fast
Tertiary storage includes optical storage, magnetic tape
Still must be managed
Varies between WORM (write-once, read-many-times) and RW (read-write)
Performance of Various Levels of Storage

Migration of Integer A from Disk to Register


Multitasking environments must be careful to use most recent value, no matter where it is stored in the
storage hierarchy
Multiprocessor environment must provide cache coherency in hardware such that all CPUs have the
most recent value in their cache
Distributed environment situation is even more complex
Several copies of a datum can exist
I/O Subsystem
One purpose of OS is to hide peculiarities of hardware devices from the user
I/O subsystem responsible for
Memory management of I/O including buffering (storing data temporarily while it is being transferred),
caching (storing parts of data in faster storage for performance), spooling (the overlapping of output of
one job with input of other jobs)
General device-driver interface
Drivers for specific hardware devices
Protection and Security
Protection – any mechanism for controlling access of processes or users to resources defined by the OS
Security – defense of the system against internal and external attacks
Huge range, including denial-of-service, worms, viruses, identity theft, theft of service
Systems generally first distinguish among users, to determine who can do what
User identities (user IDs, security IDs) include name and associated number, one per user
User ID then associated with all files, processes of that user to determine access control
Group identifier (group ID) allows set of users to be defined and controls managed, then also associated
with each process, file
Privilege escalation allows user to change to effective ID with more rights
DISTRIBUTED SYSTEMS
Computing Environments
Traditional computer
Blurring over time
Office environment
PCs connected to a network, terminals attached to mainframe or minicomputers providing batch
and timesharing
Now portals allowing networked and remote systems access to the same resources
Home networks
Used to be single systems, then modems
Now firewalled, networked
Client-Server Computing

Dumb terminals supplanted by smart PCs


Many systems are now servers, responding to requests generated by clients
Compute-server provides an interface for a client to request services (e.g. a database)
File-server provides an interface for clients to store and retrieve files
Peer-to-Peer Computing
Another model of distributed system
P2P does not distinguish clients and servers
Instead all nodes are considered peers
Each may act as client, server or both
A node must join the P2P network
Registers its service with a central lookup service on the network, or
Broadcasts a request for service and responds to requests for service via a discovery protocol
Examples include Napster and Gnutella
Web-Based Computing
Web has become
ubiquitous PCs most
prevalent devices
More devices becoming networked to allow web access
New category of devices to manage web traffic among similar servers: load balancers
Use of operating systems like Windows 95, client-side, have evolved into Linux and Windows XP,
which can be clients and servers
Open-Source Operating Systems
Operating systems made available in source-code format rather than just binary closed-source
Counter to the copy protection and Digital Rights Management (DRM) movement
Started by Free Software Foundation (FSF), which has “copyleft” GNU Public License (GPL) Examples
include GNU/Linux, BSD UNIX (including core of Mac OS X), and Sun Solaris
Operating System Services
One set of operating-system services provides functions that are helpful to the
user: User interface - Almost all operating systems have a user interface (UI)
Varies between Command-Line (CLI), Graphics User Interface (GUI), Batch
Program execution - The system must be able to load a program into memory and to run that program,
end execution, either normally or abnormally (indicating error)
I/O operations - A running program may require I/O, which may involve a file or an I/O device
File-system manipulation - The file system is of particular interest. Obviously, programs need to read
and write files and directories, create and delete them, search them, list file Information, permission
management.
A View of Operating System Services

Operating System Services


One set of operating-system services provides functions that are helpful to the user
Communications – Processes may exchange information, on the same computer or between computers
over a network Communications may be via shared memory or through message passing (packets
moved by the OS)
Error detection – OS needs to be constantly aware of possible errors May occur in the CPU and memory
hardware, in I/O devices, in user program For each type of error, OS should take the appropriate action
to ensure correct and consistent computing Debugging facilities can greatly enhance the user’s and
programmer’s abilities to efficiently use the system

Another set of OS functions exists for ensuring the efficient operation of the system itself via resource
sharing
Resource allocation - When multiple users or multiple jobs running concurrently, resources must be
allocated to each of them
Many types of resources - Some (such as CPU cycles, main memory, and file storage) may have special
allocation code, others (such as I/O devices) may have general request and release code
Accounting - To keep track of which users use how much and what kinds of computer resources
Protection and security - The owners of information stored in a multiuser or networked computer
system may want to control use of that information, concurrent processes should not interfere with each
other
Protection involves ensuring that all access to system resources is controlled
Security of the system from outsiders requires user authentication, extends to defending external I/O
devices from invalid access attempts
If a system is to be protected and secure, precautions must be instituted throughout it. A chain is only as
strong as its weakest link.
User Operating System Interface - CLI
Command Line Interface (CLI) or command interpreter allows direct command entry
Sometimes implemented in kernel, sometimes by systems programs
Sometimes multiple flavors implemented – shells
Primarily fetches a command from user and executes it
Sometimes commands built-in, sometimes just names of programs
If the latter, adding new features doesn’t require shell modification
User Operating System Interface - GUI

User-friendly desktop metaphor interface
Usually mouse, keyboard, and monitor
Icons represent files, programs, actions, etc
Various mouse buttons over objects in the interface cause various actions (provide information, options, execute function, open directory (known as a folder))
Invented at Xerox PARC
Many systems now include both CLI and GUI interfaces
Microsoft Windows is GUI with CLI “command” shell
Apple Mac OS X is “Aqua” GUI interface with UNIX kernel underneath and shells available
Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)

Bourne Shell Command Interpreter


The Mac OS X GUI

System Calls

Programming interface to the services provided by the OS
Typically written in a high-level language (C or C++)
Mostly accessed by programs via a high-level Application Program Interface (API) rather than direct system call use
Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the Java virtual machine (JVM)
Why use APIs rather than system calls? (Note that the system-call names used throughout this text are generic)
Example of System Calls
Example of Standard API
Consider the ReadFile() function in the Win32 API—a function for reading from a file
A description of the parameters passed to ReadFile():
HANDLE file—the file to be read
LPVOID buffer—a buffer where the data will be read into and written from
DWORD bytesToRead—the number of bytes to be read into the buffer
LPDWORD bytesRead—the number of bytes read during the last read
LPOVERLAPPED ovl—indicates if overlapped I/O is being used
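A short usage sketch of the call described above; the file name is illustrative only and error handling is kept minimal:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* open an existing file for reading (file name is hypothetical) */
    HANDLE file = CreateFileA("data.txt", GENERIC_READ, 0, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    char buffer[128];
    DWORD bytesRead = 0;

    if (file == INVALID_HANDLE_VALUE)
        return 1;

    /* the five ReadFile() parameters listed above, in order */
    if (ReadFile(file, buffer, sizeof(buffer), &bytesRead, NULL))
        printf("read %lu bytes\n", (unsigned long) bytesRead);

    CloseHandle(file);
    return 0;
}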

System Call Implementation
Typically, a number associated with each system call
System-call interface maintains a table indexed according to these numbers
The system call interface invokes intended system call in OS kernel and returns status of the system call and any return values
The caller need know nothing about how the system call is implemented
Just needs to obey API and understand what OS will do as a result of the call
Most details of OS interface hidden from programmer by API
Managed by run-time support library (set of functions built into libraries included with compiler)
API – System Call – OS Relationship

Standard C Library Example


System Call Parameter Passing
Often, more information is required than simply identity of desired system call
Exact type and amount of information vary according to OS and call
Three general methods used to pass parameters to the
OS Simplest: pass the parameters in registers
 In some cases, may be more parameters than registers
Parameters stored in a block, or table, in memory, and address of block passed as a parameter in a
register
This approach taken by Linux and Solaris
Parameters placed, or pushed, onto the stack by the program and popped off the stack by the
operating system
Block and stack methods do not limit the number or length of parameters being passed

Parameter Passing via Table


Types of System Calls
Process control
File management
Device management
Information maintenance
Communications
Protection

Examples of Windows and Unix System Calls

MS-DOS execution

(a) At system startup (b) running a program


FreeBSD Running Multiple Programs

System Programs
System programs provide a convenient environment for program development and execution. They can be divided into:
File manipulation
Status information
File modification
Programming language support
Program loading and execution
Communications
Application programs
Most users’ view of the operation system is defined by system programs, not the actual system calls
Provide a convenient environment for program development and execution
Some of them are simply user interfaces to system calls; others are considerably more complex
File management - Create, delete, copy, rename, print, dump, list, and generally manipulate files and
directories
Status information
Some ask the system for info - date, time, amount of available memory, disk space, number of users
Others provide detailed performance, logging, and debugging information
Typically, these programs format and print the output to the terminal or other output
devices Some systems implement a registry - used to store and retrieve configuration
information
File modification
Text editors to create and modify files
Special commands to search contents of files or perform transformations of the text
Programming-language support - Compilers, assemblers, debuggers and interpreters sometimes
provided
Program loading and execution- Absolute loaders, relocatable loaders, linkage editors, and overlay-
loaders, debugging systems for higher-level and machine language

Communications - Provide the mechanism for creating virtual connections among processes, users, and
computer systems
Allow users to send messages to one another’s screens, browse web pages, send electronic-mail
messages, log in remotely, transfer files from one machine to another

Operating System Design and Implementation


Design and Implementation of OS not “solvable”, but some approaches have proven successful
Internal structure of different Operating Systems can vary widely
Start by defining goals and specifications
Affected by choice of hardware, type of
system User goals and System goals
User goals – operating system should be convenient to use, easy to learn, reliable, safe, and fast
System goals – operating system should be easy to design, implement, and maintain, as well as flexible,
reliable, error-free, and efficient
Important principle to separate:
Policy: What will be done?
Mechanism: How to do it?
Mechanisms determine how to do something, policies decide what will be done
The separation of policy from mechanism is a very important principle, it allows maximum flexibility if
policy decisions are to be changed later
Simple Structure
MS-DOS – written to provide the most functionality in the least
space Not divided into modules
Although MS-DOS has some structure, its interfaces and levels of Functionality are not well separated

MS-DOS Layer Structure


Layered Approach

The operating system is divided into a number of layers (levels), each built on top of lower layers. The
bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of only
lower-level layers
Traditional UNIX System Structure

UNIX

UNIX – limited by hardware functionality, the original UNIX operating system had limited structuring.
The UNIX OS consists of two separable
parts Systems programs
The kernel
Consists of everything below the system-call interface and above the physical hardware
Provides the file system, CPU scheduling, memory management, and other operating-system
functions; a large number of functions for one level
Layered Operating System
Micro kernel System Structure
Moves as much from the kernel into “user” space
Communication takes place between user modules using message passing Benefits:
Easier to port the operating system to new architectures
More reliable (less code is running in kernel mode)

More secure
Detriments:
Performance overhead of user space to kernel space communication

Mac OS X Structure

Modules

Most modern operating systems implement kernel modules
Uses object-oriented approach
Each core component is separate
Each talks to the others over known interfaces
Each is loadable as needed within the kernel
Overall, similar to layers but more flexible

Solaris Modular Approach

Virtual Machines
A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the
operating system kernel as though they were all hardware
A virtual machine provides an interface identical to the underlying bare hardware
The operating system host creates the illusion that a process has its own processor and (virtual memory)
Each guest provided with a (virtual) copy of underlying computer
Virtual Machines History and Benefits
First appeared commercially in IBM mainframes in 1972
Fundamentally, multiple execution environments (different operating systems) can share the same
hardware
Protect from each other
Some sharing of file can be permitted, controlled
Communicate with each other, other physical systems via networking
Useful for development, testing
Consolidation of many low-resource use systems onto fewer busier systems
“Open Virtual Machine Format”, standard format of virtual machines, allows a VM to run within many
different virtual machine (host) platforms
Para-virtualization
Presents guest with system similar but not identical to hardware
Guest must be modified to run on paravirtualized hardware
Guest can be an OS, or in the case of Solaris 10, applications running in containers
Solaris 10 with Two Containers
VMware Architecture

The Java Virtual Machine

Operating-System Debugging

Debugging is finding and fixing errors, or bugs


OSes generate log files containing error
information
Failure of an application can generate core dump file capturing memory of the process
Operating system failure can generate crash dump file containing kernel memory
Beyond crashes, performance tuning can optimize system performance
Kernighan’s Law: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”
DTrace tool in Solaris, FreeBSD, Mac OS X allows live instrumentation on production systems
Probes fire when code is executed, capturing state data and sending it to consumers of those probes
Solaris 10 dtrace Following System Call

Operating System Generation


Operating systems are designed to run on any of a class of machines; the system must be configured for
each specific computer site
SYSGEN program obtains information concerning the specific configuration of the hardware system
Booting – starting a computer by loading the kernel
Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and start
its execution
System Boot
Operating system must be made available to hardware so hardware can start it
Small piece of code – bootstrap loader, locates the kernel, loads it into memory, and starts it
Sometimes two-step process where boot block at fixed location loads bootstrap loader
When power initialized on system, execution starts at a fixed memory location Firmware used to hold initial
boot code
UNIT -4

PROCESS AND MEMORY MANAGEMENT

Process Concept
An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Textbook uses the terms job and process almost interchangeably
Process – a program in execution; process execution must progress in sequential fashion
A process includes:
program
counter
stack
data section
Process in Memory

Process State

As a process executes, it changes state


new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Diagram of Process State
Process Control Block (PCB)

Information associated with each


process Process state
Program
counter
CPU
registers
CPU scheduling information
Memory-management
information Accounting
information
I/O status information

CPU Switch From Process to Process

Process Scheduling Queues

Job queue – set of all processes in the system


Ready queue – set of all processes residing in main memory, ready and waiting to execute
Device queues – set of processes waiting for an I/O
device Processes migrate among the various queues

Ready Queue and Various I/O Device Queues


Representation of Process Scheduling

Schedulers
Long-term scheduler (or job scheduler) – selects which processes should be brought into
the ready queue
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and
allocates CPU
Addition of Medium Term Scheduling
Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast
Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow
The long-term scheduler controls the degree of multiprogramming
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts
Context Switch
When CPU switches to another process, the system must save the state of the old process and load the
saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is overhead; the system does no useful work while switching
Time dependent on hardware support
Process Creation
Parent process creates children processes, which, in turn create other processes, forming a tree of processes
Generally, process identified and managed via a process identifier (pid)
Resource sharing:
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution:
Parent and children execute concurrently
Parent waits until children terminate
Address space:
Child duplicate of parent
Child has a program loaded into it
UNIX examples:
fork system call creates new process
exec system call used after a fork to replace the process’ memory space with a new program

Process Creation

C Program Forking Separate Process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}

A tree of processes on a typical Solaris

Process Termination
Process executes last statement and asks the operating system to delete it
(exit) Output data from child to parent (via wait)
Process’ resources are deallocated by operating system
Parent may terminate execution of children processes
(abort) Child has exceeded allocated resources
Task assigned to child is no longer required
If parent is exiting Some operating system do not allow child to continue if its parent terminates
All children terminated - cascading termination
Interprocess Communication
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes, including sharing data
Reasons for cooperating processes:
Information
sharing
Computation
speedup
Modularity
Convenience
Cooperating processes need interprocess communication
(IPC) Two models of IPC
Shared
memory
Message
passing

Communications Models

Cooperating Processes
Independent process cannot affect or be affected by the execution of another process
Cooperating process can affect or be affected by the execution of another process
Advantages of process cooperation
Information
sharing
Computation
speed-up
Modularity
Convenience
Producer-Consumer Problem
Paradigm for cooperating processes, producer process produces information that is consumed by a
consumer process
unbounded-buffer places no practical limit on the size of the buffer
bounded-buffer assumes that there is a fixed buffer size
Bounded-Buffer – Shared-Memory Solution
Shared data:

#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Solution is correct, but can only use BUFFER_SIZE-1 elements

Bounded-Buffer – Producer

while (true) {
    /* produce an item and place it in item */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded Buffer – Consumer

while (true) {
    while (in == out)
        ; // do nothing -- nothing to consume
    // remove an item from the buffer
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
Interprocess Communication – Message Passing
Mechanism for processes to communicate and to synchronize their actions
Message system – processes communicate with each other without resorting to shared variables
IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
If P and Q wish to communicate, they need
to: establish a communication link between
them exchange messages via send/receive
Implementation of communication link
physical (e.g., shared memory, hardware
bus) logical (e.g., logical properties)

Direct Communication
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from
process Q Properties of communication link
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional

Indirect Communication
Messages are directed and received from mailboxes (also referred to as
ports) Each mailbox has a unique id
Processes can communicate only if they share a
mailbox Properties of communication link
Link established only if processes share a common
mailbox A link may be associated with many processes
Each pair of processes may share several communication
links Link may be unidirectional or bi-directional
Operations
create a new mailbox
send and receive messages through
mailbox destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from
mailbox A
Mailbox sharing:
P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?
Solutions:
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select arbitrarily the receiver. Sender is notified who the receiver was.
Synchronization
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send has the sender block until the message is
received Blocking receive has the receiver block until a
message is available Non-blocking is considered
asynchronous
Non-blocking send has the sender send the message and continue
Non-blocking receive has the receiver receive a valid message or null
Buffering
Queue of messages attached to the link; implemented in one of three ways
1. Zero capacity – 0 messages
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n
messages Sender must wait if link full
3. Unbounded capacity – infinite
length Sender never waits
Examples of IPC Systems - POSIX
POSIX Shared Memory
Process first creates shared memory segment:
segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);
Process wanting access to that shared memory must attach to it:
shared_memory = (char *) shmat(segment_id, NULL, 0);
Now the process could write to the shared memory:
sprintf(shared_memory, "Writing to shared memory");
When done, a process can detach the shared memory from its address space:
shmdt(shared_memory);
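A self-contained sketch that puts the calls above together in a single process (a real producer/consumer would attach from two different processes; the segment size is illustrative):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>

int main(void)
{
    const size_t size = 4096;

    /* create the shared memory segment */
    int segment_id = shmget(IPC_PRIVATE, size, S_IRUSR | S_IWUSR);

    /* attach the segment to this process's address space */
    char *shared_memory = (char *) shmat(segment_id, NULL, 0);

    /* write to, then read from, the segment */
    sprintf(shared_memory, "Writing to shared memory");
    printf("%s\n", shared_memory);

    /* detach and remove the segment */
    shmdt(shared_memory);
    shmctl(segment_id, IPC_RMID, NULL);
    return 0;
}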
Examples of IPC Systems - Mach
Mach communication is message
based Even system calls are
messages
Each task gets two mailboxes at creation - Kernel and Notify
Only three system calls needed for message transfer: msg_send(), msg_receive(), msg_rpc()
Mailboxes needed for communication, created via port_allocate()
Examples of IPC Systems – Windows XP
Message-passing centric via local procedure call (LPC) facility
Only works between processes on the same system
Uses ports (like mailboxes) to establish and maintain communication
channels Communication works as follows:
The client opens a handle to the subsystem’s connection port object
The client sends a connection request
The server creates two private communication ports and returns the handle to one of them to the
client The client and server use the corresponding port handle to send messages or callbacks and to
listen for
replies
Local Procedure Calls in Windows XP

Communications in Client-Server Systems


Sockets
Remote Procedure Calls
Remote Method Invocation (Java)
Sockets
A socket is defined as an endpoint for communication
Concatenation of IP address and port
The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
Communication takes place between a pair of sockets
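A minimal client-side sketch using the Berkeley sockets API; the address and port simply echo the illustrative values above, and error handling is omitted:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in server;
    int sock = socket(AF_INET, SOCK_STREAM, 0);   /* one endpoint of the pair */

    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(1625);                        /* illustrative port */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);  /* illustrative host */

    /* connect() pairs this socket with the server's socket */
    if (connect(sock, (struct sockaddr *) &server, sizeof(server)) == 0) {
        const char *msg = "hello";
        write(sock, msg, strlen(msg));
    }
    close(sock);
    return 0;
}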
Socket Communication

Remote Procedure Calls


Remote procedure call (RPC) abstracts procedure calls between processes on networked systems
Stubs – client-side proxy for the actual procedure on the server
The client-side stub locates the server and marshalls the
parameters
The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server
Execution of RPC

Remote Method Invocation


Remote Method Invocation (RMI) is a Java mechanism similar to RPCs
RMI allows a Java program on one machine to invoke a method on a remote object

Marshalling Parameters
CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this
involves:
switching context
switching to user
mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first response is
produced, not output (for time-sharing environment)
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

Waiting time for P1 = 0; P2 = 24; P3 = 27


Average waiting time: (0 + 24 + 27)/3 = 17
Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:

P2  P3  P1
0   3   6   30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect: short process behind long process
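As a cross-check of the arithmetic above, a small C sketch (burst times from the first ordering of the example; all processes assumed to arrive at time 0) computes the same average waiting time:

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};          /* P1, P2, P3 arriving in that order */
    int n = 3, waiting = 0, total = 0;

    for (int i = 0; i < n; i++) {
        total += waiting;              /* process i waits for all earlier bursts */
        waiting += burst[i];
    }
    printf("average waiting time = %.2f\n", (double) total / n);   /* 17.00 */
    return 0;
}

Reordering the burst array to {3, 3, 24} reproduces the second case, giving 3.00.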
Shortest-Job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the process
with the shortest time
SJF is optimal – gives minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

SJF scheduling chart:

P4  P1  P3  P2
0   3   9   16   24

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
Determining Length of Next CPU Burst
Can only estimate the length
Can be done by using the length of previous CPU bursts, using exponential averaging:
1. t_n = actual length of the nth CPU burst
2. τ_(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_(n+1) = α t_n + (1 − α) τ_n
Prediction of the Length of the Next CPU Burst

Examples of Exponential Averaging
α = 0
τ_(n+1) = τ_n
Recent history does not count
α = 1
τ_(n+1) = t_n
Only the actual last CPU burst counts
If we expand the formula, we get:
τ_(n+1) = α t_n + (1 − α) α t_(n−1) + … + (1 − α)^j α t_(n−j) + … + (1 − α)^(n+1) τ_0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
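A minimal C sketch of the recurrence defined above; the burst sequence and the initial guess τ_0 = 10 are illustrative values, and α = 1/2 is assumed:

#include <stdio.h>

/* tau_next = alpha * t_n + (1 - alpha) * tau_n */
static double predict_next_burst(double alpha, double actual, double predicted)
{
    return alpha * actual + (1.0 - alpha) * predicted;
}

int main(void)
{
    double tau = 10.0;                          /* initial guess tau_0 (assumed) */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13}; /* illustrative CPU burst lengths */
    double alpha = 0.5;

    for (int i = 0; i < 7; i++) {
        printf("predicted %.2f, actual %.0f\n", tau, bursts[i]);
        tau = predict_next_burst(alpha, bursts[i], tau);
    }
    return 0;
}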
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
Preemptive
Nonpreemptive
SJF is a priority scheduling where priority is the predicted next CPU burst time
Problem ≡ Starvation – low priority processes may never execute
Solution ≡ Aging – as time progresses increase the priority of the process
Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of
the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
q large ⇒ FIFO
q small ⇒ q must be large with respect to context switch, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3

The Gantt chart is:

P1  P2  P3  P1  P1  P1  P1  P1
0   4   7   10  14  18  22  26  30

Typically, higher average turnaround than SJF, but better response
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum

Multilevel Queue
Ready queue is partitioned into separate
queues: foreground (interactive)
background (batch)
Each queue has its own scheduling
algorithm foreground – RR
background – FCFS
Scheduling must be done between the queues
Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility of
starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes;
i.e., 80% to foreground in RR
20% to background in FCFS

Multilevel Queue Scheduling


Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this
way Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a
process method used to determine when to
demote a process
method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS. When it gains CPU, job receives 8 milliseconds. If it
does not finish in 8 milliseconds, job is moved to queue Q1.
At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is
preempted and moved to queue Q2.
Multilevel Feedback Queues

Thread Scheduling
Distinction between user-level and kernel-level threads
Many-to-one and many-to-many models, thread library schedules user-level threads to run on
LWP Known as process-contention scope (PCS) since scheduling competition is within the
process Kernel thread scheduled onto available CPU is system-contention scope (SCS) –
competition among all threads in system
Pthread Scheduling
API allows specifying either PCS or SCS during thread creation
PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.

Pthread Scheduling API

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);   /* each thread will begin control in this function */

int main(int argc, char *argv[])
{
    int i;
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;

    /* get the default attributes */
    pthread_attr_init(&attr);
    /* set the scheduling contention scope to PROCESS or SYSTEM */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    /* set the scheduling policy - FIFO, RR, or OTHER */
    pthread_attr_setschedpolicy(&attr, SCHED_OTHER);
    /* create the threads */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);
    /* now join on each thread */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

void *runner(void *param)
{
    printf("I am a thread\n");
    pthread_exit(0);
}

Multiple-Processor Scheduling
CPU scheduling more complex when multiple CPUs are available
Homogeneous processors within a multiprocessor
Asymmetric multiprocessing – only one processor accesses the system data structures,
alleviating the need for data sharing
Symmetric multiprocessing (SMP) – each processor is self-scheduling, all processes in common
ready queue, or each has its own private queue of ready processes
Processor affinity – process has affinity for processor on which it is currently running
soft
affinity
hard
affinity

NUMA and CPU Scheduling


Multicore Processors
Recent trend to place multiple processor cores on same physical
chip Faster and consume less power
Multiple threads per core also growing
Takes advantage of memory stall to make progress on another thread while memory retrieve happens

Multithreaded Multicore System

Operating System Examples
Solaris scheduling
Windows XP scheduling
Linux scheduling
Solaris Dispatch Table

Solaris Scheduling
Windows XP Priorities

Linux Scheduling
Constant order O(1) scheduling time
Two priority ranges: time-sharing and real-time
Real-time range from 0 to 99 and nice value from 100 to 140
Priorities and Time-slice length
List of Tasks Indexed According to Priorities

Algorithm Evaluation
Deterministic modeling – takes a particular predetermined workload and defines the performance of
each algorithm for that workload
Queuing models
Implementation
Evaluation of CPU schedulers by Simulation

UNIT - 3
CONCURRENCY

Process Synchronization
To introduce the critical-section problem, whose solutions can be used to ensure the consistency of
shared data
To present both software and hardware solutions of the critical-section problem
To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity
Concurrent access to shared data may result in data inconsistency
Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating
processes
Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers.
We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is
set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the
consumer after it consumes a buffer
Producer

while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ; // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
Consumer

while (true) {
    while (count == 0)
        ; // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
Race Condition
count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2

Consider this execution interleaving with “count = 5” initially:

S0: producer execute register1 = count           {register1 = 5}
S1: producer execute register1 = register1 + 1   {register1 = 6}
S2: consumer execute register2 = count           {register2 = 5}
S3: consumer execute register2 = register2 - 1   {register2 = 4}
S4: producer execute count = register1           {count = 6}
S5: consumer execute count = register2           {count = 4}
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes that wish
to enter their critical section, then the selection of the processes that will enter the critical section next
cannot be postponed indefinitely
3.Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request
is granted Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes

Peterson’s Solution
Two process solution
Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted.
The two processes share two variables:
int turn;
Boolean
flag[2]
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical section. flag[i] = true implies
that process Pi is ready!
Algorithm for Process Pi

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);
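A runnable C sketch of the same algorithm with two threads. C11 sequentially consistent atomics are used to approximate the atomic LOAD/STORE assumption stated above (plain variables would not be safe on modern hardware), and the iteration count is arbitrary:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int flag[2];          /* flag[i] = 1: thread i is ready to enter  */
static atomic_int turn;             /* whose turn it is to defer                */
static long counter;                /* data protected by the critical section   */

static void *worker(void *arg)
{
    int i = (int)(long) arg, j = 1 - i;

    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], 1);                         /* flag[i] = TRUE */
        atomic_store(&turn, j);                            /* turn = j       */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                              /* busy wait      */
        counter++;                                         /* critical section */
        atomic_store(&flag[i], 0);                         /* flag[i] = FALSE */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *) 0L);
    pthread_create(&t1, NULL, worker, (void *) 1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld\n", counter);   /* expected 200000 with mutual exclusion */
    return 0;
}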
Synchronization Hardware
Many systems provide hardware support for critical section
code Uniprocessors – could disable interrupts
Currently running code would execute without
preemption Generally too inefficient on multiprocessor
systems
Operating systems using this not broadly scalable
Modern machines provide special atomic hardware
instructions Atomic = non-interruptable
Either test memory word and set value Or swap contents of two memory words

Solution to Critical-section Problem Using Locks

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
TestAndSet Instruction
Definition:

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Solution using TestAndSet
Shared boolean variable lock, initialized to FALSE.
Solution:

do {
    while (TestAndSet(&lock))
        ; // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
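A runnable C sketch of the same spinlock idea; C11's atomic_flag_test_and_set() is used here as a stand-in for the TestAndSet instruction, and the iteration count is arbitrary:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;    /* plays the role of the shared lock */
static long counter;

static void *worker(void *arg)
{
    (void) arg;
    for (int i = 0; i < 100000; i++) {
        /* atomic_flag_test_and_set() returns the old value and sets the flag */
        while (atomic_flag_test_and_set(&lock))
            ;                                  /* do nothing: spin              */
        counter++;                             /* critical section              */
        atomic_flag_clear(&lock);              /* lock = FALSE                  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* expected 200000 */
    return 0;
}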

Swap Instruction
Definition:

void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution using Swap


Shared Boolean variable lock initialized to FALSE; Each process has a local Boolean variable key
Solution:
do {
key = TRUE;
while ( key == TRUE)
Swap (&lock,
&key );

// critical
section lock = FALSE;
// remainder section
} while (TRUE);

Bounded-waiting Mutual Exclusion with TestAndSet()

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);

Semaphore
Synchronization tool that does not require busy waiting
Semaphore S – integer variable
Two standard operations modify S: wait() and signal()
Originally called P() and V()
Less complicated
Can only be accessed via two indivisible (atomic) operations

wait (S) {
    while (S <= 0)
        ; // no-op
    S--;
}

signal (S) {
    S++;
}

Semaphore as General Synchronization Tool
Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement
Also known as mutex locks
Can implement a counting semaphore S as a binary semaphore
Provides mutual exclusion:

Semaphore mutex;    // initialized to 1

do {
    wait (mutex);
    // Critical Section
    signal (mutex);
    // remainder section
} while (TRUE);
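A runnable C sketch of the mutual-exclusion pattern above using POSIX semaphores, where sem_wait()/sem_post() correspond to wait()/signal(); the iteration count is arbitrary:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;           /* binary semaphore, initialized to 1 */
static int shared_data;

static void *worker(void *arg)
{
    (void) arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);     /* wait (mutex)   */
        shared_data++;        /* critical section */
        sem_post(&mutex);     /* signal (mutex) */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);   /* 0 = shared between threads, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_data = %d\n", shared_data);   /* expected 200000 */
    sem_destroy(&mutex);
    return 0;
}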

Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal () on the same semaphore at
the same time
Thus, implementation becomes the critical section problem where the wait and signal code are placed in the critical section.
Could now have busy waiting in critical section
implementation But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and therefore this is not a good
solution.
Semaphore Implementation with no Busy waiting
With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data
items:
value (of type integer)
pointer to next record in
the list Two operations:
block – place the process invoking the operation on the appropriate waiting
queue. wakeup – remove one of processes in the waiting queue and place it in the
ready queue.

Implementation of wait:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

Implementation of signal:

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}

Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
Let S and Q be two semaphores initialized to 1

    P0                  P1
    wait (S);           wait (Q);
    wait (Q);           wait (S);
    ...                 ...
    signal (S);         signal (Q);
    signal (Q);         signal (S);

Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended
Priority Inversion – scheduling problem when lower-priority process holds a lock needed by higher-priority process
Classical Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem
N buffers, each can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value N

The structure of the producer process:

do {
    // produce an item in nextp
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (TRUE);

The structure of the consumer process:

do {
    wait (full);
    wait (mutex);
    // remove an item from buffer to nextc
    signal (mutex);
    signal (empty);
    // consume the item in nextc
} while (TRUE);
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write
Problem – allow multiple readers to read at the same time. Only one single writer can access the shared data at the same time
Shared Data:
Data set
Semaphore mutex initialized to 1
Semaphore wrt initialized to 1
Integer readcount initialized to 0
The structure of a writer process:

do {
    wait (wrt);
    // writing is performed
    signal (wrt);
} while (TRUE);

The structure of a reader process:

do {
    wait (mutex);
    readcount++;
    if (readcount == 1)
        wait (wrt);
    signal (mutex);
    // reading is performed
    wait (mutex);
    readcount--;
    if (readcount == 0)
        signal (wrt);
    signal (mutex);
} while (TRUE);

Dining-Philosophers Problem
Shared data:
Bowl of rice (data set)
Semaphore chopstick[5] initialized to 1
The structure of Philosopher i:

do {
    wait ( chopstick[i] );
    wait ( chopstick[ (i + 1) % 5] );
    // eat
    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5] );
    // think
} while (TRUE);
Problems with Semaphores
Incorrect use of semaphore operations:
signal (mutex) … wait (mutex)
wait (mutex) … wait (mutex)
Omitting of wait (mutex) or signal (mutex) (or both)

Monitors
A high-level abstraction that provides a convenient and effective mechanism for process synchronization
Only one process may be active within the monitor at a time

monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }

    initialization code ( …. ) { … }
}
Schematic view of a Monitor

Condition Variables

condition x, y;
Two operations on a condition variable:
x.wait () – a process that invokes the
operation is suspended.
x.signal () – resumes one of processes (if any)
that invoked x.wait ()
Monitor with Condition Variables

Solution to Dining Philosophers

monitor DP
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test (int i) {
        if ( (state[(i + 4) % 5] != EATING) &&
             (state[i] == HUNGRY) &&
             (state[(i + 1) % 5] != EATING) ) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

Each philosopher i invokes the operations pickup() and putdown() in the following sequence:
DiningPhilosophers.pickup(i);
EAT
DiningPhilosophers.putdown(i);

Monitor Implementation Using Semaphores
Variables:

semaphore mutex;     // (initially = 1)
semaphore next;      // (initially = 0)
int next_count = 0;

Each procedure F will be replaced by:

wait(mutex);
    …
    body of F;
    …
if (next_count > 0)
    signal(next)
else
    signal(mutex);

Mutual exclusion within a monitor is ensured.

Monitor Implementation
For each condition variable x, we have:

semaphore x_sem;    // (initially = 0)
int x_count = 0;

The operation x.wait can be implemented as:

x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;

The operation x.signal can be implemented as:

if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
A Monitor to Allocate Single Resource

monitor ResourceAllocator
{
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = TRUE;
    }

    void release() {
        busy = FALSE;
        x.signal();
    }

    initialization code() {
        busy = FALSE;
    }
}

Synchronization Examples
Solaris
Windows XP
Linux
Pthreads
Solaris Synchronization
Implements a variety of locks to support multitasking, multithreading (including real-time
threads), and multiprocessing
Uses adaptive mutexes for efficiency when protecting data from short code segments
Uses condition variables and readers-writers locks when longer sections of code need access to data
Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or reader-writer
lock

Windows XP Synchronization
Uses interrupt masks to protect access to global resources on uniprocessor
systems Uses spinlocks on multiprocessor systems
Also provides dispatcher objects which may act as either mutexes and
semaphores Dispatcher objects may also provide events
An event acts much like a condition variable
Linux Synchronization
Linux:
Prior to kernel Version 2.6, disables interrupts to implement short critical sections
Version 2.6 and later, fully preemptive
Linux provides:
semaphores
spin locks

Pthreads Synchronization
Pthreads API is OS-independent
It provides:
mutex locks
condition variables
Non-portable extensions include:
read-write locks
spin locks

Atomic
Transactions
System Model
Log-based
Recovery
Checkpoints
Concurrent Atomic Transactions

System Model
Assures that operations happen as a single logical unit of work, in its entirety, or not at all
Related to field of database systems
Challenge is assuring atomicity despite computer system failures
Transaction - collection of instructions or operations that performs single logical function
Here we are concerned with changes to stable storage – disk
Transaction is series of read and write operations
Terminated by commit (transaction successful) or abort (transaction failed) operation Aborted
transaction must be rolled back to undo any changes it performed
Types of Storage Media
Volatile storage – information stored here does not survive system
crashes Example: main memory, cache
Nonvolatile storage – Information usually survives
crashes Example: disk and tape
Stable storage – Information never lost
Not actually possible, so approximated via replication or RAID to devices with independent failure
modes
Goal is to assure transaction atomicity where failures cause loss of information on volatile storage

Log-Based Recovery
Record to stable storage information about all modifications by a
transaction Most common is write-ahead logging
Log on stable storage, each log record describes single transaction write operation, including
Transaction name
Data item
name Old
value
New value
<Ti starts> written to log when transaction Ti starts
<Ti commits> written when Ti commits
Log entry must reach stable storage before operation on data occurs

Log-Based Recovery Algorithm


Using the log, system can handle any volatile memory
errors Undo(Ti) restores value of all data updated
by Ti
Redo(Ti) sets values of all data in transaction T i to new
values Undo(Ti) and redo(Ti) must be idempotent
Multiple executions must have the same result as one execution
If system fails, restore state of all updated data via log
If log contains <Ti starts> without <Ti commits>, undo(Ti)
If log contains <Ti starts> and <Ti commits>, redo(Ti)

Checkpoints
Log could become long, and recovery could take
long Checkpoints shorten log and recovery time.
Checkpoint scheme:
1.Output all log records currently in volatile storage to stable
storage 2.Output all modified data from volatile to stable storage
3. Output a log record <checkpoint> to the log on stable storage
Now recovery only includes Ti, such that Ti started executing before the most recent checkpoint, and all
transactions after Ti All other transactions already on stable storage

Concurrent Transactions
Must be equivalent to serial execution –
serializability Could perform all transactions in
critical section Inefficient, too restrictive
Concurrency-control algorithms provide serializability

Serializability

Consider two data items A and


B Consider Transactions T0 and
T1 Execute T0, T1 atomically
Execution sequence called
schedule
Atomically executed transaction order called serial
schedule For N transactions, there are N! valid serial
schedules
Schedule 1: T0 then T1

Nonserial Schedule
Nonserial schedule allows overlapped execution
Resulting execution not necessarily incorrect
Consider schedule S, operations Oi, Oj
Conflict if access same data item, with at least one write
If Oi, Oj consecutive and operations of different transactions & O i and Oj don’t conflict
Then S’ with swapped order Oj Oi equivalent to S
If S can become S’ via swapping nonconflicting
operations S is conflict serializable
Schedule 2: Concurrent Serializable Schedule

Locking Protocol

Ensure serializability by associating lock with each data


item Follow locking protocol for access control
Locks
Shared – Ti has shared-mode lock (S) on item Q, T i can read Q but not write Q
Exclusive – Ti has exclusive-mode lock (X) on Q, T i can read and write Q
Require every transaction on item Q acquire appropriate lock
If lock already held, new request may have to
wait Similar to readers-writers algorithm

Two-phase Locking Protocol


Generally ensures conflict serializability
Each transaction issues lock and unlock requests in two
phases Growing – obtaining locks
Shrinking – releasing
locks Does not prevent
deadlock
Timestamp-based Protocols
Select order among transactions in advance – timestamp-
ordering Transaction Ti associated with timestamp TS(T i) before
Ti starts TS(Ti) < TS(Tj) if Ti entered system before Tj
TS can be generated from system clock or as logical counter incremented at each entry of transaction
Timestamps determine serializability order
If TS(Ti) < TS(Tj), system must ensure produced schedule equivalent to serial schedule where T i
appears before Tj
Timestamp-based Protocol Implementation

Data item Q gets two timestamps


W-timestamp(Q) – largest timestamp of any transaction that executed write(Q) successfully
R-timestamp(Q) – largest timestamp of successful read(Q)
Updated whenever read(Q) or write(Q) executed
Timestamp-ordering protocol assures any conflicting read and write executed in timestamp order
Suppose Ti executes read(Q)

If TS(Ti) < W-timestamp(Q), Ti needs to read a value of Q that was already overwritten; read operation rejected and Ti rolled back
If TS(Ti) ≥ W-timestamp(Q), read executed, R-timestamp(Q) set to max(R-timestamp(Q), TS(Ti))
Timestamp-ordering Protocol

Suppose Ti executes write(Q):
If TS(Ti) < R-timestamp(Q), value Q produced by Ti was needed previously and Ti assumed it would never be produced; write operation rejected, Ti rolled back
If TS(Ti) < W-timestamp(Q), Ti attempting to write obsolete value of Q; write operation rejected and Ti rolled back
Otherwise, write executed
Any rolled back transaction Ti is assigned new timestamp and restarted
Algorithm ensures conflict serializability and freedom from deadlock
Schedule Possible Under Timestamp Protocol
UNIT IV

Memory Management

To provide a detailed description of various ways of organizing memory hardware


To discuss various memory-management techniques, including paging and segmentation
To provide a detailed description of the Intel Pentium, which supports both pure segmentation and
segmentation with paging
Program must be brought (from disk) into memory and placed within a process for it to be run
Main memory and registers are only storage CPU can access directly
Register access in one CPU clock (or
less) Main memory can take many
cycles
Cache sits between main memory and CPU registers
Protection of memory required to ensure correct
operation

Base and Limit Registers

A pair of base and limit registers define the logical address space
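A minimal C sketch of the legality check the hardware performs with these two registers; the base and limit values are illustrative:

#include <stdbool.h>
#include <stdio.h>

/* A logical reference is legal only if base <= address < base + limit. */
static bool address_is_legal(unsigned long address,
                             unsigned long base, unsigned long limit)
{
    return address >= base && address < base + limit;
}

int main(void)
{
    unsigned long base = 300040, limit = 120900;   /* illustrative register contents */
    printf("%d\n", address_is_legal(300040, base, limit));   /* 1: within the space  */
    printf("%d\n", address_is_legal(420940, base, limit));   /* 0: trap to the OS    */
    return 0;
}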

Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can happen at three different stages
Compile time: If memory location known a priori, absolute code can be generated; must recompile
code if starting location changes
Load time: Must generate relocatable code if memory location is not known at compile time
Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., base and limit registers)
Multistep Processing of a User Program

Logical vs. Physical Address Space


The concept of a logical address space that is bound to a separate physical address space is central to
proper memory management
Logical address – generated by the CPU; also referred to as virtual
address Physical address – address seen by the memory unit
Logical and physical addresses are the same in compile-time and load-time address-binding schemes;
logical (virtual) and physical addresses differ in execution-time address-binding scheme
Memory-Management Unit (MMU)
Hardware device that maps virtual to physical address
In MMU scheme, the value in the relocation register is added to every address generated by a user
process at the time it is sent to memory
The user program deals with logical addresses; it never sees the real physical addresses
Dynamic relocation using a relocation register

Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused routine is never loaded
Useful when large amounts of code are needed to handle infrequently occurring cases
No special support from the operating system is required; implemented through program design

Dynamic Linking
Linking postponed until execution time
Small piece of code, stub, used to locate the appropriate memory-resident library
routine Stub replaces itself with the address of the routine, and executes the routine
Operating system needed to check if routine is in processes’ memory
address Dynamic linking is particularly useful for libraries
System also known as shared libraries

Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed
Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and Windows)
System maintains a ready queue of ready-to-run processes which have memory images on disk
Schematic View of Swapping

Contiguous Allocation
Main memory usually divided into two partitions:
Resident operating system, usually held in low memory with interrupt vector
User processes then held in high memory
Relocation registers used to protect user processes from each other, and from changing operating-system code and data
Base register contains value of smallest physical address
Limit register contains range of logical addresses – each logical address must be less than the limit
register
MMU maps logical address dynamically
Hardware Support for Relocation and Limit Registers

Multiple-partition allocation
Hole – block of available memory; holes of various size are scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to accommodate it
Operating system maintains information about:
a) allocated partitions
b) free partitions (hole)
(Figure: memory snapshots over time – the OS and processes 5, 8, and 2 are resident; process 8 terminates leaving a hole, and processes 9 and 10 are later allocated into the freed space.)

Dynamic Storage-Allocation Problem


First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
Produces the smallest leftover hole
Worst-fit: Allocate the largest hole; must also search entire list
Produces the largest leftover hole
First-fit and best-fit better than worst-fit in terms of speed and storage utilization
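A minimal C sketch of first-fit hole selection; the hole list and request size are illustrative, and best-fit or worst-fit would scan the same list with a different selection rule:

#include <stdio.h>

/* A hole in memory: start address and size (illustrative units) */
struct hole { int start; int size; };

/* First-fit: return the index of the first hole large enough, or -1 */
static int first_fit(struct hole holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request)
            return i;
    return -1;
}

int main(void)
{
    struct hole holes[] = { {100, 50}, {300, 200}, {600, 120} };
    int i = first_fit(holes, 3, 130);
    if (i >= 0)
        printf("allocate at %d, leftover hole of %d\n",
               holes[i].start, holes[i].size - 130);   /* allocate at 300 */
    return 0;
}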
Fragmentation
External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous
Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size
difference is memory internal to a partition, but not being used
Reduce external fragmentation by compaction
Shuffle memory contents to place all free memory together in one large block
Compaction is possible only if relocation is dynamic, and is done at execution time.
I/O problem
Latch job in memory while it is involved in I/O
Do I/O only into OS buffers

Paging

Logical address space of a process can be noncontiguous; process is allocated physical memory
whenever the latter is available
Divide physical memory into fixed-sized blocks called frames (size is power of 2, between 512 bytes
and 8,192 bytes)
Divide logical memory into blocks of same size called pages
Keep track of all free frames
To run a program of size n pages, need to find n free frames and load the program
Set up a page table to translate logical to physical addresses
Internal fragmentation

Address Translation Scheme

Address generated by CPU is divided into


Page number (p) – used as an index into a page table which contains base address of each
page in physical memory
Page offset (d) – combined with base address to define the physical memory address that is sent to the
memory unit
For a given logical address space of size 2^m and page size 2^n, the page number has m – n bits and the page offset has n bits

Paging Hardware

  page number   page offset
       p             d
   (m - n bits)   (n bits)
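A small sketch of this page-number / page-offset split follows; the values m = 16, n = 10 and the page-table contents are assumptions chosen purely for illustration.

# Sketch of the p/d split for an m-bit logical address with 2^n-byte pages.
m, n = 16, 10
page_table = {0: 5, 1: 6, 2: 1, 3: 2}           # hypothetical page -> frame mapping

def translate(logical_address):
    p = logical_address >> n                     # page number: high-order m-n bits
    d = logical_address & ((1 << n) - 1)         # page offset: low-order n bits
    frame = page_table[p]                        # page-table lookup (one memory access)
    return (frame << n) | d                      # physical address = frame base + offset

print(translate(0x0C07))   # page 3, offset 7 -> frame 2, offset 7 = 2055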

Paging Model of Logical and Physical Memory


Paging Example

32-byte memory and 4-byte pages

Free Frames
Implementation of Page Table

Page table is kept in main memory


Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates size of the page table
In this scheme every data/instruction access requires two memory accesses. One for the page table and
one for the data/instruction.
The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers (TLBs)
Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely identifies
each process to provide address-space protection for that process

Associative Memory
Associative memory – parallel search
Address translation for (p, d):
If p is in an associative register, get the frame # out
Otherwise get the frame # from the page table in memory

Page # Frame #

Paging Hardware With TLB


Effective Access Time

Associative lookup = ε time units


Assume memory cycle time is 1 microsecond
Hit ratio – percentage of times that a page number is found in the associative registers; ratio related to
number of associative registers
Hit ratio = α
Effective Access Time (EAT):
EAT = (1 + ε)α + (2 + ε)(1 – α)
    = 2 + ε – α
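As a worked example (values assumed for illustration, with the 1-microsecond memory cycle above): if the associative lookup takes ε = 0.2 microseconds and the hit ratio is α = 0.8, then EAT = (1 + 0.2)(0.8) + (2 + 0.2)(0.2) = 0.96 + 0.44 = 1.4 microseconds, which agrees with 2 + ε – α = 2 + 0.2 – 0.8 = 1.4.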
Memory Protection
Memory protection implemented by associating protection bit with each frame
Valid-invalid bit attached to each entry in the page table:
“valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page
“invalid” indicates that the page is not in the process’ logical address space
Valid (v) or Invalid (i) Bit In A Page Table

Shared Pages

Shared code
One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window
systems).
Shared code must appear in same location in the logical address space of all processes
Private code and data
Each process keeps a separate copy of the code and data
The pages for the private code and data can appear anywhere in the logical address space
Shared Pages Example

Structure of the Page Table

Hierarchical Paging
Hashed Page Tables
Inverted Page Tables

Hierarchical Page Tables

Break up the logical address space into multiple page tables
A simple technique is a two-level page table

Two-Level Page-Table Scheme


Two-Level Paging Example
A logical address (on 32-bit machine with 1K page size) is divided into:
a page number consisting of 22 bits
a page offset consisting of 10 bits
Since the page table is paged, the page number is further divided into:
a 12-bit page number
a 10-bit page offset
Thus, a logical address is as follows:

where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table

   page number        page offset
   p1        p2            d
   12        10           10
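A small sketch of this split follows; the sample address is hypothetical and used only to show the bit manipulation.

# Sketch: splitting a 32-bit logical address with 1K (2^10-byte) pages into
# p1 (12 bits), p2 (10 bits) and d (10 bits), as in the layout above.
def split(addr):
    d  =  addr        & 0x3FF     # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF     # next 10 bits: index into the inner page table
    p1 = (addr >> 20) & 0xFFF     # high 12 bits: index into the outer page table
    return p1, p2, d

print(split(0x00C01404))   # illustrative address -> (12, 5, 4)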

Address-Translation Scheme

Three-level Paging Scheme



Hashed Page Tables

Common in address spaces > 32 bits


The virtual page number is hashed into a page table
This page table contains a chain of elements hashing to the same location
Virtual page numbers are compared in this chain, searching for a match
If a match is found, the corresponding physical frame is extracted
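A minimal sketch of such a hashed page table follows; the table size and the inserted (page, frame) pairs are illustrative assumptions.

# Sketch of a hashed page table: each bucket holds a chain of
# (virtual page number, frame number) pairs that hash to the same slot.
TABLE_SIZE = 8
hashed_table = [[] for _ in range(TABLE_SIZE)]

def insert(vpn, frame):
    hashed_table[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in hashed_table[vpn % TABLE_SIZE]:   # walk the chain
        if entry_vpn == vpn:                                   # match found
            return frame
    return None                                                # page fault

insert(3, 17); insert(11, 42)     # 3 and 11 collide when TABLE_SIZE = 8
print(lookup(11))                 # -> 42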

Hashed Page Table

Inverted Page Table

One entry for each real page of memory


Entry consists of the virtual address of the page stored in that real memory location, with information
about the process that owns that page
Decreases memory needed to store each page table, but increases time needed to search the table when
a page reference occurs
Use hash table to limit the search to one — or at most a few — page-table entries

Inverted Page Table Architecture

Segmentation
Memory-management scheme that supports the user view of memory
A program is a collection of segments
A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays

User’s View of a Program

Logical View of Segmentation


Segmentation Architecture
Logical address consists of a two-tuple:
o <segment-number, offset>
Segment table – maps the two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table's location in memory
Segment-table length register (STLR) indicates the number of segments used by a program;
segment number s is legal if s < STLR
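A minimal sketch of this translation follows; the segment-table contents are assumptions used only to illustrate the STLR and limit checks.

# Sketch of segment-table translation with the STLR and limit checks described above.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]   # (base, limit) per segment
STLR = len(segment_table)

def translate(s, offset):
    if s >= STLR:
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[s]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # segment 2, offset 53 -> physical address 4353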
Protection
With each entry in the segment table associate:
validation bit = 0 ⇒ illegal segment
read/write/execute privileges
Protection bits associated with segments; code sharing occurs at segment level
Since segments vary in length, memory allocation is a dynamic storage-allocation problem
A segmentation example is shown in the following diagram
Segmentation Hardware

Example of Segmentation

Example: The Intel Pentium


Supports both segmentation and segmentation with paging
CPU generates logical address
Given to segmentation unit
Which produces linear addresses
Linear address given to paging unit
Which generates physical address in main memory
Paging units form the equivalent of the MMU

Logical to Physical Address Translation in Pentium

Intel Pentium Segmentation

Pentium Paging Architecture

Linear Address in Linux

Three-level Paging in Linux

UNIT – 5

VIRTUAL MEMORY

Objective
To describe the benefits of a virtual memory system.

To explain the concepts of demand paging, page-replacement algorithms, and allocation of page
frames.

To discuss the principle of the working-set model.

Virtual Memory
Virtual memory is a technique that allows the execution of processes that may not be completely in
memory. The main visible advantage of this scheme is that programs can be larger than physical memory.

Virtual memory is the separation of user logical memory from physical memory. This separation allows an
extremely large virtual memory to be provided for programmers when only a smaller physical memory
is available (Fig.).
Following are the situations, when entire program is not required to load fully.
1. User written error handling routines are used only when an error occurs in the data or computation.
2. Certain options and features of a program may be used rarely.
3. Many tables are assigned a fixed amount of address space even though only a small amount of
the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits.
1. Fewer I/O operations would be needed to load or swap each user program into memory.
2. A program would no longer be constrained by the amount of physical memory that is available.
3. Each user program could take less physical memory, so more programs could be run at the same time,
with a corresponding increase in CPU utilization and throughput.

Fig. Diagram showing virtual memory that is larger than physical memory.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a


segmentation system. Demand segmentation can also be used to provide virtual memory.

Demand Paging

A demand-paging system is similar to a paging system with swapping (Fig 5.2). When we want to execute a
process, we swap it into memory; rather than swapping the entire process in, however, we use a pager.

When a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again. Instead of swapping in a whole process, the pager brings only the necessary pages into
memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time
and the amount of physical memory needed.

Hardware support is required to distinguish between the pages that are in memory and the pages that are on
disk, using the valid-invalid bit scheme. Valid and invalid pages can be distinguished by checking this bit; marking
a page invalid has no effect if the process never attempts to access that page. While the process
executes and accesses pages that are memory resident, execution proceeds normally.

Fig. Transfer of a paged memory to continuous disk space

Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's
failure to bring the desired page into memory. But page fault can be handled as following (Fig 5.3):

Fig. Steps in handling a page fault

1. We check an internal table for this process to determine whether the reference was a valid or
invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet
brought in that page, we now page it in.
3. We find a free frame.

4. We schedule a disk operation to read the desired page into the newly allocated frame.

5. When the disk read is complete, we modify the internal table kept with the process and the page
table to indicate that the page is now in memory.

6. We restart the instruction that was interrupted by the illegal address trap. The process can now
access the page as though it had always been memory.

Therefore, the operating system reads the desired page into memory and restarts the process as though
the page had always been in memory.

Page replacement is used to free frames that are not currently in use. If no frame is free, a page that is
currently in memory is selected as a victim and replaced.

Advantages of Demand Paging:


1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming. There is no limit on degree of multiprogramming.

Disadvantages of Demand Paging:


1. The number of tables and the amount of processor overhead for handling page interrupts are greater than
in the case of simple paged management techniques.
2. Due to the lack of explicit constraints on a job's address-space size, memory may be overcommitted.

Page Replacement Algorithm


There are many different page replacement algorithms. We evaluate an algorithm by running it on a
particular string of memory reference and computing the number of page faults. The string of memory references
is called reference string. Reference strings are generated artificially or by tracing a given system and recording
the address of each memory reference. The latter choice produces a large number of data.

1. For a given page size we need to consider only the page number, not the entire address.

2. If we have a reference to a page p, then any immediately following references to page p will never
cause a page fault. Page p will be in memory after the first reference; the immediately following
references will not fault.

Eg:- consider the address sequence


0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101, 0610, 0102,

0103, 0104, 0104, 0101, 0609, 0102, 0105
which, at 100 bytes per page, reduces to the reference string 1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1

To determine the number of page faults for a particular reference string and page replacement algorithm,
we also need to know the number of page frames available. As the number of frames available increase, the
number of page faults will decrease.

FIFO Algorithm

The simplest page-replacement algorithm is a FIFO algorithm. A FIFO replacement algorithm associates
with each page the time when that page was brought into memory. When a page must be replaced, the oldest
page is chosen. We can create a FIFO queue to hold all pages in memory.
Consider the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1 with 3 frames. The first three
references (7, 0, 1) cause page faults and are brought into the empty frames. The reference to page 2 replaces
page 7, since page 7 was brought in first; the first reference to page 3 then replaces page 0. This replacement
means that the next reference, to 0, will fault, and page 1 is then replaced by page 0.
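A short sketch of counting FIFO page faults on the reference string above; the helper name is illustrative.

# Sketch: page-fault count for FIFO replacement with 3 frames.
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:        # replace the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1]
print(fifo_faults(ref, 3))    # -> 12 page faults for this 17-reference string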

Optimal Algorithm

An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An optimal page-
replacement algorithm exists, and has been called OPT or MIN. It is simply:
Replace the page that will not be used for the longest period of time.

Now consider the same string with 3 empty frames.


The reference to page 2 replaces page 7, because 7 will not be used until reference 15, whereas page 0
will be used at 5, and page 1 at 14. The reference to page 3 replaces page 1, as page 1 will be the last of the
three pages in memory to be referenced again. Optimal replacement is much better than a FIFO.

The optimal page-replacement algorithm is difficult to implement, because it requires future knowledge of
the reference string.

LRU Algorithm

The FIFO algorithm uses the time when a page was brought into memory; the OPT algorithm uses the
time when a page is to be used. In LRU replace the page that has not been used for the longest period of time.

LRU replacement associates with each page the time of that page's last use. When a page must be
replaced, LRU chooses that page that has not been used for the longest period of time.

Let SR be the reverse of a reference string S, then the page-fault rate for the OPT algorithm on S is the
same as the page-fault rate for the OPT algorithm on SR.
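The following sketch counts LRU page faults on the same reference string, keeping pages ordered by recency of use; the helper name is illustrative.

# Sketch: page-fault count for LRU replacement (least recently used page at the
# front of the list, most recently used at the back).
def lru_faults(reference_string, num_frames):
    frames, faults = [], 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)          # page was used: move it to the back
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)            # evict the least recently used page
        frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1]
print(lru_faults(ref, 3))    # -> 11 page faults with 3 frames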

LRU Approximation Algorithms

Some systems provide no hardware support, and other page-replacement algorithm. Many systems
provide some help, however, in the form of a reference bit. The reference bit for a page is set, by the hardware,
whenever that page is referenced. Reference bits are associated with each entry in the page table Initially, all bits
are cleared (to 0) by the operating system. As a user process executes, the bit associated with each page
referenced is set (to 1) by the hardware.

Additional-Reference-Bits Algorithm

The operating system shifts the reference bit for each page into the high-order bit of its 8-bit byte, shifting
the other bits right 1 bit and discarding the low-order bit.
These 8-bit shift registers contain the history of page use for the last eight time periods. If the shift register
contains 00000000, then the page has not been used for eight time periods; a page that is used at least once
each period would have a shift register value of 11111111.

Second-Chance Algorithm

The basic algorithm of second-chance replacement is a FIFO replacement algorithm. When a page gets a
second chance, its reference bit is cleared and its arrival time is reset to the current time.
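A minimal sketch of the second-chance (clock) victim selection follows; the frame contents and reference bits are illustrative assumptions.

# Sketch of second-chance (clock) replacement: a circular scan that clears
# reference bits and replaces the first page found with its bit already 0.
def second_chance_victim(frames, ref_bits, hand):
    """frames: list of pages; ref_bits: parallel list of 0/1 bits; hand: clock position."""
    while True:
        if ref_bits[hand] == 0:
            return hand                      # victim frame found
        ref_bits[hand] = 0                   # give this page a second chance
        hand = (hand + 1) % len(frames)

frames   = [3, 7, 9, 2]
ref_bits = [1, 0, 1, 1]
print(second_chance_victim(frames, ref_bits, 0))   # -> 1 (page 7 is the victim)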

Enhanced Second-Chance Algorithm

The second-chance algorithm described above can be enhanced by considering both the reference bit
and the modify bit as an ordered pair.

1. (0,0) neither recently used nor modified – best page to replace.

2. (0,1) not recently used but modified – not quite as good, because the page will need to be
written out before replacement.
3. (1,0) recently used but clean – probably will be used again soon.
4. (1,1) recently used and modified – probably will be used again, and a write-out will be needed
before replacing it.

Counting Algorithms

There are many other algorithms that can be used for page replacement.

• LFU Algorithm: The least frequently used (LFU) page-replacement algorithm requires that the page
with the smallest count be replaced. This algorithm suffers from the situation in which a page is used
heavily during the initial phase of a process, but then is never used again.

• MFU Algorithm: The most frequently used (MFU) page-replacement algorithm is based on the
argument that the page with the smallest count was probably just brought in and has yet to be used.

Page Buffering Algorithm


When a page fault occurs, a victim frame is chosen as before. However, the desired page is read into a
free frame from the pool before the victim is written out.
This procedure allows the process to restart as soon as possible, without waiting for the victim page to be written
out. When the victim is later written out, its frame is added to the free-frame pool.
When the FIFO replacement algorithm mistakenly replaces a page that is still in active use, that page is
quickly retrieved from the free-frame buffer, and no I/O is necessary. The free-frame buffer provides
protection against the relatively poor, but simple, FIFO replacement algorithm.

UNIT VI

Principles of deadlock

To develop a description of deadlocks, which prevent sets of concurrent processes from completing their tasks.
To present a number of different methods for preventing or avoiding deadlocks in a computer system.
The Deadlock Problem

A set of blocked processes each holding a resource and waiting to acquire a resource held by another process in
the set
Example
System has 2 disk drives
P1 and P2 each hold one disk drive and each needs another one
Example
semaphores A and B, initialized to 1
P0 P1

wait (A); wait(B)


wait (B); wait(A)

Bridge Crossing Example

Traffic only in one direction


Each section of a bridge can be viewed as a resource
If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback)
Several cars may have to be backed up if a deadlock occurs
Starvation is possible
Note – Most OSes do not prevent or deal with deadlocks

System Model
Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances
Each process utilizes a resource as follows:
request
use
release

Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously


Mutual exclusion: only one process at a time can use a resource
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other
processes
No preemption: a resource can be released only voluntarily by the process holding it, after that process has
completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held
by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn,
and Pn is waiting for a resource that is held by P0.
Resource-Allocation Graph

A set of vertices V and a set of edges E


V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system R =
{R1, R2, …, Rm}, the set consisting of all resource types in the system
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi

Process

Resource Type with 4 instances

Pi → Rj : Pi requests an instance of Rj

Rj → Pi : Pi is holding an instance of Rj
Example of a Resource Allocation Graph

Resource Allocation Graph With A Deadlock

Graph With A Cycle But No Deadlock

Basic Facts
If the graph contains no cycles ⇒ no deadlock
If the graph contains a cycle ⇒
if only one instance per resource type, then deadlock
if several instances per resource type, possibility of deadlock
Methods for Handling Deadlocks
Ensure that the system will never enter a deadlock state
Allow the system to enter a deadlock state and then recover
Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX
Deadlock Prevention
Restrain the ways request can be made
Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other
resources
Require process to request and be allocated all its resources before it begins execution, or allow process to
request resources only when the process has none
Low resource utilization; starvation possible
No Preemption –
If a process that is holding some resources requests another resource that cannot be immediately allocated to it,
then all resources currently being held are released
Preempted resources are added to the list of resources for which the process is waiting
Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting
Circular Wait – impose a total ordering of all resource types, and require that each process requests resources
in an increasing order of enumeration

Deadlock Avoidance
Requires that the system has some additional a priori
information available
Simplest and most useful model requires that each process declare the maximum number of resources of each
type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can
never be a circular-wait condition
Resource-allocation state is defined by the number of available and allocated resources, and the maximum
demands of the processes
Safe State
When a process requests an available resource, system must decide if immediate allocation leaves the system in
a safe state
System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that for
each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the
resources held by all the Pj, with j < i. That is:
If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished
When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate
When Pi terminates, Pi +1 can obtain its needed resources, and so on
Basic Facts
If a system is in a safe state ⇒ no deadlocks
If a system is in an unsafe state ⇒ possibility of deadlock
Avoidance ⇒ ensure that a system will never enter an unsafe state
Safe, Unsafe , Deadlock State

Avoidance algorithms
Single instance of a resource type: use a resource-allocation graph
Multiple instances of a resource type: use the banker's algorithm

Resource-Allocation Graph Scheme
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line
Claim edge converts to a request edge when the process requests the resource
Request edge converts to an assignment edge when the resource is allocated to the process
When a resource is released by a process, the assignment edge reconverts to a claim edge
Resources must be claimed a priori in the system

Resource-Allocation Graph

Unsafe State In Resource-Allocation Graph

Resource-Allocation Graph Algorithm
Suppose that process Pi requests a resource Rj.
The request can be granted only if converting the request edge to an assignment edge does not result in the
formation of a cycle in the resource-allocation graph

Banker’s Algorithm
Multiple instances
Each process must a priori claim maximum use
When a process requests a resource it may have to wait
When a process gets all its resources it must return them in a finite amount of time

Data Structures for the Banker’s Algorithm


Let n = number of processes, and m = number of resource types.
Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task

Need [i,j] = Max[i,j] – Allocation [i,j]


Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n – 1
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
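A minimal sketch of this safety check follows, using the data from the Banker's-algorithm example below; variable names are illustrative.

# Sketch of the safety algorithm above (vectors/matrices as Python lists).
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work, finish, sequence = list(available), [False] * n, []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes and releases its resources
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence

# Snapshot from the example below (columns A, B, C).
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maximum    = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
available  = [3, 3, 2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe(available, allocation, need))   # safe; one safe order is P1, P3, P4, P0, P2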

Resource-Request Algorithm for Process Pi

Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
If safe ⇒ the resources are allocated to Pi
If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored

Example of Banker’s Algorithm

5 processes P0 through P4; 3 resource types:
A (10 instances), B (5 instances), and C (7 instances)
Snapshot at time T0:
Allocation Max Available
ABC ABC ABC
P0 010 753 332
P1 200 322
P2 302 902
P3 211 222
P4 002 433

The content of the matrix Need is defined to be Max – Allocation
Need
ABC
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety criteria

Example: P1 Request (1,0,2)


Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true
     Allocation   Need   Available
     ABC          ABC    ABC
P0   010          743    230
P1   302          020
P2   302          600
P3   211          011
P4   002          431
Executing safety algorithm shows that sequence < P1, P3, P4, P0, P2> satisfies safety requirement
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?

Deadlock Detection
Allow the system to enter a deadlock state
Detection algorithm
Recovery scheme

Single Instance of Each Resource Type
Maintain a wait-for graph
Nodes are processes
Pi → Pj if Pi is waiting for Pj
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock
An algorithm to detect a cycle in a graph requires an order of n^2 operations, where n is the number of vertices in the graph

Resource-Allocation Graph and Wait-for Graph

Resource-Allocation Graph Corresponding wait-for graph


Several Instances of a Resource Type
Available: a vector of length m indicates the number of available resources of each type.
Allocation: an n x m matrix defines the number of resources of each type currently allocated to each process.
Request: an n x m matrix indicates the current request of each process. If Request[i,j] = k, then process Pi is
requesting k more instances of resource type Rj.

Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock state. Moreover, if
Finish[i] == false, then Pi is deadlocked

The algorithm requires an order of O(m x n^2) operations to detect whether the system is in a deadlocked state
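A minimal sketch of this detection algorithm follows, using the second snapshot from the detection example below; variable names are illustrative.

# Sketch of the detection algorithm above; deadlocked processes are those
# left with Finish[i] == false.
def detect_deadlock(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi can finish and release its resources
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]  # indices of deadlocked processes

# Second snapshot of the detection example below (after P2 requests one more C).
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,1], [0,0,1], [1,0,0], [0,0,2]]
print(detect_deadlock([0,0,0], allocation, request))   # -> [1, 2, 3, 4]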

Example of Detection Algorithm
Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances)
Snapshot at time T0:
Allocation Request Available
ABC ABC ABC
P0 010 000 000
P1 200 202
P2 303 000
P3 211 100
P4 002 002
Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i

P2 requests an additional instance of type C

Request
ABC
P0 000
P1 201
P2 001
P3 100
P4 002
State of system?
Can reclaim resources held by process P0, but there are insufficient resources to fulfill the other processes' requests
Deadlock exists, consisting of processes P1, P2, P3, and P4

Detection-Algorithm Usage
When, and how often, to invoke depends on:
How often a deadlock is likely to occur?
How many processes will need to be rolled back?
one for each disjoint cycle
If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we
would not be able to tell which of the many deadlocked processes "caused" the deadlock
Recovery from Deadlock: Process Termination
Abort all deadlocked processes
Abort one process at a time until the deadlock cycle is eliminated
In which order should we choose to abort?
Priority of the process
How long the process has computed, and how much longer to completion
Resources the process has used
Resources the process needs to complete
How many processes will need to be terminated
Is the process interactive or batch?

Recovery from Deadlock: Resource Preemption


Selecting a victim – minimize cost
Rollback – return to some safe state, restart process for that state
Starvation – same process may always be picked as victim, include number of rollback in cost factor

UNIT VII

FILE SYSTEM INTERFACE


7.1 The Concept Of a File
A file is a named collection of related information that is recorded on secondary storage. The
information in a file is defined by its creator. Many different types of information may be stored in a
file.
File attributes:-
A file is named and, for the user's convenience, is referred to by its name. A name is
usually a string of characters. One user might create a file, whereas another user might edit that file by
specifying its name. There are different types of attributes.
1) name:- the name is kept in human-readable form.
2) type:- this information is needed for those systems that support different file types.
3) location:- this information is a pointer to a device and to the location of the file on that device.
4) size:- this indicates the size of the file in bytes or words.
5) protection:- access-control information that determines who can read, write, or execute the file.
6) time, date, and user identification:-
The information about all files is kept in the directory structure, which also resides on secondary storage.
File operations:-
Creating a file:-
Two steps are necessary to create a file first, space in the file system must be found for the file.
Second , an entry for the new file must be made in the directory. The directory entry records the
name of the file and the location in the system.
Writing a file:-
To write a file, give the name of the file; the system searches the directory to find the location of the
file. The system must keep a write pointer to the location in the file where the next write is to take
place. The write pointer must be updated whenever a write occurs.
Reading a file:- to read from a file, specifies the name of the file and directory is search for the
associated directory entry, and the system needs to keep read pointer to the location in the file where
the next read is to take place. Once the read has taken place, read pointer is updated.
Repositioning with in a file:-
The directory is searched for the appropriate entry and the current file position is set to given value.
this is also known as a file seek.
Deleting a file:- to delete a file , we search the directory for the name file. Found that file in the
directory entry, we release all file space and erase the directory entry.
Truncate a file:- this function allows all attributes to remain unchanged(except for file length) but
for the file to be reset to length zero.
Appending:- add new information to the end of an existing file .
Renaming:- give new name to an existing file.
Open a file:-if file need to be used, the first step is to open the file, using the open
system call.
Close:- close is a system call used to terminate the use of an already used file.

File Types:-
A common technique for implementing file type is to include the type as part of the file name. The
name is split in to two parts
1) the name 2) and an extension .
the system uses the extension to indicate the type of the file and the type of operations that can be
done on that file.

7.2 : Access Methods:-
There are several ways that the information in the file can be accessed: 1) sequential access method,
2) direct access method, 3) other access methods.
1) Sequential access method:-
The simplest access method is sequential access. Information in the file is processed in order, one after the other.
The bulk of the operations on a file are reads and writes. It is based on a tape model of a file. Fig 10.3
2)Direct access:- or relative access:-
a file is made up of fixed length records, that allow programs to read and write record rapidly in no
particular order. For direct access, file is viewed as a numbered sequence of blocks or records. A
direct access file allows, blocks to be read & write. So we may read block15, block 54 or write
block10. there is no restrictions on the order of reading or writing for a direct access file. It is great
useful for immediate access to large amount of information.
The file operations must be modified to include the block number as a parameter. We have read n,
where n is the block number.
3)other access methods:-
the other access methods are based on the index for the file. The indexed contain pointers to the
various blocks. To find an entry in the file , we first search the index and then use the pointer to
access the file directly and to find the desired entry. With large files. The index file itself, may
become too large to be kept in memory. One solution is to create an index for the index file. The
primary index file would contain pointers to secondary index files which would point to the actual
data iteams

7.3 Directory Structures:-


operations that are be on a directory (read in text book)
single level directory:-
the simple directory structure is the single level directory. All files are contained in
the same directory. Which is easy to understand. Since all files are in same directory, they must have
unique names.
In a single level directory there is some limitations. When the no.of files
increases or when there is more than one user, some problems can occur. If the number of files increases,
it becomes difficult to remember the names of all the files. FIG 10.7
Two-level directory:-
The major disadvantages to a single level directory is the confusion of file names between different
users. The standard solution is to create separate directory for each user.
In 2-level directory structure, each user has her own user file directory(ufd). Each ufd has a similar
structure, the user first search the master file directory . the mfd is indexed by user name and each
entry point to the ufd for that user.fig 10.8

To create a file for a user, the O.S search only that user’s ufd to find whether another file of that
name exists. To delete a file the O.S only search to the local ufd and it can not accidentally delete
another user’s file that has the same name.
This solves the name collision problem, but it still have another. This is disadvantages when the user
wants to cooperate on some task and to access one another’s file . some systems simply do not allow
local user files to be accessed by other user.
Any file is accessed by using path name. Here the user name and a file name defines a path name.
Ex:- user1/ob
In MS-DOS a file specification is C:/directory name/file name
Tree structured directory:-
This allows users to create their own subdirectories and to organize their files accordingly. here the
tree hasa root directory. And every file in the system has a unique path name. A path name is the path
from the root, through all the subdirectories to a specified file.FIG 10.9.
A directory contains a set of subdirectories or files. A directory is simply another file, but it is treated
in a special way. Here the path names can be of two types. 1)absolute path and 2) relative path.
An absolute path name begins at the root and follows a path down to the specified file, giving the
directory name on the path.
Ex:- root/spell/mail/prt/first.
A relative pathname defines a path from the current directory ex:- prt/first is relative path name.
A cyclic- graph directory:-
Consider two programmers who are working on a joint project. The files associated with that project
can be stored in a sub directory , separating them from
other projects and files of the two programmers. The common subdirectory is shared by both
programmers. A shared directory or file will exist in the file system in two places at once. Notice that
a shared file is not the same as two copies of the file with two copies, each programmer can view the
copy rather than the original but if one programmer changes the file the changes will not appear in
the others copy with a shared file there is only one actual file, so any changes made by one person
would be immediately visible to the other.
A tree structure prohibits the sharing of files or directories. An acyclic graph allows directories to
have shared subdirectories and files
FIG 10.10 . it is more complex and more flexiable. Also several problems may occurs at the traverse
and deleting the file contents.
.

7.4 : File System Mounting
A file system must be mounted before it can be accessed
An unmounted file system (i.e., Fig. 11-11(b)) is mounted at a mount point
(a) Existing. (b) Unmounted Partition

Mount Point

7.5 :File Sharing


Sharing of files on multi-user systems is desirable
Sharing may be done through a protection scheme
On distributed systems, files may be shared across a network
Network File System (NFS) is a common distributed file-sharing method
File Sharing – Multiple Users
User IDs identify users, allowing permissions and protections to be per user
Group IDs allow users to be in groups, permitting group access rights
File Sharing – Remote File Systems
Uses networking to allow file system access between systems
Manually via programs like FTP
Automatically, seamlessly using distributed file systems
Semi-automatically via the world wide web
Client-server model allows clients to mount remote file systems from servers
Server can serve multiple clients
Client and user-on-client identification is insecure or complicated
NFS is the standard UNIX client-server file-sharing protocol
CIFS is the standard Windows protocol
Standard operating system file calls are translated into remote calls
Distributed Information Systems (distributed naming services) such as LDAP, DNS, NIS, Active Directory implement
unified access to information needed for remote computing
File Sharing – Failure Modes
Remote file systems add new failure modes, due to network failure, server failure
Recovery from failure can involve state information about status of each remote request

Stateless protocols such as NFS include all information in each request, allowing easy recovery but
less security
File Sharing – Consistency Semantics
Consistency semantics specify how multiple users are to access a shared file simultaneously
Similar to Ch 7 process synchronization algorithms
Tend to be less complex due to disk I/O and network latency (for remote file systems)
Andrew File System (AFS) implemented complex remote file-sharing semantics
Unix file system (UFS) implements:
Writes to an open file visible immediately to other users of the same open file
Sharing file pointer to allow multiple users to read and write concurrently
AFS has session semantics
Writes only visible to sessions starting after the file is closed
7.6 :Protection
File owner/creator should be able to control:
what can be done
by whom
Types of access:
Read, Write, Execute, Append, Delete, List

Protection:-
When the information is kept in the system the major worry is its protection from the both physical
damage (Reliability) and improper access(Protection).
The reliability is generally provided by duplicate copies of files.
The protection can be provided in many ways . for some single system user, we might provide
protection by physically removing the floppy disks . in a multi-user systems, other mechanism are
needed.
1) types of access:-
if the system do not permit access to the files of other users, protection is not needed. Protection
mechanism provided by controlling accessing. This can be provided by types of file access. Access is
permitted or denied depending on several factors. Suppose we mentioned read that file allows only
for read .
Read:- read from the file. Write:- write or rewrite the file.
Execute:- load the file in to memory and execute it. Append:- write new information at the end of the
file. Delete:- delete the file and free its space for possible reuse.

FILE SYSTEM IMPLEMENTATION

7.7 :File allocation methods:-


There are 3 major methods of allocating disk space.
1) Contiguous allocation:-
1) The contiguous allocation method requires each file to occupy a set of contiguous block on
the disk.
2) Contiguous allocation of a file is defined by the disk address and length of the first block. If
the file is ‘n’ block long and starts at location ‘b’ , then it occupies blocks b,b+1,b+2,…..,b+n-1;
3) The directory entry for each file indicates the address of the starting block and length of the
area allocated for this file. Fig 11.3
4) Contiguous allocation of file is very easy to access. For the sequential access , the file
system remembers the disk address of the last block referenced and, when necessary read next
block. For direct access to block ‘i’ of a file that starts at block ‘b’ , we can immediately access
block b+i. Thus both sequential and direct access can be supported by contiguous allocation.
5) One difficulty with this method is finding space for a new file.
6) Also there are many problems with this method
a) external fragmentation:- files are allocated and deleted , the free disk space is broken in
to little pieces. The E.F exists when free space is broken in to chunks(large piece) and these chunks
are not sufficient for a request of new file.
There is a solution for E.F i.e compaction. All free space compact in to one contiguous space. But the
cost of compaction is time.
b) Another problem is determining how much space is needed for a file. When file is created
the creator must specifies the size of
that file. This becomes to big problem. Suppose if we allocate too little space to a file , some times it
may not sufficient.
Suppose if we allocate large space some times space is wasted.
c) Another problem is if one large file is deleted, that large space is becomes to empty.
Another file is loaded in to that space whose size is very small then some space is wasted . that
wastage of space is called internal fragmentation.
2) Linked allocation:-
1) Linked allocation solves all the problems of contagious allocation. With linked allocation ,
each file is a linked list of disk blocks, the disk block may be scattered any where on the disk.
2) The directory contains a pointer to the first and last blocks of the file. Fig11.4 Ex:- a file
have five blocks start at block 9, continue at block 16,then block 1, block 10 and finally block 25.
each block contains a ponter to the next block. These pointers are not available to the user.
3) To create a new file we simply creates a new entry in directory. With linked allocation, each
directory entry has a pointer to the first disk block of the file.
3) There is no external fragmentation with linked allocation. Also there is no need to declare
the size of a file when that file is created. A file can continue to grows as long as there are free
blocks.

4) But it have disadvantage. The major problem is that it can be used only for sequential
access-files.
5) To find the I th block of a file , we must start at the beginning of that file, and follow the
pointers until we get to the I th block. It can not support the direct access.
6) Another disadvantage is it requires space for the pointers. If a pointer requires 4 bytes out
of 512 byte block, then 0.78% of disk is being used for pointers, rather than for information.
7) The solution to this problem is to allocate blocks in to multiples, called clusters and to
allocate the clusters rather than blocks.
8) Another problem is reliability. The files are linked together by pointers scattered all over the
disk; what happens if a pointer is lost or damaged?
FAT (file allocation table):-
An important variation on the linked allocation method is the use of a file allocation table.
The table has one entry for each disk block, and is indexed by block number. The FAT is used much
as is a linked list.
The directory entry contains the block number of the first block of the file. The table entry contains
the block number then contains the block number of the next block in the file. This chain continuous
until the last block, which has a special end of file values as the table entry. Unused blocks are
indicated by a ‘0’ table value. Allocation a new block to a file is a simple. First finding the first 0-
value table entry, and replacing the previously end of file value with the address of the new block.
The 0 is then replaced with end of file value.
Fig 11.5
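A minimal sketch of following a FAT chain for the linked-allocation example above (a file occupying blocks 9, 16, 1, 10, 25); the end-of-file marker value is an assumption for illustration.

# Sketch of following a FAT chain for a file whose directory entry points to block 9.
EOF = -1                                   # illustrative end-of-file marker
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}

def file_blocks(start_block):
    blocks, b = [], start_block
    while b != EOF:
        blocks.append(b)
        b = fat[b]                         # next block number from the table entry
    return blocks

print(file_blocks(9))                      # -> [9, 16, 1, 10, 25]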
3)Indexed allocation:-
1) linked allocation solves the external fragmentation and size declaration problems of
contagious allocation. How ever in the absence of a FAT , linked allocation can not support efficient
direct access.
2) The pointers to the blocks are scattered with the blocks themselves all over the disk and
need to be retrieved in order.
3) Indexed allocation solves this problem by bringing all the pointers together in to one
location i.e the index block.
4) Each file has its own index block ,which is an array of disk block addresses. The I th entry in
the index block points to the ith block of the file.
5) The directory contains the address of the index block. Fig 11.6
To read the ith block we use the pointer in the ith index block entry to find and read the desired
block.
6) When the file is created, all pointers in the index block are set to nil. When the ith block is
first written, a block is obtained from the free space manager, and
its address is put in the ith index block entry.
7) It supports the direct access with out suffering from external fragmentation, but it suffer
from the wasted space. The pointer overhead of the index block is generally greater than the
pointer over head of linked allocation.

7.8 :Free space management:-
1) to keep track of free disk space, the system maintains a free space list. The free space list
records all disk blocks that are free.
2) To create a file we search the free space list for the required amount of space, and allocate
that space to the new file. This space is then removed from the free space list.
3) When the file is deleted , its disk space is added to the free space list. There are many
methods to find the free space.
1) bit vector:-
The free space list is implemented as a bit map or bit vector. Each block is represented by 1 bit. If the
block is free the bit is 1 if the block is allocated the bit is 0.
Ex:- consider a disk where blocks 2,3,4,5,8,9,10,11,12,13,17,18,25, are free and rest of blocks are
allocated the free space bit map would be 001111001111110001100000010000……..
the main advantage of this approach is that it is relatively simple and efficient to find the first free
block or ‘n’ consecutive free blocks on the disk
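A short sketch of scanning the bit vector from the example above for free blocks; the variable names are illustrative.

# Sketch: scanning the free-space bit vector (bit value 1 means the block is free).
bit_map = "001111001111110001100000010000"

first_free  = bit_map.find("1")                              # first free block
free_blocks = [i for i, b in enumerate(bit_map) if b == "1"] # all free block numbers
print(first_free, free_blocks[:5])                           # -> 2 [2, 3, 4, 5, 8]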
2) Linked list:-
Another approach is to link together all the free disk blocks, keeping a pointer
to the first free block in a special location on the disk and caching it in memory. This first block
contain a pointer to the next free disk block, and so on.
How ever this scheme is not efficient to traverse the list, we must read each block, which requires I/O
time.
Disk space is also wasted to maintain the pointer to next free space.
3) Grouping:-
Another method is store the addresses of ‘n’ free blocks in the first free block.
The first (n-1) of these blocks are actually free. The last block contains the addresses of another ‘n’
free blocks and so on. Fig 11.8
Advantages:- the main advantage of this approach is that the addresses of a large no.of blocks can be
found quickly.
4) Counting:-
Another approach is counting. Generally several contiguous blocks may be allocated or freed
simultaneously. Particularly when space is allocated with the contiguous allocation algorithm rather
than keeping a list of ‘n’ free disk address. We can keep the address of first free block and the
number ‘n’ of free contiguous blocks that follow the first block. Each entry in the free space list then
consists of a disk address and a count.

7.9 :Directory Implementation:-


1) Linear list:-
1) the simple method of implement ting a directory is to use a linear list of file names with
pointers to the data blocks.
2) A linear list of directory entries requires a linear search to find a particular entry.
3) This method is simple to program but is time consuming to execute.
4) To create a new file, we must first search the directory to be sure that no existing file has
the same name. Then, we add a new entry at the end of the directory.

5) To delete a file we search the directory for the named file, then release the space allocated
to it.
6) To reuse directory entry, we can do one of several things.
7) We can mark the entry as unused or we can attach it to a list of free directory entries.
Disadvantage:- the disadvantage of a linear list of directory entries is the linear search to find a file.

UNIT VIII

MASS-STORAGE STRUCTURE
Mass-Storage Systems
Describe the physical structure of secondary and tertiary storage devices and the resulting effects on the uses of the devices
Explain the performance characteristics of mass-storage devices
Discuss operating-system services provided for mass storage, including RAID and HSM
8.1 :Overview of Mass Storage Structure
Magnetic disks provide bulk of secondary storage of modern computers Drives rotate at 60 to 200 times per
second
Transfer rate is rate at which data flow between drive and computer
Positioning time (random-access time) is time to move disk arm to desired cylinder (seek time) and time for
desired sector to rotate under the disk head (rotational latency) Head crash results from disk head making
contact with the disk surface
That’s bad
Disks can be removable
Drive attached to computer via I/O bus
Busses vary, including EIDE, ATA, SATA, USB, Fibre Channel, SCSI
Host controller in computer uses bus to talk to disk controller built into drive or storage array
Moving-head Disk Mechanism

Magnetic tape
Was early secondary-storage medium
Relatively permanent and holds large quantities of data Access time slow
Random access ~1000 times slower than disk
Mainly used for backup, storage of infrequently-used data, transfer medium between systems
Kept in spool and wound or rewound past read-write head Once data under head, transfer rates
comparable to disk 20-200GB typical storage
Common technologies are 4mm, 8mm, 19mm, LTO-2 and SDLT
8.2 :Disk Structure
Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer
The 1-dimensional array of logical blocks is mapped into the sectors of the disk sequentially
Sector 0 is the first sector of the first track on the outermost cylinder
Mapping proceeds in order through that track, then the rest of the tracks in that cylinder, and then
through the rest of the cylinders from outermost to innermost
8.3 :Disk Attachment
Host-attached storage accessed through I/O ports talking to I/O busses
SCSI itself is a bus, up to 16 devices on one cable; the SCSI initiator requests an operation and SCSI targets perform the tasks
Each target can have up to 8 logical units (disks attached to the device controller)
FC is a high-speed serial architecture
Can be switched fabric with 24-bit address space – the basis of storage area networks (SANs) in
which many hosts attach to many storage units
Can be arbitrated loop (FC-AL) of 126 devices
Network-Attached Storage
Network-attached storage (NAS) is storage made available over a network rather than over a local
connection (such as a bus)
NFS and CIFS are common protocols
Implemented via remote procedure calls (RPCs) between host and storage New iSCSI protocol uses
IP network to carry the SCSI protocol

Storage Area Network


Common in large storage environments (and becoming more common) Multiple hosts attached to
multiple storage arrays – flexible

8.4 :Disk Scheduling
The operating system is responsible for using hardware efficiently — for the disk drives, this means
having a fast access time and disk bandwidth
Access time has two major components
Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector
Rotational latency is the additional time waiting for the disk to rotate the desired sector to the disk head
Minimize seek time
Seek time ≈ seek distance
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first
request for service and the completion of the last transfer
Several algorithms exist to schedule the servicing of disk I/O requests
We illustrate them with a request queue (0-199)
98, 183, 37, 122, 14, 124, 65, 67
Head pointer 53
FCFS
Illustration shows total head movement of 640 cylinders

SSTF
Selects the request with the minimum seek time from the current head position
SSTF scheduling is a form of SJF scheduling; may cause starvation of some requests
Illustration shows total head movement of 236 cylinders
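A short sketch computing total head movement for FCFS and SSTF on the request queue above; function names are illustrative.

# Sketch: total head movement for FCFS and SSTF on the example request queue.
queue, head = [98, 183, 37, 122, 14, 124, 65, 67], 53

def fcfs(requests, pos):
    total = 0
    for r in requests:            # service requests strictly in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf(requests, pos):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))   # shortest seek next
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

print(fcfs(queue, head), sstf(queue, head))   # -> 640 236 cylinders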
SCAN
The disk arm starts at one end of the disk, and moves toward the other end, servicing requests until it
gets to the other end of the disk, where the head movement is reversed and servicing continues.
The SCAN algorithm is sometimes called the elevator algorithm

Illustration shows total head movement of 208 cylinders
C-SCAN
Provides a more uniform wait time than SCAN
The head moves from one end of the disk to the other, servicing requests as it goes
When it reaches the other end, however, it immediately returns to the beginning of the disk, without
servicing any requests on the return trip
Treats the cylinders as a circular list that wraps around from the last cylinder to the first one
C-LOOK
Version of C-SCAN
Arm only goes as far as the last request in each direction, then reverses direction immediately,
without first going all the way to the end of the disk Selecting a Disk-Scheduling
Algorithm
SSTF is common and has a natural appeal
SCAN and C-SCAN perform better for systems that place a heavy load on the disk
Performance depends on the number and types of requests
Requests for disk service can be influenced by the file-allocation method The disk-scheduling
algorithm should be written as a separate module of the
operating system, allowing it to be replaced with a different algorithm if necessary Either SSTF or
LOOK is a reasonable choice for the default algorithm
Disk Management
Low-level formatting, or physical formatting — Dividing a disk into sectors that the disk controller
can read and write
To use a disk to hold files, the operating system still needs to record its own data structures on the
disk
Partition the disk into one or more groups of cylinders Logical formatting or “making a file system”
To increase efficiency most file systems group blocks into clusters
Disk I/O done in blocks
File I/O done in clusters Boot block initializes system
The bootstrap is stored in ROM Bootstrap loader program
Methods such as sector sparing used to handle bad blocks
Booting from a Disk in Windows 2000

8.5 :Swap-Space Management


Swap-space — Virtual memory uses disk space as an extension of main memory
Swap-space can be carved out of the normal file system, or, more commonly, it can be in a separate
disk partition
Swap-space management
4.3BSD allocates swap space when process starts; holds text segment (the program) and data
segment
Kernel uses swap maps to track swap-space use
Solaris 2 allocates swap space only when a page is forced out of physical

memory, not when the virtual memory page is first created
Data Structures for Swapping on Linux Systems
RAID Structure
RAID – multiple disk drives provides reliability via redundancy nIncreases the mean time to
failurenFrequently combined with NVRAM to improve write performance
RAID is arranged into six different levels
Several improvements in disk-use techniques involve the use of multiple disks working cooperatively.
Disk striping uses a group of disks as one storage unit.
RAID schemes improve performance and improve the reliability of the storage system by storing
redundant data.
Mirroring or shadowing (RAID 1) keeps a duplicate of each disk
Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1) provide high performance and high
reliability
Block interleaved parity (RAID 4, 5, 6) uses much less redundancy (see the XOR sketch below)
RAID within a storage array can still fail if the array fails, so automatic
replication of the data between arrays is common
Frequently, a small number of hot-spare disks are left unallocated, automatically replacing a failed
disk and having data rebuilt onto them
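The redundancy behind block-interleaved parity is just bytewise XOR: the parity block is the XOR of
the data blocks, so any one lost block can be rebuilt from the survivors. The short C sketch below
is illustrative only, with made-up block contents.

    #include <stdio.h>
    #include <string.h>

    #define BLOCK 8

    int main(void) {
        unsigned char d0[BLOCK] = "AAAAAAA";
        unsigned char d1[BLOCK] = "BBBBBBB";
        unsigned char d2[BLOCK] = "CCCCCCC";
        unsigned char parity[BLOCK], rebuilt[BLOCK];

        /* Parity block = XOR of all data blocks. */
        for (int i = 0; i < BLOCK; i++)
            parity[i] = d0[i] ^ d1[i] ^ d2[i];

        /* Pretend disk 1 failed: rebuild d1 from the other disks + parity. */
        for (int i = 0; i < BLOCK; i++)
            rebuilt[i] = d0[i] ^ d2[i] ^ parity[i];

        printf("recovered: %s (matches original: %s)\n", (char *)rebuilt,
               memcmp(rebuilt, d1, BLOCK) == 0 ? "yes" : "no");
        return 0;
    }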
RAID (0 + 1) and (1 + 0)
Extensions
RAID alone does not prevent or detect data corruption or other errors, just disk failures.
Solaris ZFS adds checksums of all data and metadata.
Checksums kept with pointer to object, to detect if object is the right one and whether it changed
Can detect and correct data and metadata corruption
ZFS also does away with volumes and partitions
Disks allocated in pools
Filesystems within a pool share that pool, using and releasing space the way “malloc” and “free”
calls allocate and release memory
ZFS Checksums All Metadata and Data
Traditional and Pooled Storage
Stable-Storage Implementation
Write-ahead log scheme requires stable storage.
To implement stable storage:
Replicate information on more than one nonvolatile storage media with independent failure modes
Update information in a controlled manner to ensure that we can recover the stable data after any
failure during data transfer or recovery
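A minimal sketch of the controlled-update idea follows, with hypothetical file names standing in for
two independent nonvolatile copies. A real implementation would also force the data to the media
(e.g., with fsync) and run a recovery pass that repairs a mismatched pair.

    #include <stdio.h>

    /* Sketch only: update the copies one at a time, so at least one copy is
       consistent if a crash interrupts the update. File names are illustrative. */
    static int write_copy(const char *path, const char *data) {
        FILE *f = fopen(path, "w");
        if (!f) return -1;
        int ok = fputs(data, f) >= 0 && fflush(f) == 0;
        fclose(f);
        return ok ? 0 : -1;
    }

    int main(void) {
        const char *record = "account=42 balance=100\n";
        /* 1. Write and verify the first copy before touching the second. */
        if (write_copy("copy1.dat", record) != 0) return 1;
        /* 2. Only then update the second copy. */
        if (write_copy("copy2.dat", record) != 0) return 1;
        /* Recovery would compare the two copies and repair any mismatch. */
        puts("both copies updated");
        return 0;
    }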
Tertiary Storage Devices
Low cost is the defining characteristic of tertiary storage.
Generally, tertiary storage is built using removable media.
Common examples of removable media are floppy disks and CD-ROMs; other types are available.
Removable Disks
Floppy disk — thin flexible disk coated with magnetic material, enclosed in a protective plastic
case.
Most floppies hold about 1 MB; similar technology is used for removable disks that hold more than
1 GB.
Removable magnetic disks can be nearly as fast as hard disks, but they are at a greater risk of damage
from exposure
A magneto-optic disk records data on a rigid platter coated with magnetic material.
Laser heat is used to amplify a large, weak magnetic field to record a bit.
Laser light is also used to read data (Kerr effect).
The magneto-optic head flies much farther from the disk surface than a magnetic disk head, and the
magnetic material is covered with a protective layer of plastic or glass; resistant to head crashes.
Optical disks do not use magnetism; they employ special materials that are altered by laser light.
WORM Disks
The data on read-write disks can be modified over and over
WORM (“Write Once, Read Many Times”) disks can be written only once.
Thin aluminum film sandwiched between two glass or plastic platters.
To write a bit, the drive uses a laser light to burn a small hole through the aluminum; information
can be destroyed but not altered.
Very durable and reliable
Read-only disks, such as CD-ROM and DVD, come from the factory with the data pre-recorded
Tapes
Compared to a disk, a tape is less expensive and holds more data, but random access is much slower
Tape is an economical medium for purposes that do not require fast random access, e.g., backup
copies of disk data, holding huge volumes of data.
Large tape installations typically use robotic tape changers that move tapes between tape drives and
storage slots in a tape library
stacker – library that holds a few tapes
silo – library that holds thousands of tapes
A disk-resident file can be archived to tape for low cost storage; the computer can stage it back into
disk storage for active use
Operating System Support
Major OS jobs are to manage physical devices and to present a virtual machine abstraction to
applications.
For hard disks, the OS provides two abstractions:
Raw device – an array of data blocks
File system – the OS queues and schedules the interleaved requests from several applications
Application Interface
Most OSs handle removable disks almost exactly like fixed disks — a new cartridge is formatted and
an empty file system is generated on the disk.
Tapes are presented as a raw storage medium, i.e., an application does not open a file on the tape,
it opens the whole tape drive as a raw device.
Usually the tape drive is reserved for the exclusive use of that application.
Since the OS does not provide file system services, the application must decide how to use the array
of blocks
Since every application makes up its own rules for how to organize a tape, a tape full of data can
generally only be used by the program that created it.
Tape Drives
The basic operations for a tape drive differ from those of a disk drive.
locate() positions the tape to a specific logical block, not an entire track (corresponds to seek())
The read position() operation returns the logical block number where the tape head is
The space() operation enables relative motion
Tape drives are “append-only” devices; updating a block in the middle of the tape also effectively
erases everything beyond that block
An EOT mark is placed after a block that is written
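The snippet below is purely illustrative: it wraps the operations named above (locate, read
position, space) in hypothetical C functions to show how an application might drive a tape as a raw,
append-only device; a real interface is driver-specific (e.g., ioctl-based) and would differ.

    #include <stdio.h>

    /* Hypothetical stand-ins for the tape operations described in the notes. */
    static void locate(int block)    { printf("position tape at block %d\n", block); }
    static int  read_position(void)  { return 42; /* pretend current block */ }
    static void space(int nblocks)   { printf("move %d blocks relative\n", nblocks); }

    int main(void) {
        locate(100);               /* like seek(), but to a logical block */
        int here = read_position();
        printf("tape head is at block %d\n", here);
        space(-2);                 /* relative motion: back up two blocks */
        /* Writing here would effectively erase everything beyond this block,
           and an EOT mark would be placed after the last block written.    */
        return 0;
    }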
File Naming
The issue of naming files on removable media is especially difficult when we want to write data on a
removable cartridge on one computer, and then use the cartridge in another computer
Contemporary OSs generally leave the name space problem unsolved for removable media, and
depend on applications and users to figure out how to access and interpret the data
Some kinds of removable media (e.g., CDs) are so well standardized that all computers use them the
same way
Hierarchical Storage Management (HSM)
A hierarchical storage system extends the storage hierarchy beyond
primary memory and secondary storage to incorporate tertiary storage — usually implemented as a
jukebox of tapes or removable disks
Usually incorporate tertiary storage by extending the file system.
Small and frequently used files remain on disk
Large, old, inactive files are archived to the jukebox
HSM is usually found in supercomputing centers and other large installations that have enormous
volumes of data
Speed
Two aspects of speed in tertiary storage are bandwidth and latency.
Bandwidth is measured in bytes per second.
Sustained bandwidth – average data rate during a large transfer; # of bytes/transfer time
Data rate when the data stream is actually flowing
Effective bandwidth – average over the entire I/O time, including seek() or locate(), and cartridge
switching
Drive’s overall data rate
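For instance (numbers invented purely for illustration): if a tape drive streams 120 MB in 15
seconds once the data is flowing, its sustained bandwidth is 120 MB / 15 s = 8 MB per second; if
locating the data and switching cartridges adds another 45 seconds, the effective bandwidth over the
whole operation drops to 120 MB / 60 s = 2 MB per second.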
Access latency – amount of time needed to locate data
Access time for a disk – move the arm to the selected cylinder and wait for the rotational latency; <
35 milliseconds
Access on tape requires winding the tape reels until the selected block reaches the tape head; tens or
hundreds of seconds
Generally say that random access within a tape cartridge is about a thousand times slower than
random access on disk
The low cost of tertiary storage is a result of having many cheap cartridges share a few expensive
drives
A removable library is best devoted to the storage of infrequently used data, because the library can
only satisfy a relatively small number of I/O requests per hour
Reliability
A fixed disk drive is likely to be more reliable than a removable disk or tape drive.
An optical cartridge is likely to be more reliable than a magnetic disk or tape.
A head crash in a fixed hard disk generally destroys the data, whereas the failure of a tape drive
or optical disk drive often leaves the data cartridge unharmed.
Cost
Main memory is much more expensive than disk storage.
The cost per megabyte of hard disk storage is competitive with magnetic tape if only one tape is
used per drive.
The cheapest tape drives and the cheapest disk drives have had about the same storage capacity over
the years.
Tertiary storage gives a cost savings only when the number of cartridges is considerably larger than
the number of drives.