Cs6303comparchnotes PDF
SRI VIDYA COLLEGE OF ENGINEERING AND TECHNOLOGY, VIRUDHUNAGAR
Department of Information Technology
2. SKILLS ADDRESSED:
Listening
4.OUTCOMES:
i. Explain the concept of components of computer system
ii. Know the instructions and addressing modes
iii.
5.LINK SHEET:
i. What are the components of Computer system?
ii. What are the topics covered in overview and instructions?
iii.
6.EVOCATION: (5 Minutes)
7.Lecture Notes: (40 Minutes)
A computer architect must balance speed and cost across the system
The system is measured against its specification
Benchmark programs measure the performance of systems/subsystems
Subsystems are designed to be in balance with each other
Usage of unit prefixes:
Decimal (powers of 10): data communications, time, clock frequencies
Powers of 2: memory (often)
Memory units:
Bit (b): 1 binary digit
Nibble: 4 binary digits
Byte (B): 8 binary digits
Word: Commonly 32 binary digits (but may be 64).
Half Word: Half the binary digits of a word
Double Word: Double the binary digits of a word
Common Use:
10 Mbps = 10 Mb/s = 10 Megabits per second
10 MB = 10 Megabytes
10 MIPS = 10 Million Instructions Per Second
Moore’s Law:
Component density increase per year: about 1.6x
Processor performance increase per year: about 1.5x; more recently 1.2x or less
Memory capacity improvement per year: 4/3 = 1.33x
A LAN operates at 10 Mbps. How long will it take to transfer a packet of 1000 bytes?
(Optimistically assuming 100% efficiency)
1000 bytes = 8000 bits
10,000,000 bits / 1 sec = 8000 bits / x sec
10,000,000 x = 8000
x = 8000 / 10,000,000 = 0.0008 s = 800 us
(Equivalently, one byte of 8 bits takes 8 / 10,000,000 s = 800 ns, and 1000 x 800 ns = 800 us.)
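The arithmetic above can be checked with a short sketch (Python is used here purely for illustration; the function name is our own):

```python
def transfer_time_seconds(packet_bytes, link_bits_per_second):
    # Time to move a packet over a link, assuming 100% efficiency.
    bits = packet_bytes * 8              # 1000 bytes -> 8000 bits
    return bits / link_bits_per_second

t = transfer_time_seconds(1000, 10_000_000)  # 1000 bytes over 10 Mbps
print(t * 1e6)  # 800.0 microseconds
```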
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9. Application
Processor, Embedded system
Performance via prediction
A particular pattern of parallelism is so prevalent in computer architecture that it merits
its own name: pipelining. For example, before fire engines, a "bucket brigade" would
respond to a fire, which many cowboy movies show in response to a dastardly act by the
villain. Th e townsfolk form a human chain to carry a water source to fi re, as they could
much more quickly move buckets up the chain instead of individuals running back and
forth. Our pipeline icon is a sequence of pipes, with each section representing one stage
of the pipeline.
Hierarchy of memories
Programmers want memory to be fast, large, and cheap, as memory speed often shapes
performance, capacity limits the size of problems that can be solved, and the cost of
memory today is often the majority of computer cost. Architects have found that they can
address these conflicting demands with a hierarchy of memories, with the fastest,
smallest, and most expensive memory per bit at the top of the hierarchy and the slowest,
largest, and cheapest per bit at the bottom. Caches give the programmer the illusion that
main memory is nearly as fast as the top of the hierarchy and nearly as big and cheap as
the bottom of the hierarchy. We use a layered triangle icon to represent the memory
hierarchy. The shape indicates speed, cost, and size: the closer to the top, the faster and
more expensive per bit the memory; the wider the base of the layer, the bigger the
memory.
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9. Application
Processor, Embedded system
Input/Output
Mouse:
Electromechanical: Rolling ball indicates change in position as (x,y) coordinates.
Optical: Camera samples 1500 times per second. Optical processor compares images and
determines distance moved.
Displays:
Raster Refresh Buffer: Holds the bitmap or matrix of pixel values.
Matrix of Pixels: low resolution: 512 x 340 pixels to high resolution: 2560 x 1600 pixels
Black & White: 1 bit per pixel
Grayscale: 8 bits per pixel
Color: (one method): 8 bits each for red, blue, green = 24 bits
Required: Refresh the screen periodically to avoid flickering
Optical Disk: Laser uses spiral pattern to write bits as pits or flats.
Compact Disc (CD): Stores music
Digital Versatile Disc (DVD): Multi-gigabyte capacity required for films
Read-write procedure similar to Magnetic Disk (but optical write, not magnetic)
Primary or Main Memory: Programs are retained while they are running. Uses:
Dynamic Random Access Memory (DRAM)
Built as an integrated circuit; provides equal access time to any location in memory
Access time: 50-70 ns.
SIMM (Single In-line Memory Module): DRAM memory chips lined up in a row, often on
a daughter card
DIMM (Dual In-line Memory Module): Two rows of memory chips
ROM (Read Only Memory) or EPROM (Erasable Programmable ROM)
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9. Application
Processor, Embedded system, Notebook Computers, Handheld Computers
CPUs
Device density: 2x every 1.5 years (~60% per year)
Latency: 2x every 5 years (~15% per year)
Memory (DRAM)
Capacity: 4x every 3 years (~60% per year)
(2x every two years lately)
Latency: 1.5x every 10 years
Cost per bit: decreases about 25% per year
Hard drives:
Capacity: 4x every 3 years (~60% per year)
Bandwidth: 2.5x every 4 years
Latency: 2x every 5 years
Boards:
Wire density: 2x every 15 years
Cables:
No change
Physical Hardware
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9. Application
Memory, Processing speed, Usability, Maintainability
Purchasing perspective
given a collection of machines, which has the
best performance ?
least cost ?
best cost/performance?
Design perspective
faced with design options, which has the
best performance improvement ?
least cost ?
best cost/performance?
Both require
basis for comparison
metric for evaluation
Our goal is to understand what factors in the architecture contribute to overall system
performance and the relative importance (and cost) of these factors
Which airplane has the best performance?
Boeing 777
Boeing 747
BAC/Sud Concorde
Douglas DC-8-50
(The answer depends on the metric chosen, e.g. passenger capacity.)
"X is n times faster than Y" means:
Performance X / Performance Y = Execution time Y / Execution time X = n
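As a sketch of the relation (Python, illustrative only):

```python
def relative_performance(exec_time_y, exec_time_x):
    # n such that "X is n times faster than Y": n = ExecTime_Y / ExecTime_X
    return exec_time_y / exec_time_x

# If machine X runs a program in 10 s and machine Y takes 15 s,
# X is 1.5 times faster than Y.
print(relative_performance(15.0, 10.0))  # 1.5
```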
Elapsed time
Total response time, including all aspects
Processing, I/O, OS overhead, idle time
Determines system performance
CPU time
Time spent processing a given job
Discounts I/O time and other jobs' shares
Comprises user CPU time and system CPU time
Different programs are affected differently by CPU and system performance
CPU Clocking
Performance improved by
Reducing number of clock cycles
Increasing clock rate
Hardware designer must often trade off clock rate against cycle count
Clock Cycles = Instruction Count x Cycles per Instruction (CPI)
CPU Time = Instruction Count x CPI x Clock Cycle Time
= Instruction Count x CPI / Clock Rate
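The performance equation can be sketched with hypothetical numbers (Python, illustrative only):

```python
def cpu_time_seconds(instruction_count, cpi, clock_rate_hz):
    # CPU Time = Instruction Count x CPI / Clock Rate
    return instruction_count * cpi / clock_rate_hz

# Hypothetical program: 10 million instructions, CPI = 2, 2 GHz clock.
print(cpu_time_seconds(10_000_000, 2.0, 2_000_000_000))  # 0.01 s
```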
The processor is the active part of the board, following the instructions of a program
to the letter. It adds numbers, tests numbers, signals I/O devices to activate, and so on.
Occasionally, people call the processor the CPU, for the more bureaucratic-sounding central
processor unit.
Descending even lower into the hardware, the processor logically comprises two main
components: datapath and control, the respective brawn and brain of the processor.
The datapath performs the arithmetic operations, and control tells the datapath, memory, and I/O
devices what to do according to the wishes of the instructions of the program. This explains the
datapath and control for a higher-performance design. Descending into the depths of any
component of the hardware reveals insights into the computer. Inside the processor is another
type of memory—cache memory.
Cache memory consists of a small, fast memory that acts as a buffer for the DRAM memory.
(The nontechnical definition of cache is a safe place for hiding things.)
Cache is built using a different memory technology, static random access memory (SRAM).
SRAM is faster but less dense, and hence more expensive, than DRAM. You may have noticed a
common theme in both the software and the hardware descriptions: delving into the depths of
hardware or software reveals more information or, conversely, lower-level details are hidden to
offer a simpler model at higher levels. The use of such layers, or abstractions, is a principal
technique for designing very sophisticated computer systems.
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9. Application
Processor, Embedded system, DVD
Instruction Fetch:
The instruction is fetched from the memory location whose address is in the PC, and is placed in the IR.
Instruction Execution:
The instruction in the IR is examined to determine which operation is to be performed.
Program execution Steps:
To begin executing a program, the address of its first instruction must be placed in the PC.
The processor control circuits use the information in the PC to fetch and execute instructions one
at a time, in the order of increasing addresses.
This is called straight-line sequencing. During the execution of each instruction, the PC is
incremented by 4 to point to the address of the next instruction.
Branching:
The addresses of the memory locations containing the n numbers are symbolically given as
NUM1, NUM2, ..., NUMn.
A separate Add instruction is used to add each number to the contents of register R0.
After all the numbers have been added, the result is placed in memory location SUM.
Fig: Straight-line sequencing program for adding 'n' numbers
Using loop to add ‘n’ numbers:
The number of entries in the list, n, is stored in memory location M. Register R1 is used as a
counter to determine the number of times the loop is executed.
The contents of location M are loaded into register R1 at the beginning of the program.
The loop starts at location LOOP and ends at the instruction Branch>0. During each pass, the
address of the next list entry is determined and the entry is fetched and added to R0.
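The loop above (counter in R1, accumulator in R0) can be mirrored in a short sketch; the list contents below are hypothetical:

```python
def add_n_numbers(nums):
    r0 = 0              # accumulator, register R0
    r1 = len(nums)      # counter n, loaded from memory location M into R1
    i = 0               # position of the next list entry (NUM1, NUM2, ...)
    while r1 > 0:       # the Branch>0 LOOP test
        r0 += nums[i]   # fetch the entry and add it to R0
        i += 1
        r1 -= 1
    return r0           # would be stored to memory location SUM

print(add_n_numbers([3, 1, 4, 1, 5]))  # 14
```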
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9. Application
Processor, Embedded system, Microchip, Intel cores
Representing instructions
2. SKILLS ADDRESSED:
Learning
understanding
3.OBJECTIVE OF THIS LESSON PLAN:
To make the students know the representation of instructions
4.OUTCOMES:
i. Learn the representations involved in computer architecture
ii. Know the way of representing instructions
5.LINK SHEET:
i. Give the different types of representation involved in instruction set
ii. Discuss in detail the way to represent instruction set.
6.EVOCATION: (5 Minutes)
7.Lecture Notes: (40 Minutes)
INSTRUCTION AND INSTRUCTION SEQUENCING
A computer must have instructions capable of performing the following operations. They are:
Data transfer between memory and processor register.
Arithmetic and logical operations on data.
Program sequencing and control.
I/O transfer.
Instruction Fetch:
The instruction is fetched from the memory location whose address is in the PC, and is placed in the IR.
Instruction Execution:
The instruction in the IR is examined to determine which operation is to be performed.
Program execution Steps:
To begin executing a program, the address of its first instruction must be placed in the PC.
The processor control circuits use the information in the PC to fetch and execute instructions one
at a time, in the order of increasing addresses.
This is called straight-line sequencing. During the execution of each instruction, the PC is
incremented by 4 to point to the address of the next instruction.
Branching:
The addresses of the memory locations containing the n numbers are symbolically given as
NUM1, NUM2, ..., NUMn.
A separate Add instruction is used to add each number to the contents of register R0.
After all the numbers have been added, the result is placed in memory location SUM.
Fig: Straight-line sequencing program for adding 'n' numbers
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9.Application
Processor, Embedded system, intel core and microchip
Logical operations are useful for operating on fields of bits within a word or even on individual
bits. Examining characters within a word, each of which is stored as 8 bits, is one example of
such an operation (see Section 2.9).
It follows that operations were added to programming languages and instruction set architectures
to simplify, among other things, the packing and unpacking of bits into words. These instructions
are called logical operations. Figure 2.8 shows logical operations in C, Java, and MIPS.
The first class of such operations is called shifts. They move all the bits in a word to the left or
right, filling the emptied bits with 0s.
For example, if register $s0 contained 0000 0000 0000 0000 0000 0000 0000 1001 (binary) = 9 (decimal)
and the instruction to shift left by 4 was executed, the new value would be
0000 0000 0000 0000 0000 0000 1001 0000 (binary) = 144 (decimal). The dual of a shift left is a shift
right. The actual names of the two MIPS shift instructions are shift left logical (sll) and shift right logical (srl).
To place a value into one of these seas of 0s, there is the dual to AND, called OR. It is a
bit-by-bit operation that places a 1 in the result if either operand bit is a 1.
To elaborate, if the registers $t1 and $t2 are unchanged from the preceding example, the result
of the MIPS instruction or $t0, $t1, $t2 (reg $t0 = reg $t1 | reg $t2) is this value in register $t0:
0000 0000 0000 0000 0011 1101 1100 0000 (binary).
NOT: a logical bit-by-bit operation with one operand that inverts the bits; that is, it replaces
every 1 with a 0, and every 0 with a 1.
NOR: a logical bit-by-bit operation with two operands that calculates the NOT of the OR of the
two operands. That is, it calculates a 1 only if there is a 0 in both operands.
The final logical operation is a contrarian. NOT takes one operand and places a 1
in the result if the operand bit is a 0, and vice versa.
In keeping with the three-operand format, the designers of MIPS decided to include the
instruction NOR (NOT OR) instead of NOT. If one operand is zero, then it is equivalent to NOT:
A NOR 0 = NOT (A OR 0) = NOT (A).
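These logical operations can be mimicked on 32-bit values (Python sketch; the masking stands in for the fixed 32-bit register width):

```python
MASK32 = 0xFFFFFFFF

def sll(x, n):            # shift left logical: fill emptied bits with 0s
    return (x << n) & MASK32

def srl(x, n):            # shift right logical
    return (x & MASK32) >> n

def nor(a, b):            # NOT of the OR of the two operands
    return ~(a | b) & MASK32

print(sll(9, 4))                           # 144, as in the $s0 example
print(nor(0x3DC0, 0) == ~0x3DC0 & MASK32)  # A NOR 0 == NOT A -> True
```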
Control Unit
Makes all the other parts work together. Uses an FSM (like our Traffic FSM, but much bigger,
with many inputs/outputs, and more complicated).
Program Counter (PC)
•Tells control unit which instruction to execute next – Recall program is a sequence of
instructions
•Holds address of next instruction (program is in memory)
•Normally, the next PC is the current PC plus one instruction
Instruction Register (IR)
•Holds the instruction currently being executed
•Decoded to feed signals to other units and inputs to FSM
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
9. Application
Processor, Embedded system, Digital logic gates.
ADDRESSING MODES
The different ways in which the location of an operand is specified in an instruction are called
addressing modes.
Generic Addressing Modes:
Immediate mode
Register mode
Absolute mode
Indirect mode
Index mode
Base with index
Base with index and offset
Relative mode
Auto-increment mode
Auto-decrement mode
Register Mode:
The operand is the contents of a processor register; the name (address) of the register is given in
the instruction.
Absolute Mode (Direct Mode):
The operand is in a memory location; the address of this location is given explicitly in the instruction.
Index Mode:
The effective address of the operand is generated by adding a constant X to the contents of an
index register. For example, if index register R1 contains the address of a location and the value X
defines an offset (also called a displacement), then to find the operand we first go to register R1,
read its contents (say 1000), and add X to it. Alternatively, the constant X may refer to the
address and the contents of the index register may define the offset to the operand. In either case,
one of the two values is given explicitly in the instruction and the other is stored in a register.
Relative Addressing (Relative Mode):
It is the same as index mode, except that the program counter (PC) is used in place of a
general-purpose register (GPR); the effective address is determined by the index mode using the PC.
This mode can be used to access a data operand, but its most common use is to specify the target
address in a branch instruction, e.g. Branch>0 LOOP.
This causes program execution to go to the branch target location, identified by the name LOOP,
if the branch condition is satisfied.
Auto-increment Mode:
The effective address of the operand is the contents of a register specified in the instruction.
After accessing the operand, the contents of this register are automatically incremented to point to
the next item in the list.
Auto-decrement Mode:
The contents of a register specified in the instruction are first automatically decremented and are
then used as the effective address of the operand.
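A sketch of effective-address computation for the index, relative, and auto-increment modes (the register contents here are hypothetical):

```python
regs = {"R1": 1000, "PC": 2000}     # hypothetical register contents

def ea_index(x, reg):               # Index mode: EA = X + [Ri]
    return x + regs[reg]

def ea_relative(x):                 # Relative mode: EA = X + [PC]
    return x + regs["PC"]

def ea_autoincrement(reg, step=4):  # (Ri)+: use [Ri] as the EA, then Ri += step
    ea = regs[reg]
    regs[reg] += step
    return ea

print(ea_index(20, "R1"))        # 1020
print(ea_relative(8))            # 2008
print(ea_autoincrement("R1"))    # 1000; R1 now holds 1004
```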
Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, "Computer Organization", Fifth Edition,
Tata McGraw Hill, 2002, PP. 3-9
Application
Processor, Embedded system.
ALU Design
In computing an arithmetic logic unit (ALU) is a digital circuit that performs arithmetic and
logical operations. The ALU is a fundamental building block of the central processing unit
(CPU) of a computer, and even the simplest microprocessors contain one for purposes such as
maintaining timers. The processors found inside modern CPUs and graphics processing units
(GPUs) accommodate very powerful and very complex ALUs; a single component may contain
a number of ALUs.
Mathematician John von Neumann proposed the ALU concept in 1945, when he wrote a
report on the foundations for a new computer called the EDVAC. Research into ALUs
remains an important part of computer science, falling under Arithmetic and logic
structures in the ACM Computing Classification System.
1-Bit ALU
This is a one-bit ALU that can perform the logical AND and logical OR operations.
Result = a AND b when operation = 0
Result = a OR b when operation = 1
The operation line is the select input of a MUX.
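A one-line sketch of this MUX behaviour (Python, illustrative only):

```python
def alu_1bit(a, b, operation):
    # operation is the MUX select line: 0 -> a AND b, 1 -> a OR b
    return (a & b) if operation == 0 else (a | b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", alu_1bit(a, b, 0), "OR:", alu_1bit(a, b, 1))
```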
32-Bit ALU
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, “Computer Organization”, Fifth
Edition, Tata McGraw Hill, 2002, PP. 3-9
9. APPLICATIONS
They present special design challenges, because there are simply too many inputs
to list all possible combinations in a truth table.
In applying this method, bus-wide operations are broken into simpler bit-by-bit
operations that are more easily defined by truth-tables, and more tractable to
familiar design techniques
Operations performed by computer
Boolean Expression
C = xy
S = x’y + xy’
Implementation of Half Adder Circuit
Full Adder
A circuit that performs the addition of three bits (two significant bits and a previous carry) is
a full adder.
Truth Table
Boolean expression using K map
Binary adder
This is also called a ripple-carry adder, because it is constructed from full adders connected
in cascade.
Truth Table
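The full adder equations (S = x XOR y XOR cin, C = xy + x cin + y cin) and the ripple-carry cascade can be sketched as:

```python
def full_adder(x, y, cin):
    s = x ^ y ^ cin                          # sum bit
    cout = (x & y) | (x & cin) | (y & cin)   # carry out
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    # Add two equal-length bit lists (LSB first); the carry ripples
    # from each full-adder stage to the next, as in the cascade above.
    carry, out = 0, []
    for x, y in zip(a_bits, b_bits):
        s, carry = full_adder(x, y, carry)
        out.append(s)
    return out, carry

# 3 + 1 = 4: [1, 1, 0] + [1, 0, 0], bits listed LSB first
print(ripple_carry_add([1, 1, 0], [1, 0, 0]))  # ([0, 0, 1], 0)
```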
9. APPLICATIONS
They present special design challenges, because there are simply too many inputs
to list all possible combinations in a truth table.
In applying this method, bus-wide operations are broken into simpler bit-by-bit
operations that are more easily defined by truth-tables, and more tractable to
familiar design techniques
Operations performed by computer
Multiplication Basics
Multiplies two n-bit operands X and Y.
The product is a 2n-bit unsigned number, or a (2n-1)-bit signed number.
Example : unsigned multiplication
Algorithm
1) Generation of partial products
2) Adding up partial products :
a) sequentially (sequential shift-and-add),
b) serially (combinational shift-and-add),
or
c) in parallel
Speedup techniques
Reduce the number of partial products
Accelerate the addition of partial products
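Sequential shift-and-add can be sketched as follows; the 8-bit operand width is an assumption for illustration:

```python
def shift_and_add_multiply(multiplicand, multiplier, n_bits=8):
    # One partial product per multiplier bit, added up sequentially.
    product = 0
    for i in range(n_bits):
        if (multiplier >> i) & 1:            # bit i of the multiplier
            product += multiplicand << i     # shifted partial product
    return product

print(shift_and_add_multiply(13, 11))  # 143
```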
Booth Recoding
Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, “Computer Organization”, Fifth
Edition, Tata McGraw Hill, 2002, PP. 3-9
9. APPLICATIONS
Applicable to sequential, array, and parallel multipliers
ADDITION
3.25 x 10 ** 3
+ 2.63 x 10 ** -1
-----------------
3.25 x 10 ** 3
+ 0.000263 x 10 ** 3
--------------------
3.250263 x 10 ** 3
(presumes use of infinite precision, without regard for accuracy)
third step: normalize the result (already normalized!)
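The three steps (align radix points, add mantissas, normalize) can be sketched on (mantissa, exponent) pairs in base 10; this is an illustration of the algorithm, not IEEE arithmetic:

```python
def fp_add(a, b):
    (ma, ea), (mb, eb) = a, b
    if ea < eb:                    # let a be the operand with the larger exponent
        (ma, ea), (mb, eb) = (mb, eb), (ma, ea)
    mb /= 10 ** (ea - eb)          # step 1: align radix points
    m = ma + mb                    # step 2: add mantissas
    while abs(m) >= 10:            # step 3: normalize to 1 <= |m| < 10
        m, ea = m / 10, ea + 1
    while 0 < abs(m) < 1:
        m, ea = m * 10, ea - 1
    return m, ea

m, e = fp_add((3.25, 3), (2.63, -1))
print(round(m, 6), e)  # 3.250263 3
```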
SUBTRACTION
Like addition as far as alignment of radix points; then the algorithm for subtraction of
sign-magnitude numbers takes over.
before subtracting,
compare magnitudes (don't forget the hidden bit!)
change sign bit if order of operands is changed.
don't forget to normalize number afterward.
MULTIPLICATION
example on decimal values given in scientific notation:
3.0 x 10 ** 1
+ 0.5 x 10 ** 2
algorithm: multiply mantissas
add exponents
3.0 x 10 ** 1
+ 0.5 x 10 ** 2
-----------------
1.50 x 10 ** 3
DIVISION
similar to multiplication.
true division:
do unsigned division on the mantissas (don't forget the hidden bit)
subtract TRUE exponents
The IEEE standard is very specific about how all this is done.
Unfortunately, the hardware to do all this is pretty slow.
Many implementations instead compute a x (1/b): figure out a reciprocal for b, and then use the
floating-point multiplication hardware. This can produce a result that isn't the same as with
true division.
8. Textbook :
Carl Hamacher, Zvonko Vranesic and Safwat Zaky, “Computer Organization”, Fifth
Edition, Tata McGraw Hill, 2002, PP. 3-9
9. APPLICATIONS
They present special design challenges, because there are simply too many inputs
to list all possible combinations in a truth table.
In applying this method, bus-wide operations are broken into simpler bit-by-bit
operations that are more easily defined by truth-tables, and more tractable to
familiar design techniques
Operations performed by computer
The structure of the arithmetic element can be altered under program control. Each
instruction specifies a particular form of machine in which to operate, ranging from a full
36-bit computer to four 9-bit computers with many variations.
Not only is such a scheme able to make more efficient use of the memory in storing data of
various word lengths, but it also can be expected to result in greater overall machine
speed because of the increased parallelism of operation.
Peak operating rates must then be referred to particular configurations. For addition and
multiplication, these peak rates are given in the following table:
From the CPU's perspective, an I/O device appears as a set of special-purpose registers, of three
general types:
Status registers provide status information to the CPU about the I/O device. These
registers are often read-only, i.e. the CPU can only read their bits, and cannot change
them.
Configuration/control registers are used by the CPU to configure and control the device.
Bits in these configuration registers may be write-only, so the CPU can alter them, but
not read them back. Most bits in control registers can be both read and written.
Data registers are used to read data from or send data to the I/O device.
In some instances, a given register may fit more than one of the above categories, e.g. some bits
are used for configuration while other bits in the same register provide status information.
The logic circuit that contains these registers is called the device controller, and the software that
communicates with the controller is called a device driver.
+-------------------+ +-----------+
| Device controller | | |
+-------+ | |<--------->| Device |
| |---------->| Control register | | |
| CPU |<----------| Status register | | |
| |<--------->| Data register | | |
+-------+ | | | |
+-------------------+ +-----------+
Simple devices such as keyboards and mice may be represented by only a few registers, while
more complex ones such as disk drives and graphics adapters may have dozens.
Each of the I/O registers, like memory, must have an address so that the CPU can read or write
specific registers.
Some CPUs have a separate address space for I/O devices. This requires separate instructions to
perform I/O operations.
Other architectures, like the MIPS, use memory-mapped I/O. When using memory-mapped I/O,
the same address space is shared by memory and I/O devices. Some addresses represent memory
cells, while others represent registers in I/O devices. No separate I/O instructions are needed in a
CPU that uses memory-mapped I/O. Instead, we can perform I/O operations using any
instruction that can reference memory.
+---------------+
| Address space |
| +-------+ |
| | ROM | |
| +-------+ |
+-------+address| | | |
| |------>| | RAM | |
| CPU | | | | |
| |<----->| +-------+ |
+-------+ data | | | |
| | I/O | |
| +-------+ |
+---------------+
On the MIPS, we would access ROM, RAM, and I/O devices using load and store instructions.
Which type of device we access depends only on the address used!
The 32-bit MIPS architecture has a 32-bit address, and hence an address space of 4 gigabytes.
Addresses 0x00000000 through 0xfffeffff are used for memory, and addresses 0xffff0000 -
0xffffffff (the last 64 kilobytes) are reserved for I/O device registers. This is a very small fraction
of the total address space, and yet far more space than is needed for I/O devices on any one
computer.
Each register within an I/O controller must be assigned a unique address within the address
space. This address may be fixed for certain devices, and auto-assigned for others. (PC plug-and-
play devices have auto-assigned I/O addresses, which are determined during boot-up.)
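Memory-mapped I/O can be sketched as one load/store path dispatched by address; the device register and its contents below are hypothetical, but the 64 KB window at 0xffff0000 follows the MIPS convention above:

```python
IO_BASE = 0xFFFF0000           # start of the MIPS-style I/O window
ram = bytearray(1024)          # a tiny stand-in for memory
io_regs = {IO_BASE: 0x1}       # hypothetical device status register ("ready")

def load_word(addr):
    if addr >= IO_BASE:        # the address alone selects a device register
        return io_regs.get(addr, 0)
    return int.from_bytes(ram[addr:addr + 4], "little")

def store_word(addr, value):
    if addr >= IO_BASE:        # same instruction, different target
        io_regs[addr] = value
    else:
        ram[addr:addr + 4] = value.to_bytes(4, "little")

store_word(0, 42)
print(load_word(0))            # 42 (RAM)
print(load_word(IO_BASE))      # 1  (device status register)
```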
8. TEXT BOOKS:
9. APPLICATIONS
Real-life application of memory concepts: memory is the ability to encode, store, and
retrieve a stimulus.
SRI VIDYA COLLEGE OF ENGINEERING AND
TECHNOLOGY, VIRUDHUNAGAR
Department of Computer Science & Engineering
MEMORY TECHNOLOGIES
Much of the success of computer technology stems from the tremendous progress in
storage technology.
Early computers had a few kilobytes of random-access memory. The earliest IBM PCs didn’t
even have a hard disk.
That changed with the introduction of the IBM PC-XT in 1982, with its 10-megabyte
disk. By the year 2010, typical machines had 150,000 times as much disk storage, and the
amount of storage was increasing by a factor of 2 every couple of years.
Random-Access Memory
Random-access memory (RAM) comes in two varieties—static and dynamic. Static RAM
(SRAM) is faster and significantly more expensive than Dynamic RAM (DRAM). SRAM is used
for cache memories, both on and off the CPU chip. DRAM is used for the main memory plus the
frame buffer of a graphics system. Typically, a desktop system will have no more than a few
megabytes of SRAM, but hundreds or thousands of megabytes of DRAM.
Static RAM
SRAM stores each bit in a bistable memory cell. Each cell is implemented with a six-transistor
circuit. This circuit has the property that it can stay indefinitely in either of two different voltage
configurations, or states. Any other state will be unstable—starting from there, the circuit will
quickly move toward one of the stable
Dynamic RAM
DRAM stores each bit as charge on a capacitor. This capacitor is very small—typically
around 30 femtofarads, that is, 30 x 10^-15 farads. Recall, however, that a farad is a very large
unit of measure. DRAM storage can be made very dense—each cell consists of a capacitor and a
single access-transistor. Unlike SRAM, however, a DRAM memory cell is very sensitive to any
disturbance. When the capacitor voltage is disturbed, it will never recover. Exposure to light rays
will cause the capacitor voltages to change. In fact, the sensors in digital cameras and
camcorders are essentially arrays of DRAM cells.
Conventional DRAMs
The cells (bits) in a DRAM chip are partitioned into d supercells, each consisting of w
DRAM cells. A d × w DRAM stores a total of dw bits of information. The supercells are
organized as a rectangular array with r rows and c columns, where rc = d. Each supercell has an
address of the form (i, j), where i denotes the row, and j denotes the column.
For example, Figure 6.3 shows the organization of a 16 x 8 DRAM chip with d = 16 supercells,
w = 8 bits per supercell, r = 4 rows, and c = 4 columns. The shaded box denotes the
supercell at address (2, 1).
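The (i, j) supercell address is just the row-major position of the supercell number; a sketch:

```python
def supercell_address(n, c):
    # Row-major (i, j) of supercell n in an array with c columns:
    # i = n // c, j = n % c
    return divmod(n, c)

# 16 x 8 DRAM: d = 16 supercells arranged as a 4 x 4 array (r = c = 4).
print(supercell_address(9, 4))  # (2, 1), the shaded supercell in the figure
```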
Information flows in and out of the chip via external connectors called pins. Each pin carries a 1-
bit signal.
Figure shows two of these sets of pins: eight data pins that can transfer 1 byte in or out of the
chip, and two addr pins that carry two-bit row and column supercell addresses. Other pins that
carry control information are not shown.
Data flows back and forth between the processor and the DRAM main memory over
shared electrical conduits called buses. Each transfer of data between the CPU and memory is
accomplished with a series of steps called a bus transaction. A read transaction transfers data
from the main memory to the CPU. A write transaction transfers data from the CPU to the main
memory.
A bus is a collection of parallel wires that carry address, data, and control signals.
Depending on the particular bus design, data and address signals can share the same set of wires,
or they can use different sets. Also, more than two devices can share the same bus. The control
wires carry signals that synchronize the transaction and identify what kind of transaction is
currently being performed.
Figure : Example bus structure that connects the CPU and main memory.
Disk Storage
Disks are workhorse storage devices that hold enormous amounts of data, on the order of
hundreds to thousands of gigabytes, as opposed to the hundreds or thousands of megabytes in a
RAM-based memory. However, it takes on the order of milliseconds to read information from a
disk, a hundred thousand times longer than from DRAM and a million times longer than from
SRAM.
8. TEXT BOOKS:
9. APPLICATIONS
Real-life application of memory concepts: memory is the ability to encode, store, and
retrieve a stimulus.
One focuses on reducing the miss rate by reducing the probability that two different memory
blocks will contend for the same cache location. The second technique reduces the miss penalty
by adding an additional level to the hierarchy. This technique, called multilevel caching, first
appeared in high-end computers selling for more than $100,000 in 1990; since then it has
become common on desktop computers selling for less than $500! CPU time can be divided into
the clock cycles that the CPU spends executing the program and the clock cycles that the CPU
spends waiting for the memory system. Normally, we assume that the costs of cache accesses
that are hits are part of the normal CPU execution cycles. Thus,
CPU time = (CPU execution clock cycles + Memory-stall clock cycles) × Clock cycle time
The memory-stall clock cycles come primarily from cache misses, and we make that assumption
here. We also restrict the discussion to a simplified model of the memory system. In real
processors, the stalls generated by reads and writes can be quite complex, and accurate
performance prediction usually requires very detailed simulations of the processor and memory
system.
Read-stall cycles = (Reads / Program) × Read miss rate × Read miss penalty
Writes are more complicated. For a write-through scheme, we have two sources of stalls: write
misses, which usually require that we fetch the block before continuing the write (see the
Elaboration on page 467 for more details on dealing with writes), and write buffer stalls, which
occur when the write buffer is full when a write occurs.
Assume the miss rate of an instruction cache is 2% and the miss rate of the data cache is 4%. If a
processor has a CPI of 2 without any memory stalls and the miss penalty is 100 cycles for all
misses, determine how much faster a processor would run with a perfect cache that never missed.
Assume the frequency of all loads and stores is 36%.
So far, when we place a block in the cache, we have used a simple placement scheme: A block
can go in exactly one place in the cache. As mentioned earlier, it is called direct mapped because
there is a direct mapping from any block address in memory to a single location in the upper
level of the hierarchy. However, there is actually a whole range of schemes for placing blocks.
Direct mapped, where a block can be placed in exactly one location, is at one extreme. At the
other extreme is a scheme where a block can be placed in any location in the cache. Such a
scheme is called fully associative, because a block in memory may be associated with any entry
in the cache. To find a given block in a fully associative cache, all the entries in the cache must
be searched because a block can be placed in any one. To make the search practical, it is done in
parallel with a comparator associated with each cache entry. These comparators significantly
increase the hardware cost, effectively making fully associative placement practical only for
caches with small numbers of blocks.
Choosing Which Block to Replace :
When a miss occurs in a direct-mapped cache, the requested block can go in exactly one
position, and the block occupying that position must be replaced. In an associative cache, we
have a choice of where to place the requested block, and hence a choice of which block to
replace. In a fully associative cache, all blocks are candidates for replacement. In a set-
associative cache, we must choose among the blocks in the selected set. The most commonly
used scheme is least recently used (LRU), which we used in the previous example. In an LRU
scheme, the block replaced is the one that has been unused for the longest time. The set-
associative example on page 482 uses LRU, which is why we replaced Memory(0) instead of
Memory(6). LRU replacement is implemented by keeping track of when each element in a set
was used relative to the other elements in the set. For a two-way set-associative cache, tracking
when the two elements were used can be implemented by keeping a single bit in each set and
setting the bit to indicate an element whenever that element is referenced. As associativity
increases, implementing LRU gets harder; in Section 5.5, we will see an alternative scheme for
replacement.
Similarly, the main memory can act as a "cache" for the secondary storage, usually implemented
with magnetic disks. This technique is called virtual memory. Historically, there were two major
motivations for virtual memory: to allow efficient and safe sharing of memory among multiple
programs, and to remove the programming burdens of a small, limited amount of main memory.
Four decades after its invention, it's the former reason that reigns today.
Consider a collection of programs running all at once on a computer. Of course,
to allow multiple programs to share the same memory, we must be able to protect the programs
from each other, ensuring that a program can only read and write the portions of main memory
that have been assigned to it. Main memory need contain only the active portions of the many
programs, just as a cache contains only the active portion of one program. Thus, the principle of
locality enables virtual memory as well as caches, and virtual memory allows us to efficiently
share the processor as well as the main memory.
The second motivation for virtual memory is to allow a single user program to exceed the size of
primary memory. Formerly, if a program became too large for memory, it was up to the
programmer to make it fit. Programmers divided programs into pieces and then identified the
pieces that were mutually exclusive. These overlays were loaded or unloaded under user program
control during execution, with the programmer ensuring that the program never tried to access an
overlay that was not loaded and that the overlays loaded never exceeded the total size of the
memory. Overlays were traditionally organized as modules, each containing both code and data.
In virtual memory, the address is broken into a virtual page number and a page offset. Figure
5.20 shows the translation of the virtual page number to a physical page number. The physical
page number constitutes the upper portion of the physical address, while the page offset, which is
not changed, constitutes the lower portion. The number of bits in the page offset field determines
the page size. The number of pages addressable with the virtual address need not match the
number of pages addressable with the physical address. Having a larger number of virtual pages
than physical pages is the basis for the illusion of an essentially unbounded amount of virtual
memory.
8. TEXT BOOKS:
V. Carl Hamacher, Zvonko G. Vranesic and Safwat G. Zaky, "Computer Organization", Sixth
Edition, McGraw-Hill, 2012.
David A. Patterson and John L. Hennessy, "Computer Organization and Design", Fifth Edition,
Morgan Kaufmann/Elsevier, 2014.
Input/Output
The computer system’s I/O architecture is its interface to the outside world. This
architecture is designed to provide a systematic means of controlling interaction with the outside
world and to provide the operating system with the information it needs to manage I/O activity
effectively.
There are three principal I/O techniques: programmed I/O, in which I/O occurs under the
direct and continuous control of the program requesting the I/O operation; interrupt-driven I/O,
in which a program issues an I/O command and then continues to execute, until it is interrupted
by the I/O hardware to signal the end of the I/O operations; and direct memory access (DMA), in
which a specialized I/O processor takes over control of an I/O operation to move a large block of
data.
Two important examples of external I/O interfaces are FireWire and InfiniBand.
Keyboard/Monitor
• Most common means of computer/user interaction
• Keyboard provides input that is transmitted to the computer
• Monitor displays data provided by the computer
• The character is the basic unit of exchange
• Each character is associated with a 7 or 8 bit code
Disk Drive
• Contains electronics for exchanging data, control, and status signals with an I/O module
• Contains electronics for controlling the disk read/write mechanism
• Fixed-head disk – transducer converts between magnetic patterns on the disk surface and bits in
the buffer
• Moving-head disk – must move the disk arm rapidly across the surface
I/O Modules
Module Function
• Control and timing
• Processor communication
• Device communication
• Data buffering
• Error detection
Processor communication
• Command decoding: the I/O module accepts commands from the processor, sent as signals on
the control bus
• Data: data are exchanged between the processor and the I/O module over the data bus
• Status reporting: common status signals such as BUSY and READY are used because
peripherals are slow
• Address recognition: the I/O module must recognize a unique address for each peripheral that
it controls
Device communication
• Commands, status information, and data
Data buffering
• Data come from main memory in rapid bursts and must be buffered by the I/O module, then
sent to the device at the device's rate
Error detection
• The I/O module is responsible for reporting errors to the processor
Module connects to the computer through a set of signal lines – system bus
• Data transferred to and from the module are buffered with data registers
• Status provided through status registers – may also act as control registers
• Module logic interacts with processor via a set of control signal lines
• Processor uses control signal lines to issue commands to the I/O module
• Module must recognize and generate addresses for devices it controls
• Module contains logic for device interfaces to the devices it controls
• I/O module functions allow the processor to view devices in a simple-minded way
• The I/O module may hide device details – timing, formats, etc. – from the processor, so the
processor only operates in terms of simple read and write commands
• Alternatively, the I/O module may leave much of the work of controlling a device – rewinding
a tape, etc. – visible to the processor
I/O mapping
• Memory-mapped I/O
o Single address space for both memory and I/O devices
o I/O module registers treated as memory addresses
o Same machine instructions used to access both memory and I/O devices
o Only a single read line and a single write line needed
o Advantage – allows for more efficient programming
o Disadvantage – uses up valuable memory address space
o Commonly used
• Isolated I/O
o Separate address spaces for memory and I/O devices
o Separate memory and I/O select lines needed
o Small number of I/O instructions
o Commonly used
Design Issues
• How does the processor determine which device issued the interrupt?
• How are multiple interrupts dealt with?
Device identification
• Multiple interrupt lines – each line may have multiple I/O modules
• Software poll – poll each I/O module
o Separate command line – TEST I/O
o Processor reads the status register of each I/O module
o Time consuming
• Daisy chain (hardware poll)
o Common interrupt request line
o Processor sends interrupt acknowledge
o Requesting I/O module places a word of data on the data lines – a "vector" that uniquely
identifies the I/O module – vectored interrupt
• Bus arbitration
I/O module first gains control of the bus
I/O module sends interrupt request
The processor acknowledges the interrupt request
I/O module places its vector on the data lines
Multiple interrupts
• The techniques above not only identify the requesting I/O module but also provide methods of
assigning priorities
• Multiple lines – processor picks line with highest priority
• Software polling – polling order determines priority
• Daisy chain – daisy chain order of the modules determines priority
• Bus arbitration – arbitration scheme determines priority
Ports A, B, and C function as 8-bit I/O ports (port C can be divided into two 4-bit I/O ports).
The left side of the diagram shows the interface to the 80386 bus.
Direct Memory Access
Drawback of Programmed and Interrupt-Driven I/O
• I/O transfer rate limited to speed that processor can test and service devices
• Processor tied up managing I/O transfers
DMA Function
• DMA module on system bus used to mimic the processor.
• DMA module only uses system bus when processor does not need it.
• DMA module may temporarily force processor to suspend operations – cycle stealing.
DMA Operation
The processor issues a command to DMA module
Read or write
I/O device address using data lines
Starting memory address using data lines – stored in address register
Number of words to be transferred using data lines – stored in data register
The processor then continues with other work
DMA module transfers the entire block of data – one word at a time – directly to or from
memory without going through the processor DMA module sends an interrupt to the
processor when complete
DMA and Interrupt Breakpoints during Instruction Cycle
• The processor is suspended just before it needs to use the bus.
• The DMA module transfers one word and returns control to the processor.
• Since this is not an interrupt, the processor does not have to save its context.
• The processor executes more slowly, but this is still far more efficient than either programmed
or interrupt-driven I/O.
DMA Configurations
Command
• Instructions that are read from memory by an IOP
• Distinguished from instructions that are read by the CPU
• Commands are prepared by experienced programmers and are stored in memory
• The command words constitute the program for the IOP
Link layer
• Describes the transmission of data in the packets
• Asynchronous
o Variable amount of data and several bytes of transaction data transferred as a packet
o Uses an explicit address
o Acknowledgement returned
• Isochronous
o Variable amount of data in sequence of fixed sized packets at regular intervals
o Uses simplified addressing
o No acknowledgement
Transaction layer
• Defines a request-response protocol that hides the lower-layer detail of FireWire from
applications.
FireWire Protocol Stack
FireWire Subactions
InfiniBand
• Recent I/O specification aimed at high-end server market
• First version released early 2001
• Standard for data flow between processors and intelligent I/O devices
• Intended to replace PCI bus in servers
• Greater capacity, increased expandability, enhanced flexibility
• Connect servers, remote storage, network devices to central fabric of switches and links
• Greater server density
• Independent nodes added as required
• I/O distance from server up to
o 17 meters using copper
o 300 meters using multimode optical fiber
o 10 kilometers using single-mode optical fiber
• Transmission rates up to 30 Gbps
InfiniBand Operations
• 16 logical channels (virtual lanes) per physical link
• One lane for fabric management – all other lanes for data transport
• Data sent as a stream of packets
• Virtual lane temporarily dedicated to the transfer from one end node to another
• Switch maps traffic from incoming lane to outgoing lane