
MODULE 4: INPUT/OUTPUT ORGANIZATION

ACCESSING I/O DEVICES


• There are 2 ways to deal with I/O devices (Figure 4.1).
1) Memory mapped I/O
• Memory and I/O devices share a common address-space.
• Any data-transfer instruction (like Move, Load) can be used to exchange information.
• For example, Move DATAIN, R0; this instruction reads data from DATAIN (input-buffer associated with
keyboard) & stores it into processor-register R0. (See the sketch after this list.)
2) In I/O mapped I/O, memory and I/O address-spaces are different.
• Special instructions named IN and OUT are used for data transfer.
• Advantage of separate I/O space: I/O devices deal with fewer address-lines.
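Below is a minimal C sketch of the memory-mapped style of access, assuming a hypothetical keyboard input-buffer DATAIN mapped at address 0x4000; the address and register name are illustrative, not fixed by the text.

```c
/* Memory-mapped I/O sketch: the device register is read with an
   ordinary load, just like the Move DATAIN, R0 example above.
   The address 0x4000 is a hypothetical mapping. */
#include <stdint.h>

#define DATAIN ((volatile uint8_t *)0x4000) /* keyboard input-buffer */

uint8_t read_keyboard(void) {
    return *DATAIN; /* plain data-transfer instruction reads the device */
}
```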

I/O Interface for an Input Device


• Address decoder: decodes address sent on bus, so as to enable input-device (Figure 4.2).
• Data register: holds data being transferred to or from the processor.
• Status register: contains information relevant to operation of I/O device.
• Address decoder, data- and status-registers, and control-circuitry required to coordinate I/O transfers constitute
device's interface-circuit.
MECHANISMS USED FOR INTERFACING I/O DEVICES
1) Program Controlled I/O
• Processor repeatedly checks a status-flag to achieve required synchronization between processor &
input/output device. (We say that the processor polls the device).
• Main drawback: The processor wastes its time checking the status of the device before the actual data
transfer takes place (see the polling sketch after this list).
2) Interrupt I/O
• Synchronization is achieved by having I/O device send a special signal over bus whenever it is ready for
a data transfer operation.
3) Direct Memory Access (DMA)
• This involves having the device-interface transfer data directly to or from the memory without continuous
involvement by the processor.
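For program-controlled I/O (mechanism 1 above), a minimal polling sketch in C is shown below; the STATUS register address and the SIN flag bit position are assumptions for illustration.

```c
/* Program-controlled I/O sketch: the processor polls a status-flag
   (SIN) until the device is ready, then performs the transfer.
   Register addresses and the SIN bit position are hypothetical. */
#include <stdint.h>

#define STATUS ((volatile uint8_t *)0x4004)
#define DATAIN ((volatile uint8_t *)0x4000)
#define SIN    0x01 /* set by the device when new data is ready */

uint8_t poll_and_read(void) {
    while ((*STATUS & SIN) == 0)
        ;               /* processor wastes time here, the main drawback */
    return *DATAIN;     /* actual data transfer */
}
```

The busy-wait loop is exactly where the drawback noted above shows up: the processor can do no useful work while the flag is clear.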

INTERRUPTS
• I/O device initiates the action instead of the processor. This is done by sending a special hardware signal to the
processor, called an interrupt (INTR), on the interrupt-request line.
• The processor can be performing its own task without the need to continuously check the I/O device.
• When device gets ready, it will "alert" the processor by sending an interrupt-signal (Figure 4.5).
• The routine executed in response to an interrupt-request is called the ISR (Interrupt Service Routine).
• Once the interrupt-request signal comes from the device, the processor has to inform the device that its request
has been recognized and will be serviced soon. This is indicated by a special control signal on the bus called
interrupt-acknowledge (INTA).
Difference between subroutine & ISR
• A subroutine performs a function required by the program from which it is called.
However, the ISR may not have anything in common with the program being executed at the
time the interrupt-request is received. Before starting execution of the ISR, any information that
may be altered during the execution of that routine must be saved. This information must be
restored before the interrupted program is resumed.
• Another difference is that an interrupt is a mechanism for coordinating I/O transfers,
whereas a subroutine is just a linkage of 2 or more functions related to each other.

• The speed of operation of the processor and I/O devices differ greatly. Also, since I/O devices are manually
operated in many cases (like pressing a key on the keyboard), there may not be synchronization between the CPU
operations and I/O operations with reference to the CPU clock. To cater to the different needs of I/O operations,
3 mechanisms have been developed for interfacing I/O devices: 1) Program-controlled I/O 2) Interrupt I/O
3) Direct memory access (DMA).
• Saving registers increases the delay between the time an interrupt request is received and the start of execution
of the ISR. This delay is called interrupt latency.
• Since interrupts can arrive at any time, they may alter the sequence of events. Hence, facility must be provided
to enable and disable interrupts as desired.
• Consider the case of a single interrupt-request from one device. The device keeps the interrupt-request signal
activated until it is informed that the processor has accepted its request. If this activated signal is not
deactivated, it may lead to successive interruptions, causing the system to enter an infinite loop.
INTERRUPT HARDWARE
• An I/O device requests an interrupt by activating a bus-line called interrupt-request(IR).
• A single IR line can be used to serve 'n' devices (Figure 4.6).
• All devices are connected to IR line via switches to ground.
• To request an interrupt, a device closes its associated switch. Thus, if all IR signals are inactive (i.e. if all
switches are open), the voltage on the IR line will be equal to Vdd.
• When a device requests an interrupt by closing its switch, the voltage on the line drops to 0, causing the INTR
received by the processor to go to 1.
• The value of INTR is the logical OR of the requests from individual devices.
• A special type of gate known as open-collector (or open-drain) is used to drive the INTR line.
• Resistor R is called a pull-up resistor because it pulls the line voltage up to the high-voltage state when the
switches are open.

ENABLING & DISABLING INTERRUPTS


• To prevent the system from entering an infinite loop because of interrupts, there are 3 possibilities:
1) The first possibility is to have the processor-hardware ignore the interrupt-request line until the
execution of the first instruction of the ISR has been completed.
2) The second option is to have the processor automatically disable interrupts before starting the
execution of the ISR.
3) In the third option, the processor has a special interrupt-request line for which the interrupt-handling
circuit responds only to the leading edge of the signal. Such a line is said to be edge-triggered.
• Sequence of events involved in handling an interrupt-request from a single device is as follows:
1) The device raises an interrupt-request.
2) The program currently being executed is interrupted.
3) All interrupts are disabled(by changing the control bits in the PS).
4) The device is informed that its request has been recognized, and
in response, the device deactivates the interrupt-request signal.
5) The action requested by the interrupt is performed by the ISR.
6) Interrupts are enabled again and execution of the interrupted program is resumed.
HANDLING MULTIPLE DEVICES
Polling
• Information needed to determine whether a device is requesting an interrupt is available in its status-register.
• When a device raises an interrupt-request, it sets IRQ bit to 1 in its status-register (Figure 4.3).
• KIRQ and DIRQ are the interrupt-request bits for keyboard & display.
• Simplest way to identify interrupting device is to have ISR poll all I/O devices connected to bus.
• The first device encountered with its IRQ bit set is the device that should be serviced. After servicing this device,
subsequent requests may be serviced.
• Main advantage: Simple & easy to implement.
Main disadvantage: More time spent polling IRQ bits of all devices (that may not be requesting
any service).
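A sketch of such a polling ISR is shown below, assuming two devices (keyboard and display) whose status-register addresses and KIRQ/DIRQ bit positions are chosen for illustration only.

```c
/* Polling ISR sketch: check each device's status-register in turn and
   service the first one found with its IRQ bit set. Addresses and bit
   positions are assumptions, not from the text. */
#include <stdint.h>

#define KSTATUS ((volatile uint8_t *)0x4004) /* keyboard status-register */
#define DSTATUS ((volatile uint8_t *)0x4014) /* display status-register  */
#define KIRQ    0x02
#define DIRQ    0x02

void keyboard_service(void); /* device-specific service routines */
void display_service(void);

void isr_poll_devices(void) {
    if (*KSTATUS & KIRQ)       /* first device found requesting service */
        keyboard_service();
    else if (*DSTATUS & DIRQ)
        display_service();
}
```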

Vectored Interrupts
• A device requesting an interrupt identifies itself by sending a special-code to processor over bus. (This enables
processor to identify individual devices even if they share a single interrupt-request line).
• The code represents starting-address of ISR for that device.
• ISR for a given device must always start at same location.
• The address stored at the location pointed to by interrupting-device is called the interrupt-vector.
• Processor
→ loads interrupt-vector into PC &
→ executes appropriate ISR
• The interrupting-device must put data on the bus only when the processor is ready to receive it.
• When processor is ready to receive interrupt-vector code, it activates INTA line.
• I/O device responds by sending its interrupt-vector code & turning off the INTR signal.
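One way to picture vectored interrupts in software is a table of ISR starting-addresses indexed by the device's code; the sketch below is a simplified model with hypothetical device codes and routine names.

```c
/* Vectored-interrupt sketch: the code sent by the device selects an
   entry in a vector table, and the entry (the interrupt-vector) is the
   starting address of that device's ISR. Codes and names are
   illustrative. */
typedef void (*isr_t)(void);

#define NUM_DEVICES 4

static void keyboard_isr(void) { /* service the keyboard */ }
static void display_isr(void)  { /* service the display  */ }

/* Entry i holds the ISR starting-address for the device sending code i. */
static isr_t vector_table[NUM_DEVICES] = { keyboard_isr, display_isr, 0, 0 };

/* The processor loads the interrupt-vector into the PC; here that step
   is modelled as an indirect call. */
void dispatch_interrupt(unsigned device_code) {
    if (device_code < NUM_DEVICES && vector_table[device_code])
        vector_table[device_code]();
}
```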
INTERRUPT NESTING
• A multiple-priority scheme is implemented by using separate INTR & INTA lines for each device.
• Each of the INTR lines is assigned a different priority-level (Figure 4.7).
• Priority-level of processor is the priority of program that is currently being executed.
• During execution of an ISR, interrupt-requests will be accepted from some devices but not from others
depending upon device’s priority.
• Processor accepts interrupts only from devices that have priority higher than its own.
• When execution of an ISR for some device is started, the priority of the processor is raised to that of the device.
• Processor's priority is encoded in a few bits of processor-status (PS) word. This can be changed by program
instructions that write into PS. These are called privileged instructions.
• Privileged-instructions can be executed only while processor is running in supervisor-mode.
• Processor is in supervisor-mode only when executing operating-system routines. (An attempt to execute a
privileged-instruction while in the user-mode leads to a special type of interrupt called a privileged exception).

SIMULTANEOUS REQUESTS
• INTR line is common to all devices (Figure 4.8).
• INTA line is connected in a daisy-chain fashion such that INTA signal propagates serially through devices.
• When several devices raise an interrupt-request and INTR line is activated, processor responds by setting INTA
line to 1. This signal is received by device 1.
• Device 1 passes signal on to device 2 only if it does not require any service.
• If device 1 has a pending-request for interrupt, it blocks INTA signal and proceeds to put its identifying code on
data lines.
• Device that is electrically closest to processor has highest priority.
• Main advantage: This allows the processor to accept interrupt-requests from some devices but not
from others depending upon their priorities.

DIRECT MEMORY ACCESS (DMA)


• The transfer of a block of data directly between an external device & main memory without continuous
involvement by the processor is called DMA.
• DMA transfers are performed by a control-circuit that is part of the I/O device interface. This circuit is called
a DMA controller (Figure 4.19).
• DMA controller performs the functions that would normally be carried out by processor
• In controller, 3 registers are accessed by processor to initiate transfer operations (Figure 4.18):
1) Two registers are used for storing starting-address & word-count
2) Third register contains status- & control-flags
• The R/W bit determines direction of transfer.
When R/W=1, controller performs a read operation(i.e. it transfers data from memory to I/O),
Otherwise it performs a write operation (i.e. it transfers data from I/O device to memory).
• When Done=1, the controller has completed transferring a block of data and is ready to receive another command.
• When IE=1, controller raises an interrupt after it has completed transferring a block of data (IE=Interrupt
Enable).
• Finally, when IRQ=1, controller requests an interrupt. (Requests by DMA devices for using the bus are always
given higher priority than processor requests).
• There are 2 ways in which the DMA operation can be carried out:
1) In one method, the processor originates most memory-access cycles. The DMA controller is said to "steal"
memory cycles from the processor. Hence, this technique is usually called cycle stealing.
2) In the second method, the DMA controller is given exclusive access to main-memory to transfer a block of data
without any interruption. This is known as block mode (or burst mode).
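The three controller registers described above can be modelled as a small C struct; the flag bit positions below are assumptions, since the text names R/W, Done, IE and IRQ but not where they sit in the status word.

```c
/* DMA controller register sketch: starting-address, word-count, and a
   status/control register. Bit positions are hypothetical. */
#include <stdint.h>

typedef struct {
    volatile uint32_t start_addr; /* starting-address of the block      */
    volatile uint32_t word_count; /* number of words to transfer        */
    volatile uint32_t status;     /* status- & control-flags            */
} dma_controller_t;

#define DMA_RW   (1u << 0) /* 1 = read (memory to I/O), 0 = write       */
#define DMA_DONE (1u << 1) /* set when the block transfer completes     */
#define DMA_IE   (1u << 2) /* raise an interrupt after the transfer     */
#define DMA_IRQ  (1u << 3) /* controller is requesting an interrupt     */

/* Program a read transfer of `count` words starting at `addr`, with an
   interrupt raised on completion. */
void dma_start_read(dma_controller_t *dma, uint32_t addr, uint32_t count) {
    dma->start_addr = addr;
    dma->word_count = count;
    dma->status     = DMA_RW | DMA_IE;
}
```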
BUS ARBITRATION
• The device that is allowed to initiate data transfers on bus at any given time is called bus-master.
• There can be only one bus master at any given time.
• Bus arbitration is the process by which next device to become the bus-master is selected and bus-mastership is
transferred to it.
• There are 2 approaches to bus arbitration:
1) In centralized arbitration, a single bus-arbiter performs the required arbitration.
2) In distributed arbitration, all devices participate in the selection of the next bus-master.

CENTRALIZED ARBITRATION
• A single bus-arbiter performs the required arbitration (Figure: 4.20 & 4.21).
• Normally, the processor is the bus-master unless it grants bus-mastership to one of the DMA controllers.
• A DMA controller indicates that it needs to become bus-master by activating the Bus-Request line (BR).
• The signal on the BR line is the logical OR of bus-requests from all devices connected to it.
• When BR is activated, processor activates Bus-Grant signal(BG1) indicating to DMA controllers that they may
use bus when it becomes free. (This signal is connected to all DMA controllers using a daisy-chain arrangement).
• If DMA controller-1 is requesting the bus, it blocks propagation of grant-signal to other devices.

• Current bus-master indicates to all devices that it is using bus by activating Bus-Busy line (BBSY).
• The arbiter circuit ensures that only one request is granted at any given time, according to a predefined priority
scheme.

• A conflict may arise if both the processor and a DMA controller try to use the bus at the same time to access the
main memory. To resolve these conflicts, a special circuit called the bus arbiter is provided to coordinate the
activities of all devices requesting memory transfers.
DISTRIBUTED ARBITRATION
• All devices participate in the selection of the next bus-master (Figure 4.22).
• Each device on bus is assigned a 4-bit identification number (ID).
• When 1 or more devices request the bus, they
→ assert the Start-Arbitration signal &
→ place their 4-bit ID numbers on four open-collector lines, ARB0 through ARB3.
• A winner is selected as a result of interaction among signals transmitted over these lines by all
contenders.
• Net outcome is that the code on the 4 lines represents the request that has the highest ID number.
• Main advantage: This approach offers higher reliability since operation of bus is not dependent on
any single device.
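The winning-ID behaviour can be modelled in software, as sketched below: the open-collector lines carry the OR of the IDs still contending, and a device withdraws once a more significant line differs from its own ID. This loop is an illustrative model of the interaction, not the actual hardware.

```c
/* Distributed arbitration sketch: resolve ARB3..ARB0 bit by bit from
   the most significant line; the code left on the lines is the highest
   competing 4-bit ID. */
#include <stdio.h>

unsigned arbitrate(const unsigned *ids, int n) {
    unsigned winner = 0;
    for (int bit = 3; bit >= 0; bit--) {
        unsigned line = 0;
        for (int i = 0; i < n; i++)
            /* a device still contends only if it matches the winner on
               all more significant bits */
            if ((ids[i] >> (bit + 1)) == (winner >> (bit + 1)))
                line |= ids[i] & (1u << bit);
        winner |= line;
    }
    return winner;
}

int main(void) {
    unsigned ids[] = { 5, 6, 3 };           /* three contending devices */
    printf("bus granted to ID %u\n", arbitrate(ids, 3)); /* prints 6 */
    return 0;
}
```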
THE MEMORY SYSTEM
5.4 Speed, Size and Cost
A big challenge in the design of a computer system is to provide a sufficiently large
memory, with a reasonable speed at an affordable cost.

Static RAM: Very fast, but expensive, because a basic SRAM cell has a complex circuit
making it impossible to pack a large number of cells onto a single chip.

Dynamic RAM: Simpler basic cell circuit, hence are much less expensive, but
significantly slower than SRAMs.

Magnetic disks: Storage provided by DRAMs is higher than SRAMs, but is still less than
what is necessary. Secondary storage such as magnetic disks provides a large amount of
storage, but is much slower than DRAMs.
Fastest access is to the data held in processor registers. Registers are at the top of the
memory hierarchy. A relatively small amount of memory can be implemented on the
processor chip; this is the processor cache. There are two levels of cache: Level 1 (L1)
cache is on the processor chip, and Level 2 (L2) cache sits between main memory and the
processor. The next level is main memory, implemented as SIMMs; it is much larger, but
much slower, than cache memory. The next level is magnetic disks, which provide a huge
amount of inexpensive storage. Since the speed of memory access is critical, the idea is to
bring instructions and data that will be used in the near future as close to the processor as
possible.
5.5 Cache memories
Processor is much faster than the main memory. As a result, the processor has to spend
much of its time waiting while instructions and data are being fetched from the main
memory. This serves as a major obstacle towards achieving good performance. Speed of
the main memory cannot be increased beyond a certain point. So we use Cache
memories. Cache memory is an architectural arrangement which makes the main memory
appear faster to the processor than it really is. Cache memory is based on the property of
computer programs known as “locality of reference”.

Analysis of programs indicates that many instructions in localized areas of a program are
executed repeatedly during some period of time, while the others are accessed relatively
less frequently. These instructions may be the ones in a loop, nested loop or few
procedures calling each other repeatedly. This is called “locality of reference”. Its types
are:

Temporal locality of reference: A recently executed instruction is likely to be executed
again very soon.

Spatial locality of reference: Instructions with addresses close to a recently executed
instruction are likely to be executed soon.

A simple arrangement of cache memory is as shown above.

• Processor issues a Read request, a block of words is transferred from the main
memory to the cache, one word at a time.

• Subsequent references to the data in this block of words are found in the cache.

• At any given time, only some blocks in the main memory are held in the cache.
Which blocks in the main memory are in the cache is determined by a “mapping
function”.
• When the cache is full, and a block of words needs to be transferred from the main
memory, some block of words in the cache must be replaced. This is determined
by a “replacement algorithm”.

Cache hit:

Existence of a cache is transparent to the processor. The processor issues Read and Write
requests in the same manner. If the data is in the cache it is called a Read or Write hit.

Read hit: The data is obtained from the cache.

Write hit: Cache has a replica of the contents of the main memory. Contents of the cache
and the main memory may be updated simultaneously. This is the write-through protocol.

Alternatively, update only the contents of the cache, and mark the block as updated by setting a bit known as the
dirty bit or modified bit. The contents of the main memory are updated when this block is
replaced. This is the write-back or copy-back protocol.
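The two write-hit policies can be contrasted in a short sketch; the cache-line layout and names below are illustrative assumptions.

```c
/* Write-hit policy sketch: write-through updates cache and main memory
   together; write-back updates only the cache and sets the dirty bit. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t data;
    bool     dirty;  /* dirty/modified bit */
} cache_line_t;

static uint32_t main_memory[65536]; /* hypothetical 64K-word main memory */

void write_hit_through(cache_line_t *line, uint16_t addr, uint32_t value) {
    line->data        = value;
    main_memory[addr] = value;  /* both copies updated simultaneously */
}

void write_hit_back(cache_line_t *line, uint32_t value) {
    line->data  = value;
    line->dirty = true;         /* memory updated later, on replacement */
}
```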

Cache miss:

• If the data is not present in the cache, then a Read miss or Write miss occurs.

• Read miss: The block of words containing the requested word is transferred from the
memory. After the block is transferred, the desired word is forwarded to the
processor. The desired word may also be forwarded to the processor as soon as it
is read, without waiting for the entire block to be transferred. This is called
load-through or early-restart.

• Write miss: If the write-through protocol is used, the contents of the main memory
are updated directly. If the write-back protocol is used, the block containing the
addressed word is first brought into the cache, and the desired word is overwritten
with the new information.

Cache Coherence Problem:

A bit called as “valid bit” is provided for each block. If the block contains valid data, then
the bit is set to 1, else it is 0. Valid bits are set to 0, when the power is just turned on.

When a block is loaded into the cache for the first time, the valid bit is set to 1. Data
transfers between main memory and disk occur directly, bypassing the cache. When the
data on a disk changes, the main memory block is also updated. However, if the data is
also resident in the cache, then the valid bit is set to 0.

The copies of the data in the cache and the main memory are then different. This is called
the cache coherence problem.
Mapping functions: Mapping functions determine how memory blocks are placed in the
cache.

A simple processor example:

• Cache consisting of 128 blocks of 16 words each.
• Total size of the cache is 2048 (2K) words.
• Main memory is addressable by a 16-bit address.
• Main memory has 64K words.
• Main memory has 4K blocks of 16 words each.

Three mapping functions can be used.

1. Direct mapping
2. Associative mapping
3. Set-associative mapping.
Direct mapping:

[Figure: direct-mapped cache; main-memory blocks 0..4095 map onto cache blocks 0..127]

• Block j of the main memory maps to block j modulo 128 of the cache. Thus block 0 maps to cache block 0,
and block 129 maps to cache block 1.
• More than one memory block is mapped onto the same position in the cache.
• May lead to contention for cache blocks even if the cache is not full.
• Resolve the contention by allowing the new block to replace the old block, leading to a trivial replacement
algorithm.
• Memory address is divided into three fields:
- Low-order 4 bits determine one of the 16 words in a block.
- When a new block is brought into the cache, the next 7 bits determine which cache block this new
block is placed in.
- High-order 5 bits determine which of the possible 32 blocks is currently present in the cache. These
are the tag bits.
• Main memory address: Tag (5 bits) | Block (7 bits) | Word (4 bits).
• Simple to implement but not very flexible.
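The field decoding just described can be written out directly; the sketch below extracts the tag, block and word fields from a 16-bit address.

```c
/* Direct-mapping sketch: split a 16-bit address into the 5-bit tag,
   7-bit block and 4-bit word fields described above. */
#include <stdint.h>

typedef struct { unsigned tag, block, word; } dm_fields_t;

dm_fields_t direct_map(uint16_t addr) {
    dm_fields_t f;
    f.word  =  addr        & 0xF;  /* low-order 4 bits: word in block  */
    f.block = (addr >> 4)  & 0x7F; /* next 7 bits: cache block number  */
    f.tag   = (addr >> 11) & 0x1F; /* high-order 5 bits: tag           */
    return f;
}
```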
Associative mapping:

[Figure: fully associative cache; any main-memory block can occupy any of the 128 cache positions]

• A main memory block can be placed into any cache position.
• Memory address is divided into two fields:
- Low-order 4 bits identify the word within a block.
- High-order 12 bits (the tag bits) identify a memory block when it is resident in the cache.
• Flexible, and uses cache space efficiently.
• Replacement algorithms can be used to replace an existing block in the cache when the cache is full.
• Cost is higher than direct-mapped cache because of the need to search all 128 tags to determine
whether a given block is in the cache.
• Main memory address: Tag (12 bits) | Word (4 bits).
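The search cost mentioned above is visible in a sketch of the lookup: every stored tag must be compared against the 12-bit tag of the address (hardware does this in parallel; the loop below is a sequential model).

```c
/* Associative-mapping lookup sketch: compare the address tag against
   all 128 stored tags. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS 128

typedef struct { uint16_t tag; bool valid; } assoc_line_t;

bool assoc_lookup(const assoc_line_t cache[NUM_BLOCKS], uint16_t addr) {
    uint16_t tag = addr >> 4;            /* high-order 12 bits */
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return true;                 /* hit */
    return false;                        /* miss */
}
```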
Set-associative mapping:

[Figure: set-associative cache; 64 sets of two blocks each]

• Blocks of the cache are grouped into sets. The mapping function allows a block of the main memory to reside
in any block of a specific set.
• Divide the cache into 64 sets, with two blocks per set. Memory blocks 0, 64, 128, etc. map to set 0, and they
can occupy either of the two positions in that set.
• Memory address is divided into three fields:
- Low-order 4 bits identify the word within a block.
- 6-bit set field determines the set number.
- High-order 6-bit tag field is compared to the tag fields of the two blocks in the set.
• Main memory address: Tag (6 bits) | Set (6 bits) | Word (4 bits).
• Set-associative mapping is a combination of direct and associative mapping.
• Number of blocks per set is a design parameter:
- One extreme is to have all the blocks in one set, requiring no set bits (fully associative mapping).
- The other extreme is to have one block per set, which is the same as direct mapping.
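A corresponding lookup sketch for the 64-set, two-way organization above: only the two blocks of the addressed set need their tags compared.

```c
/* Set-associative lookup sketch: decode the 6-bit set and 6-bit tag
   fields, then check the two blocks of that set. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 64
#define WAYS      2

typedef struct { uint8_t tag; bool valid; } sa_line_t;

bool sa_lookup(const sa_line_t cache[NUM_SETS][WAYS], uint16_t addr) {
    unsigned set = (addr >> 4)  & 0x3F;  /* 6-bit set number   */
    unsigned tag = (addr >> 10) & 0x3F;  /* high-order 6 bits  */
    for (int way = 0; way < WAYS; way++)
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return true;                 /* hit in either position */
    return false;
}
```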
Solved Problems:-

1. A block-set-associative cache consists of a total of 64 blocks divided into 4-block
sets. The MM contains 4096 blocks, each containing 128 words.

a) How many bits are there in the MM address?

b) How many bits are there in each of the TAG, SET & WORD fields?

Solution: Number of sets = 64/4 = 16

Set bits = 4 (2^4 = 16)

Number of words per block = 128

Word bits = 7 (2^7 = 128)

MM capacity: 4096 x 128 words (2^12 x 2^7 = 2^19)

a) Number of bits in the MM address = 19 bits

b) TAG (8 bits) | SET (4 bits) | WORD (7 bits)

TAG bits = 19 - (7 + 4) = 8 bits.

2. A computer system has an MM capacity of a total of 1M 16-bit words. It also has a
4K-word cache organized in the block-set-associative manner, with 4 blocks per set
& 64 words per block. Calculate the number of bits in each of the TAG, SET &
WORD fields of the MM address format.

Solution: Capacity: 1M words (2^20 = 1M), so the MM address has 20 bits.

Number of words per block = 64
Number of blocks in cache = 4K/64 = 64
Number of sets = 64/4 = 16

Set bits = 4 (2^4 = 16)

Word bits = 6 (2^6 = 64)

Tag bits = 20 - (6 + 4) = 10 bits

MM address format: TAG (10 bits) | SET (4 bits) | WORD (6 bits).
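The arithmetic in both problems follows one pattern, which the sketch below generalizes (all quantities assumed to be powers of two); it reproduces the answers of problems 1 and 2.

```c
/* Field-width sketch: WORD = log2(words per block),
   SET = log2(number of sets), TAG = address bits - SET - WORD. */
#include <stdio.h>

static unsigned log2u(unsigned x) {   /* x assumed a power of two */
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

static void field_widths(unsigned addr_bits, unsigned cache_blocks,
                         unsigned blocks_per_set, unsigned words_per_block) {
    unsigned word = log2u(words_per_block);
    unsigned set  = log2u(cache_blocks / blocks_per_set);
    unsigned tag  = addr_bits - set - word;
    printf("TAG=%u SET=%u WORD=%u\n", tag, set, word);
}

int main(void) {
    field_widths(19, 64, 4, 128); /* problem 1: TAG=8 SET=4 WORD=7  */
    field_widths(20, 64, 4, 64);  /* problem 2: TAG=10 SET=4 WORD=6 */
    return 0;
}
```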
