
MODULE 4

INTERRUPTS
HANDLING MULTIPLE DEVICES:-

 Let us now consider the situation where a number of devices capable of initiating interrupts are
connected to the processor.
 Because these devices are operationally independent, there is no definite order in which they
will generate interrupts.
 For example, device X may request an interrupt while an interrupt caused by device Y is being
serviced, or several devices may request interrupts at exactly the same time. This gives rise to a
number of questions.
 The means by which these problems are resolved vary from one computer to another, and the
approach taken is an important consideration in determining the computer’s suitability for a
given application.
 A single interrupt-request line is used to connect multiple devices.
HANDLING MULTIPLE DEVICES BY POLLING SCHEME

 When a device raises an interrupt request, one of the bits in its status register is set to 1, which
we will call the IRQ bit. For example, bits KIRQ and DIRQ are the interrupt request bits for the
keyboard and the display respectively.
 The first device encountered with its IRQ bit set is the device that should be serviced.

 An appropriate subroutine is called to provide the requested service.

 Its main disadvantage is the time spent interrogating the IRQ bits of all the devices that may not
be requesting any service.
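The polling scheme above can be sketched in a few lines of Python. The device names, status values, and service routines here are illustrative assumptions, not a real device API:

```python
# Hypothetical sketch of the polling scheme: the processor scans devices
# in a fixed order and services the first one whose IRQ bit is set.

KIRQ = 0x01  # assumed IRQ bit position in the keyboard status register
DIRQ = 0x01  # assumed IRQ bit position in the display status register

def poll_devices(devices):
    """Scan devices in order; service the first whose IRQ bit is set."""
    for name, status, irq_bit, service in devices:
        if status & irq_bit:   # IRQ bit set -> device is requesting service
            service()          # call the appropriate service subroutine
            return name
    return None                # no device was requesting service

serviced = []
devices = [
    ("keyboard", 0x00, KIRQ, lambda: serviced.append("keyboard")),
    ("display",  0x01, DIRQ, lambda: serviced.append("display")),
]
# keyboard's IRQ bit is clear, display's is set -> display is serviced
assert poll_devices(devices) == "display"
```

Note that every device earlier in the list is interrogated even when it is not requesting service, which is exactly the disadvantage described above.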
 An alternative approach is to use vectored interrupts.
VECTORED INTERRUPTS

 A device requesting an interrupt may identify itself directly to the processor.

 It does so by sending a special code to the processor over the bus.
 The code supplied by the device may represent the starting address of the interrupt-service
routine for that device.
 The code length is typically in the range of 4 to 8 bits.

 The remainder of the address is supplied by the processor based on the area in its memory
where the addresses for interrupt-service routines are located.
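As an illustration of how the full address is formed, the short device code can index into a table of interrupt-service routine addresses. The vector-table base address and entry size below are assumptions chosen for the example:

```python
# Hedged sketch: VECTOR_BASE and the 4-byte entry size are illustrative
# values, not taken from any particular processor.

VECTOR_BASE = 0x0100   # assumed memory area holding ISR addresses
ENTRY_SIZE  = 4        # assumed bytes per table entry (one address each)

def isr_table_entry(device_code):
    """The device supplies only a short code (4 to 8 bits); the processor
    supplies the remainder of the address from the vector-table base."""
    return VECTOR_BASE + device_code * ENTRY_SIZE

assert isr_table_entry(0) == 0x0100
assert isr_table_entry(5) == 0x0114
```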
INTERRUPT NESTING:
 Interrupts should be disabled during the execution of an interrupt-service routine, to ensure that
a request from one device will not cause more than one interruption.
 An interrupt-service routine, once started, always continues to completion before the processor
accepts an interrupt request from a second device.
 The delay in responding to an interrupt request is usually small; however, if the delay is long, it
might lead to erroneous operation.
 Consider, for example, a computer that keeps track of the time of day using a real-time clock.

 A multiple-level priority organization means that during execution of an interrupt-service routine,


interrupt requests will be accepted from some devices but not from others, depending upon the
device’s priority.
 The priority level of the processor is the priority of the program that is currently being executed.
INTERRUPT NESTING:
 A multiple-priority scheme can be implemented easily by using separate interrupt request and
interrupt-acknowledge lines for each device, as shown in figure.
 Each of the interrupt-request lines is assigned a different priority level.

 Interrupt requests received over these lines are sent to a priority arbitration circuit in the
processor. A request is accepted only if it has a higher priority level than that currently assigned
to the processor.
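The acceptance rule of the priority arbitration circuit can be sketched in a few lines. The numeric levels (a higher number meaning higher priority) are an assumed convention for illustration:

```python
# Sketch of multiple-level priority: a request is accepted only if its
# priority exceeds the processor's current priority, i.e. the priority
# of the program (or ISR) that is currently being executed.

def accept_request(request_priority, processor_priority):
    return request_priority > processor_priority

# While servicing a level-3 device, a level-5 request is accepted
# (nested interrupt) but a level-2 request is not.
assert accept_request(5, 3) is True
assert accept_request(2, 3) is False
```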
SIMULTANEOUS REQUESTS:
 Simultaneous arrivals of interrupt requests from two or more devices must be handled when many
devices share one IRL (interrupt-request line).
 Polling the status registers of the I/O devices is a simple solution.

 In this case, priority is determined by the order in which the devices are polled.
DIRECT MEMORY ACCESS
 It is a mechanism whereby data is moved between main memory and an I/O device without continuous involvement of the processor.
DIRECT MEMORY ACCESS

 Consider first the program-controlled approach: an instruction such as Move DATAIN, R0 transfers a
data word from the device buffer DATAIN into register R0, but only when the I/O device is ready.

 To determine that the device is ready, the processor either polls a status flag in the device
interface or waits for the device to send an interrupt request.

 In either case, several program instructions must be executed for each data word transferred,
such as incrementing the memory address and keeping track of the word count.

 DMA eliminates this per-word overhead.

 DMA transfers are performed by a control circuit that is part of the I/O device interface.

 Although a DMA controller can transfer data without intervention by the processor, its operation
must be under the control of a program executed by the processor.
 the processor sends the starting address, the number of words in the block, and the direction of
the transfer.
DIRECT MEMORY ACCESS  Two registers are used for storing the Figure 4
Registers in DMA interface Starting address and the
word count.
 The third register contains status and control flags.
The R/W bit determines the direction of the
transfer.
 When this bit is set to 1 by a program instruction,
the controller performs a read operation,
Otherwise, it performs a write operation.
 When the controller has completed transferring a
block of data and is ready to receive another
command, it sets the Done flag to 1.
 When the IE flag is set to 1, it causes the controller to
raise an interrupt after it has completed transferring
a block of data.
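A sketch of how these flags interact follows. The bit positions of R/W, Done, and IE are assumptions for illustration, not the layout of any particular controller:

```python
# Hypothetical bit layout for the DMA status/control register.

RW_BIT   = 1 << 0   # 1 = read operation, 0 = write (per the text above)
DONE_BIT = 1 << 1   # set by the controller when a block transfer completes
IE_BIT   = 1 << 2   # interrupt-enable: raise an interrupt when done

def start_read_with_interrupt(reg):
    """Program the controller for a read, with an interrupt on completion."""
    return reg | RW_BIT | IE_BIT

def finish_block(reg):
    """Controller side: set Done; raise an interrupt only if IE is set."""
    reg |= DONE_BIT
    return reg, bool(reg & IE_BIT)

reg = start_read_with_interrupt(0)
reg, raised = finish_block(reg)
assert reg & DONE_BIT and raised
```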
BUS ARBITRATION

 A conflict may arise if both the processor and a DMA controller, or two DMA controllers, try to use
the bus at the same time to access the main memory.
 Bus arbitration is the process by which the next device to become the bus master is selected
and bus mastership is transferred to it.
 The device that is allowed to initiate data transfers on the bus at any given time is called the
bus master.
 There are two approaches to bus arbitration: centralized and distributed.

 In centralized arbitration, a single bus arbiter performs the required arbitration.

 In distributed arbitration, all devices participate in the selection of the next bus master.
CENTRALIZED ARBITRATION:-

 In this case, the processor is normally the bus master unless it grants bus mastership to one of the
DMA controllers
 A DMA controller indicates that it needs to become the bus master by activating the Bus-Request line
(BR).
 The signal on the Bus-Request line is the logical OR of the bus requests from all the devices connected
to it.
 When Bus-Request is activated, the processor activates the Bus-Grant signal, BG1, indicating to the
DMA controllers that they may use the bus when it becomes free.
 This signal is connected to all DMA controllers using a daisy-chain arrangement.

 Thus, if DMA controller 1 is requesting the bus, it blocks the propagation of the grant signal to other
devices. Otherwise, it passes the grant downstream by asserting BG2.
DISTRIBUTED ARBITRATION:-

 Distributed arbitration means that all devices waiting to use the bus have equal responsibility in
carrying out the arbitration process, without using a central arbiter. A simple method for
distributed arbitration is illustrated in figure
 Each device on the bus is assigned a 4-bit identification number.

 When one or more devices request the bus, they assert the Start-Arbitration signal and
place their 4-bit ID numbers on four open-collector lines, ARB0 through ARB3.
 A winner is selected as a result of the interaction among the signals transmitted over those lines
by all contenders. The net outcome is that the code on the four lines represents the request that
has the highest ID number.
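The open-collector interaction can be simulated as follows. The iterative bit-by-bit comparison models the wired-OR behaviour of the lines; in real hardware all of this happens combinationally:

```python
# Simulation of the distributed arbitration described above: each contender
# drives its 4-bit ID onto ARB3..ARB0; a contender that sees a 1 on a line
# where its own bit is 0 drops out of contention for the lower-order lines.
# The code left on the lines is the highest contending ID.

def arbitrate(ids):
    winners = set(ids)
    for bit in (3, 2, 1, 0):                          # compare from ARB3 down
        line = any(i & (1 << bit) for i in winners)   # wired-OR of drivers
        if line:                                      # a 1 on this line knocks
            winners = {i for i in winners if i & (1 << bit)}  # out 0-bit IDs
    return max(winners)                               # code left on the lines

assert arbitrate([5, 6]) == 6
assert arbitrate([0b0101, 0b1010, 0b0111]) == 0b1010
```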
SPEED, SIZE AND COST
 The processor fetches the code and data from the
main memory to execute the program.
 The DRAMs which form the main memory are
slower devices. So it is necessary to insert wait
states in memory read/write cycles.
 This reduces the speed of execution.

 The solution to this problem is to add a small section
of SRAM to the memory system along with the main
memory, referred to as cache memory.
 The cache controller looks after the swapping
between main memory and cache memory with the
help of the DMA controller. Such cache memory is
called secondary cache.
 Recent processors have built-in cache memory,
called primary cache.
CACHE MEMORIES:

 The cache is a smaller, faster memory which stores
copies of the data from the most frequently used
main memory locations.
MAPPING FUNCTIONS

 1. Direct-mapping technique

 2. Associative-mapping technique

 3. Set-associative mapping technique


DIRECT-MAPPING TECHNIQUE

As shown in the figure, the low-order 4 bits select one of
the 16 words in a block; they constitute the word field.
The second field, known as the block field, distinguishes
a block from other blocks. Its length is 7 bits: when a new
block enters the cache, the 7-bit block field determines
the cache position in which this block must be stored.
The third field is the tag field, which stores the higher-order
5 bits of the memory address of the block. It identifies which
of the 32 main-memory blocks that map to this cache position
is currently resident in the cache.
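The address split described above can be expressed directly. The field widths (5-bit tag, 7-bit block, 4-bit word, within a 16-bit address) follow the figure:

```python
# Direct-mapped address decomposition: tag (5) | block (7) | word (4).

def split_address(addr):
    word  = addr & 0xF            # low-order 4 bits: word within the block
    block = (addr >> 4) & 0x7F    # next 7 bits: cache block position
    tag   = (addr >> 11) & 0x1F   # high-order 5 bits: tag
    return tag, block, word

# Main-memory addresses that share the same 7-bit block field map to the
# same cache position and are distinguished only by their tag.
tag, block, word = split_address(0b10110_0000011_0101)
assert (tag, block, word) == (0b10110, 0b0000011, 0b0101)
```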
ASSOCIATIVE MAPPING

The tag bits of an address received
from the processor are compared to
the tag bits of each block of the cache,
to see if the desired block is present.
This is called the associative-mapping
technique.
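A minimal model of the lookup follows; the dictionary stands in for the parallel tag comparators of real hardware, which compare every stored tag simultaneously:

```python
# Sketch of associative mapping: a hit occurs if the incoming tag matches
# the tag stored with any cache block. The tag values are illustrative.

def associative_lookup(cache_tags, tag):
    """cache_tags maps stored tag -> cache block index; None means a miss."""
    return cache_tags.get(tag)

cache_tags = {0x1A3: 0, 0x0F0: 1}
assert associative_lookup(cache_tags, 0x0F0) == 1     # hit in block 1
assert associative_lookup(cache_tags, 0x2BC) is None  # miss
```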
SET-ASSOCIATIVE MAPPING

• A combination of the direct- and associative-
mapping techniques can be used: blocks of the
cache are grouped into sets, and the mapping allows
a block of main memory to reside in any block of
a specific set.
• In this case memory blocks 0, 64, 128, ..., 4032
map into cache set 0, and they can occupy
either of the two block positions within this set.
• The cache might contain the desired block. The tag
field of the address must then be associatively
compared to the tags of the two blocks of the set to
check if the desired block is present. This two-way
associative search is simple to implement.
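The set-indexing rule above can be sketched as follows, assuming 64 sets of two blocks each, matching the example:

```python
# Two-way set-associative sketch: with 128 cache blocks grouped into
# 64 sets, memory blocks 0, 64, 128, ..., 4032 all map to set 0.

NUM_SETS = 64

def set_index(mem_block):
    return mem_block % NUM_SETS

assert set_index(0) == 0
assert set_index(64) == 0
assert set_index(4032) == 0
assert set_index(65) == 1   # the next block lands in set 1
```

Within the selected set, the tag is then compared against only the two resident blocks, which is the simple two-way associative search described above.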
BASIC PROCESSING UNIT
 To execute an instruction, the processor has to
perform the following 3 steps:
1) Fetch the contents of the memory location pointed to
by the PC. The contents of this location are the
instruction to be executed; it is loaded into the IR.
Symbolically, this operation can be written as
IR← [[PC]]
2) Increment PC by 4
PC← [PC] +4
3) Carry out the actions specified by instruction
(in the IR).
 The first 2 steps are referred to as the fetch
phase; step 3 is referred to as the execution
phase.
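The fetch phase can be traced with a small simulation; the memory contents and addresses here are illustrative:

```python
# Sketch of the fetch phase: memory is modelled as a dictionary from
# word addresses to instruction words (the instruction is hypothetical).

memory = {0x1000: "Add R1,R2,R3"}   # assumed instruction at [PC]
PC = 0x1000

IR = memory[PC]    # step 1: IR <- [[PC]]
PC = PC + 4        # step 2: PC <- [PC] + 4

assert IR == "Add R1,R2,R3"
assert PC == 0x1004
```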
Single bus organization of the data path inside a processor
PERFORMING AN ARITHMETIC OR LOGIC OPERATION

 The ALU performs arithmetic operations on the 2 operands applied to its A and B inputs.

 One of the operands is the output of the MUX and the other operand is obtained directly from the bus.

 The result (produced by the ALU) is stored temporarily in register Z.

 The sequence of operations for [R3]←[R1]+[R2] is as follows


1) R1out, Yin // transfer the contents of R1 to register Y
2) R2out, SelectY, Add, Zin // R2’s contents are transferred directly to the B input of the ALU; the two numbers are added and the sum is stored in register Z
3) Zout, R3in // the sum is transferred to register R3

 The signals are activated for the duration of the clock cycle corresponding to that step. All
other signals are inactive.
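The three-step sequence can be traced with a small simulation, one assignment per clock cycle; the register contents are illustrative:

```python
# Step-by-step simulation of [R3] <- [R1] + [R2] on the single-bus
# datapath; register names follow the text, the dict model is a sketch.

regs = {"R1": 10, "R2": 32, "R3": 0, "Y": 0, "Z": 0}

# Step 1: R1out, Yin -- R1's contents travel over the bus into Y
regs["Y"] = regs["R1"]
# Step 2: R2out, SelectY, Add, Zin -- the ALU adds Y (via the MUX)
# and the bus (R2); the sum is latched into Z
regs["Z"] = regs["Y"] + regs["R2"]
# Step 3: Zout, R3in -- the sum is transferred from Z to R3
regs["R3"] = regs["Z"]

assert regs["R3"] == 42
```

Note that Z is needed as a temporary because the single bus can carry only one value per clock cycle.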
