Module 2 Notes
Consider the problem of moving a character code from the keyboard to the processor. For this transfer, a buffer register (DATAIN) and a status control flag (SIN) are used.
Striking a key stores the corresponding character code in an 8-bit buffer register (DATAIN) associated with the keyboard. To inform the processor that a valid character is in DATAIN, SIN is set to 1.
A program monitors SIN, and when SIN is set to 1, the processor reads the contents of DATAIN. When the
character is transferred to the processor, SIN is automatically cleared to 0. If a second character is entered
at the keyboard, SIN is again set to 1 and the process repeats.
An analogous process takes place when characters are transferred from the processor to the display. A
buffer register, DATAOUT, and a status control flag, SOUT, are used for this transfer. When SOUT = 1, the
display is ready to receive a character. The transfer of a character to DATAOUT clears SOUT to 0.
The buffer registers DATAIN and DATAOUT and the status flags SIN and SOUT are part of circuitry
commonly known as a device interface.
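As a rough sketch of this handshake in C, the interface registers can be treated as volatile byte locations that a program polls. The declarations and names below are illustrative only; how such registers are actually addressed is the subject of the next section.

#include <stdint.h>

/* The four interface registers, assumed visible to the program as volatile byte locations. */
extern volatile uint8_t DATAIN, SIN, DATAOUT, SOUT;

/* Wait until SIN = 1, then read the character; the interface clears SIN to 0 when DATAIN is read. */
uint8_t read_char(void)
{
    while (SIN == 0)
        ;                      /* busy-wait on the status flag */
    return DATAIN;
}

/* Wait until SOUT = 1, then send one character; the interface clears SOUT to 0 when DATAOUT is written. */
void write_char(uint8_t c)
{
    while (SOUT == 0)
        ;
    DATAOUT = c;
}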
MEMORY-MAPPED I/O
With memory-mapped I/O, some memory address values are used to refer to peripheral device buffer registers such as DATAIN and DATAOUT. No special instructions are needed to access the contents of these registers; data can be transferred between them and the processor using instructions such as Move, Load, or Store.
For example, the contents of the keyboard character buffer DATAIN can be transferred to register R1 in the
processor by the instruction
MoveByte DATAIN, R1
The MoveByte operation code signifies that the operand size is a byte. The Testbit instruction tests the state of one bit in the destination location, where the bit position to be tested is indicated by the first operand. Similarly, the instruction
MoveByte R0, DATAOUT
sends the contents of register R0 to location DATAOUT, which may be the output data buffer of a display unit or a printer.
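The same transfers can be sketched in C with byte-wide volatile pointers, with a bit mask playing the role of Testbit. The addresses and the SIN bit position below are assumptions chosen only for illustration.

#include <stdint.h>

#define DATAIN   ((volatile uint8_t *)0x4000)   /* assumed address of the keyboard buffer */
#define DATAOUT  ((volatile uint8_t *)0x4010)   /* assumed address of the display buffer  */
#define INSTATUS ((volatile uint8_t *)0x4004)   /* assumed keyboard status register       */
#define SIN_MASK 0x08                           /* assumed bit position of the SIN flag   */

void example(void)
{
    uint8_t r1, r0 = 'A';

    /* Testbit-style check of SIN, then the equivalent of MoveByte DATAIN, R1. */
    while ((*INSTATUS & SIN_MASK) == 0)
        ;
    r1 = *DATAIN;            /* byte-sized load from the keyboard buffer */

    /* Equivalent of MoveByte R0, DATAOUT. */
    *DATAOUT = r0;           /* byte-sized store to the display buffer   */
    (void)r1;
}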
Most computer systems use memory-mapped I/O. Some processors, however, have special In and Out instructions to perform I/O transfers. When building a computer system based on these processors, the designer has the option of connecting I/O devices to use the special I/O address space or simply incorporating them as part of the memory address space. The I/O devices examine the low-order bits of the address bus to determine whether they should respond.
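To make the contrast concrete: with memory-mapped I/O an ordinary volatile load reaches a device register, whereas a separate I/O address space is reached only through the special instructions. The sketch below assumes an x86 Linux user program with port-access permission; the port number is purely an illustration.

#include <stdint.h>
#include <sys/io.h>   /* x86 Linux only: ioperm(), inb(), outb() wrap the In/Out instructions */

/* Memory-mapped I/O: a device register is an ordinary (volatile) memory location. */
static uint8_t read_mapped(volatile uint8_t *reg)
{
    return *reg;                 /* a plain load instruction suffices */
}

/* Separate I/O address space: the x86 In instruction, usable from a Linux
   user program only after ioperm() grants access to the port. */
static int read_port(void)
{
    if (ioperm(0x60, 1, 1) != 0)
        return -1;               /* insufficient privilege */
    return inb(0x60);            /* executes an IN instruction */
}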
The hardware required to connect an I/O device to the bus includes an address decoder, data and status registers, and control circuitry. The address decoder enables the device to recognize
its address when this address appears on the address lines. The data register holds the data being transferred to
or from the processor. The status register contains information relevant to the operation of the I/O device. Both
the data and status registers are connected to the data bus and assigned unique addresses. The address decoder,
the data and status registers, and the control circuitry required to coordinate I/O transfers constitute the device’s
interface circuit.
I/O devices operate at speeds that are vastly different from that of the processor. When a human operator is
entering characters at a keyboard, the processor is capable of executing millions of instructions between
successive character entries. An instruction that reads a character from the keyboard should be executed only
when a character is available in the input buffer of the keyboard interface. Also, we must make sure that an input
character is read only once.
This example illustrates program-controlled I/O, in which the processor repeatedly checks a status flag to achieve
the required synchronization between the processor and an input or output device. We say that the processor
polls the device. There are two other commonly used mechanisms for implementing I/O operations: interrupts
and direct memory access. In the case of interrupts, synchronization is achieved by having the I/O device send a
special signal over the bus whenever it is ready for a data transfer operation. Direct memory access is a technique
used for high-speed I/O devices. It involves having the device interface transfer data directly to or from the
memory, without continuous involvement by the processor.
INTERRUPTS
The routine executed in response to an interrupt request is called the interrupt-service routine, which is the PRINT routine in our example. Interrupts bear considerable resemblance to subroutine calls. Assume that an interrupt request arrives during execution of instruction i in Figure 1.
The processor first completes execution of instruction i. Then, it loads the program counter with the address of
the first instruction of the interrupt-service routine. For the time being, let us assume that this address is hardwired
in the processor.
After execution of the interrupt-service routine, the processor has to come back to instruction i+1. Therefore, when an interrupt occurs, the current contents of the PC, which point to instruction i+1, must be put in temporary storage in a known location. A Return-from-interrupt instruction at the end of the interrupt-service routine reloads the PC from the temporary storage location, causing execution to resume at instruction i+1. In many processors, the return address is saved on the processor stack.
We should note that as part of handling interrupts, the processor must inform the device that its request has been recognized so that it may remove its interrupt-request signal. This may be accomplished by means of a special control signal on the bus, called an interrupt-acknowledge signal. Alternatively, the execution of an instruction in the interrupt-service routine that accesses a status or data register in the device interface implicitly informs the device that its interrupt request has been recognized.
One difference is that an interrupt is a mechanism for coordinating I/O transfers, whereas a subroutine is simply a linkage between two or more related functions within the same program.
So far, treatment of an interrupt-service routine is very similar to that of a subroutine. An important departure
from this similarity should be noted. A subroutine performs a function required by the program from which it is
called. However, the interrupt-service routine may not have anything in common with the program being
executed at the time the interrupt request is received. In fact, the two programs often belong to different users.
Therefore, before starting execution of the interrupt-service routine, any information that may be altered during
the execution of that routine must be saved. This information must be restored before execution of the interrupted program is resumed. In this way, the original program can continue execution without being affected in any way
by the interruption, except for the time delay. The information that needs to be saved and restored typically
includes the condition code flags and the contents of any registers used by both the interrupted program and the
interrupt-service routine.
The task of saving and restoring information can be done automatically by the processor or by program instructions. Most modern processors save only the minimum amount of information needed to maintain the integrity of program execution, because saving and restoring registers involves memory transfers that increase the total execution time and hence represent execution overhead. Saving registers also increases the delay between the time an interrupt request is received and the start of execution of the interrupt-service routine. This delay is called interrupt latency.
INTERRUPT HARDWARE
We pointed out that an I/O device requests an interrupt by activating a bus line called interrupt-request. Most
computers are likely to have several I/O devices that can request an interrupt. A single interrupt-request line may
be used to serve n devices as depicted. All devices are connected to the line via switches to ground. To request
an interrupt, a device closes its associated switch. Thus, if all interrupt-request signals INTR1 to INTRn are
inactive, that is, if all switches are open, the voltage on the interrupt- request line will be equal to Vdd. This is
the inactive state of the line. Since the closing of one or more switches will cause the line voltage to drop to 0,
the value of INTR is the logical OR of the requests from individual devices, that is,
INTR = INTR1 + INTR2 + ... + INTRn
It is customary to use the complemented form, INTR (an active-low signal), to name the interrupt-request signal on the common line, because this signal is active when in the low-voltage state.
A device must not be allowed to interrupt the processor again while an earlier request from it is still being serviced, so the ability to accept interrupts must be controlled. The first option is to have the processor hardware ignore the interrupt-request line until execution of the first instruction of the interrupt-service routine has been completed; an Interrupt-disable instruction placed at the beginning of the routine, and an Interrupt-enable instruction placed at its end just before the Return-from-interrupt instruction, then ensure that no further interruption occurs while the routine is running. The processor must guarantee that execution of the Return-from-interrupt instruction is completed before further interruption can occur.
The second option, which is suitable for a simple processor with only one interrupt-request line, is to
have the processor automatically disable interrupts before starting the execution of the interrupt-service routine.
After saving the contents of the PC and the processor status register (PS) on the stack, the processor performs
the equivalent of executing an Interrupt-disable instruction. It is often the case that one bit in the PS register,
called Interrupt-enable, indicates whether interrupts are enabled.
In the third option, the processor has a special interrupt-request line for which the interrupt-handling
circuit responds only to the leading edge of the signal. Such a line is said to be edge-triggered.
Before proceeding to study more complex aspects of interrupts, let us summarize the sequence
of events involved in handling an interrupt request from a single device. Assuming that interrupts are
enabled, the following is a typical scenario.
1. The device raises an interrupt request.
2. The processor interrupts the program currently being executed.
3. Interrupts are disabled by changing the control bits in the PS (except in the case of edge-triggered interrupts).
4. The device is informed that its request has been recognized, and in response, it deactivates the interrupt-request signal.
5. The action requested by the interrupt is performed by the interrupt-service routine.
6. Interrupts are enabled and execution of the interrupted program is resumed.
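As a software-level sketch of steps 4 to 6, a keyboard interrupt-service routine might look as follows. The address of DATAIN and the way the routine is attached to the interrupt mechanism are assumptions; many toolchains provide an interrupt attribute that also saves and restores the registers the routine uses.

#include <stdint.h>

#define DATAIN ((volatile uint8_t *)0x4000)   /* assumed address of the keyboard buffer */

volatile uint8_t line[80];
volatile int     nchars = 0;

/* Keyboard interrupt-service routine. */
void keyboard_isr(void)
{
    uint8_t ch = *DATAIN;        /* step 4: accessing DATAIN implicitly acknowledges the
                                    request, so the device deactivates its interrupt signal */
    if (nchars < 80)
        line[nchars++] = ch;     /* step 5: the action requested by the interrupt */
}                                /* step 6: on return, PC and PS are restored, interrupts
                                    are re-enabled, and the interrupted program resumes */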
DIRECT MEMORY ACCESS
An instruction to transfer input or output data is executed only after the processor determines that the I/O
device is ready. To do this, the processor either polls a status flag in the device interface or waits for the
device to send an interrupt request. In either case, considerable overhead is incurred, because several program
instructions must be executed for each data word transferred. In addition to polling the status register of the
device, instructions are needed for incrementing the memory address and keeping track of the word count. When
interrupts are used, there is the additional overhead associated with saving and restoring the program counter
and other state information.
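The per-word overhead is easy to see in a polled block transfer: for every word the program must test the status flag, perform the transfer, advance the memory address, and update the count. A minimal sketch follows; the device register names, addresses, and the ready-bit position are assumptions.

#include <stdint.h>
#include <stddef.h>

#define DEV_STATUS ((volatile uint32_t *)0x8004)   /* assumed device status register */
#define DEV_DATA   ((volatile uint32_t *)0x8000)   /* assumed device data register   */
#define READY      0x1u                            /* assumed ready-bit position     */

/* Read 'count' words from the device into 'buf' under program control.
   Every iteration spends instructions on polling, address update, and
   count bookkeeping - exactly the overhead that DMA removes. */
void polled_block_read(uint32_t *buf, size_t count)
{
    while (count-- > 0) {
        while ((*DEV_STATUS & READY) == 0)
            ;                    /* poll the status register               */
        *buf++ = *DEV_DATA;      /* transfer one word, advance the address */
    }
}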
To transfer large blocks of data at high speed, an alternative approach is used. A special control unit
may be provided to allow transfer of a block of data directly between an external device and the main memory,
without continuous intervention by the processor. This approach is called direct memory access, or DMA.
DMA transfers are performed by a control circuit that is part of the I/O device interface. We refer to this
circuit as a DMA controller. The DMA controller performs the functions that would normally be carried out by
the processor when accessing the main memory. For each word transferred, it provides the memory address and
all the bus signals that control data transfer. Since it has to transfer blocks of data, the DMA controller must
increment the memory address for successive words and keep track of the number of transfers.
Although a DMA controller can transfer data without intervention by the processor, its operation must
be under the control of a program executed by the processor. To initiate the transfer of a block of words, the
processor sends the starting address, the number of words in the block, and the direction of the transfer. On
receiving this information, the DMA controller proceeds to perform the requested operation. When the entire
block has been transferred, the controller informs the processor by raising an interrupt signal.
While a DMA transfer is taking place, the program that requested the transfer cannot continue, and the
processor can be used to execute another program. After the DMA transfer is completed, the processor can
return to the program that requested the transfer.
I/O operations are always performed by the operating system of the computer in response to a request
from an application program. The OS is also responsible for suspending the execution of one program and
starting another. Thus, for an I/O operation involving DMA, the OS puts the program that requested the transfer
in the Blocked state, initiates the DMA operation, and starts the execution of another program. When the transfer
is completed, the DMA controller informs the processor by sending an interrupt request. In response, the OS
puts the suspended program in the Runnable state so that it can be selected by the scheduler to continue
execution.
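In OS terms the sequence might look like the sketch below. Every identifier here is a hypothetical placeholder; the sketch only mirrors the steps described above (block the requester, program the DMA controller, run another program, mark the requester Runnable when the completion interrupt arrives).

/* Hypothetical, much-simplified OS bookkeeping around a DMA transfer. */
enum state { RUNNING, BLOCKED, RUNNABLE };

struct process { enum state st; };

struct io_request {
    struct process *owner;      /* program that asked for the transfer */
    void           *addr;       /* starting address in main memory     */
    unsigned        words;      /* number of words to transfer         */
    int             write;      /* direction of the transfer           */
};

void dma_program_controller(const struct io_request *req);  /* writes the controller registers */
void schedule_next(void);                                   /* selects another program to run  */

void os_start_io(struct process *p, struct io_request *req)
{
    p->st = BLOCKED;                /* the requesting program cannot continue yet */
    dma_program_controller(req);    /* hand address, count, and direction to DMA  */
    schedule_next();                /* the processor executes another program     */
}

/* Invoked from the interrupt raised by the DMA controller on completion. */
void dma_done_isr(struct io_request *req)
{
    req->owner->st = RUNNABLE;      /* the scheduler may now resume the program   */
}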
Figure 4 shows an example of the DMA controller registers that are accessed by the processor to initiate transfer operations. Two registers are used for storing the starting address and the word count. The third register contains status and control flags. The R/W bit determines the direction of the transfer. When this bit is set to 1 by a program instruction, the controller performs a read operation, that is, it transfers data from the memory to the I/O device. Otherwise, it performs a write operation.
When the controller has completed transferring a block of data and is ready to receive another command, it sets
the Done flag to 1. Bit 30 is the Interrupt-enable flag, IE. When this flag is set to 1, it causes the controller to
raise an interrupt after it has completed transferring a block of data. Finally, the controller sets the IRQ bit to 1
when it has requested an interrupt.
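A program might fill in such a register block as sketched below. Only the IE position (bit 30) is stated above; the register offsets and the bit positions assumed for R/W, Done, and IRQ are illustrative, so a real controller's documentation would define the actual layout.

#include <stdint.h>

#define DMA_BASE  0xFFFF0000u                                /* assumed base address        */
#define DMA_ADDR  (*(volatile uint32_t *)(DMA_BASE + 0x0))   /* starting address register   */
#define DMA_COUNT (*(volatile uint32_t *)(DMA_BASE + 0x4))   /* word-count register         */
#define DMA_CTRL  (*(volatile uint32_t *)(DMA_BASE + 0x8))   /* status and control register */

#define DMA_RW    (1u << 0)    /* 1 = read (memory to device); bit position assumed     */
#define DMA_IE    (1u << 30)   /* interrupt-enable, bit 30 as stated in the text        */
#define DMA_IRQ   (1u << 29)   /* set by the controller on interrupt; bit assumed       */
#define DMA_DONE  (1u << 31)   /* set by the controller when the block is done; assumed */

/* Program the controller to move 'words' words starting at 'addr'.
   'to_device' selects the direction; the completion interrupt is enabled. */
void dma_start(uint32_t addr, uint32_t words, int to_device)
{
    DMA_ADDR  = addr;
    DMA_COUNT = words;
    DMA_CTRL  = DMA_IE | (to_device ? DMA_RW : 0);
}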
An example of a computer system showing how DMA controllers may be used is given in Figure 5.
A DMA controller connects a high-speed network to the computer bus. The disk controller, which controls two
disks, also has DMA capability and provides two DMA channels. It can perform two independent DMA
operations, as if each disk had its own DMA controller. The registers needed to store the memory address, the
word count, and so on are duplicated, so that one set can be used with each device.
To start a DMA transfer of a block of data from the main memory to one of the disks, a program writes
the address and word count information into the registers of the corresponding channel of the disk controller. It
also provides the disk controller with information to identify the data for future retrieval. The DMA controller
proceeds independently to implement the specified operation. When the DMA transfer is completed, this fact is recorded in the status and control register of the DMA channel by setting the Done bit. At the same time,
if the IE bit is set, the controller sends an interrupt request to the processor and sets the IRQ bit. The status
register can also be used to record other information, such as whether the transfer took place correctly or errors
occurred.
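A completion handler might then inspect the channel's status and control register along these lines; as in the previous sketch, the address and bit positions are assumed for illustration.

#include <stdint.h>

#define DMA_CTRL (*(volatile uint32_t *)0xFFFF0008u)   /* assumed status/control register of one channel */
#define DMA_DONE (1u << 31)                            /* assumed Done position */
#define DMA_IRQ  (1u << 29)                            /* assumed IRQ position  */

/* Invoked when the disk controller's DMA channel raises its completion interrupt. */
void dma_done_handler(void)
{
    uint32_t status = DMA_CTRL;                 /* read the channel status */

    if ((status & DMA_IRQ) && (status & DMA_DONE)) {
        /* block transferred: e.g. mark the requesting program Runnable and
           start the next queued transfer, if any */
    }
    /* error flags, if the controller records them here, would also be checked;
       clearing IRQ and Done is controller-specific */
}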
Memory accesses by the processor and the DMA controller are interwoven. Requests by DMA devices
for using the bus are always given higher priority than processor requests. Among different DMA devices, top
priority is given to high-speed peripherals such as a disk, a high-speed network interface, or a graphics display
device. Since the processor originates most memory access cycles, the DMA controller can be said to “steal”
memory cycles from the processor. Hence, the interweaving technique is usually called cycle stealing.
Alternatively, the DMA controller may be given exclusive access to the main memory to transfer a block of data
without interruption. This is known as block or burst mode.
Most DMA controllers incorporate a data storage buffer. In the case of the network interface in Figure 5,
for example, the DMA controller reads a block of data from the main memory and stores it into its input buffer.
This transfer takes place using burst mode at a speed appropriate to the memory and the computer bus. Then, the
data in the buffer are transmitted over the network at the speed of the network.
A conflict may arise if both the processor and a DMA controller or two DMA controllers try to use the
bus at the same time to access the main memory. To resolve these conflicts, an arbitration procedure is
implemented on the bus to coordinate the activities of all devices requesting memory transfers.