
Rashtreeya Sikshana Samithi Trust

RV Institute of Technology and Management®


(Affiliated to VTU, Belagavi)

JP Nagar, Bengaluru – 560076

Department of Computer Science and Engineering

Course Name: Computer Organization
Course Code: 18CS34
Semester: III
Scheme: 2018

Prepared By:
Dr. Anitha J,
Professor and Head of the Department,
Department of Computer Science and Engineering
RVITM, Bengaluru - 560076
Email: [email protected]

Dr. Priyanga P,
Assistant Professor,
Department of Computer Science and Engineering
RVITM, Bengaluru - 560076
Email: [email protected]


MODULE II
INPUT/OUTPUT ORGANIZATION

One of the basic features of a computer is its ability to exchange data with other devices, such as
input and output devices. This communication capability enables a human operator to interact with the
computer and control some of its functions. Here we first discuss how I/O operations are
performed from the programmer's point of view, and then some of the hardware details associated
with buses and I/O interfaces.

ACCESSING I/O DEVICES


A simple arrangement to connect I/O devices to a computer is to use a single bus arrangement.
The bus enables all the devices connected to it to exchange information as shown in Fig 2.1.

Fig 2.1 A computer system.

It consists of three sets of lines used to carry address, data, and control signals. Each
I/O device is assigned a unique set of addresses. When the processor places a particular
address on the address lines, the device that recognizes this address responds to the commands
issued on the control lines. The processor requests either a read or a write operation, and the
requested data are transferred over the data lines. When I/O devices and the memory share the
same address space, the arrangement is called memory-mapped I/O. With memory-mapped
I/O, any machine instruction that can access memory can be used to transfer data to or from
an I/O device. For example,

Move DATAIN, R0 : Read the data from keyboard buffer


Move R0, DATAOUT : Writes data into display buffer
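
In a high-level language, memory-mapped device registers are usually accessed through pointers to fixed addresses. The following C sketch mirrors the two Move instructions above; the addresses and the uint8_t register width are illustrative assumptions, not part of any particular machine.

#include <stdint.h>

/* Hypothetical register addresses for the keyboard and display buffers. */
#define DATAIN   ((volatile uint8_t *)0x4000)
#define DATAOUT  ((volatile uint8_t *)0x4004)

void echo_one_character(void)
{
    uint8_t ch = *DATAIN;   /* equivalent of: Move DATAIN, R0  */
    *DATAOUT = ch;          /* equivalent of: Move R0, DATAOUT */
}

The volatile qualifier tells the compiler that every access must actually reach the device register rather than being optimized away.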

Most computer systems use memory-mapped I/O. Some processors have special In and Out
instructions to perform I/O transfers. When building a computer system based on these
processors, the designer has the option of connecting I/O devices to the special I/O address
space or simply incorporating them as part of the memory address space. The I/O devices
examine the low-order bits of the address bus to determine whether they should respond.

Fig 2.2: I/O interface for an input device.

Consider the I/O interface shown in Fig 2.2, which illustrates the hardware required to connect an
I/O device to the bus.

The address decoder enables the device to recognize its address when this address appears on
the address lines.
The data register holds the data being transferred to or from the processor.
The status register contains information relevant to the operation of the I/O device.
The data and status registers are connected to the data bus and assigned unique addresses.
The address decoder, the data and status registers, and the control circuitry required to
coordinate I/O transfers constitute the device’s interface circuit.
I/O buffers are introduced to compensate for the speed difference between the I/O device and the processor.

In program-controlled I/O the processor repeatedly checks a status flag to achieve the
required synchronization between the processor and an input or output device. We say that
the processor polls the device. There are two other commonly used mechanisms for
implementing I/O operations: interrupts and direct memory access.

With interrupts, synchronization is achieved by having the I/O device send a special signal over the
bus whenever it is ready for a data transfer operation.

Direct memory access is a technique used for high-speed I/O devices. It involves having the
device interface transfer data directly to or from the memory, without continuous
involvement by the processor.

Fig 2.3 shows the status and control registers of the I/O devices along with their data buffers. These
devices can be driven by program-controlled I/O, as sketched in the example below.
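
Since the example program itself is not reproduced here, the following C sketch illustrates what such a program-controlled transfer might look like, assuming the DATAIN, DATAOUT and STATUS registers of Fig 2.3 with SIN and SOUT flags; all addresses and bit positions are assumptions made for illustration.

#include <stdint.h>

#define DATAIN   (*(volatile uint8_t *)0x4000)   /* keyboard data register  */
#define DATAOUT  (*(volatile uint8_t *)0x4004)   /* display data register   */
#define STATUS   (*(volatile uint8_t *)0x4008)   /* status register         */
#define SIN      0x01   /* a character is available in DATAIN  */
#define SOUT     0x02   /* DATAOUT can accept a new character  */

/* Read one character from the keyboard and echo it to the display,
 * polling the status flags to synchronize with the devices. */
void poll_and_echo(void)
{
    while ((STATUS & SIN) == 0)    /* busy-wait until input is ready   */
        ;
    uint8_t ch = DATAIN;           /* reading DATAIN clears SIN        */

    while ((STATUS & SOUT) == 0)   /* busy-wait until display is free  */
        ;
    DATAOUT = ch;                  /* writing DATAOUT clears SOUT      */
}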

Fig 2.3: Registers in keyboard and display interfaces.

INTERRUPTS

In program-controlled I/O, the processor wastes time checking whether I/O devices are ready
by repeatedly testing their status. Instead, we can arrange for an I/O device to inform the processor
when it is ready by sending a hardware signal, called an interrupt, to the processor. At least one of the
bus control lines, called the interrupt-request line, is dedicated to this purpose.

Consider the example of two routines, COMPUTE and PRINT. The COMPUTE routine
produces n lines of output to be printed by the PRINT routine, which prints only
one line at a time.

The COMPUTE routine is executed to produce the first line of text. The PRINT routine is then
executed to send this line to the printer.

Instead of waiting until the printer has printed the line, the PRINT routine is suspended and
the COMPUTE routine is resumed.

After printing the line, the printer sends an interrupt to the processor to inform it of its availability.
COMPUTE then suspends its computation and transfers control to the
PRINT routine.

This is repeated until all n lines have been printed. The above example is illustrated in Fig 2.4.

Fig 2.4: transfer of control through the use of interrupts.

The processor first completes execution of instruction i. Then, it loads the program counter
with the address of the first instruction of the interrupt-service routine.
After execution of the interrupt-service routine, the processor has to come back to instruction
i +1.

Therefore, when an interrupt occurs, the current contents of the PC, which point to instruction
i+1, must be put in temporary storage in a known location.

A Return-from-interrupt instruction at the end of the interrupt-service routine reloads the PC


from the temporary storage location, causing execution to resume at instruction i +1.

In many processors, the return address is saved on the processor stack.


As part of handling interrupts, the processor must inform the device that its request has been
recognized so that it may remove its interrupt-request signal. This can be done by means of a special
interrupt-acknowledge signal. Alternatively, the execution of an instruction in the interrupt-service
routine that accesses a status or data register in the device interface implicitly informs the
device that its interrupt request has been recognized.

Interrupt-Service Routine vs. Subroutine

Treatment of an interrupt-service routine is very similar to that of a subroutine.


A subroutine performs a function required by the program from which it is called, whereas an
interrupt-service routine may not have anything in common with the program being executed
at the time the interrupt request is received.

A subroutine and its calling program belong to the same user, whereas the interrupt-service routine
and the interrupted program often belong to different users. Before starting execution of the
interrupt-service routine, any information that may be altered during the execution of that routine
must be saved. This information must be restored before execution of the interrupted program is resumed.


The task of saving and restoring information can be done automatically by the
processor or by program instructions. Most modern processors save only the minimum
amount of information needed to maintain the integrity of program execution, because saving
and restoring registers involves memory transfers that increase the total execution time and
hence represent execution overhead. Saving registers also increases the delay between the time
an interrupt request is received and the start of execution of the interrupt-service routine. This
delay is called interrupt latency.

Some computers provide two types of interrupts. One saves all register contents, and
the other does not. A particular I/O device may use either type, depending upon its response-
time requirements. Another interesting approach is to provide duplicate sets of processor
registers. In this case, a different set of registers can be used by the interrupt-service routine,
thus eliminating the need to save and restore registers.

INTERRUPT HARDWARE

We pointed out that an I/O device requests an interrupt by activating a bus line called
interrupt-request. Most computers are likely to have several I/O devices that can request an
interrupt. A single interrupt-request line may be used to serve n devices. All devices are
connected to the line via switches to ground as shown in Fig 2.5. To request an interrupt, a
device closes its associated switch. Thus, if all interrupt-request signals INTR1 to INTRn are
inactive, that is, if all switches are open, the voltage on the interrupt-request line will be equal
to Vdd.

Fig 2.5: Implementation of a common interrupt-request line.

This is the inactive state of the line. Since the closing of one or more switches will
cause the line voltage to drop to 0, the value of INTR is the logical OR of the requests from
individual devices, that is,

INTR = INTR1 + INTR2 + … + INTRn

To implement the arrangement of Fig 2.5, open-collector (open-drain) gates are used to drive the INTR
line. The output of such a gate is equivalent to a switch to ground that is open when the gate's
input is in the 0 state and closed when it is in the 1 state. The voltage level on the line is determined by
the data applied to all the gates connected to the bus. The resistor R is called a pull-up resistor because it
pulls the line voltage up to the high-voltage state when all the switches are open.

Fig 2.6: Status register of I/O Device

Status registers are used to determine which device activated the line. Referring to Fig
2.6, the IRQ bit is set to 1 when the device raises a request. If two or more devices have
activated the line at the same time, the tie must be broken so that only one of them is
serviced at a time.

ENABLING AND DISABLING INTERRUPTS

The arrival of an interrupt request from an external device causes the processor to
suspend the execution of one program and start the execution of another. Because interrupts can
arrive at any time, they may alter the sequence of events from the expectation of the programmer.
Hence, the interruption of program execution must be carefully controlled.
When a device activates the interrupt-request signal, it keeps this signal activated
until it learns that the processor has accepted its request. This means that the interrupt-request
signal will be active during execution of the interrupt-service routine, perhaps until an
instruction is reached that accesses the device in question.
The simplest possibility is for the processor hardware to ignore the interrupt-request line until
the execution of the first instruction of the interrupt-service routine has been
completed. Beyond this, there are three ways to handle further interrupt requests.

1. By using an Interrupt-disable instruction as the first instruction in the interrupt-service


routine, the programmer can ensure that no further interruptions will occur until an Interrupt-
enable instruction is executed. The Interrupt-enable instruction is typically included as the last instruction
in the interrupt-service routine before the Return-from-interrupt instruction. The processor
must guarantee that execution of the Return-from-interrupt instruction is completed before
further interruption can occur.

2. The second option, which is suitable for a simple processor with only one interrupt-
request line, is to have the processor automatically disable interrupts before starting the
execution of the interrupt-service routine. After saving the contents of the PC and the
processor status register (PS) on the stack, the processor performs the equivalent of executing
an Interrupt-disable instruction. It is often the case that one bit in the PS register, called
Interrupt- enable, indicates whether interrupts are enabled.

3. In the third option, the processor has a special interrupt-request line for which the
interrupt-handling circuit responds only to the leading edge of the signal. Such a line is said
to be edge-triggered.

Let us summarize the sequence of events involved in handling an interrupt request from a single
device. Assuming that interrupts are enabled, the following is a typical scenario.

1. The device raises an interrupt request.


2. The processor interrupts the program currently being executed.
3. Interrupts are disabled by changing the control bits in the PS (except in the case of edge-
triggered interrupts).
4. The device is informed that its request has been recognized, and in response, it deactivates
the interrupt-request signal.
5. The action requested by the interrupt is performed by the interrupt-service routine.
6. Interrupts are enabled and execution of the interrupted program is resumed.

2.2.3: Handling Multiple Devices

Consider the situation where a number of devices capable of initiating interrupts are
connected to the processor. Because these devices are operationally independent, there is no
definite order in which they will generate interrupts. For example, device X may request an
interrupt while an interrupt caused by device Y is being serviced, or several devices may
request interrupts at exactly the same time. This gives rise to a number of questions:

1. How can the processor recognize the device requesting an interrupt?


2. Given that different devices are likely to require different interrupt-service routines, how
can the processor obtain the starting address of the appropriate routine in each case?
3. Should a device be allowed to interrupt the processor while another interrupt is being
serviced?
4. How should two or more simultaneous interrupt requests be handled?

When a request is received over the common interrupt-request line, extra information is
needed to determine which device is requesting service. The information needed is available in
the device's status register: when a device raises an interrupt request, it sets the IRQ bit in its
status register to 1. For example,
bits KIRQ and DIRQ are the interrupt request bits for the keyboard and the display,
respectively. The simplest way to identify the interrupting device is to have the interrupt-
service routine poll all the I/O devices connected to the bus. The first device encountered with
its IRQ bit set is the device that should be serviced. An appropriate subroutine is called to
provide the requested service.
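
A minimal C sketch of such a polling interrupt-service routine is shown below; the register address, the KIRQ/DIRQ bit positions, and the service functions are illustrative assumptions.

#include <stdint.h>

#define STATUS  (*(volatile uint8_t *)0x4008)   /* shared status register (assumed) */
#define KIRQ    0x04    /* keyboard interrupt-request bit */
#define DIRQ    0x08    /* display interrupt-request bit  */

static void keyboard_service(void) { /* read DATAIN, etc.    */ }
static void display_service(void)  { /* refill DATAOUT, etc. */ }

void interrupt_service_routine(void)
{
    /* Devices are polled in a fixed order; the first device found with its
     * IRQ bit set is serviced, so the polling order fixes the priority. */
    if (STATUS & KIRQ)
        keyboard_service();
    else if (STATUS & DIRQ)
        display_service();
}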

The polling scheme is easy to implement. Its main disadvantage is the time spent
interrogating the IRQ bits of devices that may not be requesting any service. An
alternative approach, called vectored interrupts, overcomes this drawback.

Vectored Interrupts

A device requesting an interrupt can identify itself by sending a special code to the processor
over the bus, rather than requiring the processor to poll the devices.
This enables the processor to identify individual devices even if they share a single interrupt-
request line.

The code supplied by the device may represent the starting address of the interrupt-service
routine for that device.
The code length is typically in the range of 4 to 8 bits.
The remainder of the address is supplied by the processor based on the area in its memory
where the addresses for interrupt-service routines are located.

This arrangement implies that the interrupt-service routine for a given device must
always start at the same location. The programmer can gain some flexibility by storing in this
location an instruction that causes a branch to the appropriate routine.
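
One common way to realize this in software is to keep a table of service-routine addresses indexed by the device code; the sketch below assumes a 16-entry table and is only an illustration of the dispatch step, not a description of any specific processor.

typedef void (*isr_t)(void);              /* type of an interrupt-service routine */

#define NUM_VECTORS 16
static isr_t vector_table[NUM_VECTORS];   /* filled in during initialization */

/* Called with the code supplied by the interrupting device. */
void dispatch(unsigned device_code)
{
    if (device_code < NUM_VECTORS && vector_table[device_code] != 0)
        vector_table[device_code]();      /* branch to that device's routine */
}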

Interrupt Nesting:

Interrupts should be disabled during the execution of an interrupt-service routine, to


ensure that a request from one device will not cause more than one interruption. The same
arrangement is often used when several devices are involved, in which case execution of a
given interrupt-service routine, once started, always continues to completion before the
processor accepts an interrupt request from a second device. Interrupt-service routines are
typically short, and the delay they may cause is acceptable for most simple devices.

For some devices, a long delay in responding to an interrupt request may lead to
erroneous operation. Consider, for example, a computer that keeps track of the time of day
using a real-time clock. This is a device that sends interrupt requests to the processor at
regular intervals. For each of these requests, the processor executes a short interrupt-service
routine to increment a set of counters in the memory that keep track of time in seconds,
minutes, and so on. Proper operation requires that the delay in responding to an interrupt
request from the real- time clock be small in comparison with the interval between two
successive requests. To ensure that this requirement is satisfied in the presence of other
interrupting devices, it may be necessary to accept an interrupt request from the clock during
the execution of an interrupt- service routine for another device.

This example suggests that I/O devices should be organized in a priority structure. An
interrupt request from a high-priority device should be accepted while the processor is
servicing another request from a lower-priority device.
A multiple-level priority organization means that during execution of an interrupt-service
routine, interrupt requests will be accepted from some devices but not from others, depending
upon the device’s priority.

To implement this scheme, we can assign a priority level to the processor that can be changed
under program control. The priority level of the processor is the priority of the program that is
currently being executed. The processor accepts interrupts only from devices that have
priorities higher than its own.
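
The acceptance rule itself can be stated in a few lines of C; the sketch below only models the comparison, and the way the priority field is packed into the PS word is processor-specific.

static unsigned processor_priority;   /* priority field of the PS; written only by
                                         privileged instructions */

/* Returns nonzero if a request at the given device priority should be accepted. */
int accept_interrupt(unsigned device_priority)
{
    return device_priority > processor_priority;
}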

The processor’s priority is usually encoded in a few bits of the processor status word.
It can be changed by program instructions that write into the PS.

These are privileged instructions, which can be executed only while the processor is running
in the supervisor mode. Thus, a user program cannot accidentally, or intentionally, change
the priority of the processor and disrupt the system’s operation.
An attempt to execute a privileged instruction while in the user mode leads to a special type
of exception called a privilege exception.

A multiple-priority scheme can be implemented easily by using separate interrupt-


request and interrupt-acknowledge lines for each device, as shown in Fig 2.7. Each of the
interrupt-request lines is assigned a different priority level. Interrupt requests received over
these lines are sent to a priority arbitration circuit in the processor. A request is accepted only
if it has a higher priority level than that currently assigned to the processor.

Fig 2.7: Implementation of interrupt priority using individual interrupt-request and acknowledgment
lines.
Simultaneous Requests

Polling the status registers of the I/O devices is the simplest mechanism. In this case, priority
is determined by the order in which the devices are polled. When vectored interrupts are
used, we must ensure that only one device is selected to send its interrupt vector code.
A widely used scheme is to connect the devices to form a daisy chain, as shown in Fig 2.8.
The interrupt-request line INTR is common to all devices. The interrupt-acknowledge line,
INTA, is connected in a daisy-chain fashion, such that the INTA signal propagates serially
through the devices.
When several devices raise an interrupt request and the INTR line is activated, the processor
responds by setting the INTA line to 1.
This signal is received by device 1. Device 1 passes the signal on to device 2 only if it does
not require any service.

Fig 2.8: Daisy chain arrangement of single interrupt line

If device 1 has a pending request for interrupt, it blocks the INTA signal and proceeds to put
its identifying code on the data lines.
In the daisy-chain arrangement, the device that is electrically closest to the processor has the
highest priority. The second device along the chain has second highest priority.
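
The propagation rule can be summarized with a short sketch: each device in turn either claims the acknowledge or passes it on. The structure and field names below are illustrative, not part of Fig 2.8.

struct device {
    int      pending;   /* 1 if this device has an interrupt request pending          */
    unsigned id_code;   /* code placed on the data lines when the device claims INTA  */
};

/* Device 0 is electrically closest to the processor, so it has the highest
 * priority. Returns the claiming device's code, or -1 if INTA passes through. */
int propagate_inta(const struct device chain[], int n)
{
    for (int i = 0; i < n; i++) {
        if (chain[i].pending)
            return (int)chain[i].id_code;   /* blocks INTA from going further */
    }
    return -1;
}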

Fig 2.9: Arrangement of priority groups

The main advantage of the scheme in Fig 2.7 is that it allows the processor to accept interrupt
requests from some devices but not from others, depending upon their priorities.
The two schemes may be combined to produce the more general structure in as shown in Fig
2.9.
Devices are organized in groups, and each group is connected at a different priority level.
Within a group, devices are connected in a daisy chain.

Controlling Device Requests

Until now, we have assumed that an I/O device interface generates an interrupt
request whenever it is ready for an I/O transfer, for example whenever the SIN flag is 1. It is
important to ensure that interrupt requests are generated only by those I/O devices that are
being used by a given program. Idle devices must not be allowed to generate interrupt
requests, even though they may be ready to participate in I/O transfer operations. Hence, we
need a mechanism in the interface circuits of individual devices to control whether a device is
allowed to generate an interrupt request.

The control needed is usually provided in the form of an interrupt-enable bit in the
device’s interface circuit. The keyboard interrupt-enable, KEN, and display interrupt-enable,
DEN, flags in CONTROL Register perform this function. If either of these flags is set, the
interface circuit generates an interrupt request whenever the corresponding status flag in
register STATUS is set. At the same time, the interface circuit sets bit KIRQ or DIRQ to
indicate that the keyboard or display unit, respectively, is requesting an interrupt. If an
interrupt-enable bit is equal to 0, the interface circuit will not generate an interrupt request,
regardless of the state of the status flag.

There are two independent mechanisms for controlling interrupt requests. At the device end, an
interrupt-enable bit in a control register determines whether the device is allowed to generate an
interrupt request. At the processor end, either an interrupt enable bit in the PS register or a
priority structure determines whether a given interrupt request will be accepted.


2.2.4: EXCEPTIONS

Any event that causes an interruption is called an exception. Hence, I/O interrupts are
one example of an exception. We now describe a few other kinds of exceptions.

Recovery from Errors:


Computers use a variety of techniques to ensure that all hardware components
are operating properly. Many computers include an error-checking code in the main memory,
which allows detection of errors in the stored data. If an error occurs, the control hardware
detects it and informs the processor by raising an interrupt.

The processor may also interrupt a program if it detects an error or an unusual


condition while executing the instructions of the program. For example, the OP-code field of
an instruction may not correspond to any legal instruction, or an arithmetic instruction may
attempt a division by zero.

When exception processing is initiated as a result of such errors, the processor


proceeds in exactly the same manner as in the case of an I/O interrupt request. The corresponding
interrupt-service routine takes appropriate action to recover from the error, if possible, or to inform the user about
it. Recall that in the case of an I/O interrupt, the processor completes execution of the
instruction in progress before accepting the interrupt. However, when an interrupt is caused
by an error, execution of the interrupted instruction cannot usually be completed.

Debugging:
Another important type of exception is used as an aid in debugging programs. System
software usually includes a program called a debugger, which helps the programmer find
errors in a program. The debugger uses exceptions to provide two important facilities called
trace and breakpoints.

1. Trace mode
When a processor is operating in the trace mode, an exception occurs after execution of every
instruction, using the debugging program as the exception-service routine. The debugging
program enables the user to examine the contents of registers, memory locations, and so on.
On return from the debugging program, the next instruction in the program being debugged is
executed, and then the debugging program is activated again. The trace exception is disabled
during the execution of the debugging program itself.

2. Breakpoint Mode
Breakpoints provide a similar facility, except that the program being debugged is
interrupted only at specific points selected by the user. An instruction called Trap or Software-
interrupt is usually provided for this purpose. Execution of this instruction results in exactly
the same actions as when a hardware interrupt request is received. While debugging a
program, the user may wish to interrupt program execution after instruction i. The debugging
routine saves instruction i+1 and replaces it with a software interrupt instruction. When the
program is executed and reaches that point, it is interrupted and the debugging routine is
activated.
Privilege Exception:


To protect the operating system of a computer from being corrupted by user
programs, certain instructions can be executed only while the processor is in supervisor
mode. These are called privileged instructions. For example, when the processor is running in
the user mode, it will not execute an instruction that changes the priority level of the
processor or that enables a user program to access areas in the computer memory that have
been allocated to other users. An attempt to execute such an instruction will produce privilege
exceptions, causing the processor to switch to the supervisor mode and begin executing an
appropriate routine in the operating system.

DIRECT MEMORY ACCESS:

The discussion in the previous sections concentrates on data transfer between the processor
and I/O devices. Data are transferred by executing instructions such as

Move DATAIN, R0

An instruction to transfer input or output data is executed only after the processor determines
that the I/O device is ready. To do this, the processor either polls a status flag in the device
interface or waits for the device to send an interrupt request. In either case, considerable
overhead is incurred, because several program instructions must be executed for each data
word transferred. In addition to polling the status register of the device, instructions are
needed for incrementing the memory address and keeping track of the word count. When
interrupts are used, there is the additional overhead associated with saving and restoring the
program counter and other state information.

Definition: A mechanism that allows the transfer of a block of data directly between an external
device and the main memory, without continuous intervention by the processor, is called
direct memory access (DMA).

DMA transfers are performed by a control circuit that is part of the I/O device interface. We
refer to this circuit as a DMA controller. The DMA controller performs the functions that would
normally be carried out by the processor when accessing the main memory. For each word
transferred, it provides the memory address and all the bus signals that control data transfer.
Since it has to transfer blocks of data, the DMA controller must increment the memory address
for successive words and keep track of the number of transfers.

Although a DMA controller can transfer data without intervention by the processor, its
operation must be under the control of a program executed by the processor. The steps
involved in a DMA transfer are as follows.
1. The processor sends the starting address, the number of words in the block, and the
direction of the transfer.
2. On receiving this information, the DMA controller initiates the data transfer.
3. The controller transfers the block of data between secondary storage and main memory.
4. When the entire block has been transferred, the controller informs the processor by
raising an interrupt signal.


While a DMA transfer is taking place, the program that requested the transfer cannot continue
but the processor can execute another program. After the DMA transfer is completed, the
processor can return to the program that requested the transfer.
The OS is responsible for the following operations:
1. Initiating I/O operations in response to a request from an application program
(simple I/O or DMA).
2. Suspending the execution of one program and starting another.
3. Moving a process from the runnable state to the blocked state and vice versa.

Fig 2.10: Registers in a DMA controller

Fig 2.10 shows an example of the DMA controller registers that are accessed by the
processor to initiate transfer operations.
1. Two registers are used for storing the starting address and the word count.
2. The third register contains status and control flags:
a. The R/W bit (bit 1) determines the direction of the transfer:
1 - Read operation
0 - Write operation
b. The Done flag (bit 0):
0 - DMA operation in progress
1 - DMA operation completed
c. The Interrupt-enable (IE) flag (bit 30):
0 - No interrupt is raised on completion of the DMA transfer
1 - An interrupt is raised on completion of the DMA transfer
d. The Interrupt-request (IRQ) flag (bit 31):
0 - No interrupt has been raised
1 - An interrupt has been raised by the DMA controller
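
Putting the register descriptions above together, a driver would program the controller roughly as sketched below. The register addresses are hypothetical, and the bit positions simply follow the list above.

#include <stdint.h>

#define DMA_START_ADDR  (*(volatile uint32_t *)0x5000)
#define DMA_WORD_COUNT  (*(volatile uint32_t *)0x5004)
#define DMA_STATUS      (*(volatile uint32_t *)0x5008)

#define DMA_DONE  (1u << 0)     /* 1 = transfer completed            */
#define DMA_RW    (1u << 1)     /* 1 = read, 0 = write               */
#define DMA_IE    (1u << 30)    /* 1 = raise an interrupt when done  */
#define DMA_IRQ   (1u << 31)    /* 1 = interrupt has been raised     */

/* Start a DMA read of 'count' words into main memory at address 'dest'. */
void dma_start_read(uint32_t dest, uint32_t count)
{
    DMA_START_ADDR = dest;
    DMA_WORD_COUNT = count;
    DMA_STATUS     = DMA_RW | DMA_IE;   /* set direction and enable the interrupt */
}

The processor (or its interrupt-service routine) would later check the Done and IRQ bits to confirm that the transfer completed.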

Fig 2.11: Use of DMA controllers in a computer system

Fig 2.11 shows how DMA controllers may be used. A DMA controller connects a
high-speed network to the computer bus. The disk controller, which controls two disks, also
has DMA capability and provides two DMA channels. It can perform two independent DMA
operations, as if each disk had its own DMA controller. The registers needed to store the
memory address, the word count, and so on are duplicated, so that one set can be used with
each device.

Referring to Fig 2.11, the steps involved in a DMA operation, in addition to those given above,
are as follows.

1. To start a DMA transfer, a program writes the address and word count information into the
registers of the corresponding channel of the disk controller.
2. The processor also provides the disk controller with information to identify the data for future
retrieval.
3. The DMA controller proceeds independently to implement the specified (read or write) operation.
4. When the DMA transfer is completed, this fact is recorded in the status and control
register of the DMA channel by setting the Done bit to 1.
5. If the IE bit is set to 1, the controller sends an interrupt request to the processor and sets
the IRQ bit.
6. The status register can also be used to record other information, such as whether the
transfer took place correctly or errors occurred.

Memory accesses by the processor and the DMA controller are interwoven. Requests
by DMA devices for using the bus are always given higher priority than processor requests.
Among different DMA devices, top priority is given to high-speed peripherals such as a disk,
a high-speed network interface, or a graphics display device.
Cycle Steal Mode: Since the processor originates most memory access cycles, the DMA controller
can be said to "steal" memory cycles from the processor. Hence, this interweaving technique
is usually called cycle stealing.

Burst Mode: The DMA controller may be given exclusive access to the main memory to transfer
a block of data without interruption. This is known as block or burst mode.

Most DMA controllers incorporate a data storage buffer. In the case of the network
interface in Fig 2.11, for example, the DMA controller reads a block of data from the main
memory and stores it into its input buffer. This transfer takes place using burst mode at a
speed appropriate to the memory and the computer bus. Then, the data in the buffer are
transmitted over the network at the speed of the network. Bus Arbitration techniques are used
for resolving any conflicts between two DMA or Processor and DMA for accessing Data bus.

2.3.1: Bus Arbitration


The device that is allowed to initiate data transfers on the bus at any given time is
called the bus master. When the current master withdraws control of the bus, another device
can acquire this status. Bus arbitration is the process by which the next device to become the
bus master is selected and bus mastership is transferred to it.

There are two approaches to bus arbitration:


1. Centralized: A single bus arbiter performs the required arbitration.
2. Distributed: All devices participate in the selection of the next bus master.

1. Centralized Arbitration

The bus arbiter may be the processor or a separate unit connected to the bus. Fig 2.12 shows a basic
arrangement in which the processor contains the bus arbitration circuitry. In this case, the
processor is normally the bus master unless it grants bus mastership to one of the DMA
controllers. A DMA controller indicates that it needs to become the bus master by activating
the Bus-Request line, BR. The signal on the Bus-Request line is the logical OR of the bus requests
from all the devices connected to it. When Bus-Request is activated, the processor activates the
Bus-Grant signal, BG1, indicating to the DMA controllers that they may use the bus when it
becomes free. This signal is connected to all DMA controllers in a daisy-chain arrangement.
Thus, if DMA controller 1 is requesting the bus, it blocks the propagation of the grant signal to
other devices; otherwise, it passes the grant downstream by asserting BG2. The current bus
master indicates to all devices that it is using the bus by activating another open-
collector line called Bus-Busy (BBSY). Hence, after receiving the Bus-Grant signal, a DMA
controller waits for Bus-Busy to become inactive and then assumes mastership of the bus. At this
time, it activates Bus-Busy to prevent other devices from using the bus at the same time. The
connections for centralized arbitration are shown in Fig 2.12.

Fig 2.12: Simple arrangement for bus arbitration using a daisy chain.

Fig 2.13: Sequence of signal during transfer of bus mastership for the devices in Fig 2.12

Fig 2.13 shows the sequence of events for the devices in Fig 2.12 as DMA
controller 2 requests and acquires bus mastership and later releases the bus.

2. Distributed arbitration:

Distributed arbitration means that all devices waiting to use the bus have equal responsibility
in carrying out the arbitration process, without using a central arbiter. A simple method for
distributed arbitration is illustrated in Fig 2.14. Each device on the bus is assigned a 4-bit
identification number. When one or more devices request the bus, they assert the
Start-Arbitration signal and place their 4-bit ID numbers on four open-collector lines, ARB0
through ARB3. A winner is selected as a result of the interaction among the signals transmitted
over these lines by all contenders. The net outcome is that the code on the four lines
represents the request that has the highest ID number.

Assume that two devices, A and B, having ID numbers 5 and 6, respectively, are
requesting the use of the bus. Device A transmits the pattern 0101, and device B transmits the
pattern 0110. The code seen by both devices is 0111, the OR of the two patterns. Each device compares
the pattern on the arbitration lines with its own ID, starting from the most significant bit. If it detects a
difference at any bit position, it disables its drivers at that bit position and at all lower-order bit
positions. In our example, device A detects a difference on line ARB1, so it disables its drivers on
lines ARB1 and ARB0. The pattern on the lines then becomes 0110, which matches the ID of device B,
and device B therefore wins control of the bus. Distributed arbitration offers higher reliability
because all the devices participate in selecting the next bus master.
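
The bit-by-bit comparison can be simulated in a few lines of C, reproducing the A/B example above; the code below is only an illustration of the rule, not of the actual open-collector circuitry.

#include <stdio.h>
#include <stdint.h>

/* Simulate distributed arbitration for up to 8 devices with 4-bit IDs.
 * The ARB lines carry the wired-OR of all driven patterns; a device that
 * sees a 1 on a line where its own ID has a 0 stops driving that bit and
 * all less significant bits. */
static uint8_t arbitrate(const uint8_t id[], int n)
{
    uint8_t drive[8];
    for (int i = 0; i < n; i++)
        drive[i] = id[i];

    for (int pass = 0; pass < 4; pass++) {            /* one pass per bit is enough */
        uint8_t bus = 0;
        for (int i = 0; i < n; i++)
            bus |= drive[i];                           /* wired-OR of the ARB lines  */
        for (int i = 0; i < n; i++)
            for (int b = 3; b >= 0; b--)
                if ((bus & (1u << b)) && !(id[i] & (1u << b))) {
                    drive[i] &= (uint8_t)~((1u << (b + 1)) - 1);  /* drop this bit and lower ones */
                    break;
                }
    }
    uint8_t bus = 0;
    for (int i = 0; i < n; i++)
        bus |= drive[i];
    return bus;                                        /* code of the winning device */
}

int main(void)
{
    uint8_t ids[] = { 0x5, 0x6 };                      /* device A = 0101, device B = 0110 */
    printf("winner code = 0x%X\n", arbitrate(ids, 2)); /* prints 0x6, i.e. device B wins   */
    return 0;
}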

Fig 2.14: A distributed arbitration scheme

BUSES
The processor, main memory, and I/O devices can be interconnected by means of a
common bus whose primary function is to provide a communication path for the transfer of
data. The bus includes the lines needed to support interrupts and arbitration. In this section,
we discuss the main features of the bus protocols used for transferring data. A bus protocol is
the set of rules that govern the behavior of various devices connected to the bus as to when to
place information on the bus, assert control signals, and so on. After describing bus protocols,
we will present examples of interface circuits that use these protocols. Buses are
categorized into three types based on the type of signal they carry: the control bus, the data bus, and
the address bus. Based on their operation, buses are further categorized into two types: synchronous
buses and asynchronous buses.

Synchronous Bus
In a synchronous bus, all devices derive timing information from a common clock
line. Equally spaced pulses on this line define equal time intervals. In the simplest form of a
synchronous bus, each of these intervals constitutes a bus cycle during which one data
transfer can take place. Such a scheme is illustrated in Fig 2.15.

Let us consider the sequence of events during an input (read) operation.

At time t0, the master places the device address on the address lines and sends an appropriate
command on the control lines.
The command will indicate an input operation and specify the length of the operand to be
read, if necessary.
Information travels over the bus at a speed determined by its physical and electrical
characteristics.
The clock pulse width, t1 – t0, must be longer than the maximum propagation delay between
two devices connected to the bus. It also has to be long enough to allow all devices to decode
the address and control signals so that the addressed device can respond at time t1.

It is important that slaves take no action or place any data on the bus before t1. The
information on the bus is unreliable during the period t0 to t1 because signals are changing
state.
The addressed slave places the requested input data on the data lines at time t1. At
the end of the clock cycle, at time t2, the master strobes the data on the data lines into its
input buffer.

Fig 2.15: Timing of an input transfer on a synchronous bus

For data to be loaded correctly into any storage device, such as a register built with flip-flops,
the data must be available at the input of that device for a period greater than the setup time of
the device. Hence, the period t2 - t1 must be greater than the maximum propagation time on
the bus plus the setup time of the input buffer register of the master.

A similar procedure is followed for an output operation. The master places the output data on
the data lines when it transmits the address and command information. At time t2, the
addressed device strobes the data lines and loads the data into its data buffer.

The timing diagram in Fig 2.15 is an idealized representation of the actions that take place on
the bus lines. The exact times at which signals actually change state are somewhat different
from those shown because of propagation delays on bus wires and in the circuits of the
devices. Fig 2.16 gives a more realistic picture of what happens in practice. It shows two
views of each signal, except the clock. Because signals take time to travel from one device to
another, a given signal transition is seen by different devices at different times. One view
shows the signal as seen by the master and the other as seen by the slave.

The master sends the address and command signals on the rising edge at the beginning of
clock period 1 (t0).

These signals do not actually appear on the bus until tAM, largely due to the delay in the bus
driver circuit.
Later, at tAS, the signals reach the slave.
The slave decodes the address and at t1 sends the requested data. The data signals do not
appear on the bus until tDS; they then travel toward the master and arrive at tDM.
At t2, the master loads the data into its input buffer. Therefore, the period t2 - tDM is the setup
time for the master's input buffer. The data must continue to be valid after t2 for a period
equal to the hold time of that buffer.

Fig 2.16: A detailed timing diagram for the input transfer of Fig 2.15

Multiple-Cycle transfers:
The scheme described above results in a simple design for the device interface;
however, it has some limitations. Because a transfer has to be completed within one clock
cycle, the clock period, t2 - t0, must be chosen to accommodate the longest delays on the bus
and the slowest device interface. This forces all devices to operate at the speed of the slowest
device. Also, the processor has no way of determining whether the addressed device has
actually responded. It simply assumes that, at t2, the output data have been received by the
I/O device or the input data are available on the data lines. If, because of a malfunction, the
device does not respond, the error will not be detected.

To overcome these limitations, most buses incorporate control signals that represent a
response from the device. These signals inform the master that the slave has recognized its
address and that it is ready to participate in a data-transfer operation. They also make it
possible to adjust the duration of the data-transfer period to suit the needs of the participating
devices. To simplify this process, a high-frequency clock signal is used such that a complete
data transfer cycle would span several clock cycles. Then, the number of clock cycles
involved can vary from one device to another.

An example of this approach is shown in Fig 2.17.

 During clock cycle 1, the master sends address and command information on the bus,
requesting a read operation.
 The slave receives this information and decodes it. On the following active edge of the clock,
at the beginning of clock cycle 2, it decides to respond and begins to access
the requested data.
 Some delay is involved in getting the data, and hence the slave cannot
respond immediately.
 The data become ready and are placed on the bus in clock cycle 3. At the same time,
the slave asserts a control signal called Slave-ready.
 The Slave-ready signal is an acknowledgment from the slave to the master,
confirming that valid data have been sent.
 The Slave-ready signal allows the duration of a bus transfer to change from one
device to another.
 If the addressed device does not respond at all, the master waits for some predefined
maximum number of clock cycles, then aborts the operation. This could be the result
of an incorrect address or a device malfunction.

Fig 2.17: An input transfer using multiple clock cycles

ASYNCHRONOUS BUS
An alternative scheme for controlling data transfers on the bus is based on the use of
a handshake between the master and the slave. The concept of a handshake is a generalization
of the idea of the Slave-ready signal in Fig2.17. The common clock is replaced by two timing
control lines, Master-ready and Slave-ready. The first is asserted by the master to indicate that
it is ready for a transaction, and the second is a response from the slave.

In principle, a data transfer controlled by a handshake protocol proceeds as follows.


The master places the address and command information on the bus. Then it indicates to all
devices that it has done so by activating the Master-ready line.

This causes all devices on the bus to decode the address. The selected slave performs the
required operation and informs the processor it has done so by activating the Slave-ready line.
The master waits for Slave-ready to become asserted before it removes its signals from the
bus. In the case of a read operation, it also strobes the data into its input buffer.

An example of the timing of an input data transfer using the handshake scheme is given in
Fig 2.18, which depicts the following sequence of events.

t0 – The master places the address and command information on the bus, and all devices on
the bus begin to decode this information.

t1 – The master sets the Master-ready line to 1 to inform the I/O devices that the address and
command information is ready. The delay t1-t0 is intended to allow for any skew that may
occur on the bus. Skew occurs when two signals simultaneously transmitted from one source
arrive at the destination at different times. This happens because different lines of the bus
may have different propagation speeds. Thus, to guarantee that the Master-ready signal does
not arrive at any device ahead of the address and command information, the delay t1-t0 should
be larger than the maximum possible bus skew.

t2 – The selected slave, having decoded the address and command information performs the
required input operation by placing the data from its data register on the data lines.

t3 – The Slave-ready signal arrives at the master, indicating that the input data are available
on the bus.
t4 – The master removes the address and command information from the bus. The delay
between t3 and t4 is again intended to allow for bus skew.

t5 – When the device interface receives the 1 to 0 transition of the Master-ready signal, it
removes the data and the Slave-ready signal from the bus. This completes the input transfer.

Fig 2.18: Handshake control of data during an input operation

The timing for an output operation, illustrated in Fig 2.19, is essentially the
same as for an input operation. In this case, the master places the output data on the data
lines at the same time that it transmits the address and command information. The selected
slave stores the data into its output buffer when it receives the Master-ready signal and
indicates that it has done so by setting the Slave-ready signal to 1.

INTERFACE CIRCUITS

The I/O interface of a device consists of the circuitry needed to connect that device to the
bus. On one side of the interface are the bus lines for address, data, and control. On the other
side are the connections needed to transfer data between the interface and the I/O device. This
side is called a port. There are two types of ports:
Parallel port: Transfers multiple bits of data simultaneously to or from the device.
Serial port: Sends and receives data one bit at a time.

Fig 2.19: handshake control of data transfer during an output operation

2.5.1 PARALLEL PORT:

A parallel port transfers multiple bits at a time, usually in multiples of 8. The connection
between the device and the computer uses a multi-pin connector and a cable with many wires,
typically arranged in a flat configuration. The interface is simple, as there is no need for parallel-to-serial
conversion. This arrangement is used when the I/O devices are close to the computer.

The working of the parallel port is described below using 8-bit I/O interfaces, i.e., a keyboard and a printer.
Fig 2.20 shows a circuit that can be used to connect a keyboard to a processor.

There are only two registers: a data register, DATAIN, and a status register containing
the keyboard status flag SIN.
The keyboard consists of mechanical switches that are normally open. When a key is pressed, its
switch closes and establishes a path for an electrical signal. This signal is detected by an
encoder circuit that generates the ASCII code for the corresponding character.

Fig 2.20: Keyboard to processor connection.

Errors that occur during a key press due to bouncing of the mechanical contact are handled by a
simple debouncing circuit, which is part of the keyboard hardware/encoder circuit.
When a key is pressed, the Valid signal changes from 0 to 1, causing the ASCII code of the
corresponding character to be loaded into the DATAIN register and the status flag SIN to be
set to 1. The status flag is cleared to 0 when the processor reads the contents of DATAIN
register.

Fig 2.21: An input interface circuit.

The two addressable locations in this interface are DATAIN and STATUS. They occupy
adjacent word locations in the address space.

Bit b1 of the STATUS register corresponds to SIN (the status flag).


The output lines of the DATAIN register are connected to the data lines of the bus by means of
tri-state drivers.
The tri-state drivers are turned on when the processor issues a read instruction with an address that
selects the DATAIN register, i.e., when A0 is equal to 1; the STATUS register is selected when A0
is equal to 0.
If D0 is equal to 0, the data on the data lines are invalid.
The Read-Data or Read-Status signal becomes active only when the Master-ready signal is received.

In turn, the Read-Data or Read-Status signal activates the Slave-ready signal. The
implementation of the status flag is shown in Fig 2.22.

Fig 2.22: Circuit for the status flag block in Fig 2.21

An edge-triggered D flip-flop is used to implement the status flag. It is set to 1 by a rising
edge on the Valid signal, which changes the output of the NOR gate, i.e., SIN, to 1.
SIN is presented as 1 only while the Master-ready signal is 1; otherwise the D flip-flop is
cleared to 0.

OUTPUT INTERFACE

The output interface shown in Fig 2.23 can be used to connect an output device such
as a printer. The printer uses two handshake signals, Valid and Idle, in a manner similar to the
handshake between the bus signals Master-ready and Slave-ready.

The Idle signal indicates that the printer is ready to receive data from the processor. The Valid
signal is asserted when new data are placed on the data lines.

SOUT is set to 1 when the printer is ready to accept another character and cleared to 0 when a new
character is loaded into DATAOUT.

The parallel 8-bit output port is as shown in the Fig 2.24.

Fig 2.23: Printer to processor connection

Fig 2.24: Output Parallel Interface circuit.

PARALLEL INTERFACE

Fig 2.25: combined input/output interface circuit

The figure shown above is a combination of Figs 2.21 and 2.23.
The circuit uses different data lines for input (PA0 to PA7) and output (PB0 to PB7).
Address bits A0 and A1, renamed RS0 and RS1, select any one of the three addressable locations:
the DATAIN, DATAOUT, and STATUS registers. S0 corresponds to SIN and S1 to SOUT.
For bidirectional I/O devices, a simpler and more flexible circuit can be designed, as shown in
Fig 2.26.
Lines P0 to P7 can be used either as inputs or as outputs.
The DDR (Data Direction Register) directs each data line of the port to the DATAIN or
DATAOUT register based on the value written to it:
1 - output line, 0 - input line (a configuration sketch is given below).
C1 and C2 are provided to control the interaction between the interface circuit and the I/O device.
C2 is bidirectional and is used to provide different modes of signaling.
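
As a small illustration of the DDR mechanism, the sketch below configures the upper four port lines as outputs and the lower four as inputs; the register addresses are hypothetical.

#include <stdint.h>

#define DDR      (*(volatile uint8_t *)0x6000)   /* data direction register */
#define DATAOUT  (*(volatile uint8_t *)0x6004)
#define DATAIN   (*(volatile uint8_t *)0x6008)

uint8_t configure_and_use_port(void)
{
    DDR = 0xF0;              /* P7-P4 as outputs, P3-P0 as inputs */
    DATAOUT = 0xA0;          /* drive the four output lines       */
    return DATAIN & 0x0F;    /* read the four input lines         */
}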

Fig 2.26: A general 8-bit parallel interface.

Fig 2.27: updated parallel port interface for Fig 2.25


In Fig 2.26, My-address is the output of the address decoder. RS0, RS1, and RS2 are used
to select one of the eight registers in the interface: the input and output data registers, the DDR,
and the control and status registers.
Adopting these changes, the circuit of Fig 2.25 can be redesigned as shown in Fig 2.27
below. The timing diagram for the same is shown in Fig 2.28.

The modified circuit remains in its idle state until the output of the address decoder, My-address,
goes high; it then asserts Load-Data or Read-Status depending on address bit A0 and the state of R/W.

Fig 2.28: Timing Diagram for output interface shown in Fig 2.27

2.5.3 : SERIAL PORT

A serial interface is used to connect the processor to I/O devices that transmit data one bit
at a time.
Data are transferred in a bit-serial fashion on the device side and in a bit-parallel fashion on the
processor side.

The transformation between the parallel and serial formats is achieved with shift registers that
have parallel access capability as shown in Fig 2.29.

The input shift register accepts bit-serial input from the I/O device. When all 8 bits of data have
been received, the contents of this shift register are loaded in parallel into the DATAIN register.

Output data in the DATAOUT register are transferred to the output shift register, from which the
bits are shifted out and sent to the I/O device. The part of the interface that deals with the bus is the
same as in the parallel interface. Two status flags, which we will refer to as SIN and SOUT, are
maintained by the Status and control block.

The SIN flag is set to 1 when new data are loaded into DATAIN from the shift register, and
cleared to 0 when these data are read by the processor.


Fig 2.29: Serial bus interface

The SOUT flag indicates whether the DATAOUT register is available. It is cleared to 0 when the
processor writes new data into DATAOUT and set to 1 when data are transferred from
DATAOUT to the output shift register.

Double buffering is used in the input and output paths. It is possible to implement DATAIN and
DATAOUT themselves as shift registers, but this introduces delays, since the interface would not be
able to start receiving the next character until the processor reads the contents of DATAIN.

A serial connection is used for devices that are physically far away from the computer.

STANDARD INTERFACE

The processor bus is defined by the signals and clock of the processor; the cache and main memory
are connected to it. This bus is printed on the motherboard, and only a few devices can be
connected to the processor through it.

Another bus (the expansion bus) is also provided on the motherboard; it is used to connect
devices to the motherboard through a bridge.
The bridge converts the signals and protocol of one bus type to those of the other.
The expansion buses differ in electrical properties, signaling scheme, and data transfer speed,
and therefore cannot be governed by the same clock.
It is difficult to have a uniform protocol for the bus on the motherboard and the different expansion buses.


The practical solution to this problem is to define standards for certain classes of
interconnection.
Collaborative efforts of organizations such as IEEE, ANSI, and ISO have produced standard
interfaces for different devices and computers.
The following are four widely used standard interfaces:
1. PCI (Peripheral Component Interconnect)
2. SCSI (Small Computer System Interface)
3. USB (Universal Serial Bus)
4. ISA (Industry Standard Architecture)
PCI defines a standard for the expansion bus on the motherboard.
SCSI is a high-speed parallel bus intended for devices such as disks and video displays.
USB uses serial transmission to suit devices such as the keyboard, the mouse, and Internet connections.
ISA is an older expansion-bus standard; IDE (Integrated Drive Electronics) disks are commonly connected through it.
An interface standard generally specifies the following:
1. The types of devices to be connected
2. The data transfer rate
3. The communication medium, such as parallel wires, serial cable, or telephone lines
4. The electrical and physical properties of the medium used for data transfer
5. The bandwidth of the medium
6. The minimum and maximum voltage and current levels on the channel

Fig 2.30 illustrates the way these standard interfaces are used in a typical computer.

Fig 2.30: Example of how different standard interfaces are used in a computer.


PERIPHERAL COMPONENT INTERCONNECT (PCI)

PCI supports the functions found on a processor bus but in a standardized format that is
independent of any processor.
Devices connected through PCI appear as if they were connected directly to the processor bus.
They are assigned addresses in the memory address space of the processor for memory-mapped
I/O, or in the I/O address space for I/O-mapped I/O.
A PCI device simply plugs into a connector on the system board, and the PCI configuration software
takes care of the rest of the operations.

Fig 2.31 shows the use of a device connected to a computer system through the PCI bus.

Fig 2.31: Use of a PCI bus in a computer system.

The PCI bus is connected to the processor bus via a controller called a bridge.
The bridge has a special port for connecting the computer’s main memory. It translates and
relays commands and responses from one bus to the other and transfers data between them.
The PCI bus supports three independent address spaces: memory, I/O, and configuration. The
system designer may choose to use memory-mapped I/O even with a processor that has a
separate I/O address space.
The PCI bus is designed primarily to support multiple-word transfers. A Read or a Write
operation involving a single word is simply treated as a burst of length one.
The PCI bus uses the same lines to transfer both address and data. The address is needed only
long enough for the slave to be selected, freeing the lines for sending data in subsequent
clock cycles.

For transfers involving multiple words, the slave can store the address in an internal register
and increment it to access successive address locations.
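The idea of latching the address once and incrementing it internally for each word of the burst can be modelled with the following C sketch; this is only a conceptual illustration, not actual PCI target hardware or a real API.

    #include <stdint.h>

    /* Conceptual model (an assumption for illustration): the target latches the
     * address during the address phase, then advances an internal register by
     * one word (4 bytes) for each data phase of the burst.                    */
    typedef struct {
        uint32_t latched_addr;   /* address captured during the address phase */
    } pci_target_t;

    void address_phase(pci_target_t *t, uint32_t addr)
    {
        t->latched_addr = addr;          /* address appears on the AD lines once */
    }

    uint32_t read_data_phase(pci_target_t *t, const uint32_t *memory)
    {
        uint32_t word = memory[t->latched_addr / 4];  /* word at current address */
        t->latched_addr += 4;                         /* advance to the next word */
        return word;
    }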


Data transfer using PCI Bus


The bus master, which is the device that initiates data transfers by issuing Read and Write
commands, is called the initiator in PCI terminology. The addressed device that responds to
these commands is called a target.
The main bus signals used for transferring data are listed in Table 2.1

Table 2.1: Data transfer signals on the PCI Bus

The target-ready signal, TRDY#, is equivalent to the Slave-ready signal. The initiator-ready
signal, IRDY#, is equivalent to Master-ready; it is provided to support burst transfers.

A complete transfer operation on the PCI bus, involving an address and a burst of data, is
called a transaction.

A bus transaction in which an initiator reads four consecutive 32-bit words from the
memory is illustrated in Fig 2.32.

Fig 2.32: Read operation on the PCI bus.


Clock cycle 1:

The bus master, acting as the initiator, asserts FRAME# to indicate the beginning of a
transaction.
At the same time, it sends the address on the AD lines and a command on the C/BE# lines.

Clock cycle 2:

The initiator removes the address, disconnects its drivers from the AD lines, and asserts
IRDY# to indicate that it is ready to receive data.
The selected target asserts DEVSEL# to indicate that it has recognized its address and is
ready to respond. It enables its drivers on the AD lines, so that it can send data to the initiator in
subsequent cycles.

Clock cycle 3:

The target asserts TRDY# and begins to send data. It maintains DEVSEL# in the asserted state
until the end of the transaction.
This assumes the target is ready to send data in clock cycle 3; if not, it would delay asserting TRDY# until it is ready.
During the data phases, the C/BE# lines are used as byte enables.

Clock cycles 4 and 5:

The signals remain unchanged while the second and third words of the burst are transferred, one word per clock cycle.

Clock cycle 6:

After sending the fourth word, the target deactivates TRDY# and DEVSEL# and disconnects
its drivers on the AD lines.

DEVICE CONFIGURATION

A PCI interface includes a small configuration ROM memory that stores information about
the I/O device connected to it. The configuration ROMs of all devices are accessible in the
configuration address space, where they are read by the PCI initialization software whenever
the system is powered up or reset. By reading the information in the configuration ROM, the
software determines whether the device is a printer, a camera, an Ethernet interface, or a disk
controller.

Devices connected to the PCI bus are not assigned permanent addresses that are built
into their I/O interface hardware. Instead, device addresses are assigned by software during
the initial configuration process. This means that when power is turned on, devices cannot be
accessed using their addresses in the usual way, as they have not yet been assigned any
address. A different mechanism is used to select I/O devices at that time.

The PCI bus may have up to 21 connectors for I/O device interface cards to be
plugged into. Each connector has a pin called Initialization Device Select (IDSEL#). This pin
is connected to one of the upper 21 address/data lines, AD11 to AD31. A device interface
responds to a configuration command if its IDSEL# input is asserted.


The configuration software scans all 21 locations to identify where I/O device interfaces are
present. For each location, it issues a configuration command using an address in which the
AD line corresponding to that location is set to 1 and the remaining 20 lines are set to 0. If a
device interface responds, it is assigned an address and that address is written into one of its
registers designated for this purpose. Using the same addressing mechanism, the processor
reads the device’s configuration ROM and carries out any necessary initialization. It uses the
low-order address bits, AD0 to AD10, to access locations within the configuration ROM.
This automated process means that the user simply plugs in the interface board and turns on
the power; the configuration software does the rest.
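A rough sketch of this scanning procedure is shown below in C. The device_present array, the address values, and the printed message are assumptions used only to illustrate the flow of the scan, not real PCI software.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Simulated slots: true means a device interface responds on that AD line. */
    static bool device_present[32];

    /* One configuration command is issued per IDSEL#/AD line, AD11..AD31. */
    void scan_pci_slots(void)
    {
        uint32_t next_base = 0x1000;        /* assumed start of assignable addresses */

        for (uint32_t line = 11; line <= 31; line++) {
            if (!device_present[line])      /* no response to the configuration command */
                continue;

            /* A responding device is assigned an address, which the software
             * writes into one of its configuration registers; its configuration
             * ROM is then read to carry out any necessary initialization.        */
            printf("device on AD%u assigned base 0x%X\n",
                   (unsigned)line, (unsigned)next_base);
            next_base += 0x100;
        }
    }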

SMALL COMPUTER SYSTEM INTERFACE (SCSI)

SCSI refers to a standard bus defined by ANSI (the American National Standards Institute). The SCSI bus may be used to connect a
variety of devices to a computer. It is particularly well suited for use with disk drives.

Data Transfer

Devices connected to the SCSI bus are not part of the address space of the processor in the
same way as devices connected to the processor bus or to the PCI bus.

A SCSI bus may be connected directly to the processor bus, or more likely to another standard
I/O bus such as PCI, through a SCSI controller.
Data and commands are transferred in the form of multi-byte messages called packets.
To send commands or data to a device, the processor assembles the information in memory and
then instructs the SCSI controller to transfer it to the device.

The following is a simplified high-level description. Assume the processor wishes to read a block of data
from a disk drive and that these data are stored in two disk sectors that are not contiguous.
The processor sends a command to the SCSI controller, which causes the following sequence
of events to take place:
1. The SCSI controller, acting as the initiator, contends for control of the SCSI bus.
2. When the initiator wins the arbitration process, it selects the target controller and hands
over control of the bus to the disk controller.
3. The target starts an output operation; in response, the initiator sends a command specifying the required read operation.
4. Since the disk controller must first perform a seek operation, it suspends control of the bus
and the logical connection with the SCSI controller.
5. Once the disk controller completes the seek and locates the data, it needs the bus again to
transfer the data to the initiator.
6. The target (disk controller) therefore takes part in bus arbitration again and gains control of the
bus when it wins the arbitration.
7. After gaining control of the bus, the target reselects the initiator and restores the suspended connection.
8. The target starts sending the remaining data to the initiator.
9. After the data transfer is completed, the target terminates the connection with the initiator.
10. The initiator (SCSI controller) transfers the data to main memory using DMA
and then notifies the processor by raising an interrupt.


The SCSI bus signals used to exchange control information between the initiator and the target are shown in Table 2.2.

Table 2.2: The SCSI Bus signals


ARBITRATION:
Fig 2.23 shows bus arbitration and selection on the SCSI bus. Controllers 2 and 6
request the bus by simultaneously asserting the –BSY signal and their corresponding data lines,
–DB2 and –DB6. The highest-priority device wins the arbitration. If two devices of the same
priority contend, a daisy-chain arrangement may be used to resolve the tie. Here device 6 has the
higher priority and wins the arbitration. Since it wants to communicate with device 5, it then
proceeds to select device 5 and establish a connection with it.

SELECTION:
After winning the arbitration, controller 6 keeps –BSY and its own data line –DB6 asserted; it
indicates that it wishes to select device 5 by asserting –SEL and –DB5, and then releases –BSY. The
selected target controller responds by asserting –BSY. The initiator controller now removes its
address from the bus.

INFORMATION TRANSFER:
The target controller asserts the –I/O signal during an input operation and uses the –C/D signal to
indicate the type of information being transferred. At the end of the transfer, the target controller
releases the –BSY signal, and the bus is then free to be used by any other device.


Fig 2.23: Arbitration and selection on the SCSI bus; device 6 wins the arbitration.

RESELECTION
After the current information transfer is complete, the target may suspend the logical connection with the
initiator. If either the initiator or the target then wants to transfer more data, the
arbitration and selection processes must be repeated to re-establish the connection.

UNIVERSAL SERIAL BUS (USB)


The Universal Serial Bus (USB) is the most widely used interconnection standard. A large
variety of devices are available with a USB connector. The USB has been designed to meet
several key objectives:
• Provide a simple, low-cost, and easy to use interconnection system
• Accommodate a wide range of I/O devices and bit rates, including Internet
connections, and audio and video applications
• Enhance user convenience through a “plug-and-play” mode of operation.

Port limitation:
General-purpose ports allow low-to-medium-speed devices to be connected to a computer. Only a
few such ports are provided in a computer because of physical space restrictions. The
objective of the USB is to make it possible to add many devices to the system at any time
without the need to rearrange the motherboard.

USB Architecture and addressing:


The USB uses point-to-point connections and a serial transmission format. When multiple
devices are connected, they are arranged in a tree structure as shown in Fig 2.24.
Each node of the tree has a device called a hub, which acts as an intermediate transfer point
between the host computer and the I/O devices. At the root of the tree, a root hub connects the
entire tree to the host computer. The leaves of the tree are the I/O devices.
If I/O devices are allowed to send messages at any time, two messages may reach the hub at
the same time and interfere with each other.


The USB operates strictly on the basis of polling. A device may send a message only in response
to a poll message from the host processor. Hence, no two devices can send messages at the same
time. This restriction allows hubs to be simple, low-cost devices.

Fig 2.24: Universal serial bus tree structure.

Each device on the USB, whether it is a hub or an I/O device, is assigned a 7-bit address. This
address is local to the USB tree and is not related in any way to the processor’s address space.
The root hub of the USB, which is attached to the processor, appears as
a single device to the processor.

The host software communicates with individual devices by sending information to the root
hub, which it forwards to the appropriate device in the USB tree.

When a device is first connected to a hub, or when it is powered on, it has the address 0.
Periodically, the host polls each hub to collect status information and learn about new devices
that may have been added or disconnected.

When the host is informed that a new device has been connected, it reads the information in a
special memory in the device’s USB interface to learn about the device’s capabilities. It then
assigns the device a unique USB address and writes that address in one of the device’s
interface registers.

It is this initial connection procedure that gives the USB its plug-and-play capability.
Each USB device has addressable locations, such as its status, control, and data registers, called
endpoints; these are identified by 4-bit identifiers, so a device can have up to 16 I/O endpoints.
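The address-assignment part of this procedure can be sketched as follows; the function name, the address counter, and the printed message are invented for illustration and are not part of any real USB host software.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative model of USB address assignment (an assumption, not a real
     * USB host stack). Newly attached devices initially respond at address 0. */
    #define MAX_ADDR 127               /* device addresses are 7 bits wide     */

    static uint8_t next_address = 1;   /* address 0 is reserved for new devices */

    /* Called when a periodic poll of a hub reports a newly attached device.
     * The host reads the device's capabilities at address 0, then writes a
     * unique 7-bit address into one of the device's interface registers.      */
    uint8_t enumerate_new_device(void)
    {
        if (next_address > MAX_ADDR)
            return 0;                            /* no free addresses left      */

        uint8_t assigned = next_address++;
        printf("new device: capabilities read at address 0, assigned %u\n",
               (unsigned)assigned);
        return assigned;
    }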

USB operation:
USB operation is based on two mechanisms: a polling mechanism and split-traffic
operation.


Under polling, a device can send a message only when it receives a poll message from the host.
Upstream (device-to-hub) messages therefore do not interfere with one another.
Polling alone is suitable for slow-speed devices.
Split-traffic operation serves a mix of low-speed and high-speed devices. Consider Fig 2.25: while
the hub communicates with the low-speed device D, which takes several clock cycles, no
other data transaction could take place, reducing the effectiveness of the high-speed link.

Fig 2.25: Split-bus operation in USB.

To avoid this, split-traffic operation allows the root hub to carry out other high-speed transfers, for
example with device C, while the data are being forwarded to the low-speed device D.
The USB provides bidirectional communication links between application software and I/O devices;
these links are called pipes.

USB PROTOCOL:

Information is transferred over the USB as packets. There are three types of packets: control
packets, data packets, and acknowledgement packets.
A control packet carries control information and is also called a token. The
USB packet formats are shown in Fig 2.26.
For an output operation, the host sends an OUT control packet to the hub, followed by a data packet. The hub sends an
acknowledgement (ACK) packet to the host once it has received the data packet completely and without errors.
After sending the ACK packet, the hub forwards the control and data packets down the
tree. Only the addressed device accepts the packets, and it sends an ACK packet back to the hub if the received
packet is error free.
Corrupted or lost data packets are resent by the host/hub to complete the
transaction if an ACK is not received in time.
An input operation (device to root) is carried out in the same manner, using an IN token and
communication in the reverse direction. The timing diagram for an output operation is shown
in Fig 2.27.
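The retry behaviour described above can be sketched as follows; send_out_token_and_data and the retry limit are assumptions made only for this illustration, not part of the USB specification.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative model of an OUT transaction with retry; the function below
     * merely simulates the bus and is an assumption for this sketch.          */
    #define MAX_RETRIES 3

    static bool send_out_token_and_data(int attempt)
    {
        /* Pretend the first attempt is lost or corrupted, later ones succeed. */
        return attempt > 0;
    }

    bool usb_out_transfer(void)
    {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            bool ack_received = send_out_token_and_data(attempt);
            if (ack_received) {
                printf("ACK received on attempt %d\n", attempt + 1);
                return true;               /* transaction complete             */
            }
            /* No ACK in time: resend the token and data packets.              */
        }
        return false;                      /* give up after MAX_RETRIES tries  */
    }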


Fig 2.26: USB packet format.

Fig 2.27: Timing diagram for an output operation to a USB device connected through a hub.
