Unit 3 COA
The document discusses the Input-Output Interface, which facilitates data transfer between a computer's internal storage and external peripheral devices, highlighting the importance of synchronization and error detection. It also covers I/O modules that manage communication between CPUs and networks, detailing their core functions such as error detection, data buffering, and device communication. Additionally, it explains the differences between memory-mapped I/O and isolated I/O, as well as the roles of PCI and SCSI in connecting various peripheral devices.


Unit 3

Input-Output Interface
The Input-Output Interface is the mechanism for transferring information between the computer's internal storage, i.e. memory, and external peripheral devices. A peripheral device is a device that provides input to or output from the computer; it is also known as an input-output device. In a computer system, this interface exists as a dedicated hardware component placed between the system bus and the peripherals. This component is known as the "interface unit".
The figure below shows a typical input-output interface between the processor and several
peripherals:

Basic input-output interface


In the above figure, every peripheral device has an interface unit associated with it.
For example, a mouse or keyboard that supplies data to the computer is called an input device,
while a printer or monitor that presents data from the computer is called an output device.
Why Input-Output Interface?
We require the input-output interface because there are many differences between each peripheral and the central computer that must be resolved during data transfer. Some significant differences between peripherals and the CPU are:
• The CPU is electronic in nature, while peripheral devices are electromechanical and
electromagnetic, so the modes of operation of the CPU and peripheral devices differ
considerably.
• A synchronization mechanism is needed because the data transfer rate of peripheral
devices is slower than that of the CPU.
• The data codes and formats used in peripheral devices differ from those used in the CPU.
• The operating modes of peripheral devices differ from one another, and each must be
controlled so as not to disturb the operation of the other peripheral devices connected
to the CPU.
Functions of Input-Output Interface
The primary functions of the input-output interface are listed below:
• It synchronizes the operating speed of the CPU with that of the peripherals.
• It selects the appropriate peripheral for a given instruction and interprets its signals.
• It provides timing and control signals.
• It buffers data passing over the data bus.
• It performs error detection.
• It converts serial data to parallel form and vice versa.


WHAT IS AN I/O MODULE?

Input/Output Modules, or I/O Modules, manage the communication between a CPU and a network,
including the transfer of data, the management of power loads, and the control of machine functions.

This enables system integrators to connect disparate devices, allowing greater control of the industrial
network. I/O modules are especially helpful in instances where there exists legacy machinery, devices,
and systems that are unable to natively communicate with a desired industrial protocol.

I/O modules help to extend a manufacturer's network to incorporate all manufacturing equipment,
enabling greater control of the system as well as increased operational visibility. They also overcome the
challenge of collecting peripheral data, which can come in various amounts, at different speeds, and in
varying formats.

Some of the devices from which data can be collected using an I/O module include sensors, actuators,
monitors, and valves. I/O modules can also work as accessory devices for PLCs and HMIs.
WHAT ARE THE CORE FUNCTIONS OF AN I/O MODULE?

I/O modules offer a variety of key functions within an industrial environment. Without Input/Output
modules, organizations would be unable to exchange data between peripheral devices and their
network. Below we outline the critical functionality of I/O modules:

• Detecting Errors: I/O modules have the ability to detect errors and report them to the CPU. One
way that I/O modules detect errors is with the parity bit method.
• Processor Communication: This critical function of an I/O module involves a few components:
o Command Decoding: Receive and decode commands sent from the processor.
o Data Exchange: Exchange data between peripherals, processors, and the main memory.
o Status Reporting: Communicate the status of peripherals to the processor.
o Address Decoding: Organizes each of the peripherals connected to the I/O module by
managing their unique addresses.
• Buffering Data: With data buffering, I/O modules can manage the transfer speed of data sent by
the processor to peripheral devices. This compensates for the latency of peripheral devices.
• Device Communication: I/O modules can facilitate communication between connected
peripheral devices.
• Control and Timing: I/O modules are designed to manage the data transactions between the
internal system and peripheral devices.
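The parity-bit method of error detection mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular module's implementation; the function names are made up for the example.

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1s in the frame is even."""
    parity = sum(data_bits) % 2
    return data_bits + [parity]

def check_even_parity(frame):
    """Return True if the frame still has even parity (no single-bit error seen)."""
    return sum(frame) % 2 == 0

frame = add_even_parity([1, 0, 1, 1, 0, 0, 1, 0])  # four 1s -> parity bit is 0
assert check_even_parity(frame)

corrupted = frame.copy()
corrupted[3] ^= 1                 # flip one bit "in transit"
assert not check_even_parity(corrupted)
```

Note that parity detects any odd number of flipped bits but misses an even number; that is the trade-off an I/O module accepts for such a cheap check.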

Memory mapped I/O and Isolated I/O


The CPU needs to communicate with various memory and input-output (I/O) devices, and data between the processor and these devices flows over the system bus. There are
three ways in which the system bus can be allotted to them:
1. Separate sets of address, control, and data buses for I/O and memory.
2. A common bus (data and address) for I/O and memory, but separate control lines.
3. A common bus (data, address, and control) for I/O and memory.
The first case is simple because memory and I/O have separate address spaces and instructions, but it
requires more buses.
Isolated I/O –
In isolated I/O there is a common bus (data and address) for I/O and memory, but separate read and write control lines for I/O. When the CPU decodes an instruction whose data is for I/O, it places the address on the address lines and asserts the I/O read or I/O write control line, causing a data transfer between the CPU and the I/O device. Because the address spaces of memory and I/O are kept separate (isolated), the scheme gets its name. I/O addresses in this scheme are called ports, and there are different read/write

instructions for I/O and for memory.


Memory Mapped I/O –
In this case every bus line is common, so the same set of instructions works for both memory and
I/O. We therefore manipulate I/O exactly as we do memory, and both share a single address space;
as a result, the addressing capability available to memory is reduced, because part of the address
space is occupied by I/O.

Differences between memory mapped I/O and isolated I/O –

Isolated I/O | Memory Mapped I/O
Memory and I/O have separate address spaces | Both have the same address space
All addresses can be used by the memory | Addressable memory becomes less, because I/O occupies part of the address space
Separate instructions control read and write operations in I/O and in memory | The same instructions can control both I/O and memory
I/O addresses are called ports | Normal memory addresses serve both
More efficient, due to separate buses | Less efficient
Larger in size, due to more buses | Smaller in size
More complex, since separate logic is used to control both | Simpler logic, since I/O is treated as memory
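The contrast in the table can be made concrete with a toy simulation. Everything here is illustrative: the addresses, the 0xFF00 device window, and the function names are invented for the sketch; `port_out`/`port_in` merely play the role of dedicated I/O instructions such as x86's OUT/IN.

```python
# --- Memory-mapped I/O: one address space, shared read/write operations ---
RAM = {}
DEVICE_REGS = {}                     # hypothetical devices mapped at 0xFF00 and up

def bus_write(addr, value):
    # The same operation reaches RAM or a device; only the address decides.
    (DEVICE_REGS if addr >= 0xFF00 else RAM)[addr] = value

def bus_read(addr):
    return (DEVICE_REGS if addr >= 0xFF00 else RAM).get(addr, 0)

# --- Isolated I/O: ports live in their own space with dedicated operations ---
PORTS = {}

def port_out(port, value):           # stands in for a dedicated OUT instruction
    PORTS[port] = value

def port_in(port):                   # stands in for a dedicated IN instruction
    return PORTS.get(port, 0)

bus_write(0x1234, 42)                # lands in RAM
bus_write(0xFF01, 7)                 # same operation, lands in a device register
port_out(0x60, 99)                   # separate operation, separate address space
assert bus_read(0x1234) == 42 and bus_read(0xFF01) == 7
assert port_in(0x60) == 99 and 0x60 not in RAM
```

The key point the sketch shows: under memory mapping the device "costs" addresses out of the memory range, while isolated I/O keeps the full memory range but needs its own instructions.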
Peripheral Component Interconnect (PCI)

Developed by Intel Corporation, the Peripheral Component Interconnect (PCI) standard is an industry-standard, high-speed bus found in nearly all desktop computers. PCI slots allow you to
install a wide variety of expansion cards, including:

• Graphics or Video cards


• Sound cards
• Network cards
• SCSI cards
• Many other types of cards

Furthermore, PCI automatically configures cards to work properly with other PCI cards in any
computer.

NOTE: The ability of the computer to automatically configure devices is termed Plug and Play. Plug
and Play differs from the earlier Industry-Standard Architecture (ISA) expansion card bus, which
often required the user to configure jumpers and other low-level software settings.
PCI cards come in both 32-bit and 64-bit versions, as well as both 33 MHz and 66 MHz speeds.
Running at 32 bits and 33 MHz, PCI yields a throughput rate of 133 MBps.
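The 133 MBps figure follows directly from the bus width and clock: the PCI clock is nominally 33 1/3 MHz, and each clock moves one bus-width of data. A quick check of the arithmetic:

```python
def pci_throughput_mbps(width_bits, clock_mhz):
    """Peak throughput: bytes per clock times clocks per second (in MB/s)."""
    return width_bits / 8 * clock_mhz

assert round(pci_throughput_mbps(32, 33.33)) == 133   # base 32-bit / 33 MHz PCI
assert round(pci_throughput_mbps(64, 66.66)) == 533   # 64-bit / 66 MHz variant
```

These are peak (burst) rates; sustained throughput is lower once bus arbitration and addressing overhead are counted.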

Function of PCI:
PCI slots are used to install sound cards, Ethernet and wireless cards, and nowadays solid-state
drives using NVMe technology to provide SSD speeds many times faster than SATA SSD speeds.
PCI slots also allow discrete graphics cards to be added to a computer.
PCI slots (and their variants) let you add expansion cards to a motherboard. The
expansion cards extend the machine's capabilities beyond what the motherboard can provide
alone, such as enhanced graphics, improved sound, additional USB and hard-drive
controllers, and extra network interface options, to name a few.
Advantages of PCI:
• You can connect a maximum of five components to the PCI bus, and you can also
replace each of them with fixed devices on the motherboard.
• You can have multiple PCI buses on the same computer.
• The PCI bus improves transfer speeds from 33 MHz to 133 MHz,
with a transfer rate of up to 1 gigabyte per second.
• PCI can handle devices using a maximum of 5 volts, and the pins used can
carry more than one signal through a single pin.
Disadvantages of PCI:
• A PCI graphics card cannot access system memory directly.
• PCI does not support pipelining.

Small Computer System Interface (SCSI)

Overview

The small computer system interface operates locally as an input and output (I/O) bus that uses a
common command set to transfer controls and data to all devices. The main purpose of this
interface, called the SCSI bus, is to provide host computer systems with connections to a variety of
peripheral devices, including disk subsystems, tape subsystems, printers, scanners, optical devices,
communication devices, and libraries.

The SCSI bus design for the library provides a peer-to-peer, I/O interface that supports up to 16
devices and accommodates multiple hosts.

Peer-to-peer interface communication can be from:

• Host to host
• Host to peripheral device
• Peripheral device to peripheral device

SCSI terms defining communication between devices on the SCSI bus include:

• Initiator is the device that requests an operation.


• Target is the device that performs the operation requested.

Some targets are control units that can access one or more physical or virtual peripheral devices
addressable through the control unit. These peripheral devices are called logical units and are
assigned specific addresses or logical unit numbers (LUNs).

The library supports SCSI-3 commands.

The library and the tape drives have separate connections for attachment to the SCSI bus. Daisy-
chain cables are available to interconnect devices on the SCSI bus but keep the total cable length to
a minimum. The following figure shows an example of a library and four tape drives that are daisy-
chained to two initiators (or hosts). It is recommended that the drives be connected to a separate
SCSI bus from the library.

Figure 1-1 Example of a Library Configuration on the SCSI Bus

Benefits

A small computer system interface also provides these benefits:

• Low overhead
• High transfer rates
• A high-performance buffered interface
• Conformance to industry standards
• Plug compatibility for easy integration
• Error recovery, parity, and sequence checking provide high reliability
• Provisions in the command set for vendor-unique fields
• Standard or common command sets with an intelligent interface that provides device
independence

Implementation

Implementation of the SCSI bus for the library supports:

• 8-bit wide transfers, asynchronous; 16-bit wide selection


• Disconnect and reselect
• Multiple initiator
• Hard resets
• Single-ended LVD
• SCSI-3, 68-pin P-cable

Implementation for the library does not support:

• Soft resets
• Command queuing
• Command linking
• Asynchronous event notification

Data transmission

The process of sending data between two or more digital devices is known as data transmission. Data
is transmitted between digital devices using one of the two methods − serial transmission or parallel
transmission.

In serial transmission, data bits are sent one after the other across a single channel. Parallel data
transmission distributes numerous data bits through various channels at the same time.

What is Serial Transmission?

A serial transmission transfers data one bit at a time, consecutively, via a communication channel or
computer bus in telecommunication and data transmission. On the other hand, parallel
communication delivers multiple bits as a single unit through a network with many similar channels.

• One character (typically 8 bits) is conveyed at a time in serial transmission, framed by a start bit and a stop bit.
• All long-distance communication and most computer networks employ serial communication.
• Serial computer buses are becoming more common, even across shorter distances, since
newer serial technologies' greater signal integrity and transmission speeds have begun to
outperform the parallel bus's simplicity advantage.
• The majority of communication systems use serial mode. Serial networks may be extended
over vast distances for far less money since fewer physical wires are required.
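The framing just described — a start bit before each 8-bit character and a stop bit after it — can be sketched directly. The function name and the LSB-first bit order are the common convention for asynchronous serial lines (e.g. UARTs), used here for illustration.

```python
def frame_byte(byte):
    """Frame one character for asynchronous serial transmission:
    start bit (0), then 8 data bits LSB-first, then stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]

bits = frame_byte(ord('A'))       # 'A' = 0x41 = 0b01000001
assert bits == [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert len(bits) == 10            # 1 start + 8 data + 1 stop
```

Note the overhead: 10 line bits carry 8 data bits, so 20% of the channel is spent on framing — the price of not sharing a clock.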

What is Parallel Transmission?

Parallel communication is a means of transmitting multiple binary digits (bits) simultaneously in data
transmission. It differs from serial communication, which sends only one bit at a time; this distinction
is one method to classify a communication channel.

• A parallel interface comprises parallel wires that individually carry data, along with
additional wires that allow the transmitter and receiver to coordinate. The wires of a
single transmission system are bundled into one physical cable to simplify installation
and troubleshooting.
• A large amount of data can be delivered across the connection lines at high speeds that
match the underlying hardware.
• The data stream must be transmitted through "n" communication lines, which necessitates
many wires. This makes it an expensive mode of transmission, so it is usually limited to
shorter distances.

Difference between Serial and Parallel Transmission

The following table highlights the major differences between Serial and Parallel Transmission −

Key | Serial Transmission | Parallel Transmission
Definition | Serial transmission is the mode in which a single communication link is used to transfer data from one end to the other. | Parallel transmission is the mode in which multiple parallel links are used, each transmitting one bit of the data simultaneously.
Bit transmission | Only one bit is transferred per clock pulse. | 8 bits are transferred per clock pulse.
Cost efficiency | A single link is used, so it can be implemented easily without a huge cost; it is cost efficient. | Multiple links must be implemented, so it is not cost efficient.
Performance | One bit is transmitted per clock, so its performance is comparatively lower. | 8 bits are transferred per clock, so it is more efficient in performance.
Preference | Preferred for long-distance transmission. | Preferred only for short distances.
Complexity | Less complex than parallel transmission. | More complex than serial transmission.
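The performance row of the comparison reduces to simple clock counting: an 8-bit-wide parallel bus moves a byte per clock, while a serial line needs eight clocks per byte. A minimal sketch of that arithmetic (the function names are illustrative, and framing overhead is ignored):

```python
def serial_clocks(n_bytes):
    """One bit per clock on a single line."""
    return n_bytes * 8

def parallel_clocks(n_bytes, width_bits=8):
    """width_bits lines each move one bit per clock."""
    return n_bytes * 8 // width_bits

assert serial_clocks(1000) == 8000
assert parallel_clocks(1000) == 1000    # 8x fewer clocks over an 8-bit bus
```

In practice the gap narrows: parallel lines suffer skew between wires at high clock rates, which is one reason fast serial links have overtaken parallel buses.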

Synchronous and Asynchronous Transmission

Both synchronous and asynchronous transmission are serial data transmission techniques; they differ
in how the sender and the receiver keep in step — synchronous transmission relies on a shared clock
signal, while asynchronous transmission frames each character so that no shared clock is required.

Read through this article to find out more about synchronous and asynchronous transmission and how
they are different from each other.

What is Synchronous Transmission?

Synchronous transmission is a technique of data transmission in which a sender transmits a continuous stream of data along with periodic timing signals supplied by an external clocking system, in order to ensure that the transmitter and the receiver are in sync.

In synchronous transmission, data is transferred at predetermined intervals based on a predefined clocking signal. It is intended for the consistent and reliable transmission of time-sensitive data such as VoIP and audio/video streaming.

Because data is delivered in huge blocks rather than individual characters, this type of transmission
method is employed when huge volumes of data need to be transferred quickly. It can provide real-
time communication between a transmitter and a receiver.

The data blocks are separated and clustered at regular intervals, with synchronous characters
preceding them that a remote device decodes and uses to synchronize the connection between the
endpoints.

What is Asynchronous Transmission?

In asynchronous transmission, each character is a self-contained unit with its own start and stop bits
and an arbitrary gap before the next character. Hence, this form of transmission is also known as
"start/stop transmission".

The start and stop bits mark the beginning and end of each character. These extra bits let the
receiver recognize where each transmitted character begins and ends.
In the asynchronous transmission method, data is transferred as individually framed characters rather
than a continuous stream. Since the start bit and the stop bit have opposite polarities, the receiver
can tell when the next unit of data begins.

The following are the two main characteristics of asynchronous communication −

• A start bit precedes each character, and one or more stop bits follow it.
• Spaces frequently separate characters.

Asynchronous transmission, in general, has slower transmission rate, however it is highly flexible
because it does not require the transmitter and receiver to be synchronized.

Asynchronous Data Transfer


Asynchronous data transfer is a data transfer method in which data is sent in a non-continuous,
non-synchronized manner. This means that data is transmitted at irregular intervals, without any
fixed timing or coordination between the sender and the receiver.
Within any individual unit of a digital system, we synchronize the internal operations using a clock
pulse; that is, a clock pulse is supplied to every register within the unit, and all data transfers
among the internal registers occur in step with this clock.
Now assume that two units of the digital system are designed independently, such as a CPU and an I/O
interface. If the internal registers in the I/O interface share a common clock with the CPU registers,
then data transfer between the units (two or more) is said to be synchronous. In most cases, however,
the internal timing of each unit is independent, and each unit uses its own clock for its registers.
In this case the units are asynchronous, and data transfer between them is
called asynchronous data transfer.

Methods of Asynchronous Data Transfer


We have four different methods of Asynchronous data transfer:
1. Strobe Control Method
2. Handshaking Method
3. Asynchronous Serial Transfer
4. Asynchronous Communication Interface
1. Strobe Control Method
The Strobe Control mode of asynchronous data transfer employs only one control line to time each
transfer. This control line is known as a strobe, and we may achieve it either by destination or
source, depending upon the one who initiates the data transfer.
Source-initiated strobe: In the figure below, the source unit initiates the strobe.
Source: coagarage.blogspot.com
As the timing diagram shows, the source unit first places the data on the data bus. After a brief
delay, to ensure the data has settled to a stable value, the source unit activates a strobe pulse.
The strobe control signal and the data bus information remain active long enough for the destination
unit to receive the data.
Destination initiated strobe: In the below figure, we can see that the destination unit initiates the
strobe, and as shown in the timing diagram, the destination unit activates the strobe pulse first by
informing the source to provide the data.

In a destination-initiated transfer, the source unit responds by placing the requested information
on the data bus. The data must be valid and remain on the bus long enough for the
destination unit to receive it. The falling edge of the strobe pulse can again be used to trigger a
destination register. The destination unit then disables the strobe pulse, and finally the source
unit removes the data from the bus after a predetermined time interval.
2. Handshaking method
The strobe control method of asynchronous data transfer has a disadvantage: the source unit
that initiates a transfer has no way of knowing whether the destination unit has actually received
the data placed on the data bus. Similarly, a destination unit that initiates a transfer assumes,
without confirmation, that the source unit has placed the data on the bus.
The handshaking method of data transfer solves this problem. In this method, a
second control line provides a reply to the unit that initiates the transfer.
In the handshaking method, one control line runs in the same direction as the data flow on the bus,
i.e., from source to destination; the source unit uses it to inform the destination unit whether the
data on the bus is valid. The second control line runs in the opposite direction, i.e., from
destination to source; the destination unit uses it to inform the source unit that it has accepted
the data. The sequence of control depends on which unit initiates the transfer.
Source initiated handshaking: In the figure below, we can see that we have two handshaking lines,
"data valid" and "data accepted," generated by the source unit and the destination unit,
respectively.

Source: coagarage.blogspot.com
The timing diagram shows that the source initiates the transfer by placing the data on the data bus
and enabling its data valid signal. The destination then activates the data accepted signal after it
takes the data from the bus. The source next disables the data valid signal, which invalidates the
data on the bus. Finally, the destination disables the data accepted signal, and the system returns
to its initial state. The data accepted signal gives the source certainty that the destination unit
has read the data on the bus.
Destination initiated handshaking: In the figure below, we can see that we have two handshaking
lines, "data valid" and "ready for data" generated by the source unit and the destination unit,
respectively.
In destination-initiated handshaking, the destination unit initiates the transfer, so the source
unit does not place data on the bus until it receives a ready-for-data signal from the destination.
After that, the handshaking process is the same as in the source-initiated case.
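The four-step source-initiated handshake can be modeled as a small simulation. The function names, the `bus` dictionary, and the data value 0x5A are all invented for the sketch; the point is the ordering of the two control signals.

```python
def source_place_data(bus):
    bus["data"] = 0x5A
    bus["data_valid"] = 1              # step 1: data on bus, data-valid raised

def destination_accept(bus):
    assert bus["data_valid"] == 1      # destination only reads while data is valid
    received = bus["data"]
    bus["data_accepted"] = 1           # step 2: destination latches, raises data-accepted
    return received

def source_release(bus):
    bus["data_valid"] = 0              # step 3: bus contents no longer guaranteed

def destination_release(bus):
    bus["data_accepted"] = 0           # step 4: back to the initial state

bus = {"data": None, "data_valid": 0, "data_accepted": 0}
value = None
source_place_data(bus)
value = destination_accept(bus)
source_release(bus)
destination_release(bus)
assert value == 0x5A
assert bus["data_valid"] == 0 and bus["data_accepted"] == 0
```

Unlike the strobe method, each side's action here is gated on the other side's signal, which is exactly the confirmation the strobe method lacks.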

Asynchronous Communication Interface



The block diagram of the asynchronous communication interface is shown above. It functions as both
a transmitter and a receiver.

Parts of the Interface :


The interface is initialized with the help of control bits loaded into the control register. The
transmitter register accepts a data byte from the CPU through the data bus, which is then transferred
to a shift register for serial transmission. Incoming serial information is received into another
shift register and is transferred to the receiver register when a complete data byte has been
accumulated. The bits in the status register are used to check for errors during transmission and
serve as input and output flags that can be read by the CPU. The chip select (CS) input selects the
interface through the address bus. The register select (RS) input works together with the read (RD)
and write (WR) controls; two of the registers are write-only and the other two are read-only.
The register selected is a function of the RS value and the RD and WR status, as shown in the table
below.
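The register-select table itself is not reproduced in this copy of the document, so the decoding below follows the common textbook convention for this interface and should be treated as illustrative, not authoritative:

```python
def select_register(cs, rs, op):
    """Decode chip select (CS), register select (RS), and RD/WR into a register.
    Follows the usual convention: WR selects the write-only registers
    (transmitter, control), RD selects the read-only ones (receiver, status)."""
    if cs == 0:
        return None                    # chip not selected
    if op == "WR":
        return "control" if rs else "transmitter"
    if op == "RD":
        return "status" if rs else "receiver"

assert select_register(1, 0, "WR") == "transmitter"
assert select_register(1, 0, "RD") == "receiver"
assert select_register(1, 1, "WR") == "control"
assert select_register(1, 1, "RD") == "status"
assert select_register(0, 1, "RD") is None
```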

Working of the interface :


The interface is initialized by the CPU sending a byte to the control register. Two bits in the
status register are used as flags: one bit indicates whether the transmitter register is empty, and
the other indicates whether the receiver register is full.
Working of the transmitter portion :
The CPU reads the status register and checks the transmitter flag. If the transmitter register is
empty, the CPU transfers a character to it. The first bit shifted out is set to 0 to generate a
start bit. The character is then transferred in parallel from the transmitter register to the shift
register, and the transmitter is marked empty. The CPU can transfer another character to the
transmitter register after checking the flag in the status register.
Working of receiver portion :
The receive data input is in 1-state when line is idle. The receiver control monitors the receive
data line to detect the occurrence of a start bit. The character bits are then shifted to the shift
register once the start bit has been detected. When the stop bit is received, the character is
transferred in parallel from shift register to the receiver register.
The interface checks for any errors during transmission and sets the appropriate bits in the status
register. The three possible errors the interface checks for are parity error, framing error, and
overrun error.
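The receiver's job — watch for the start bit, shift in the character, verify the stop bit — can be sketched as follows. The function is an illustration of the deframing logic, not any particular chip's behavior; the framing-error check mirrors the error described above.

```python
def receive_frame(line_bits):
    """Deframe one character: skip the idle line, consume the start bit,
    shift in 8 data bits (LSB first), and verify the stop bit."""
    i = 0
    while line_bits[i] == 1:          # idle line is held at 1
        i += 1                        # first 0 is the start bit
    data_bits = line_bits[i + 1 : i + 9]
    if line_bits[i + 9] != 1:
        raise ValueError("framing error: stop bit missing")
    byte = 0
    for bit in reversed(data_bits):   # reassemble, MSB last in the shift
        byte = (byte << 1) | bit
    return byte

line = [1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1]   # idle, then a framed 'A' (0x41)
assert receive_frame(line) == 0x41
```

A real receiver samples mid-bit using a clock many times faster than the line rate; this sketch assumes the bits are already cleanly sampled.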

Modes of Transfer
We store the binary information received through an external device in the memory unit. The
information transferred from the CPU to external devices originates from the memory unit. Although
the CPU processes the data, the target and source are always the memory unit. We can transfer this
information using three different modes of transfer.
1. Programmed I/O
2. Interrupt- initiated I/O
3. Direct memory access( DMA)

1. Programmed I/O :
In this mode the data transfer is initiated by instructions written in a computer
program. An input instruction is required to transfer data from the device to the CPU, and
a store instruction is required to transfer data from the CPU to the device. Data
transfer in this mode requires constant monitoring of the peripheral device by the CPU;
once a transfer has been initiated, the CPU must also keep checking for the possibility
of a new transfer. The CPU therefore stays in a loop until the I/O device indicates that
it is ready for data transfer. Programmed I/O is thus a time-consuming process that keeps
the processor busy needlessly and wastes CPU cycles.
This drawback can be overcome by using an interrupt facility, which forms the basis of
interrupt-initiated I/O.
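The busy-wait loop at the heart of programmed I/O can be sketched with a toy device. The `Device` class and its becoming ready after three status polls are invented for the example; the point is the wasted polls the CPU spends spinning on the status flag.

```python
import itertools

class Device:
    """Toy device whose ready flag comes up only after a few status polls."""
    def __init__(self):
        self._poll = itertools.count()
    def ready(self):
        return next(self._poll) >= 3      # busy for the first 3 polls
    def read_data(self):
        return 0xAB

def programmed_io_read(device):
    polls = 0
    while not device.ready():             # CPU spins here, doing no useful work
        polls += 1
    return device.read_data(), polls

value, wasted_polls = programmed_io_read(Device())
assert value == 0xAB and wasted_polls == 3
```

Every iteration of that `while` loop is a CPU cycle spent checking a flag instead of computing — exactly the waste that interrupt-initiated I/O eliminates.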

2. Interrupt Initiated I/O :


This mode uses an interrupt facility and special commands to instruct the interface to issue
an interrupt when data becomes available and the interface is ready for the data
transfer. In the meantime, the CPU keeps executing other tasks and need not check the
flag. When the flag is set, the interface informs the CPU and an interrupt is initiated. This
interrupt causes the CPU to deviate from what it is doing to respond to the I/O transfer.
The CPU responds to the signal by storing the return address from the program counter
(PC) onto the memory stack and then branching to the service routine that processes the I/O
request. After the transfer is complete, the CPU returns to the task it was previously
executing. The branch address of the service routine can be chosen in two ways, known as
vectored and non-vectored interrupts. In a vectored interrupt, the interrupting source
supplies the branch information to the CPU, while in a non-vectored interrupt the branch
address is assigned to a fixed location in memory.

3. DMA (Direct Memory Access) :

DMA is a special feature within the computer system that transfers data between memory and
peripheral devices (like hard drives) without the intervention of the CPU. In other words, for large
data transfers such as those of disk drives, it would be wasteful to use an expensive general-purpose
processor to watch status bits and feed data into a controller register one byte at a time,
which is what programmed I/O does. To avoid burdening the CPU, computers shift this work to a
Direct Memory Access controller. Let's see how this works in detail.
To initiate a DMA transfer, the host writes a DMA command block into memory. This
block contains a pointer to the source of the transfer, a pointer to the destination of
the transfer, and a count of the number of bytes to be transferred. The command block
can be more complex, including a list of source and destination addresses that are
not contiguous. The CPU writes the address of this command block to the DMA controller and
goes on with other work. The DMA controller proceeds to operate the memory bus directly,
placing addresses on it without the intervention of the CPU. Nowadays a simple DMA
controller is a standard component in all modern computers.
Block Diagram of DMA

Handshaking between the device controller and the DMA controller is performed
via a pair of wires called DMA-request and DMA-acknowledge. Let's see the role of
these wires in a DMA transfer.
Working of DMA Transfer
The device controller places a signal on the DMA-request wire when a word of data is
available for transfer. This causes the DMA controller to seize the memory bus of the CPU,
place the desired address on the memory-address wires, and place a signal on the
DMA-acknowledge wire. Upon successful data transfer, the device controller receives the
DMA acknowledgement and then removes the DMA-request signal.
When the entire transfer is finished, the DMA controller interrupts the CPU. This entire process
is depicted in the diagram above. While the DMA controller seizes the memory bus, the CPU is
momentarily prevented from accessing main memory, although it can still access the data items
in its cache. Although this cycle stealing (seizing the memory bus temporarily and preventing the CPU
from accessing it) slows down CPU computation, shifting the data transfer to the DMA
controller generally improves total system performance. Some computer
architectures use physical memory addresses for DMA, but others use virtual addresses
(DVMA); direct virtual memory access can perform a transfer between two memory-mapped I/O
devices without using main memory.
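The request/acknowledge cycle described above can be modeled as a small simulation. The function names, the word list, and the base address are all invented for the sketch; what it shows is that the words move without the CPU and only a single interrupt fires at the end.

```python
def dma_transfer(source_words, memory, base_addr):
    """Sketch of a DMA block transfer: per word, the device raises DMA-request,
    the controller seizes the bus and moves the word, the device drops the
    request; the CPU is interrupted only once, when the whole block is done."""
    interrupts = 0
    for offset, word in enumerate(source_words):
        # device raises DMA-request; controller seizes the bus,
        # places the address, and moves the word (cycle stealing)
        memory[base_addr + offset] = word
        # controller raises DMA-acknowledge; device drops DMA-request
    interrupts += 1                     # single interrupt on completion
    return interrupts

memory = {}
n_interrupts = dma_transfer([10, 20, 30, 40], memory, base_addr=0x100)
assert memory[0x100] == 10 and memory[0x103] == 40
assert n_interrupts == 1    # vs. one poll or interrupt per word without DMA
```

Compare this with the programmed-I/O sketch earlier: there the CPU touched every word; here it is involved exactly twice, at setup and at the completion interrupt.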


What is a DMA Controller?


Direct memory access uses dedicated hardware to access memory; that hardware is called a
DMA controller. Its job is to transfer data between input/output devices and main memory
with very little interaction from the processor. The DMA controller is a control unit
dedicated to transferring data.
DMA Controller Diagram in Computer Architecture
The DMA controller is a type of control unit that works as an interface between the data
bus and the I/O devices. As mentioned, the DMA controller transfers the data without the
intervention of the processor, although the processor initiates and retains overall control
of the transfer. The DMA controller also contains an address unit, which generates the
address and selects an I/O device for the transfer of data. The block diagram of the DMA
controller is shown below.

Block Diagram of DMA Controller

Data Transmission Modes in the DMA Controller

Data transmission in the DMA controller can be done in three modes: burst mode, cycle
stealing mode, and transparent mode.

Burst Mode

In this mode, a complete data block is transmitted in one continuous sequence. Once the
CPU permits the DMA controller to access the system bus, the controller transmits all the
bytes in the data block before releasing the system buses back to the CPU, which leaves
the CPU inactive for a relatively long time. For this reason the mode is also known as
Block Transfer Mode.
Cycle Stealing Mode

Cycle stealing mode is mainly used in systems where the CPU cannot be stalled for the time
a burst transfer takes. In this mode, the DMA controller obtains access to the system bus
using the Bus Request and Bus Grant signals, the same signals that control the interface
between the DMA controller and the CPU in burst mode. Block transmission is slower than in
burst mode, but the CPU's idle periods are much shorter.
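A simple way to see the trade-off between burst mode and cycle stealing is to model the CPU stalls each mode causes. This is a rough model with made-up cycle counts, not real timings:

```python
def cpu_stalls(n_words, mode):
    """Rough model of CPU stalls: burst mode holds the bus for the whole
    block; cycle stealing takes one bus cycle per word, spread out."""
    if mode == "burst":
        return [n_words]          # one long stall, CPU idle throughout
    if mode == "cycle_stealing":
        return [1] * n_words      # many short one-cycle stalls
    raise ValueError(mode)

burst, steal = cpu_stalls(8, "burst"), cpu_stalls(8, "cycle_stealing")
# Same total stolen cycles, but the longest contiguous CPU stall differs.
print(sum(burst), max(burst), sum(steal), max(steal))  # 8 8 8 1
```

The total bus time stolen is identical; what changes is how long the CPU is locked out at a stretch, which is exactly the distinction the two modes make.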

Transparent Mode

This mode takes the most time to transmit a data block; however, it is also the most
favourable mode for overall system performance. In this mode, the DMA controller transmits
data only when the CPU is executing operations that do not use the system buses.

The main benefit of this mode is that the CPU never stops executing its programs and DMA
transfers are free in terms of time; the drawback is that the hardware needed to decide
when the CPU is not using the system buses can be complex. This mode is also known as
hidden DMA data transfer mode.

Interleaved DMA

Interleaved DMA allows multiple devices to transfer data concurrently. Unlike traditional
DMA methods, where only one device can access the memory at a time, interleaved DMA
enables parallel data transfers from multiple sources.

In interleaved DMA, data is divided into smaller blocks or packets, which are transferred
alternately between different devices. This ensures efficient memory bus utilization and
reduces the bottlenecks that sequential transfers can cause.

By interleaving data transfers, this method optimizes overall system performance,
minimizing idle time and maximizing throughput. It is particularly beneficial when
real-time processing and high-speed data transfer are crucial.
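The packet-alternation idea above can be sketched as a round-robin merge of per-device packet lists; the packet labels are purely illustrative:

```python
from itertools import zip_longest

def interleave(streams):
    """Round-robin packets from several devices onto the memory bus:
    one packet from each device per pass, skipping exhausted devices."""
    merged = []
    for group in zip_longest(*streams):
        merged.extend(p for p in group if p is not None)
    return merged

# Device A has three packets queued, device B has two.
bus_order = interleave([["A1", "A2", "A3"], ["B1", "B2"]])
print(bus_order)  # ['A1', 'B1', 'A2', 'B2', 'A3']
```

Neither device has to wait for the other to finish its whole block, which is the point of interleaving.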

Input/Output Processor
An input-output processor (IOP) is a processor with direct memory access capability. In
such a system, the computer is divided into a memory unit and a number of processors. Each
IOP controls and manages its input-output tasks. The IOP is similar to a CPU except that
it handles only the details of I/O processing. The IOP can fetch and execute its own
instructions; these IOP instructions are designed to manage I/O transfers only.

Block Diagram Of I/O Processor

Below is a block diagram of a computer along with various I/O Processors. The memory unit occupies
the central position and can communicate with each processor.

The CPU processes the data required for solving computational tasks. The IOP provides a
path for the transfer of data between peripherals and memory. The CPU only assigns the
task of initiating the I/O program.

The IOP operates independently of the CPU and transfers data between peripherals and memory.

The communication between the IOP and the devices is similar to the program-controlled
method of transfer, while communication with the memory is similar to the direct memory
access method.

In large-scale computers, each processor is independent of the other processors, and any
processor can initiate an operation.

The CPU acts as the master and the IOP as the slave processor. The CPU assigns the task of
initiating operations, but it is the IOP, not the CPU, that executes the I/O instructions.
CPU instructions merely start an I/O transfer; the IOP then signals the CPU through an
interrupt.

Instructions that are read from memory by an IOP are called commands, to distinguish them
from instructions read by the CPU. Commands are prepared by programmers and stored in
memory; the command words make up the program for the IOP. The CPU informs the IOP where
to find the commands in memory.
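The command mechanism just described can be sketched as follows: the CPU places a command program in memory, tells the IOP its starting address, and the IOP fetches and executes commands on its own. The command names and the HALT terminator are invented for illustration:

```python
def run_iop(memory, cmd_addr):
    """Toy IOP: fetch commands from memory starting at cmd_addr and execute
    them until a terminator is reached."""
    executed = []
    while memory[cmd_addr] != "HALT":
        executed.append(memory[cmd_addr])   # e.g. start a device transfer
        cmd_addr += 1
    return executed  # a real IOP would now interrupt the CPU with status

# CPU side: prepare the command program and tell the IOP where it starts.
mem = {100: "READ dev1", 101: "WRITE dev2", 102: "HALT"}
print(run_iop(mem, 100))  # ['READ dev1', 'WRITE dev2']
```

The CPU's only contribution is writing the commands and the starting address; the fetch-execute loop belongs entirely to the IOP.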
What are different types of interrupts?

An interrupt is a signal from a device attached to a computer, or from a program within
the computer, that requires the operating system to stop and decide what to do next.

When the CPU needs an I/O operation while processing a program, the request is sent to the
I/O queue and the CPU continues with other processing. When the I/O operation is ready,
the I/O device interrupts the CPU so that the remaining processing for that data can be
done. If interrupts were not present, the CPU would need to sit idle until each I/O
operation completed; interrupts exist to avoid this CPU waiting time.

Processor handle interrupts

Whenever an interrupt occurs, the CPU stops executing the current program and control
passes to the interrupt handler, also called the interrupt service routine (ISR).

The ISR handles an interrupt in the following steps −

Step 1 − Assume the processor is executing the i-th instruction when the interrupt occurs;
the program counter points to the next instruction, (i+1).

Step 2 − The program counter value is stored on the process stack and the program counter
is loaded with the address of the interrupt service routine.

Step 3 − Once the interrupt service routine is completed, the address on the process stack
is popped and placed back in the program counter.

Step 4 − Execution resumes at instruction (i+1).
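Steps 1-4 can be sketched as a toy model of the program counter and process stack. This is a simplification: a real ISR also saves registers and processor status:

```python
def handle_interrupt(pc, stack, isr_addr):
    """Steps 1-4: save the return address, run the ISR, restore and resume."""
    stack.append(pc)    # step 2: push the (i+1) address onto the process stack
    pc = isr_addr       # step 2: load the PC with the ISR address
    pass                # ... ISR body would execute here ...
    pc = stack.pop()    # step 3: pop the saved address back into the PC
    return pc           # step 4: resume at instruction (i+1)

stack = []
resume_at = handle_interrupt(pc=42, stack=stack, isr_addr=0x200)
print(resume_at, stack)  # 42 []
```

The invariant worth noticing: after the ISR returns, the stack is back where it started and the PC holds exactly the address saved in step 2.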

Types of interrupts

There are two types of interrupts which are as follows −

Hardware interrupts

Hardware interrupts are signals generated by external devices and I/O devices to interrupt
the CPU when they are ready.
For example, pressing a key on a keyboard generates a signal that is sent to the processor
so that it can act on the keystroke; such interrupts are called hardware interrupts.

Hardware interrupts are classified into two types which are as follows −

• Maskable Interrupt − A hardware interrupt that can be delayed while a higher-priority
interrupt is being serviced by the processor.
• Non-Maskable Interrupt − A hardware interrupt that cannot be delayed and must be
serviced immediately by the processor.

Software interrupts

Software interrupts are generated internally, by software programs that need to access a
system call or other operating-system service.

Software interrupt is divided into two types. They are as follows −

• Normal Interrupts − Interrupts caused deliberately by software instructions are called
normal (software) interrupts.
• Exception − An exception is an unplanned interruption that occurs while executing a
program. For example, dividing a value by zero while executing a program raises an
exception.
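The divide-by-zero exception mentioned above can be demonstrated directly; here the handler simply substitutes a sentinel value, which is an arbitrary recovery policy chosen for illustration:

```python
def safe_divide(a, b):
    """Turn the unplanned divide-by-zero interruption into a sentinel result."""
    try:
        return a / b
    except ZeroDivisionError:   # the exception described above
        return None             # a handler decides what happens instead

print(safe_divide(10, 2), safe_divide(10, 0))  # 5.0 None
```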
Vectored Interrupts
Vectored interrupts are a type of interrupt mechanism where the interrupting device or program
directly provides the processor with information about the specific interrupt source. The interrupt
vector, which is a unique identifier, helps the processor identify the appropriate interrupt handler
routine to execute. In vectored interrupt systems, the processor can determine the interrupt source
without the need for additional polling or investigation.
Non-Vectored Interrupts
Non-vectored interrupts, also known as basic interrupts, do not provide direct information about the
interrupt source. When a non-vectored interrupt occurs, the processor relies on a predefined
priority scheme to determine which interrupt handler routine to execute. The processor typically
iterates through each interrupt source in a fixed order until it finds the highest-priority interrupt that
needs attention.

What is the priority interrupt in computer architecture?


A priority interrupt system determines which of several devices generating interrupt
signals simultaneously should be serviced by the CPU first. High-speed transfer devices
are generally given high priority and slow devices low priority; when multiple devices
send interrupt signals at the same time, the device with the highest priority gets
serviced first.
• Daisy Chaining Priority - Hardware Method
• This method uses hardware to establish the priority of simultaneous interrupts. All the
devices that can generate an interrupt signal are connected in series, placed according
to their priority: the device with the highest priority is placed first, followed by
lower-priority devices, and the device with the lowest priority comes last in the chain.
All devices in the daisy chain share a common interrupt-request line.
• If any device has its interrupt signal in the low-level state, the interrupt line goes
to the low-level state and activates the interrupt input of the CPU. When there is no
interrupt, the interrupt line remains in the high-level state. The CPU responds to the
interrupt by activating the interrupt-acknowledge line. This signal is received by device
1 at its PI (priority in) input; if device 1 is not requesting an interrupt, the
acknowledge signal passes to the next device through its PO (priority out) output.
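The acknowledge propagation through the chain can be sketched as a scan from the highest-priority device to the lowest. This is a toy model of the PI/PO signalling, not gate-level logic:

```python
def daisy_chain_ack(requests):
    """Scan devices in chain order (index 0 = highest priority): the first
    requesting device keeps the acknowledge (PI=1, PO=0); devices further
    down the chain never see it."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position
    return None  # no device was requesting an interrupt

# Devices 1 and 3 request simultaneously; device 1 is nearer the CPU.
print(daisy_chain_ack([False, True, False, True]))  # 1
```

The priority is fixed purely by wiring order: whichever requesting device sits closest to the CPU wins.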
• Parallel Priority Interrupt


• The parallel priority interrupt method uses a register whose bits are set individually
by the interrupt signal from each device. Priority is established according to the
position of the bits in the register.
• Along with the interrupt register, the circuit may include a mask register whose purpose
is to control the status of each interrupt signal. The mask register can be programmed to
disable lower-priority interrupts while a higher-priority device is being serviced. It can
also allow a high-priority device to interrupt the CPU while a lower-priority device is
being serviced. The figure above shows the priority logic for a system with four interrupt
sources.
• The interrupt register's individual bits are set by external conditions and cleared by
program instructions. Being a high-speed device, the magnetic disk is given the highest
priority. The printer has the next priority, followed by a character reader and a
keyboard. The mask register has the same number of bits as the interrupt register.
• Any bit in the mask register can be set or reset by software instructions. Each
interrupt bit is ANDed with its corresponding mask bit to produce the four inputs to a
priority encoder. In this manner, an interrupt is recognized only if its corresponding
mask bit has been set to 1 by the program.
• The priority encoder generates two bits of the vector address, which are transferred to
the CPU. Another output of the encoder sets the interrupt status flip-flop IST when an
unmasked interrupt arrives. The interrupt enable flip-flop IEN can be set or cleared by
the program to provide overall control over the interrupt system. The outputs of IST ANDed
with IEN provide a common interrupt signal to the CPU. The interrupt-acknowledge signal
INTACK from the CPU enables the bus buffers in the output register, and the vector address
VAD is placed on the data bus.
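The mask-and-encode logic above can be sketched as follows. This is a toy model: bit 0 is the disk (highest priority), and the returned line number stands in for the two vector-address bits:

```python
def priority_encoder(interrupt_reg, mask_reg):
    """AND each interrupt bit with its mask bit, then return the highest-
    priority active line (index 0 = magnetic disk = highest priority)."""
    for line, (i, m) in enumerate(zip(interrupt_reg, mask_reg)):
        if i & m:
            return line
    return None  # IST stays clear: no unmasked interrupt is pending

# Disk(0), printer(1), reader(2), keyboard(3); the printer is masked off,
# so the character reader wins even though the printer also requested.
print(priority_encoder([0, 1, 1, 0], [1, 0, 1, 1]))  # 2
```

Clearing a mask bit in software silences that device without touching its interrupt bit, which is exactly the facility the mask register provides.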

Polling - Software method


• Polling is a software method used to establish priority among interrupts that occur
simultaneously. When the processor detects an interrupt in the polling method, it branches
to an interrupt service routine whose job is to poll each I/O module to determine which
module caused the interrupt.
• The poll can take the form of a separate command line (for example, Test I/O). The
processor raises Test I/O and places the address of a particular I/O module on the address
lines; if that module raised the interrupt, it responds to the poll.
• The order in which the modules are tested determines the priority of each interrupt: the
device with the highest priority is tested first, followed by devices with lower priority.
This is the easiest method of establishing priority among simultaneous interrupts, but its
downside is that it takes time.
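The polling loop can be sketched as a scan of device flags in priority order; the device names and flags are illustrative:

```python
def poll_devices(devices):
    """Test each I/O module in list order (the order fixes the priority);
    service the first one whose interrupt flag is set."""
    for name, interrupting in devices:
        if interrupting:
            return name
    return None

# The disk is polled first; here the printer is the highest-priority
# device that actually raised an interrupt.
modules = [("disk", False), ("printer", True), ("keyboard", True)]
print(poll_devices(modules))  # printer
```

The cost the text mentions is visible in the loop itself: in the worst case every module must be tested before the source is found.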

Advantages of Priority Interrupts


• Priority interrupts offer several advantages in managing the execution of tasks in a
computer system. Here are a few simple ones:
• Fast Response: Priority interrupts allow urgent tasks to be handled quickly, ensuring critical
operations get immediate attention and reducing delays in essential processes.
• Efficient Resource Allocation: They help allocate system resources to the most important
tasks first, ensuring that high-priority tasks are completed without unnecessary waiting.
• Real-time Processing: Priority interrupts are vital for time-sensitive applications like
controlling machinery, as they guarantee that crucial tasks are executed promptly,
maintaining smooth operations.

NUMERICALS

1. A DMA controller transfers 16-bit words to memory using cycle stealing. The words are
assembled from a device that transmits characters at a rate of 2400 characters per second.
The CPU is fetching and executing instructions at an average rate of 1 million instructions
per second.

1. By how much will the CPU be slowed down because of the DMA transfer when the characters
are represented in 8-bit ASCII?

2. By how much more (in percent) does the CPU slow down when 32-bit words are transferred
to memory using cycle stealing?

SOLUTION. One character is 8 bits, which makes the speed of the device 2400 bytes/sec.

The word size is 2 bytes, so one word is assembled every 1/1200 s = 833.33 microseconds.

The CPU executes 10^6 instructions per second, which means memory supplies 10^6
instructions (words) per second. Therefore one word is accessed in 1 microsecond, the
memory cycle time.

Therefore slowdown = (1/(833.33+1)) × 100% = 0.119% ≈ 0.12%.
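The arithmetic can be checked quickly, and the second part of the question computed under the assumption that a 32-bit word is assembled from four 8-bit characters. Note that larger words mean fewer (though identical) stolen cycles per second, so the slowdown actually drops rather than grows:

```python
CHAR_RATE = 2400        # 8-bit characters per second = bytes per second
CYCLE_US = 1.0          # one word accessed per microsecond (10**6 inst/s)

def slowdown_percent(word_bytes):
    """One memory cycle is stolen per assembled word."""
    words_per_sec = CHAR_RATE / word_bytes
    gap_us = 1e6 / words_per_sec          # time between stolen cycles
    return CYCLE_US / (gap_us + CYCLE_US) * 100

print(round(slowdown_percent(2), 2))  # 0.12 -> 16-bit words, as above
print(round(slowdown_percent(4), 2))  # 0.06 -> 32-bit words
```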

2) How many characters per second can be transmitted over a 1200-baud line in each of the
following modes? (Assume a character code of 8 bits.)

SOLUTION.
(a) Synchronous serial transmission: 1200/8 = 150 characters per second.
(b) Asynchronous serial transmission with two stop bits (start bit + 8 data bits + 2 stop
bits = 11 bits per character): 1200/11 ≈ 109 characters per second.
(c) Asynchronous serial transmission with one stop bit (10 bits per character):
1200/10 = 120 characters per second.
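All three cases reduce to dividing the line rate by the number of bits in one character frame:

```python
def chars_per_sec(baud, bits_per_char):
    """A baud line carries `baud` bits/s; divide by the bits per character."""
    return baud // bits_per_char

print(chars_per_sec(1200, 8))   # 150: synchronous, 8 data bits only
print(chars_per_sec(1200, 11))  # 109: async, start + 8 data + 2 stop
print(chars_per_sec(1200, 10))  # 120: async, start + 8 data + 1 stop
```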
