
ABES ENGINEERING COLLEGE, GHAZIABAD

Unit-5
QUESTION BANK WITH ANSWER-KEY
SUBJECT NAME: COMPUTER ORGANIZATION AND ARCHITECTURE (COA)

SUBJECT CODE: BCS302

Each question below is listed with its question number (Q.No) and tagged with its Course Outcome (CO) and Knowledge Level (KL).
Q.1 Define Peripheral Devices. (CO5, KL2)
Ans: Peripheral devices are devices that are under the direct control of the computer and are said to be connected online. They are designed to read information into or out of the memory unit upon command from the CPU. Peripheral devices are of three types: input, output, and input-output peripherals.
Q.2 Explain the I/O Interface. Discuss the design of a typical input or output interface. (2022-2023) (CO5, KL3)
Ans: An I/O interface is the method that helps transfer information between the internal storage (memory) and external peripheral devices. A peripheral device provides input to or output from the computer and is also called an input-output device. For example, a keyboard and a mouse provide input to the computer and are called input devices, while a monitor and a printer present output from the computer and are called output devices. Some peripheral devices, such as external hard drives, can provide both input and output.
An I/O interface is required for the following reasons:
1. The data transfer rate of peripherals is usually slower than that of the CPU.
2. Data codes and formats in peripherals differ from the word format in the CPU and memory.
3. The operating modes of peripherals differ from one another, and each must be controlled so as not to disturb the operation of the other peripherals connected to the CPU.
To resolve these differences, computer systems include special hardware components between the CPU and peripherals, called interfaces, which supervise and synchronize all input and output transfers.

Functions of the Input-Output Interface:

1. It synchronizes the operating speed of the CPU with the input-output devices.
2. It selects the input-output device appropriate for the interpretation of the input-output signal.
3. It provides control and timing signals.
4. It provides data buffering through the data bus.
5. It performs error detection.
6. It converts serial data into parallel data and vice versa (a minimal sketch of this conversion follows the list).
7. It converts digital data into analog signals and vice versa.
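As a hedged illustration of function 6, the short C sketch below models in software what an interface's shift register does in hardware: it assembles eight serial bits into one parallel byte. The read_serial_bit() routine is a hypothetical stand-in for sampling the serial line.

```c
#include <stdint.h>

/* Hypothetical routine: returns the next incoming serial bit (0 or 1). */
extern int read_serial_bit(void);

/* Assemble 8 serial bits (LSB first) into one parallel byte,
 * the way an interface's shift register would. */
uint8_t serial_to_parallel(void)
{
    uint8_t byte = 0;
    for (int i = 0; i < 8; i++) {
        byte |= (uint8_t)(read_serial_bit() & 1) << i;  /* shift each bit into place */
    }
    return byte;
}
```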
Q.3 Define Interrupts. (CO5, KL2)
Ans: An interrupt is a signal sent by an I/O interface to the CPU when a device is ready to send information to the memory or receive information from the memory.
There are various types of interrupts: priority interrupts, hardware and software interrupts, vectored and non-vectored interrupts, and maskable and non-maskable interrupts.
Q.4 Discuss different types of interrupts. (CO5, KL3)
Ans:
Priority Interrupt: A priority interrupt system determines the order of service when two or more device requests arrive simultaneously. When two devices interrupt the computer at the same time, the priority logic resolves the conflict, typically giving higher priority to the faster peripheral devices sending the signal.

Hardware & Software Interrupts: A hardware interrupt is raised by an external device (for example, a keyboard or a disk controller) over an interrupt request line, whereas a software interrupt is raised by an instruction executed within a program.
Example: A program might generate a software interrupt to read input from the keyboard.

Vectored & Non-vectored Interrupts: In a vectored interrupt, the interrupting device supplies the address (vector) of its service routine; in a non-vectored interrupt, the CPU branches to a fixed location and must identify the interrupting source itself.

Maskable & Non-maskable Interrupts: A maskable interrupt can be disabled (masked) by the CPU through an interrupt-enable flag, whereas a non-maskable interrupt cannot be ignored and is reserved for critical events.

Q.5 Explain the I/O module with a block diagram. Also explain the functions of the I/O module. (CO5, KL3)

Ans: An I/O module is a hardware component serving as an interface between a computer's central processing unit (CPU) and its input/output devices. It connects the processor's system bus to one or more peripheral (input/output) devices. A system bus is the network of electrical wires and cables connecting the major components of a computer, such as the CPU, memory, and I/O devices.
The I/O module is responsible for handling the communication between the CPU and the input and output devices. This includes managing data transfer, controlling power loads, and handling machine instructions. It enables system integrators to connect to a greater number and variety of devices. Many older machines, devices, and systems cannot connect or communicate with each other using standard industry protocols; in such scenarios, an I/O module comes in handy. The I/O module acts as a means of data exchange between external devices and the computer's processor. It aids in expanding a manufacturer's network by enabling connections to disparate devices, which results in improved system management and operational visibility. Furthermore, it facilitates collecting data from peripheral devices that differ in format, speed, and volume. Sensors, valves, monitors, and actuators are a few peripheral devices whose data can be collected by the I/O module.
The role of the I/O module can be seen in a few examples: weather stations, gaming consoles and their accessories, and smart home control.

Core Functions of the I/O Module:

1. Processor Communication: Communication between the I/O module and the processor involves the following:
• Command Decoding: The I/O module accepts and decodes commands from the processor.
• Data Exchange: It facilitates data transfer between the processor, memory, and peripheral devices.
• Status Reporting: Because peripherals are slower than the CPU, the I/O module reports the status of each peripheral device to the CPU.
• Address Decoding: It recognizes each peripheral device by its unique address.

2. Control and Timing: Another major function of the I/O module is control
and timing to coordinate between internal resources and external devices. Here
is how the data transfers from external devices to the processor:

• The processor asks the I/O module about the status of the attached
peripheral device.
• The I/O module returns the status of a specific device.
• If the device is ready to transmit the data, the processor issues a
command to the I/O module to transfer data.
• The I/O module then takes the requested data from the external device
and transmits it to the processor.

3. Data Buffering: Data buffering helps the I/O module to manage the data
transfer speed for the data sent by the processor to peripheral devices. This
helps peripheral devices and the processor work at the same pace.
4. Error Detection: The I/O module detects electrical and mechanical malfunctions in peripheral devices and reports them to the processor. The parity-bit method is one of the ways the I/O module detects errors (a minimal sketch appears after this list).

5. Device Communication: It also facilitates data exchange between different peripheral devices.
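As a hedged illustration of the parity-bit error detection mentioned in function 4, the C sketch below computes and checks an even-parity bit over one data byte. It is a minimal software model, not any particular I/O module's hardware.

```c
#include <stdint.h>
#include <stdbool.h>

/* Compute the even-parity bit for an 8-bit value:
 * returns 1 when the number of 1-bits in data is odd, so that
 * data plus parity together contain an even number of 1s. */
static uint8_t even_parity_bit(uint8_t data)
{
    uint8_t parity = 0;
    while (data) {
        parity ^= data & 1;   /* toggle for every 1-bit encountered */
        data >>= 1;
    }
    return parity;
}

/* Receiver-side check: true if the received byte and parity bit agree. */
static bool parity_ok(uint8_t data, uint8_t received_parity)
{
    return even_parity_bit(data) == received_parity;
}
```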
Q.6 Define the modes of data transfer. (CO5, KL3)
Ans: In computer organization and architecture, the "modes of transfer" refer
to the different methods used for transferring data between components of a
computer system. Here are some definitions of the common modes of transfer in
this context:
1. Serial transfer: This is a mode of transfer where data is transmitted one bit
at a time over a single communication line. Serial transfer is typically slower
than parallel transfer but requires fewer communication lines and is, therefore,
less complex and expensive.
2. Parallel transfer: This is a mode of transfer where data is transmitted multiple bits at a time over multiple communication lines. Parallel transfer is faster than serial transfer but requires more communication lines and is more complex and expensive.
3. Direct Memory Access (DMA): This is a mode of transfer where data is
transferred directly between memory and an input/output (I/O) device without
involving the central processing unit (CPU). DMA provides a fast and efficient
method for transferring large amounts of data, as it frees up the CPU to perform
other tasks.
4. Interrupt-driven transfer: This is a mode of transfer where data is
transferred in response to an interrupt signal. An interrupt signal is sent by an
I/O device to the CPU, indicating that data is ready for transfer. The interrupt-
driven transfer allows the CPU to pause its current tasks and attend to the
transfer, providing a flexible and efficient way of transferring data.
5. Programmed I/O transfer: This is a mode of transfer where data is
transferred under the control of a program. The CPU actively manages the
transfer, sending commands to the I/O device to initiate and monitor its progress.
6. Memory-mapped I/O transfer: This is a mode of transfer where data is transferred through a shared address space, in which I/O devices are treated as memory locations. Memory-mapped I/O allows the CPU to access I/O devices using the same instructions it uses to access memory, making it easy to use and flexible.
The choice of transfer mode in computer organization and architecture depends
on factors such as the amount of data being transferred, the speed of transfer
required, and the priority of the CPU's tasks. Each transfer mode has advantages
and disadvantages, and the optimal choice will depend on the system's specific
requirements.

Q.7 Discuss the programmed I/O mode of data transfer. (CO5, KL3)

Ans: Programmed I/O transfer
Programmed I/O transfer is a data transfer mode in computer organization and architecture. In this mode of transfer, data is transferred between an I/O device and the computer's memory under the control of a program. The central processing unit (CPU) actively manages the transfer, sending commands to the I/O device to initiate the transfer and monitor its progress.
In a programmed I/O transfer, the CPU sends a command to the I/O device to
initiate the transfer and then waits for the transfer to complete. During the
transfer, the CPU cannot perform other tasks as it is busy managing the transfer.
Programmed I/O transfer provides low-level control over the transfer, as the
CPU can issue commands to the I/O device to control the transfer. This can be
useful for certain applications where fine-grained control over the transfer is
required. However, it also means that the CPU is tied up during the transfer,
potentially slowing down other tasks that the computer performs.
Programmed I/O transfer is typically slower than other transfer modes, such as
direct memory access (DMA). However, it can still be useful in certain
situations, such as when low-level control over the transfer is required or when
the amount of data is small.
Programmed I/O transfer works as follows:
1. The CPU sends a command to the I/O device to initiate the transfer. This
command may specify the transfer direction (i.e., from memory to the I/O device
or from the I/O device to memory), the starting address in memory where the
data is to be stored or retrieved, and the number of bytes to be transferred.
2. The I/O device receives the command and begins the transfer. It transfers the
data to or from the specified memory location, one byte at a time.
3. The CPU repeatedly polls the status register of the I/O device to monitor the
progress of the transfer. The status register contains information about the
current state of the I/O device, including whether the transfer is complete.
4. When the transfer is complete, the I/O device sets a flag in the status register
to indicate that the transfer is done. The CPU reads the status register, and when
it sees the flag, it knows that the transfer is complete.
5. The CPU can resume other tasks after the transfer is finished. In programmed
I/O transfer, the CPU manages the transfer and monitors its progress. This allows
for low-level control over the transfer but also means that the CPU is tied up
during the transfer and cannot perform other tasks. Programmed I/O transfer is
typically slower than other transfer modes, such as direct memory access
(DMA), as the CPU must repeatedly poll the status register to monitor the
transfer.
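The five steps above can be sketched in C as a polling loop. The device, its register addresses, and the status-bit layout are assumptions made for illustration only; a real device's datasheet defines the actual interface.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped registers of an input device (addresses assumed). */
#define DEV_STATUS (*(volatile uint8_t *)0x40001000u)
#define DEV_DATA   (*(volatile uint8_t *)0x40001004u)
#define STATUS_READY 0x01u   /* assumed "data ready" flag */

/* Programmed I/O read: the CPU polls the status register,
 * then copies each byte itself, one byte at a time. */
void pio_read(uint8_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        while ((DEV_STATUS & STATUS_READY) == 0) {
            /* busy-wait: the CPU is tied up until the device is ready */
        }
        buf[i] = DEV_DATA;   /* reading the data register is assumed to clear READY */
    }
}
```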

Here are a few examples of how programmed I/O transfer might be used in a computer system:
1. Keyboard input: When a user types on the keyboard, the keyboard sends a
signal to the CPU indicating that data is ready for transfer. The CPU uses
programmed I/O transfer to receive the data from the keyboard, one character at
a time.
2. Serial communication: When a computer communicates with another device
over a serial connection, the CPU uses programmed I/O transfer to send and
receive data. The CPU sends a command to the serial communication device to
initiate the transfer and then waits for the transfer to complete.
3. Disk I/O: When a computer reads data from a disk or writes data to a disk,
the CPU uses programmed I/O transfer to transfer the data. The CPU sends a
command to the disk controller to initiate the transfer and then waits for the
transfer to complete.
4. Display output: When a computer displays data on a screen, the CPU uses
programmed I/O transfer to send the data to the display adapter. The CPU sends
a command to the display adapter to initiate the transfer and then waits for the
transfer to complete.
These are just a few examples of how programmed I/O transfer might be used in
a computer system. The specific details of the transfer will vary depending on
the I/O device and the architecture of the computer system.

Advantages of Programmed I/O Transfer


1. Flexibility: Programmed I/O transfer allows for fine-grained control over the
transfer. The CPU can issue commands to the I/O device to control the transfer,
which can be useful for certain applications requiring low-level control.
2. Compatibility: Programmed I/O transfer works with a wide range of I/O
devices, making it a versatile transfer mode.
3. Debugging: As the CPU actively manages the transfer, diagnosing and fixing
problems that may occur during the transfer is easier.
4. Simple Implementation: Programmed I/O transfer can be implemented using
simple programming constructs, such as loops and status register polling.
5. Cost Effective: Programmed I/O transfer does not require specialized
hardware, making it a cost-effective solution for transferring data in certain
situations.
These are just a few advantages of programmed I/O transfer. The advantages
will
depend on the particular system used and the application's requirements.
Disadvantages of Programmed I/O transfer:
1. Slow performance: As the CPU must manage the transfer and poll the status
register, programmed I/O transfer is slower than other transfer modes, such
as direct memory access (DMA).
2. CPU utilization: During the transfer, the CPU is tied up, which can negatively
impact the system's overall performance.
3. Complexity: Implementing programmed I/O transfer can be more complex
than other modes of transfer, such as DMA, which can make it more difficult to
diagnose and fix problems that may occur during the transfer.
4. Lack of scalability: Programmed I/O transfer is poorly suited to high-speed transfers, as the CPU cannot manage multiple transfers simultaneously.
5. Limited to low-level control: While the low-level control offered by
programmed I/O transfer can be an advantage in some cases, it can also be a
disadvantage, as it does not allow for higher-level abstractions that may be
desirable for certain applications.

Q.8 Discuss interrupt-driven transfer. (CO5, KL3)

Ans: Interrupt-driven transfer is a data transfer mode in computer organization and architecture. In this transfer mode, the CPU is notified of incoming data through an interrupt request. An interrupt is a signal sent to the CPU indicating that an I/O device has data ready for transfer.
When the CPU receives an interrupt, it suspends its current task and handles the interrupt. The CPU communicates with the I/O device to carry out the data transfer, and once the transfer is complete, it returns to its previous task.
Interrupt-driven transfer balances the control offered by programmed I/O transfer and the speed offered by direct memory access (DMA). The CPU still controls the transfer, but between transfers it is free to perform other tasks. This transfer mode is commonly used where low-level control over the transfer and good responsiveness are both required. Examples include keyboard input, serial communication, disk I/O, and display output.

Interrupt-driven transfer works as follows:

1. The CPU sends a command to the I/O device to initiate the operation (for a simple device this may just be a write to a control register) and then continues with other work; it does not poll the device.
2. The I/O device carries out the operation and, when the data is ready, asserts an interrupt request to the CPU.
3. The CPU completes its current instruction, saves its state, and transfers control to the interrupt service routine for that device.
4. The interrupt service routine transfers the data between the I/O device and memory, typically one word at a time, and acknowledges the interrupt.
5. The CPU restores its saved state and resumes the interrupted task; the process repeats for each data item until the whole transfer is complete.
Compared with programmed I/O transfer, the CPU is not tied up polling a status register between data items, although it still performs the data movement itself inside the interrupt handler.
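A minimal C sketch of this flow is shown below, reusing the same hypothetical device registers as the programmed-I/O example and assuming the platform invokes dev_isr() whenever the device raises its interrupt line. It is an illustrative model, not a specific platform's API.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical device registers (addresses assumed for illustration). */
#define DEV_STATUS (*(volatile uint8_t *)0x40001000u)
#define DEV_DATA   (*(volatile uint8_t *)0x40001004u)
#define STATUS_READY 0x01u

static volatile uint8_t rx_buf[256];
static volatile size_t  rx_count;

/* Interrupt service routine: assumed to run only when the device signals
 * "data ready", so the CPU does no busy-waiting between bytes. */
void dev_isr(void)
{
    if ((DEV_STATUS & STATUS_READY) && rx_count < sizeof rx_buf) {
        rx_buf[rx_count++] = DEV_DATA;   /* move one item, then return to the interrupted task */
    }
}
```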
Here are a few examples of Interrupt-driven transfer
The interrupt-driven transfer is a common data transfer mode in computer
organization and architecture used in various applications. Here are a few
examples:
1. Keyboard Input: When a user presses a key on the keyboard, an interrupt is
generated to notify the CPU that data is ready for transfer. The CPU handles the
interrupt, reads the data from the keyboard, and stores it in memory.
2. Serial Communication: When a serial device, such as a modem or a serial-
to-USB converter, has data ready to transfer, it generates an interrupt to notify
the CPU. The CPU handles the interrupt, reads the data from the device, and
stores it in memory.
3. Disk I/O: When a disk drive has data to transfer, it generates an interrupt to
notify the CPU. The CPU handles the interrupt, reads the data from the disk
drive, and stores it in memory.
4. Display Output: When the graphics card has data to transfer to the display, it
generates an interrupt to notify the CPU. The CPU handles the interrupt, reads
the data from the graphics card, and transfers it to the display.
These are just a few examples of the many applications that use interrupt-driven
transfer. The specific applications will depend on the particular system being
used and the application's requirements.
Here are a few advantages of Interrupt-driven transfer
The interrupt-driven transfer has several advantages over other modes of data
transfer, including:
1. Improved System Performance: Interrupt-driven transfer allows the CPU to
perform other tasks while waiting for the I/O transfer to complete, which can
result in improved system performance.
2. Real-time Processing: Interrupt-driven transfer can be used for real-time
processing, as the CPU is notified immediately when data is ready for transfer.
This can be important for applications that require the timely processing of data.
3. Lower CPU Overhead: Unlike programmed I/O transfer, interrupt-driven
transfer requires less CPU overhead, as the CPU is not constantly polling the
status register of the I/O device.
4. Dynamic Load Balancing: Interrupt-driven transfer allows the CPU to
dynamically balance the load between processing tasks and I/O tasks, as it can
handle interrupts as they occur.
5. Improved Responsiveness: Interrupt-driven transfer improves
responsiveness, as the CPU can quickly respond to incoming data. This is
especially important in interactive applications, such as games or multimedia
applications.
Overall, the interrupt-driven transfer provides a balance between the control
offered by programmed I/O transfer and the speed offered by direct memory
access (DMA), making it a versatile mode of data transfer in computer
organization and
architecture.
Here are a few disadvantages of Interrupt-driven transfer
Like any mode of data transfer, the interrupt-driven transfer has its disadvantages
as well, including:
1. Complexity: Interrupt-driven transfer can be more complex to implement
than other transfer modes, as it requires interrupting handlers and managing
interrupt priority levels.
2. Latency: Interrupt-driven transfer can introduce latency into the system, as
the CPU must handle the interrupt and perform the data transfer. This can be
especially problematic in real-time applications that require fast response times.
3. Overhead: Interrupt-driven transfer introduces additional overhead into the
system, as the CPU must handle the interrupt and perform the data transfer. This
can be especially problematic in systems with limited processing power.
4. Unpredictable Interrupt Latency: The latency associated with interrupt-
driven transfer can be unpredictable, as it can be affected by the number of other
interrupts being processed by the CPU at the time.
5. Error Handling: Error handling can be more complex in interrupt-driven
transfer, as errors can occur at any point in the transfer.
Overall, interrupt-driven transfer is a powerful mode of data transfer in computer
organization and architecture, but it requires careful consideration of the
trade-offs between performance, complexity, and overhead.

Q.9 Discuss Direct Memory Access (DMA). (2019-2020) (2020-2021) (CO5, KL3)

Ans: Direct Memory Access (DMA) is a data transfer method in computer
organization and architecture that allows an I/O device to transfer data
directly to or from memory without the involvement of the CPU. DMA allows
I/O devices to transfer data to or from memory at high speeds without putting
additional strain on the CPU.
In DMA, a DMA controller is used to manage data transfer between the I/O
device and memory. The DMA controller is responsible for setting up the
transfer, starting it, and monitoring it to ensure it is completed successfully. The
CPU is not involved in the actual data transfer but is notified when it is complete.
DMA is often used for high-bandwidth I/O devices, such as disk drives, network
interfaces, and graphics cards, where the CPU is not fast enough to manage the
data transfer in real time. DMA allows these devices to transfer data to or from
memory at high speeds without putting additional strain on the CPU.
Direct Memory Access (DMA) works as follows:
1. Initialization: The CPU initializes the DMA transfer by setting up the source
and destination addresses, the size of the transfer, and any other relevant
parameters.
2. Request: The I/O device requests a DMA transfer by asserting a request signal
to the DMA controller.
3. Grant: The DMA controller grants the request and starts the transfer by
asserting a grant signal to the I/O device.
4. Data Transfer: The I/O device transfers the data directly to or from memory
without involving the CPU. The DMA controller manages the transfer and
provides necessary addresses and control signals.
5. Completion: When the transfer is complete, the I/O device asserts a
completion signal to the DMA controller. The DMA controller then informs the
CPU that the transfer is complete.
The CPU can perform other tasks during the transfer as the DMA controller
handles the data transfer. This allows the CPU to improve its overall
performance and responsiveness.
Overall, DMA provides a high-speed data transfer method that allows I/O
devices to transfer data to or from memory without putting additional strain on
the CPU. This improves system performance and responsiveness, especially for
high-bandwidth I/O devices.
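The initialization, start, and completion steps above can be sketched as register writes to a hypothetical DMA controller. The register block layout, bit assignments, and base address are assumptions for illustration; real controllers differ.

```c
#include <stdint.h>

/* Hypothetical DMA controller register block (layout assumed). */
typedef struct {
    volatile uint32_t src;      /* source address          */
    volatile uint32_t dst;      /* destination address     */
    volatile uint32_t count;    /* number of bytes to move */
    volatile uint32_t control;  /* bit 0 = start           */
    volatile uint32_t status;   /* bit 0 = done            */
} dma_regs_t;

#define DMA ((dma_regs_t *)0x40002000u)  /* assumed base address */

/* CPU side of a DMA transfer: program the controller, start it,
 * and let it move the data while the CPU is free for other work. */
void dma_copy(uint32_t src, uint32_t dst, uint32_t nbytes)
{
    DMA->src     = src;
    DMA->dst     = dst;
    DMA->count   = nbytes;
    DMA->control = 1u;                  /* start the transfer */

    /* In a real system the CPU would do other work here and be notified
     * by a completion interrupt; polling "done" keeps the sketch short. */
    while ((DMA->status & 1u) == 0) {
        /* wait for completion */
    }
}
```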
Some examples of Direct Memory Access (DMA) in action:
1. Disk Drives: DMA is often used for disk drives, where high-speed data
transfer is critical for good performance. DMA allows disk drives to transfer data
to or from memory at high speeds without putting additional strain on the CPU.
2. Network Interfaces: Network interfaces also use DMA to transfer data to or
from memory at high speeds. This allows the CPU to handle other tasks while
the network interface performs the data transfer.
3. Graphics Cards: Graphics cards use DMA to transfer large amounts of video
data to or from memory. This allows the GPU to render images quickly without
putting additional strain on the CPU.
4. Sound Cards: Sound cards use DMA to transfer audio data to or from
memory, which allows the CPU to handle other tasks while the sound card is
playing or recording audio.
5. Peripheral Devices: Many peripheral devices, such as printers, scanners, and
USB devices, also use DMA to transfer data to or from memory. This allows the
CPU to handle other tasks while the peripheral device performs the data transfer.
These are just a few examples of Direct Memory Access (DMA). DMA is widely
used in computer systems to provide high-speed data transfer for I/O devices,
which helps to improve system performance and responsiveness.
Advantages of Direct Memory Access (DMA):
1. Improved Performance: DMA allows I/O devices to transfer data directly to
or from memory at high speeds without involving the CPU. This improves
system performance and responsiveness, especially for high-bandwidth I/O
devices.
2. CPU Offload: By handling the data transfer, DMA reduces the load on the
CPU. This allows the CPU to perform other tasks more efficiently and improves
system performance.
3. High Bandwidth: DMA provides a high-speed method of data transfer that
is well suited to high-bandwidth I/O devices. This results in improved system
performance and responsiveness for these devices.
4. Increased Efficiency: DMA allows I/O devices to transfer data directly to or
from memory without CPU intervention. This results in increased efficiency and
improved system performance.
5. Real-time Processing: DMA is often used for real-time processing
applications, such as audio and video playback, where high-speed data transfer
is critical for good performance.
Overall, DMA provides a high-speed data transfer method that allows I/O
devices to transfer data to or from memory without putting additional strain on
the CPU.
This results in improved system performance and responsiveness and makes
DMA a key component of many computer systems.
Disadvantages of Direct Memory Access (DMA):
1. Complexity: DMA requires a significant amount of hardware and software
support, which makes the overall system more complex and difficult to manage.
2. Resource Management: DMA requires careful management of system
resources, such as memory and I/O device access, to ensure that data transfers
do not interfere with other system operations.
3. Latency: In some cases, DMA may introduce latency into the system, as the
CPU may have to wait for the DMA transfer to complete before it can perform
other tasks.
4. Security Risks: DMA provides direct access to memory, which can introduce
security risks if the system is not properly secured. For example, malware or
other malicious software could use DMA to bypass security measures and access
sensitive data.
5. Interrupt Latency: DMA can introduce additional interrupt latency into the
system, as the CPU may have to handle additional interrupt requests from the
DMA controller.
These are a few disadvantages of Direct Memory Access (DMA). While DMA
provides many benefits for computer systems, it is important to consider these
limitations and design systems accordingly to ensure that the advantages of
DMA outweigh the disadvantages.

Conclusion
In conclusion, the transfer mode in Computer Organization and Architecture is
a critical aspect of the overall system design. There are three main transfer
modes: Programmed I/O transfer, Interrupt-driven transfer, and Direct Memory
Access (DMA).
Programmed I/O transfer is a straightforward data transfer method, but it can put
a heavy load on the CPU and result in low system performance.
The interrupt-driven transfer is more efficient than Programmed I/O transfer but
can introduce additional interrupt latency into the system.
Direct Memory Access (DMA) is the most efficient and high-performance data
transfer method. DMA allows I/O devices to transfer data directly to or from
memory without involving the CPU, resulting in improved system performance
and responsiveness. However, DMA is also the most complex and challenging
transfer mode, requiring careful system resource management and addressing
potential security risks.
Each transfer mode has its advantages and disadvantages, and the appropriate
mode of transfer for a particular system will depend on its specific requirements
and design constraints. Understanding the modes of transfer and their benefits
and limitations is crucial for effective computer organization and architecture
design.

Q.10 Differentiate between isolated I/O and memory-mapped I/O. (2019-2020) (CO5, KL3)

Ans: In memory-mapped I/O, the I/O device registers share the same address space as memory, so the CPU accesses them with its ordinary memory-reference (load/store) instructions. In isolated (I/O-mapped) I/O, the devices occupy a separate I/O address space and are accessed with dedicated I/O instructions (such as IN and OUT), with a control signal distinguishing I/O accesses from memory accesses.

Advantages of Memory-Mapped I/O:

Faster I/O Operations: Memory-mapped I/O allows the CPU to access I/O
devices at the same speed as it accesses memory. This means that I/O
operations can be performed much faster compared to isolated I/O.

Simplified Programming: Memory-mapped I/O simplifies programming as


the same instructions can be used to access memory and I/O devices. This
means that software developers do not have to use specialized I/O instructions,
which can reduce programming complexity.
Efficient Use of Memory Space: Memory-mapped I/O is more memory-
efficient as I/O devices share the same address space as the memory. This
means that the same memory address space can be used to access both memory
and I/O devices.

Disadvantages of Memory-Mapped I/O:

Limited I/O Address Space: Memory-mapped I/O limits the I/O address
space as I/O devices share the same address space as the memory. This means
that there may not be enough address space available to address all I/O devices.
Slower Response Time: If an I/O device is slow to respond, it can delay the
CPU’s access to memory. This can lead to slower overall system performance.

Advantages of Isolated I/O:

Large I/O Address Space: Isolated I/O allows for a larger I/O address space
compared to memory-mapped I/O as I/O devices have their own separate
address space.
Greater Flexibility: Isolated I/O provides greater flexibility as I/O devices
can be added or removed from the system without affecting the memory
address space.
Improved Reliability: Isolated I/O provides better reliability as I/O devices
do not share the same address space as the memory. This means that if an I/O
device fails, it does not affect the memory or other I/O devices.

Disadvantages of Isolated I/O:

Slower I/O Operations: Isolated I/O can result in slower I/O operations
compared to memory-mapped I/O as it requires the use of specialized I/O
instructions.
More Complex Programming: Isolated I/O requires specialized I/O
instructions, which can lead to more complex programming.
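The contrast between the two schemes can be sketched in C. The memory-mapped access below is an ordinary store through a pointer at an assumed address, while the isolated-I/O access uses the x86 out instruction (shown with GCC inline assembly) on an assumed port number; both addresses are hypothetical.

```c
#include <stdint.h>

/* Memory-mapped I/O: the device register lives in the memory address
 * space and is accessed with ordinary load/store instructions. */
#define UART_DATA_MMIO (*(volatile uint8_t *)0x3F201000u)  /* assumed address */

static inline void mmio_write(uint8_t value)
{
    UART_DATA_MMIO = value;           /* a plain store instruction */
}

/* Isolated (port-mapped) I/O: the device occupies a separate I/O address
 * space and needs a dedicated instruction (x86 `out`), shown here with
 * GCC inline assembly for an assumed port number. */
static inline void port_write(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}
```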

Q.11 Explain the I/O Processor. (CO5, KL3)

Ans: The DMA mode of data transfer reduces the CPU’s overhead in handling
I/O operations. It also allows parallelism in CPU and I/O operations. Such
parallelism is necessary to avoid the wastage of valuable CPU time while
handling I/O devices whose speeds are much slower as compared to CPU. The
concept of DMA operation can be extended to relieve the CPU further from
getting involved with the execution of I/O operations. This gives rise to the
development of special-purpose processors called Input-Output Processors (IOPs) or I/O channels.
The Input-Output Processor (IOP) is just like a CPU that handles the details of
I/O operations. It is more equipped with facilities than those available in a
typical DMA controller. The IOP can fetch and execute its own instructions
that are specifically designed to characterize I/O transfers. In addition to the
I/O-related tasks, it can perform other processing tasks like arithmetic, logic,
branching, and code translation. The main memory unit takes a pivotal role: it communicates with each processor by means of DMA.
The Input-Output Processor is a specialized processor which loads and stores
data in memory along with the execution of I/O instructions. It acts as an
interface between the system and devices. It involves a sequence of events to
execute I/O operations and then store the results in memory.

Input-Output Processor

Features of an Input-Output Processor


• Specialized Hardware: An IOP is equipped with specialized
hardware that is optimized for handling input/output operations.
This hardware includes input/output ports, DMA controllers, and
interrupt controllers.
• DMA Capability: An IOP has the capability to perform Direct
Memory Access (DMA) operations. DMA allows data to be
transferred directly between peripheral devices and memory
without going through the CPU, thereby freeing up the CPU for
other tasks.
• Interrupt Handling: An IOP can handle interrupts from
peripheral devices and manage them independently of the CPU.
This allows the CPU to focus on executing application programs
while the IOP handles interrupts from peripheral devices.
• Protocol Handling: An IOP can handle communication
protocols for different types of devices such as Ethernet, USB, and
SCSI. This allows the IOP to interface with a wide range of devices
without requiring additional software support from the CPU.
• Buffering: An IOP can buffer data between the CPU and
peripheral devices. This allows the IOP to handle large amounts of
data without overloading the CPU or the peripheral devices.
• Command Processing: An IOP can process commands from
peripheral devices independently of the CPU. This allows the CPU
to focus on executing application programs while the IOP handles
peripheral device commands.
• Parallel Processing: An IOP can perform input/output
operations in parallel with the CPU. This allows the system to
handle multiple tasks simultaneously and improve overall system
performance.
Applications of I/O Processors
• Data Acquisition Systems: I/O processors can be used in data
acquisition systems to acquire and process data from various
sensors and input devices. The I/O processor can handle high-
speed data transfer and perform real-time processing of the
acquired data.
• Industrial Control Systems: I/O processors can be used in
industrial control systems to interface with various control devices
and sensors. The I/O processor can provide precise timing and
control signals, and can also perform local processing of the input
data.
• Multimedia Applications: I/O processors can be used in
multimedia applications to handle the input and output of
multimedia data, such as audio and video. The I/O processor can
perform real-time processing of multimedia data, including
decoding, encoding, and compression.
• Network Communication Systems: I/O processors can be used
in network communication systems to handle the input and output
of data packets. The I/O processor can perform packet routing,
filtering, and processing, and can also perform encryption and
decryption of the data.
• Storage Systems: I/O processors can be used in storage systems
to handle the input and output of data to and from storage devices.
The I/O processor can handle high-speed data transfer and perform
data caching and prefetching operations.
Advantages of Input-Output Processor
• The I/O devices can directly access the main memory without the
intervention of the processor in I/O processor-based systems.
• It is used to address the problems that arise in the Direct memory
access method.
• Reduced Processor Workload: With an I/O processor, the main
processor doesn’t have to deal with I/O operations, allowing it to
focus on other tasks. This results in more efficient use of the
processor’s resources and can lead to faster overall system
performance.
• Improved Data Transfer Rates: Since the I/O processor can
access memory directly, data transfers between I/O devices and
memory can be faster and more efficient than with other methods.
• Increased System Reliability: By offloading I/O tasks to a
dedicated processor, the system can be made more fault-tolerant.
For example, if an I/O operation fails, it won’t affect other system
processes.
• Scalability: I/O processor-based systems can be designed to
scale easily, allowing for additional I/O processors to be added as
needed. This can be particularly useful in large-scale data centers
or other environments where the number of I/O devices is
constantly changing.
• Flexibility: I/O processor-based systems can be designed to
handle a wide range of I/O devices and interfaces, providing more
flexibility in system design and allowing for better customization
to meet specific requirements.
Disadvantages of Input-Output Processor
• Cost: I/O processors can add significant costs to a system due to
the additional hardware and complexity required. This can be a
barrier to adoption, especially for smaller systems.
• Increased Complexity: The addition of an I/O processor can
increase the overall complexity of a system, making it more
difficult to design, build, and maintain. This can also make it
harder to diagnose and troubleshoot issues.
• Limited Performance Gains: While I/O processors can improve
system performance by offloading I/O tasks from the main
processor, the gains may not be significant in all cases. In some
cases, the additional overhead of the I/O processor may actually
slow down the system.
• Synchronization Issues: With multiple processors accessing the
same memory, synchronization issues can arise, leading to
potential data corruption or other errors.
• Lack of Standardization: There are many different I/O
processor architectures and interfaces available, which can make it
difficult to develop standardized software and hardware solutions.
This can limit interoperability and make it harder for vendors to
develop compatible products.
Q.12 Explain I/O channels with their types. (CO5, KL3)
Ans: An I/O channel is an extension of the DMA concept. It has the ability to execute I/O instructions using a special-purpose processor on the I/O channel and has complete control over I/O operations. The processor does not execute I/O instructions itself; it initiates an I/O transfer by instructing the I/O channel to execute a program in memory. The program specifies the device or devices, the area or areas of memory, the priority, and the actions to take on error conditions.

Types of I/O Channels:

1. Selector Channel: A selector channel controls multiple high-speed devices but is dedicated to the transfer of data with one of them at a time. Each device is handled by its own controller or I/O module, and the selector channel controls these I/O controllers.

2. Multiplexer Channel: A multiplexer channel is a DMA controller that can handle multiple devices at the same time. It can do block transfers for several devices at once.

Two types of multiplexers are used in this channel:


1. Byte Multiplexer – It is used for low-speed devices. It transmits
or accepts characters. Interleaves bytes from several devices.
2. Block Multiplexer – It accepts or transmits block of characters.
Interleaves blocks of bytes from several devices. Used for high-
speed devices.
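The "program in memory" that the processor hands to a channel can be modelled as a list of command words naming the device, the memory area, the byte count, and whether another command follows. The C structure below is a hypothetical illustration of the idea, not any real channel's command format.

```c
#include <stdint.h>

/* Hypothetical channel command word: one entry of the channel program
 * that the CPU builds in memory and asks the I/O channel to execute. */
typedef struct {
    uint8_t  device_id;   /* which attached device/controller         */
    uint8_t  opcode;      /* e.g. 0 = read, 1 = write (assumed codes)  */
    uint32_t mem_addr;    /* memory area for the transfer              */
    uint32_t byte_count;  /* size of the transfer                      */
    uint8_t  chain;       /* nonzero: another command word follows     */
} channel_cmd_t;

/* A two-step channel program: read a block from device 3,
 * then write a block to device 5, without CPU involvement. */
static const channel_cmd_t channel_program[] = {
    { .device_id = 3, .opcode = 0, .mem_addr = 0x00080000u, .byte_count = 4096, .chain = 1 },
    { .device_id = 5, .opcode = 1, .mem_addr = 0x00090000u, .byte_count = 4096, .chain = 0 },
};
```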

Q.13 Differentiate between serial transmission and parallel transmission. (CO5, KL2)
Ans: In serial transmission, data bits are sent one after another over a single line, so it needs fewer wires, is cheaper, and suits longer distances, but it is slower. In parallel transmission, several bits (typically a word) are sent simultaneously over multiple lines, so it is faster but needs more wires, costs more, and is practical mainly over short distances.



Q.14 Discuss synchronous and asynchronous data communication. (CO5, KL3)

Ans: Serial communication is the process of sequentially transferring the
information/bits on the same channel. Due to this, the cost of wire will be
reduced, but it slows the transmission speed. Generally, communication can be
described as the process of interchanging information between individuals in the
form of audio, video, spoken words, or written documents. Serial protocols run on many kinds of devices, such as mobile phones and personal computers. A protocol is a reliable and secure form of communication that defines a set of rules agreed between a source host and a destination host. In serial communication, the data is represented by binary pulses, i.e., the two values 0 and 1.
Synchronous Communication: In synchronous communication, frames of data are constructed by combining groups of bits, and these frames are sent continuously in time with a master clock. A synchronized clock frequency is used by both the sender and the receiver to operate on the data. Synchronous communication needs no gaps, start bits, or stop bits. Because the timing of the sender and receiver is synchronized, timing errors are less frequent and the data moves faster; data accuracy depends entirely on the timing being synchronized correctly between the sender and receiver devices. Synchronous serial transmission is more expensive than asynchronous serial transmission.

Asynchronous Communication: In asynchronous communication, a group of bits is treated as an independent unit, and these data bits can be sent at any point in time. To keep the sender and receiver synchronized, start bits and stop bits are placed around the data bytes; these bits help ensure that the data is sent correctly. The timing between the sender's and receiver's data bits is not constant, and gaps provide the time between transmissions. The main advantage of asynchronous communication is that no common clock is required between the sender and receiver devices, and the method is also cost-effective. Its main disadvantage is that data transmission can be slower, although this is not always the case.
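The start-bit/stop-bit framing used in asynchronous communication can be sketched as the bit sequence a transmitter shifts out for each character. The send_bit() routine is a hypothetical stand-in for driving the line for one bit time.

```c
#include <stdint.h>

/* Hypothetical routine that drives the serial line for one bit time. */
extern void send_bit(int level);

/* Transmit one character asynchronously:
 * 1 start bit (0), 8 data bits LSB first, 1 stop bit (1).
 * The line idles high, so the falling start bit marks the frame. */
void async_send_char(uint8_t ch)
{
    send_bit(0);                      /* start bit */
    for (int i = 0; i < 8; i++) {
        send_bit((ch >> i) & 1);      /* data bits, least significant first */
    }
    send_bit(1);                      /* stop bit returns the line to idle */
}
```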
Q.15 Discuss transmission modes in serial communication. (2020-2021) (2021-22) (CO5, KL3)
Ans: On the basis of the data transfer rate and the type of transmission mode,
serial communication will take many forms. The transmission mode can be
classified into simplex, half-duplex, and full-duplex. Each transmission mode
contains the source, also known as sender or transmitter, and destination, also
known as the receiver.
Transmission Mode: The transmission modes are described as follows:
Simplex: In the simplex method, data transmission can be performed in one direction only. Of the two devices, one can only transmit over the link while the other can only receive. The sender can only transmit data, and the receiver can only accept it; the receiver cannot reply back to the sender.


There are various examples of the simplex.

Example 1: Keyboard and CPU are the best examples of a simplex. The
keyboard always transmits characters to the CPU (Central processing unit), but
the CPU does not require transmitting characters or data to the keyboard.

Example 2: Printers and computers are one more example of the simplex.
Computers always send data to the printers, but printers are not able to send the
data to the computers. In some cases, printers can also talk back, and this case is
an exception. There is only one lane in the simplex.

Example 3: Teletext is also an example of the simplex. Television companies


broadcast the data. At the same time, the aerials will be used, which will help to
broadcast the data in the form of TV pictures in the people's homes. But people
are not able to send signals back to the aerials. With a handset or remote, we can request a page or channel through the TV's special Teletext adapter. When the requested page is broadcast, the Teletext adapter captures it; the TV never sends a request back to the transmitters.
Example 4: A one-way road is also an example of the simplex. The
traffic on the one-way road can go only in one direction, and the vehicles coming
from the opposite directions are not allowed to drive through that way.

Half Duplex: In half-duplex, the sender and receiver can communicate in both directions, but not at the same time. If the sender sends some data, the receiver can accept it, but at that time the receiver cannot send anything back to the sender; likewise, while the receiver is sending data, the sender cannot send. Half-duplex can be used where we do not need to communicate in both directions at the same time. For example, browsing the web behaves this way: a user sends a web page request to the web server, and the server then processes the request and sends the requested page back to the user.

Example1: One lane bridge can also explain the half-duplex. In a one-lane
bridge, the two-way vehicles will provide the way so that they can cross. At a
time, only one end will send, and the other end will only receive. We can also
perform the error correction that means if the information received by the
receiver is corrupted, then it can again request the sender to retransmit that
information.
Example 2: Walkie-talkie is also a classic example of half-duplex. Both ends
of walkie talkie contain the speakers. We can use each handset or walkie talkie
to either send the message or receive it, but we cannot do both things at the same
time.
Example 3: Railroads often operate in a half-duplex manner because laying a single track is cheaper. The driver of a train must hold it at one end of a single-track section until a train travelling in the other direction has passed through. The printer is also a good example of half-duplex: under IEEE-1284, printers can send messages to the computer, but while the computer is sending characters to the printer, the printer cannot send messages back. Only after the computer has finished sending can the printer send a message back to the computer. The advantage of half-duplex here is cost: a double track or double lane is much more expensive than a single track or single lane.
Full Duplex

In the full-duplex, the sender and the receiver are able to send and receive at the
same time. The communication mode of full-duplex is widely used in the world.
In this mode, signals travelling in one direction are able to share the capacity of
links with signals travelling in the opposite directions. There are two ways in
which sharing can occur, which is described as follow:
Either capacity of the link is divided into the signals going in both directions or
the links have two physically separated transmission parts. Where one part can
be used for sending, and another part can be used for receiving.

If we need communication in both directions constantly, in this case, we will use


the full-duplex mode. The capacity of the channel will be split into two
directions.

Example 1: The telephone network is a good example of full-duplex mode. While using the telephone, two people can talk and hear each other at the same time. An ordinary two-lane highway also helps to explain full-duplex, and where traffic is heavy a railroad may lay a double track to allow trains to pass in both directions. The same idea appears in networking: fibre-optic hubs have two connectors on each port, and full-duplex fibre consists of two cables tied together to form a two-lane roadway.

Example 2: Audio calling or Video calling is also an example of full-duplex.


In audio or video calling, two persons are able to communicate at the same time.
In audio calling, they are able to speak and listen at the same time, and in video
calling, they are able to communicate by seeing each other at the same time. Working in full-duplex mode gives the best performance compared to simplex and half-duplex because it maximizes the amount of available bandwidth.
Q.16 Discuss the types of asynchronous communication in data transmission. (2020-2021) (2021-2022) (CO5, KL3)
Ans: The internal operations in an individual unit of a digital system are
synchronized using clock pulse. It means clock pulse is given to all registers
within a unit. And all data transfer among internal registers occurs
simultaneously during the occurrence of the clock pulse. Now, suppose any two
units of a digital system are designed independently, such as CPU and I/O
interface.
If the registers in the I/O interface share a common clock with CPU registers,
then transfer between the two units is said to be synchronous. But in most cases,
the internal timing in each unit is independent of each other, so each uses its
private clock for its internal registers. In this case, the two units are said to be
asynchronous to each other, and if data transfer occurs between them, this data
transfer is called Asynchronous Data Transfer.

But, the Asynchronous Data Transfer between two independent units requires
that control signals be transmitted between the communicating units so that the
time can be indicated at which they send data. These two methods can achieve
this asynchronous way of data transfer:

Strobe control: A strobe pulse is supplied by one unit to indicate to the other
unit when the transfer has to occur.

Handshaking: This method is commonly used to accompany each data item


being transferred with a control signal that indicates data in the bus. The unit
receiving the data item responds with another signal to acknowledge receipt of
the data.

The strobe pulse and handshaking method of asynchronous data transfer is not
restricted to I/O transfer. They are used extensively on numerous occasions
requiring the transfer of data between two independent units. So, here we
consider the transmitting unit as a source and receiving unit as a destination.

For example, the CPU is the source during output or write transfer and the
destination unit during input or read transfer.

Therefore, the control sequence during an asynchronous transfer depends on whether the transfer is initiated by the source or by the destination. Accordingly, each data transfer method below is considered in two parts: source-initiated and destination-initiated.

Asynchronous Data Transfer Methods

The asynchronous data transfer between two independent units requires that
control signals be transmitted between the communicating units to indicate when
they send the data. Thus, the two methods can achieve the asynchronous way of
data transfer.

1. Strobe Control Method

The Strobe Control method of asynchronous data transfer employs a single


control line to time each transfer. This control line is also known as a strobe, and
it may be achieved either by source or destination, depending on which initiate
the transfer.

Source-initiated strobe: Here the strobe is initiated by the source. The source unit first places the data on the data bus.

After a brief delay to ensure that the data resolve to a stable value, the source
activates a strobe pulse. The information on the data bus and strobe control signal
remains in the active state for a sufficient time to allow the destination unit to
receive the data.

The destination unit uses a falling edge of strobe control to transfer the contents
of a data bus to one of its internal registers. The source removes the data from
the data bus after it disables its strobe pulse. Thus, new valid data will be
available only after the strobe is enabled again.

In this case, the strobe may be a memory-write control signal from the CPU to a
memory unit. The CPU places the word on the data bus and informs the memory
unit, which is the destination.

Destination-initiated strobe: Here the strobe is initiated by the destination. The destination unit first activates the strobe pulse, informing the source to provide the data.

The source unit responds by placing the requested binary information on the data
bus. The data must be valid and remain on the bus long enough for the destination
unit to accept it.

The falling edge of the strobe pulse can again be used to trigger a destination register. The destination unit then disables the strobe, and finally the source removes the data from the data bus after a predetermined time interval.
In this case, the strobe may be a memory read control from the CPU to a memory
unit. The CPU initiates the read operation to inform the memory, which is a
source unit, to place the selected word into the data bus.
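A hedged software model of the source-initiated strobe is sketched below: shared variables stand in for the data bus and the strobe control line, and the timing guarantees described above (data stable before the strobe, held long enough for the destination) are assumed to be provided by the hardware.

```c
#include <stdint.h>

/* Shared lines modelled as variables (illustrative only). */
static volatile uint8_t data_bus;
static volatile int     strobe;      /* 1 = strobe asserted */

/* Source side: place data on the bus first, then assert the strobe. */
void source_put(uint8_t value)
{
    data_bus = value;    /* data must be stable before the strobe     */
    strobe   = 1;        /* assert strobe: "the bus holds valid data" */
}

/* Destination side: on seeing the strobe, latch the bus contents
 * into an internal register. */
uint8_t destination_latch(void)
{
    while (!strobe) { /* wait until the source asserts the strobe */ }
    return data_bus;     /* copy the bus into an internal register */
}

/* Source side, after the destination has had sufficient time:
 * remove the strobe and then the data from the bus. */
void source_release(void)
{
    strobe   = 0;
    data_bus = 0;
}
```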

2. Handshaking Method

The strobe method has the disadvantage that the source unit that initiates the
transfer has no way of knowing whether the destination has received the data
that was placed in the bus. Similarly, a destination unit that initiates the transfer
has no way of knowing whether the source unit has placed data on the bus.

This problem is solved by the handshaking method. The handshaking method introduces a second control signal line that provides a reply to the unit that initiates the transfer.

In this method, one control line is in the same direction as the data flow in the
bus from the source to the destination. The source unit uses it to inform the
destination unit whether there are valid data in the bus.

The other control line is in the other direction from the destination to the source.
This is because the destination unit uses it to inform the source whether it can
accept data. And in it also, the sequence of control depends on the unit that
initiates the transfer. So, it means the sequence of control depends on whether
the transfer is initiated by source and destination.

Source-initiated handshaking: Two handshaking lines are used: "data valid", which is generated by the source unit, and "data accepted", which is generated by the destination unit.

The timing relationship of the exchange of signals between the two units is as follows. The source initiates a transfer by placing data on the bus and enabling its data valid signal. The destination unit then activates the data accepted signal after it accepts the data from the bus.
The source unit then disables its valid data signal, which invalidates the data on
the bus.

After this, the destination unit disables its data accepted signal, and the system
goes into its initial state. The source unit does not send the next data item until
after the destination unit shows readiness to accept new data by disabling the
data accepted signal.

This sequence of events can be described by a sequence diagram, which shows the state in which the system is present at any given time.

Destination-initiated handshaking: Here the two handshaking lines are "data valid", generated by the source unit, and "ready for data", generated by the destination unit. Note that the name of the signal generated by the destination unit has been changed from "data accepted" to "ready for data" to reflect its new meaning.

Here the destination initiates the transfer, so the source unit does not place data on the data bus until it receives a ready-for-data signal from the destination unit. After that, the handshaking process is the same as in the source-initiated case, and the sequence of events in both cases is essentially identical.
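The source-initiated handshake can be modelled the same way, with two flags standing in for the "data valid" and "data accepted" lines. The two routines below would run on the two communicating units; the signalling and timing are provided by hardware in a real system, so this is only an illustrative sketch of the sequence described above.

```c
#include <stdint.h>

/* The two handshaking lines and the data bus, modelled as shared variables. */
static volatile uint8_t bus;
static volatile int     data_valid;     /* driven by the source      */
static volatile int     data_accepted;  /* driven by the destination */

/* Source: place data, raise "data valid", wait for the acknowledgement,
 * then drop "data valid" and wait for the acknowledgement to clear. */
void source_transfer(uint8_t value)
{
    bus        = value;
    data_valid = 1;               /* tell the destination the bus holds valid data */
    while (!data_accepted) { }    /* wait for the destination to take it           */
    data_valid = 0;               /* invalidate the data on the bus                */
    while (data_accepted) { }     /* wait until the destination is ready again     */
}

/* Destination: wait for valid data, latch it, acknowledge,
 * then drop the acknowledgement once the source withdraws the data. */
uint8_t destination_transfer(void)
{
    while (!data_valid) { }       /* wait for "data valid"             */
    uint8_t value = bus;          /* accept the data from the bus      */
    data_accepted = 1;            /* acknowledge receipt               */
    while (data_valid) { }        /* wait for the source to deassert   */
    data_accepted = 0;            /* system returns to initial state   */
    return value;
}
```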

Advantages of Asynchronous Data Transfer

Asynchronous Data Transfer in computer organization has the following


advantages, such as:

o It is more flexible: devices can exchange information at their own pace. In addition, individual data characters are self-contained, so that even if one packet is corrupted, its predecessors and successors are not affected.
o It does not require complex processing by the receiving device, and inconsistency in the data transfer rate does not cause a crisis since the device can keep up with the data stream. This also makes asynchronous transfer suitable for applications where character data is generated irregularly.

Disadvantages of Asynchronous Data Transfer


There are also some disadvantages of using asynchronous data for transfer in
computer organization, such as:
o The success of these transmissions depends on the start bits and their
recognition. Unfortunately, this can be easily susceptible to line
interference, causing these bits to be corrupted or distorted.
o A large portion of the transmitted data is used to control and identify
header bits and thus carries no helpful information related to the
transmitted data. This invariably means that more data packets need to
be sent.
