Notes 20

The document discusses three methods of I/O operations: Programmed I/O, Interrupt Driven I/O, and Direct Memory Access (DMA). Programmed I/O requires the CPU to manage data transfers directly, making it simple but inefficient for high-speed operations, while Interrupt Driven I/O allows the CPU to perform other tasks during I/O operations, improving efficiency at the cost of overhead. DMA further enhances performance by enabling data transfers between devices and memory without CPU intervention, making it ideal for high-speed data operations.


Computer Architecture

and Organization
BECSE-IV
I/O
• Programmed I/O
• Interrupt Driven I/O
• DMA

Lecture notes: Dr Mumtaz Ali Kaloi, CSE Dept.


Programmed I/O
• Programmed I/O (Input/Output) is a method of data transfer between a
peripheral device and a computer's CPU that relies on the processor to
manage each data transfer operation.
• In Programmed I/O, the CPU directly controls the data transfer process,
issuing commands to the device and waiting for the device to complete the
operation before proceeding with other tasks.



How does Programmed I/O typically work?
• Initiation: The CPU initiates an I/O operation by sending a command or control signal to
the peripheral device.
• This command specifies the operation to be performed, such as reading data from the
device or writing data to it.
• Data transfer: After sending the command, the CPU waits for the peripheral device to
complete the requested operation.
• For example, if the CPU is reading data from the device, it waits for the device to provide
the requested data.
• If the CPU is writing data to the device, it waits for confirmation that the data has been
successfully written.



How does Programmed I/O typically work?
• Completion: Once the data transfer operation is complete, the peripheral
device typically signals the CPU to indicate that it has finished.
• This signal might be in the form of an interrupt or a status flag that the CPU
polls periodically.
• Data processing: After receiving confirmation that the data transfer is
complete, the CPU can proceed with processing the data as needed.
• This might involve performing calculations, storing the data in memory, or
initiating further I/O operations.
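The four steps above can be sketched as a polling loop. This is a toy simulation, not a real hardware API: the `Device` object, the `STATUS_READY` bit, and the register method names are illustrative assumptions.

```python
# Hypothetical sketch of a programmed-I/O read: the CPU busy-waits on a
# status register before every byte transfer. Device, STATUS_READY, and
# the register names are assumptions for illustration.

STATUS_READY = 0x01  # assumed "data ready" bit in the device status register

class Device:
    """Toy peripheral that becomes ready after a few status polls."""
    def __init__(self, data):
        self._data = list(data)
        self._polls_until_ready = 3

    def status(self):
        if self._polls_until_ready > 0:
            self._polls_until_ready -= 1
            return 0x00                      # still busy
        return STATUS_READY

    def read_data_register(self):
        self._polls_until_ready = 3          # next byte needs fresh polling
        return self._data.pop(0)

def programmed_io_read(device, count):
    """Initiation, busy-wait, transfer, and processing all run on the CPU."""
    buffer = []
    for _ in range(count):
        while not (device.status() & STATUS_READY):
            pass                             # CPU does no useful work here
        buffer.append(device.read_data_register())
    return buffer

print(programmed_io_read(Device(b"HI"), 2))  # -> [72, 73]
```

The busy-wait loop is exactly why programmed I/O wastes CPU cycles: every poll that returns "busy" is time the processor could have spent on other work.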
Pros and Cons
• Programmed I/O is simple and easy to implement, as it relies solely on the
CPU to manage data transfers between the CPU and peripherals.
• However, it can be inefficient for high-speed or high-volume data transfer
operations because it requires the CPU to spend a significant amount of time
managing each transfer.
• As a result, Programmed I/O is typically used for low-speed or occasional
I/O operations where efficiency is not a primary concern.



Pros and Cons
• In contrast to Programmed I/O, other methods of I/O transfer, such as
DMA (Direct Memory Access) and interrupt-driven I/O, offload some of
the data transfer management tasks from the CPU to dedicated hardware
controllers or interrupt handlers, allowing for more efficient data transfer
operations.



Interrupt-Driven I/O
• Interrupt-driven I/O (Input/Output) is a method of data transfer between a
peripheral device and a computer's CPU that relies on interrupts to manage the data
transfer process.
• In this approach, the CPU initiates an I/O operation and then continues executing
other tasks while waiting for the peripheral device to complete the operation.
• When the device has finished the operation, it generates an interrupt signal to the
CPU, which temporarily suspends its current task and services the interrupt.



How does interrupt-driven I/O typically work?
• Initiation: The CPU initiates an I/O operation by sending a command or
control signal to the peripheral device, specifying the operation to be
performed (e.g., read or write data).
• Asynchronous operation: Unlike programmed I/O where the CPU actively
waits for the operation to complete, in interrupt-driven I/O, the CPU
continues executing other tasks while waiting for the device to complete the
operation.



How does interrupt-driven I/O typically work?
• Interrupt generation: Once the peripheral device has completed the requested
operation (e.g., data transfer), it generates an interrupt signal to the CPU.
• The interrupt signal is typically delivered over a dedicated interrupt request (IRQ) line; this asynchronous signalling is what distinguishes interrupt-driven I/O from polling a status flag.
• Interrupt service routine (ISR): Upon receiving the interrupt signal, the CPU
temporarily suspends its current task and transfers control to a special piece of code
called an interrupt service routine (ISR) or interrupt handler.
• The ISR is responsible for handling the interrupt, acknowledging the completion of
the operation, and processing the data transferred by the device.
How does interrupt-driven I/O typically work?
• Data processing: After servicing the interrupt and processing the data as
necessary, the CPU resumes execution of the interrupted task or moves on
to other tasks, depending on the system's priority and scheduling algorithm.
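The flow above can be modelled with a callback standing in for the hardware interrupt line. This is a minimal sketch under that assumption; names like `register_isr` and `start_read` are illustrative, not a real driver API.

```python
# Sketch of the interrupt-driven flow: initiation, asynchronous operation,
# interrupt generation, and the ISR. A Python callback stands in for the
# hardware interrupt line; all names here are illustrative assumptions.

class InterruptDrivenDevice:
    def __init__(self, data):
        self._data = data
        self._isr = None

    def register_isr(self, handler):
        self._isr = handler            # CPU installs its interrupt handler

    def start_read(self):
        # 1. Initiation: CPU issues the command and returns immediately.
        # (In hardware the transfer runs asynchronously; this toy model
        # completes it at once and raises the interrupt.)
        self._raise_interrupt(self._data)

    def _raise_interrupt(self, data):
        # 3. Interrupt generation: device signals completion to the CPU.
        self._isr(data)

received = []

def isr(data):
    # 4. Interrupt service routine: acknowledge and process the data.
    received.append(data)

dev = InterruptDrivenDevice(b"block-0")
dev.register_isr(isr)
dev.start_read()
# 2. Asynchronous operation: between start_read and the interrupt, a real
# CPU would execute other work instead of spinning on a status flag.
print(received)  # -> [b'block-0']
```

The key contrast with the programmed-I/O loop is that no code here ever polls: control returns to the CPU immediately after initiation, and the device pushes completion to the ISR.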



Pros and Cons
• Interrupt-driven I/O offers several advantages over programmed I/O,
including improved system responsiveness and efficiency.
• By allowing the CPU to perform other tasks while waiting for I/O
operations to complete, interrupt-driven I/O can increase overall system
throughput and utilization.
• Additionally, it simplifies the programming model by abstracting away the
low-level details of I/O operations and allowing the CPU to handle
interrupts asynchronously.
Pros and Cons
• However, interrupt-driven I/O also introduces overhead associated with
interrupt handling, context switching, and synchronization, which can impact
system performance, especially in high-speed or real-time applications.
• As a result, interrupt-driven I/O is typically used in scenarios where
responsiveness and multitasking are important and where the overhead of
interrupt handling is acceptable.



Direct Memory Access (DMA)
• Direct Memory Access is a method for transferring data between devices
(such as storage, network interfaces, or sound cards) and memory without
involving the CPU directly in every data transfer operation.
• DMA offloads data transfer tasks from the CPU, allowing it to focus on
other processing tasks while data is being transferred between peripherals
and memory.



How does DMA typically work?
• Initiation: The CPU initiates a DMA transfer by setting up the DMA
controller with parameters such as the source and destination addresses of
the data to be transferred, the transfer size, and the direction of the transfer
(e.g., read from device to memory or write from memory to device).
• Arbitration: If multiple devices are capable of DMA transfers, the DMA
controller arbitrates between them to determine which device will have
access to the memory bus for the current transfer. This is typically done
using a priority scheme or round-robin scheduling.
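The round-robin scheduling just mentioned can be sketched in a few lines. This is a toy arbiter, assuming a simple list of per-channel request flags; real controllers implement this in hardware.

```python
# Toy round-robin bus arbiter: grant the memory bus to the next requesting
# DMA channel after the one granted last. The request-flag list is an
# illustrative assumption, not a specific controller's interface.

def round_robin_arbitrate(requests, last_granted):
    """Return the index of the next requesting channel, or None."""
    n = len(requests)
    for offset in range(1, n + 1):           # scan each channel once
        candidate = (last_granted + offset) % n
        if requests[candidate]:
            return candidate
    return None                               # no channel is requesting

# Channel 0 was granted last; channels 0 and 2 are requesting now.
print(round_robin_arbitrate([True, False, True], last_granted=0))  # -> 2
```

Because the scan starts just past the last winner, every requesting channel is eventually served, which is the fairness property that distinguishes round-robin from a fixed-priority scheme.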
How does DMA typically work?
• Transfer: Once the DMA controller gains control of the memory bus, it
transfers data directly between the device and memory without involving the
CPU. The CPU is free to perform other tasks while the transfer is in
progress.
• Completion: After the data transfer is complete, the DMA controller may
generate an interrupt to notify the CPU of the transfer's completion.
Alternatively, the CPU may periodically poll the DMA controller to check the
status of ongoing transfers.
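Putting the four steps together, a DMA transfer can be sketched as follows. The register names (`src`, `dst`, `count`, `direction`) and the memory-as-bytearray model are assumptions for illustration, not a specific controller's programming interface.

```python
# Hedged sketch of DMA: the CPU programs a channel descriptor, the
# controller moves the bytes without CPU involvement, and a completion
# callback stands in for the interrupt. All names are illustrative.

from dataclasses import dataclass

memory = bytearray(32)                        # toy system memory
device_buffer = bytearray(b"sensor-data")     # data sitting in a peripheral

@dataclass
class DMAChannel:
    src: int         # source offset
    dst: int         # destination offset
    count: int       # bytes to transfer
    direction: str   # "dev_to_mem" or "mem_to_dev"

def dma_transfer(chan, on_complete):
    """Controller moves data directly, then raises a completion interrupt."""
    if chan.direction == "dev_to_mem":
        memory[chan.dst:chan.dst + chan.count] = \
            device_buffer[chan.src:chan.src + chan.count]
    else:
        device_buffer[chan.dst:chan.dst + chan.count] = \
            memory[chan.src:chan.src + chan.count]
    on_complete()                             # completion interrupt to CPU

# Initiation: CPU sets up the channel, then is free for other work.
done = []
dma_transfer(DMAChannel(src=0, dst=4, count=11, direction="dev_to_mem"),
             on_complete=lambda: done.append(True))

print(bytes(memory[4:15]))  # -> b'sensor-data'
print(done)                 # -> [True]
```

Note that `dma_transfer` never hands the data through the CPU: the copy goes straight between the device buffer and memory, which is the whole point of DMA.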
Advantages
• Efficiency: DMA transfers data between devices and memory at high speeds,
without the overhead of CPU involvement. This can significantly improve overall
system performance, especially for large data transfers or high-speed devices.
• Offloading CPU: By offloading data transfer tasks to the DMA controller, the
CPU is freed up to perform other tasks concurrently. This improves system
responsiveness and multitasking capabilities.
• Reduced latency: DMA transfers can occur asynchronously, reducing the latency
associated with CPU-managed I/O operations.



Where is DMA used?
• DMA is commonly used in various computing systems, including personal
computers, embedded systems, and servers, to facilitate efficient data transfer
between devices and memory.
• It plays a crucial role in achieving high-performance I/O operations and
system throughput.



Thanks…

