
Peripherals

Embedded Systems
(UEC513)
Accessing of I/O Devices
• More than one I/O device may be connected through a
set of three buses (address, data, and control).
• Each device must be assigned a unique address
• Two mapping techniques
– Memory mapped I/O
– I/O mapped I/O
I/O Mapping Techniques
• Two techniques are used to assign addresses to I/O devices
– Memory mapped I/O
– I/O mapped I/O
• Memory Mapped I/O
• Concept: In memory mapped I/O, the same address space is shared between
memory and I/O devices. I/O devices are assigned addresses within the regular
address space used by the system memory.
• Advantages:
• Simplifies the design, as the same instructions used for accessing memory
(load/store instructions) can be used for I/O operations.
• Provides more powerful and flexible ways to manipulate I/O devices using regular
memory access operations.
• Disadvantages:
• Consumes valuable address space that could otherwise be used for regular memory.
• The design must ensure that certain addresses are reserved exclusively for I/O
devices and not used for memory.
• Usage: Common in systems where ease of programming and flexibility are priorities.
Often used in modern microcontrollers and processors.
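• A minimal C sketch of memory-mapped I/O: the device register is reached
through an ordinary pointer, so a plain store performs the I/O write. The
addresses and register names below are hypothetical; a real device's
datasheet defines them.

#include <stdint.h>

/* Hypothetical register addresses for an imagined UART peripheral. */
#define UART_DATA   (*(volatile uint8_t *)0x40001000u)  /* data register   */
#define UART_STATUS (*(volatile uint8_t *)0x40001004u)  /* status register */

void uart_send(uint8_t byte)
{
    UART_DATA = byte;  /* an ordinary store instruction performs the I/O write */
}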
• I/O Mapped I/O (Port Mapped I/O)
• Concept: I/O mapped I/O uses a separate, dedicated address space for I/O
devices. This means that I/O devices have their own unique addresses distinct
from the memory addresses.
• Advantages:
• Does not consume the main address space, preserving it for memory use.
• Helps in clear separation of memory and I/O device address spaces.
• Disadvantages:
• Requires special instructions for I/O operations (e.g., IN, OUT in x86 assembly
language).
• May increase the complexity of the CPU design due to the need for separate
instruction sets and address spaces for I/O.
• Usage: Common in older and simpler systems where conserving address space
is critical, or where the hardware architecture inherently supports this method.
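• A minimal sketch of port-mapped I/O on x86, wrapping the dedicated IN and
OUT instructions in GCC inline assembly. This is for freestanding/kernel
code with port-access privilege; ordinary user programs cannot normally
execute these instructions.

#include <stdint.h>

/* Write one byte to an I/O port using the dedicated OUT instruction. */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Read one byte from an I/O port using the dedicated IN instruction. */
static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}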
Accessing of I/O through Polling
• Normally, the data transfer rate of I/O devices is slower than the
speed of the processor. This creates the need for mechanisms to
synchronize data transfers between them.
• Program-controlled I/O: The processor continuously checks a status
flag to achieve the necessary synchronization. This is called polling.
• Two other mechanisms are used for synchronizing data transfers
between the processor and I/O devices:
– Interrupt-driven I/O
– Direct Memory Access (DMA)
• Polling for I/O Device Access
• Polling is a technique where the processor actively waits for an I/O device to become
ready for data transfer by repeatedly checking its status. This is necessary because the
data transfer rate of I/O devices is typically much slower than the speed of the processor.
• Why Synchronization is Needed
• Processor Speed vs. I/O Speed: The processor operates much faster than I/O devices. For
example, a CPU might execute millions of instructions per second, while an I/O device
like a keyboard or a sensor might operate at much slower speeds.
• Avoiding Data Loss or Overruns: If the processor sends data to an I/O device or reads
data from it without ensuring the device is ready, data can be lost or corrupted.
• Program-Controlled I/O: Polling
• Program-controlled I/O, also known as polling, involves the processor checking the status
of an I/O device to ensure it is ready for a data transfer. Here’s how it typically works:
• Status Flag Check: The I/O device has a status register with flags indicating its state (e.g.,
ready, busy, error).
• Polling Loop: The processor enters a loop where it continuously reads the status flag.
• Condition Check: The processor checks if the device is ready (e.g., ready for input or
output).
• Data Transfer: When the status flag indicates that the device is ready, the processor
performs the data transfer (reading from or writing to the device).
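• A minimal polling sketch in C. STATUS_REG, DATA_REG, and READY_BIT are
hypothetical stand-ins for whatever a real device's datasheet specifies;
the point is the busy-wait loop on the status flag.

#include <stdint.h>

#define STATUS_REG (*(volatile uint8_t *)0x40002000u)  /* hypothetical */
#define DATA_REG   (*(volatile uint8_t *)0x40002004u)  /* hypothetical */
#define READY_BIT  0x01u

uint8_t poll_read(void)
{
    /* Busy-wait until the device sets its ready flag. */
    while ((STATUS_REG & READY_BIT) == 0)
        ;                    /* processor spins here doing no useful work */
    return DATA_REG;         /* device is ready: perform the transfer */
}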

• Advantages of Polling
• Simplicity: Easy to implement in software.
• Control: Provides fine-grained control over timing and order of operations.
• No Additional Hardware Needed: Requires no special hardware beyond
basic I/O registers.
• Disadvantages of Polling
• Inefficient Use of Processor Time: The processor spends a lot of time
checking status flags instead of performing other useful work.
• High Power Consumption: Constant polling keeps the processor active,
leading to higher power consumption, which is critical in battery-powered
devices.
• Poor Responsiveness to Multiple Devices: Polling multiple devices
sequentially can lead to latency in responding to some devices.
Interrupt driven I/O
• I/O devices send a request to the
processor when data is ready
• The processor completes the current
instruction and sends an
acknowledgement to the respective
I/O device
• Example: Suppose the processor is
executing a program and is at the
instruction located at address i
when an interrupt occurs.
• The routine executed in response to
an interrupt request is called the
interrupt-service routine (ISR).
• When an interrupt occurs, control
must be transferred to the
interrupt-service routine.
• After completion of the ISR, the
processor resumes execution of the
interrupted program.
• How Interrupt-Driven I/O Works
• Interrupt Request (IRQ):
• When an I/O device is ready to exchange data (e.g., it has received new data or it is
ready to send data), it sends an interrupt request (IRQ) to the processor.
• Processor Response:
• Upon receiving an interrupt, the processor completes the execution of the current
instruction.
• The processor then sends an acknowledgement to the respective I/O device to
indicate that the interrupt has been recognized.
• Interrupt-Service Routine (ISR):
• The processor saves its current context (the state of registers and the program
counter) to ensure it can return to the same state after handling the interrupt.
• Control is transferred to a special function known as the interrupt-service routine
(ISR). The ISR is a dedicated function designed to handle the specific interrupt.
• The ISR performs the necessary I/O operations, such as reading data from a device or
sending data to it.
• Return to Main Program:
• Once the ISR completes its task, it restores the saved context of the processor.
• Control is returned to the main program, resuming execution from the point where
the interrupt occurred.
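• A sketch of this flow for a hypothetical MCU whose vector table names the
handler UART0_IRQHandler; the register address is also an assumption. On
many cores (e.g., ARM Cortex-M) the hardware saves and restores the
processor context around the handler automatically.

#include <stdint.h>

#define UART_DATA (*(volatile uint8_t *)0x40001000u)  /* hypothetical */

volatile uint8_t last_byte;
volatile uint8_t data_available = 0;

void UART0_IRQHandler(void)   /* runs only when the device raises an IRQ */
{
    last_byte = UART_DATA;    /* reading the register typically clears the request */
    data_available = 1;       /* signal the main program */
}

int main(void)
{
    for (;;) {
        if (data_available) { /* main program is free for other work meanwhile */
            data_available = 0;
            /* process last_byte ... */
        }
    }
}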
• Advantages of Interrupt-Driven I/O
• Efficiency:
• The processor can perform other tasks and only respond to I/O devices when needed,
leading to better overall system performance.
• Reduced Power Consumption:
• Since the processor is not constantly polling I/O devices, it can enter low-power states when
idle, conserving energy.
• Better Responsiveness:
• Interrupts allow for immediate response to I/O events, which is critical for real-time
applications.
• Scalability:
• Interrupt-driven I/O scales well with multiple I/O devices, as each device can signal the
processor independently.

• Disadvantages of Interrupt-Driven I/O
• Complexity:
• Implementing interrupt-driven I/O requires more complex hardware and software design,
including managing the context switching and prioritizing multiple interrupts.
• Overhead:
• Handling interrupts involves overhead due to context saving and restoring, which can affect
performance if interrupts are too frequent.
• Priority Handling:
• When several interrupts arrive together, deciding which to service first
requires a priority scheme, which adds design complexity.
Interrupt service routine (ISR)
• CPU suspends execution of the current program
– Saves the address of the next instruction to be
executed (current contents of PC) and any other data
• CPU sets the PC to the starting address of the ISR
• CPU proceeds to the fetch cycle and fetches the first
instruction of the ISR, which is generally part of the OS
• ISR typically determines the nature of the interrupt and
performs whatever actions are needed.
– For example, the ISR determines which I/O module
generated the interrupt and may branch to a program
that will write more data out to that I/O module.
• Once the ISR is completed, the CPU resumes the execution
of the user program at the point of interruption.
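• In practice, "CPU sets the PC to the starting address of an ISR" is
arranged through a vector table of handler addresses. Below is a simplified
sketch in the ARM Cortex-M style; the section name, stack symbol, and
handler names are assumptions tied to a particular linker script.

#include <stdint.h>

extern uint32_t _stack_top;              /* assumed to come from the linker script */

void Reset_Handler(void)     { for (;;) ; }      /* placeholder bodies */
void UART0_IRQHandler(void)  { /* service the device */ }

/* Entry 0 holds the initial stack pointer; the remaining entries hold
 * handler addresses that the hardware loads into the PC. */
__attribute__((section(".vectors")))
void (* const vector_table[])(void) = {
    (void (*)(void))&_stack_top,   /* conventional cast used in startup code */
    Reset_Handler,                 /* PC loaded from here at reset            */
    /* ... other exception entries omitted in this sketch ... */
    UART0_IRQHandler,              /* PC loaded from here on a UART interrupt */
};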
Daisy Chain in Interrupt
• The interrupt connections of all devices are wired in series
• The first device in the chain has the highest priority
• The same interrupt request (IR) line is shared
• INTA is used to respond to the devices
• Scanning starts from device 1 and proceeds down the chain
• Daisy Chain in Interrupt Systems
• In embedded systems, a daisy chain is a method used for connecting multiple
interrupt sources to a single interrupt request line (IR) in a serial manner. This
approach helps in prioritizing interrupts and managing multiple devices with a
single interrupt line. Here’s an expanded explanation of the daisy chain mechanism:
• Daisy Chain Configuration
• Serial Connection:
• Multiple I/O devices are connected in a linear sequence, forming a chain.
• Each device in the chain is linked to the next one, with the first device having the
highest priority and the last one having the lowest.
• Shared Interrupt Request Line (IR):
• All devices share the same interrupt request line connected to the processor.
• When any device in the chain needs to generate an interrupt, it asserts the shared
IR line.
• Interrupt Acknowledge (INTA):
• The processor uses an interrupt acknowledge line (INTA) to respond to the interrupt
request.
• INTA is used to query the devices in the chain to identify which one generated the
interrupt.
• How Daisy Chain Interrupt Handling Works
• Interrupt Request:
• When a device generates an interrupt, it asserts the IR line.
• Since the IR line is shared, the processor knows that at least one device in the chain
needs attention.
• Interrupt Acknowledge:
• The processor sends an interrupt acknowledge (INTA) signal to the first device in the
chain.
• If the first device generated the interrupt, it will respond to the INTA signal and the
processor will handle its interrupt.
• If the first device did not generate the interrupt, it passes the INTA signal to the next
device in the chain.
• Priority and Propagation:
• This process continues down the chain until the device that generated the interrupt is
found.
• The device that generated the interrupt captures the INTA signal and does not pass it
further, ensuring that the processor handles interrupts in priority order.
• Interrupt Service Routine (ISR):
• The processor executes the interrupt service routine (ISR) for the device that captured
the INTA signal.
• After handling the interrupt, the processor returns to the main program.
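• A software analogy of the INTA propagation, using a hypothetical array of
devices: the acknowledge is offered in chain order, and the first device
with a pending request captures it and stops the propagation.

#include <stdio.h>

#define NUM_DEVICES 4

int pending[NUM_DEVICES] = {0, 0, 1, 1};   /* devices 2 and 3 both request */

int acknowledge_chain(void)
{
    for (int d = 0; d < NUM_DEVICES; d++) {   /* INTA enters at device 0 */
        if (pending[d]) {
            pending[d] = 0;   /* this device captures INTA ...           */
            return d;         /* ... and does not pass it down the chain */
        }
        /* otherwise the device forwards INTA to the next in the chain */
    }
    return -1;                /* spurious interrupt: no device was pending */
}

int main(void)
{
    printf("serviced device %d\n", acknowledge_chain());  /* prints 2 */
    return 0;
}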
• Advantages of Daisy Chain Interrupts
• Simple Hardware Design:
• Daisy chaining requires minimal additional hardware since devices are connected in a simple
serial manner.
• Prioritization:
• Ensures a clear priority order among the devices, as the processor will always handle
interrupts starting from the highest priority device.
• Disadvantages of Daisy Chain Interrupts
• Fixed Priority:
• The priority of devices is fixed by their position in the chain, which may not be flexible for all
applications.
• Propagation Delay:
• There is a potential delay in propagating the INTA signal through the chain, especially if the
generating device is at the end of the chain.
• Scalability:
• Adding more devices increases the chain length, which can increase the delay and complexity
of managing the INTA propagation.
Multiple Interrupts
Two methods can be used to handle multiple interrupts:
• Sequential execution
• Execution as per the priority of the interrupt
• Sequential Execution
• Sequential Execution involves handling multiple interrupts one at a time, in the order they are
received. This approach does not consider the priority of the interrupts; instead, it processes each
interrupt as it arrives.
• How Sequential Execution Works
• Interrupt Arrival:
• Multiple interrupts may be triggered simultaneously or in quick succession.
• Queueing:
• The interrupts are queued in the order they are received.
• Processing:
• The processor handles each interrupt one by one, starting with the first interrupt in the queue.
• The processor completes the interrupt service routine (ISR) for the current interrupt before
moving on to the next one.
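• A small C sketch of sequential handling, simulating arrivals with a FIFO
ring buffer. The queue and IRQ numbers are illustrative, not a real
controller's interface.

#include <stdio.h>

#define QUEUE_SIZE 8

static int queue[QUEUE_SIZE];
static int head = 0, count = 0;

/* Called when an interrupt arrives: record it in arrival order. */
static void enqueue_irq(int irq)
{
    if (count < QUEUE_SIZE)
        queue[(head + count++) % QUEUE_SIZE] = irq;
}

/* Dispatch loop: service strictly in arrival (FIFO) order. */
static void dispatch(void)
{
    while (count > 0) {
        int irq = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        printf("servicing IRQ %d\n", irq);   /* stand-in for running its ISR */
    }
}

int main(void)
{
    enqueue_irq(5);
    enqueue_irq(1);
    enqueue_irq(3);
    dispatch();   /* prints 5, 1, 3: arrival order, not priority */
    return 0;
}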
• Advantages of Sequential Execution
• Simplicity:
• The implementation is straightforward since interrupts are processed in the order they are
received.
• Fairness:
• Each interrupt is guaranteed to be processed, preventing starvation.
• Disadvantages of Sequential Execution
• No Priority Handling:
• Execution as per Priority of Interrupt
• Execution as per Priority of Interrupt involves handling multiple interrupts based on their priority levels. Each
interrupt is assigned a priority, and higher-priority interrupts are processed before lower-priority ones.
• How Priority-Based Execution Works
• Interrupt Arrival:
• Multiple interrupts may be triggered simultaneously or in quick succession.
• Priority Assessment:
• Each interrupt is assigned a priority level.
• Handling High-Priority Interrupts:
• The processor checks the priority of the incoming interrupts.
• If a high-priority interrupt arrives while a lower-priority interrupt is being serviced, the processor may
preempt the current ISR to handle the higher-priority interrupt.
• Interrupt Service Routine (ISR):
• The processor executes the ISR for the highest-priority interrupt.
• Once the ISR for the high-priority interrupt is completed, the processor resumes or starts the next highest-
priority interrupt.
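• A small C sketch of priority selection, using the common (but not
universal) convention that a lower interrupt number means higher priority.
Preemption of a running ISR is not modeled here.

#include <stdint.h>
#include <stdio.h>

static uint32_t pending;   /* bit n set = IRQ n is pending */

static void raise_irq(int n) { pending |= (1u << n); }

/* Lowest set bit = highest priority under this convention. */
static int highest_priority(void)
{
    for (int n = 0; n < 32; n++)
        if (pending & (1u << n))
            return n;
    return -1;   /* nothing pending */
}

int main(void)
{
    raise_irq(7);
    raise_irq(2);
    int n = highest_priority();
    pending &= ~(1u << n);                   /* clear before servicing */
    printf("servicing IRQ %d first\n", n);   /* prints 2, not 7 */
    return 0;
}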
• Advantages of Priority-Based Execution
• Timeliness:
• Ensures that critical tasks are handled promptly, improving system responsiveness.
• Flexibility:
• Allows for dynamic management of interrupts based on their importance.
• Disadvantages of Priority-Based Execution
• Complexity:
• Implementing a priority-based interrupt system is more complex due to the need for priority management
and potential preemption.
• Overhead:
• Priority evaluation and preemption involve extra context saving and
restoring, which adds processing overhead.
Direct Memory Access
• A special control unit that transfers a block of data directly
between I/O and memory, bypassing the processor
• It uses the buses of the processor
• It is not a processor, so it has no instruction set
• Direct Memory Access (DMA)
• Direct Memory Access (DMA) is a method used in computer systems to transfer data directly
between I/O devices and memory, bypassing the processor to improve efficiency and
performance. This technique is particularly useful for large data transfers, such as those
involving disk drives, network cards, and audio/video devices.
• Key Features of DMA
• Special Control Unit:
• DMA involves a special hardware component known as the DMA controller. This controller
manages the data transfer between memory and I/O devices without involving the CPU for
each byte of data transferred.
• Bypassing the Processor:
• During a DMA operation, the processor is not involved in the actual data transfer, allowing it
to perform other tasks. This frees up the CPU from the overhead of managing multiple data
transfers, significantly enhancing system performance.
• Bus Utilization:
• The DMA controller uses the system buses (address bus, data bus, and control bus) to
transfer data. While the DMA transfer is in progress, the bus is temporarily controlled by the
DMA controller instead of the CPU.
• No Instruction Set:
• The DMA controller is not a processor and does not execute instructions. Instead, it operates
based on the configuration provided by the CPU, which specifies the source and destination
addresses, the amount of data to be transferred, and the transfer mode.
Direct Memory Access
• DMA can transfer a block of data from I/O to memory,
memory to I/O, or memory to memory without any intervention
from the processor
• To initiate a DMA transfer, the processor loads the following
information into the DMA controller:
– Starting address
– Number of words to be transferred
– Direction of transfer
– Mode of transfer
• After the completion of the DMA transfer, the controller informs
the processor by raising an interrupt signal
• Initialization:
• The CPU initializes the DMA controller by setting up the source and destination
addresses, the size of the data block to be transferred, and the type of transfer (e.g.,
read or write).
• Requesting the Bus:
• The DMA controller requests control of the system bus from the CPU. This request is
typically done through a signal called Bus Request (BR).
• Bus Arbitration:
• The CPU grants control of the bus to the DMA controller using a signal called Bus
Grant (BG). During this period, the CPU relinquishes control of the bus.
• Data Transfer:
• The DMA controller performs the data transfer directly between the I/O device and
memory. It uses the system buses to read data from the source and write it to the
destination.
• Completion and Interrupt:
• Once the data transfer is complete, the DMA controller releases control of the bus
back to the CPU.
• The DMA controller may generate an interrupt to notify the CPU that the transfer is
complete, allowing the CPU to proceed with the next operation.
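• A CPU-side sketch of this setup sequence for a hypothetical DMA
controller. The register layout and base address are invented, but the
fields (source, destination, count, control) are typical of real devices.

#include <stdint.h>

typedef struct {
    volatile uint32_t src;     /* starting source address      */
    volatile uint32_t dst;     /* starting destination address */
    volatile uint32_t count;   /* number of words to transfer  */
    volatile uint32_t ctrl;    /* direction, mode, start bit   */
} dma_channel_t;

#define DMA0 ((dma_channel_t *)0x40003000u)   /* hypothetical base address */
#define DMA_CTRL_MEM_TO_IO (1u << 1)
#define DMA_CTRL_START     (1u << 0)

void dma_start(uint32_t src, uint32_t dst, uint32_t words)
{
    DMA0->src   = src;
    DMA0->dst   = dst;
    DMA0->count = words;
    /* Writing the start bit hands the transfer to the controller; the CPU
     * is then free, and completion is signalled by an interrupt. */
    DMA0->ctrl  = DMA_CTRL_MEM_TO_IO | DMA_CTRL_START;
}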
Operation of DMA with CPU
Thanks
