Architecture - Advanced High-Performance Bus (AMBA-AHB) Compliant Direct Memory Access (DMA) Controller
In this project, we plan to implement an Advanced Microcontroller Bus Architecture - Advanced High-performance Bus (AMBA-AHB) compliant Direct Memory Access (DMA) controller.
About AMBA-AHB:
The Advanced Microcontroller Bus Architecture (AMBA) specification defines an on-chip communications standard for designing high-performance embedded microcontrollers. The AMBA AHB is for high-performance, high clock frequency system modules. The AHB acts as the high-performance system backbone bus. AHB supports the efficient connection of processors, on-chip memories and off-chip external memory interfaces with low-power peripheral macro-cell functions. AHB is also specified to ensure ease of use in an efficient design flow using synthesis and automated test techniques.
About DMA:
DMA aids the data transfer between memory and peripherals, thereby reducing the load on the CPU. DMA thus enables more efficient use of interrupts, increases data throughput, and can reduce hardware costs by eliminating the need for peripheral-specific FIFO buffers. This makes DMA an important module for any System-on-a-Chip (SoC), where it can substantially increase overall performance.
Advantages of DMA:
DMA is fast because a dedicated piece of hardware moves the data from one location to another: only one or two bus read/write cycles are required per piece of data transferred, so DMA can approach the maximum data transfer speed of the bus, which is useful for high-speed data acquisition devices.
DMA also minimizes the latency of servicing a data acquisition device, because the dedicated hardware responds more quickly than an interrupt handler and the transfer time is short; lower latency in turn reduces the amount of temporary storage (memory) required on the I/O device. Finally, DMA off-loads the processor: since the processor does not handle the data transfer itself, it remains available for other processing activity. In systems where the processor primarily operates out of its cache, the data transfer proceeds in parallel with computation, increasing overall system utilization.