Unit IV
10 marks
In modern computer systems, transferring data between input/output devices and memory can be slow if the CPU must manage every step of the transfer. A Direct Memory Access (DMA) controller addresses this by allowing I/O devices to transfer data directly to memory with minimal CPU involvement. This increases system efficiency and speeds up data transfers, freeing the CPU to work on other tasks. The DMA controller still requires the usual interface circuits to communicate with the CPU and the I/O devices.
Direct Memory Access uses dedicated hardware to access memory; that hardware is called the DMA controller. Its job is to transfer data between I/O devices and main memory with very little interaction from the processor. The DMA controller is thus a control unit whose task is data transfer.
The DMA controller is a control unit that acts as an interface between the data bus and the I/O devices. As mentioned, it transfers data without the intervention of the processor, although the processor can still initiate and supervise the transfer. The DMA controller also contains an address unit, which generates the memory address and selects the I/O device for the transfer. The block diagram of the DMA controller is shown below.
The figure below shows the block diagram of the DMA controller. The unit communicates with the CPU through the data bus and control lines. The CPU selects a register within the DMA controller through the address bus by enabling the DS (DMA select) and RS (register select) inputs. The RD (read) and WR (write) inputs are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers. When BG is 1, the CPU has relinquished the buses and the DMA controller can communicate directly with memory.
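To make the roles of these registers concrete, the sketch below models a DMA transfer in Python. The class name, the memory list, and the method names are invented for illustration only; they are not part of any real DMA chip's programming interface.

# Minimal, hypothetical model of a DMA controller's register behaviour.
# The register names follow the block diagram (address register, word-count
# register, BG input); everything else is invented for this example.

class DMAController:
    def __init__(self, memory):
        self.memory = memory          # list standing in for main memory
        self.address_register = 0     # next memory address to use
        self.word_count = 0           # words remaining in the transfer
        self.bus_grant = False        # BG input: True = CPU has released the buses

    def cpu_program(self, start_address, count):
        """CPU side: while BG = 0 it can write the DMA registers."""
        if self.bus_grant:
            raise RuntimeError("CPU cannot reach DMA registers while BG = 1")
        self.address_register = start_address
        self.word_count = count

    def transfer_from_device(self, device_words):
        """DMA side: with BG = 1 it moves words straight into memory,
        incrementing the address and decrementing the word count."""
        if not self.bus_grant:
            raise RuntimeError("DMA may not use the buses until BG = 1")
        for word in device_words[:self.word_count]:
            self.memory[self.address_register] = word
            self.address_register += 1
            self.word_count -= 1


memory = [0] * 16
dma = DMAController(memory)
dma.cpu_program(start_address=4, count=3)   # CPU sets up the transfer (BG = 0)
dma.bus_grant = True                        # CPU relinquishes the buses (BG = 1)
dma.transfer_from_device([10, 20, 30])      # words land in memory[4..6] without CPU help
print(memory[4:7])                          # [10, 20, 30]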
Associative Memory
The match register M has m bits, one for each memory word. Each word in memory is compared in parallel with the content of the argument register. The words that match the bits of the argument register set a corresponding bit in the match register. After the matching process, the bits that are set in the match register indicate that their corresponding words have matched.
The key register provides a mask for choosing a particular field or key in the argument word. If the key register contains all 1's, the entire argument is compared with each memory word. Otherwise, only those bits of the argument that have 1's in the corresponding positions of the key register are compared. Thus the key provides a mask for identifying a piece of information that specifies how the reference to memory is made.
The figure below shows the relation between the memory array and the external registers in an associative memory. The cells in the array are denoted by the letter C with two subscripts. The first subscript gives the word number and the second the bit position in the word. Thus cell Cij is the cell for bit j in word i. A bit Aj in the argument register is compared with all the bits in column j of the array, provided that Kj = 1. This is done for all columns j = 1, 2, . . . , n.
If a match occurs between all the unmasked bits of the argument and the bits in word i, the corresponding bit Mi in the match register is set to 1. If one or more unmasked bits of the argument and the word do not match, Mi is cleared to 0.
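In other words, Mi is set to 1 only when Aj = Fij for every bit position j that has Kj = 1. The Python sketch below implements exactly this masked comparison over a small, made-up memory array; the bit patterns are illustrative examples, not values from any particular figure.

# Hypothetical bit-level model of the associative (match) search.
# A word matches when every bit position j with K[j] = 1 agrees with A[j].

def match_register(argument, key, words):
    """Return the match register M: M[i] = 1 if word i matches the
    unmasked bits of the argument, else 0."""
    matches = []
    for word in words:
        ok = all(a == w for a, k, w in zip(argument, key, word) if k == 1)
        matches.append(1 if ok else 0)
    return matches

A = [1, 0, 1, 1, 1, 1, 1, 0, 0]   # argument register (9-bit example)
K = [1, 1, 1, 0, 0, 0, 0, 0, 0]   # key register: only the 3 leftmost bits are compared
memory_words = [
    [1, 0, 0, 1, 1, 1, 1, 0, 0],  # differs in an unmasked position -> no match
    [1, 0, 1, 0, 0, 0, 0, 0, 1],  # agrees in all unmasked positions -> match
]
print(match_register(A, K, memory_words))   # [0, 1]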
Cache Mapping
Cache mapping is the procedure that decides in which cache line a main memory block will be placed. In other words, the pattern used to copy the required main memory content to a specific location of cache memory is called cache mapping. Equivalently, it is the process of extracting, from the main memory address, the cache location and other related information for the required content. Mapping is done on collections of bytes called blocks: a block of main memory is moved into a line of the cache memory. Cache mapping is needed to identify where the required main memory block resides in cache memory. The mapping gives the cache line number where the content is present in the case of a cache hit, or where to bring the content from main memory in the case of a cache miss.
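Because mapping works on blocks, the first step is always to split a main memory byte address into a block number and a byte offset. A minimal sketch, assuming a block size of 16 bytes (any power of two behaves the same way):

# Splitting a main memory byte address into (block number, byte offset).
# The block size is an assumption for this example; it must be a power of two.
BLOCK_SIZE = 16                      # bytes per block -> 4 offset bits

def split_address(byte_address):
    block_number = byte_address // BLOCK_SIZE    # high-order bits
    byte_offset = byte_address % BLOCK_SIZE      # low-order 4 bits
    return block_number, byte_offset

print(split_address(1000))   # (62, 8): byte 1000 is byte 8 of block 62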
The three cache mapping techniques are:
Direct Mapping
Fully Associative Mapping
Set Associative Mapping
Direct Mapping
In direct mapping, the physical address is divided into three parts: tag bits, cache line number, and byte offset. The bits in the cache line number represent the cache line in which the content is present, whereas the tag bits are the identification bits that indicate which block of main memory is present in that line. The bits in the byte offset decide in which byte of the identified block the required content is present.
Cache Line Number = Main Memory block Number % Number of Blocks in Cache
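The formula above can be turned into a small lookup sketch. The cache geometry (8 lines) and the tag-store representation below are assumptions chosen only to make the calculation concrete:

# Direct mapping: each main memory block can live in exactly one cache line.
# The cache size here (8 lines) is an arbitrary assumption for illustration.
NUM_LINES = 8
tag_store = [None] * NUM_LINES       # tag currently held in each cache line

def direct_mapped_access(block_number):
    line = block_number % NUM_LINES            # Cache Line Number = block % lines
    tag = block_number // NUM_LINES            # remaining high-order bits form the tag
    if tag_store[line] == tag:
        return f"hit in line {line}"
    tag_store[line] = tag                      # on a miss, bring the block in
    return f"miss, block {block_number} loaded into line {line}"

print(direct_mapped_access(20))   # miss, 20 % 8 = line 4
print(direct_mapped_access(20))   # hit in line 4
print(direct_mapped_access(12))   # miss, 12 % 8 = line 4, evicts block 20

Note that on a miss the new tag simply overwrites whatever was in that line, since in direct mapping a block has exactly one possible line.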
Fully Associative Mapping: In fully associative mapping, the address is divided into two parts: tag bits and byte offset. The tag bits identify which memory block is present, and the bits in the byte offset field decide in which byte of the block the required content is present. Because a block of main memory can be placed in any line of the cache, there is no line-number field.
Set Associative Mapping: In set associative mapping, the cache lines are grouped into sets, and the address is divided into tag bits, a set number, and a byte offset. A main memory block can be placed in any line of the set it maps to, where:
Cache Set Number = Main Memory block number % Number of sets in cache
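Continuing the same style of sketch, the set-number calculation works like the direct-mapping one, except that a block may occupy any line within its set. The 4-set, 2-way geometry and the FIFO replacement below are assumptions for illustration only:

# Set associative mapping: the cache is divided into sets, and a block may be
# placed in any line of the set it maps to. 4 sets x 2 ways is an assumed geometry.
NUM_SETS = 4
WAYS = 2
sets = [[] for _ in range(NUM_SETS)]   # each set holds up to WAYS tags

def set_associative_access(block_number):
    set_index = block_number % NUM_SETS         # Cache Set Number = block % sets
    tag = block_number // NUM_SETS
    if tag in sets[set_index]:
        return f"hit in set {set_index}"
    if len(sets[set_index]) == WAYS:            # set full: evict the oldest tag (FIFO)
        sets[set_index].pop(0)
    sets[set_index].append(tag)
    return f"miss, block {block_number} placed in set {set_index}"

print(set_associative_access(5))    # miss, 5 % 4 = set 1
print(set_associative_access(9))    # miss, 9 % 4 = set 1 (second way)
print(set_associative_access(5))    # hit in set 1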