[Figure: Single-bus structure connecting the processor, the memory, and an input device.]
EXPLANATION:
This program reads a line of characters from the keyboard and stores it in a memory buffer starting at location LINE. It then calls the subroutine PROCESS to process the input line. As each character is read, it is echoed back to the display.
Register R0 is used as a pointer to the memory buffer area. The contents of R0 are updated using the Autoincrement addressing mode, so that successive characters are stored in successive memory locations.
Each character is checked to see whether it is the Carriage Return (CR) character, which has the ASCII code 0D (hex). If it is, a Line Feed character (ASCII code 0A) is sent to move the cursor one line down on the display, and the subroutine PROCESS is called. Otherwise, the program loops back to wait for another character from the keyboard.
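The read-echo loop described above can be sketched in C. This is a minimal illustration rather than the original assembly program; the register addresses and the names DATAIN, DATAOUT, SIN, and SOUT (keyboard/display data registers and their status flags) are assumptions made for the sake of the sketch.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers (addresses are assumptions). */
#define DATAIN  (*(volatile uint8_t *)0x4000)  /* keyboard data register */
#define DATAOUT (*(volatile uint8_t *)0x4004)  /* display data register  */
#define SIN     (*(volatile uint8_t *)0x4008)  /* keyboard status flag   */
#define SOUT    (*(volatile uint8_t *)0x400C)  /* display status flag    */

#define CR 0x0D  /* Carriage Return */
#define LF 0x0A  /* Line Feed       */

extern void process(uint8_t *line);  /* the PROCESS subroutine */

void read_line(uint8_t *line)        /* 'line' plays the role of R0 */
{
    uint8_t *start = line;           /* remember the buffer LINE */
    uint8_t ch;
    do {
        while (SIN == 0) ;           /* wait until a character is ready  */
        ch = DATAIN;                 /* read the character               */
        *line++ = ch;                /* store it; pointer autoincrements */
        while (SOUT == 0) ;          /* wait until the display is ready  */
        DATAOUT = ch;                /* echo the character               */
    } while (ch != CR);              /* loop until Carriage Return       */
    while (SOUT == 0) ;
    DATAOUT = LF;                    /* Line Feed moves the cursor down  */
    process(start);                  /* call PROCESS on the input line   */
}
```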
DMA - It is a technique used for high-speed I/O devices. Here, the device interface transfers data directly to or from the memory without continuous involvement by the processor.
INTERRUPTS
When a program enters a wait loop, it repeatedly checks the device status. During this period, the processor performs no useful computation. There are many situations where other tasks could be performed while waiting for an I/O device to become ready. To allow this, we can arrange for the I/O device to alert the processor when it becomes ready.
Interrupt Hardware:
A single interrupt request line may be used to serve n devices. All devices are connected to the line via switches to ground.
To request an interrupt, a device closes its associated switch; the voltage on the INTR line then drops to 0 (zero).
If all the interrupt request signals (INTR1 to INTRn) are inactive, all switches are open and the voltage on the INTR line is equal to Vdd.
When a device requests an interrupt by closing its switch, the voltage on the line drops to 0, causing the INTR request signal received by the processor to go to 1.
Since closing one or more switches causes the line voltage to drop to 0, the value of INTR is the logical OR of the requests from individual devices, i.e.,
INTR = INTR1 + INTR2 + ... + INTRn
INTR is the name given to the request signal on the common line; it is active in the low-voltage state.
The arrival of an interrupt request from an external device causes the processor to suspend the execution of one program and start the execution of another, because the interrupt may alter the sequence of events to be executed.
INTR remains active during the execution of the Interrupt Service Routine (ISR).
There are three mechanisms to solve the problem of infinite looping that occurs when a still-active INTR signal causes successive interruptions. The following is a typical scenario.
The processor has a special interrupt request line for which the interrupt-handling circuit responds only to the leading edge of the signal. Such a line is said to be edge-triggered.
When several devices request interrupts at the same time, some questions arise: how does the processor identify the interrupting device, and how are simultaneous requests resolved? The schemes below address these questions.
Polling Scheme:
If two devices have activated the interrupt request line, the ISR for the selected device (the first device found) is completed first, and then the second request is serviced.
The simplest way to identify the interrupting device is to have the ISR poll all the I/O devices; the first device encountered with its IRQ bit set is the device to be serviced.
IRQ (Interrupt Request) -> when a device raises an interrupt request, the IRQ bit in its status register is set to 1.
Advantage: It is easy to implement.
Disadvantage: Time is spent interrogating the IRQ bits of devices that may not be requesting any service.
Vectored Interrupt:
Here the device requesting an interrupt identifies itself to the processor by sending a special code over the bus, and the processor then starts executing the corresponding ISR.
The code supplied by the device indicates the starting address of the ISR for that device.
The code length typically ranges from 4 to 8 bits.
The location pointed to by the interrupting device's code is used to store the starting address of the ISR.
The processor reads this address, called the interrupt vector, and loads it into the PC.
The interrupt vector may also include a new value for the Processor Status register.
When the processor is ready to receive the interrupt vector code, it activates the interrupt acknowledge (INTA) line.
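The dispatch through an interrupt vector can be pictured as an indexed table of ISR entry points. A minimal conceptual sketch, assuming a hypothetical 16-entry vector table indexed by a 4-bit device code:

```c
typedef void (*isr_t)(void);          /* an ISR entry point */

#define NUM_VECTORS 16                /* 4-bit device code => 16 entries */

/* Hypothetical vector table filled in at system initialization. */
extern isr_t vector_table[NUM_VECTORS];

/* What the hardware conceptually does on an interrupt: use the code
 * sent by the device to index the table, and "load the PC" with the
 * starting address of that device's ISR.                            */
void dispatch_interrupt(unsigned device_code)
{
    vector_table[device_code & (NUM_VECTORS - 1)]();
}
```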
Interrupt Nesting:
Multiple Priority Scheme:
In a multiple-level priority scheme, we assign a priority level to the processor that can be changed under program control.
The priority level of the processor is the priority of the program that is currently being executed.
The processor accepts interrupts only from devices that have priorities higher than its own.
When the execution of an ISR for some device is started, the priority of the processor is raised to that of the device.
This action disables interrupts from devices at the same or a lower priority level.
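The acceptance rule can be stated in a few lines of C. This is a conceptual sketch of the priority test, not a hardware description:

```c
/* Accept an interrupt only if the requesting device's priority is
 * strictly higher than the processor's current priority level.   */
int accept_interrupt(int device_priority, int *processor_priority)
{
    if (device_priority > *processor_priority) {
        /* Raise the processor priority to that of the device for the
         * duration of the ISR; same-or-lower requests are now masked. */
        *processor_priority = device_priority;
        return 1;   /* accepted */
    }
    return 0;       /* ignored for now */
}
```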
Privileged Instruction:
The processor priority is usually encoded in a few bits of the Processor Status word. It can be changed by program instructions that write into the PS. Such instructions are called privileged instructions; they can be executed only while the processor is in the supervisor mode.
The processor is in the supervisor mode only when executing OS routines.
It switches to the user mode before beginning to execute application programs.
Privileged Exception:
An attempt to execute a privileged instruction while in the user mode leads to a privileged exception, which transfers control to the operating system.
Simultaneous Requests:
Daisy Chain:
The interrupt request line INTR is common to all devices. The interrupt acknowledge line INTA is connected in a daisy-chain fashion, such that the INTA signal propagates serially through the devices.
When several devices raise an interrupt request, INTR is activated and the processor responds by setting the INTA line to 1. This signal is first received by device 1.
Device 1 passes the signal on to device 2 only if it does not require any service.
If device 1 has a pending request for an interrupt, it blocks the INTA signal and proceeds to put its identification code on the data lines.
Therefore, the device that is electrically closest to the processor has the highest priority.
Merits:
It requires fewer wires than individual connections.
Arrangement of Priority Groups:
Here the devices are organized in groups, and each group is connected at a different priority level.
Within a group, devices are connected in a daisy chain.
Execution of the ISR:
◻ Read the input character from the keyboard input data register. This causes the interface circuit to remove its interrupt request.
◻ Store the character in the memory location pointed to by PNTR, and increment PNTR.
◻ When the end of the line is reached, disable keyboard interrupts and inform the main program.
◻ Return from interrupt.
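These steps can be sketched as a C interrupt handler. A minimal illustration that reuses the hypothetical DATAIN register from the earlier sketch; the CONTROL register, its KBD_IE bit, and the line_ready flag are likewise assumptions:

```c
#include <stdint.h>

#define DATAIN  (*(volatile uint8_t *)0x4000)  /* keyboard data register (assumed)  */
#define CONTROL (*(volatile uint8_t *)0x4010)  /* device control register (assumed) */
#define KBD_IE  0x01                           /* keyboard interrupt-enable bit     */
#define EOL     0x0D                           /* Carriage Return marks end of line */

volatile uint8_t *PNTR;       /* pointer into the line buffer         */
volatile int line_ready = 0;  /* set to inform the main program       */

void keyboard_isr(void)
{
    uint8_t ch = DATAIN;      /* reading DATAIN removes the interrupt request */
    *PNTR++ = ch;             /* store the character and increment PNTR       */
    if (ch == EOL) {
        CONTROL &= ~KBD_IE;   /* disable further keyboard interrupts          */
        line_ready = 1;       /* inform the main program                      */
    }
    /* return-from-interrupt happens when this handler returns */
}
```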
Exceptions:
The term exception refers to any event that causes an interruption; an interrupt request from an I/O device is only one kind of exception.
Kinds of exception:
Other kinds, discussed below, include the exceptions used for debugging (trace and breakpoint) and the privileged exception.
Debugging:
System software includes a program called a debugger, which helps the programmer find errors in a program.
The debugger uses exceptions to provide two important facilities. They are:
Trace
Breakpoint
Trace Mode:
When the processor operates in the trace mode, an exception occurs after the execution of every instruction, using the debugging program as the exception-service routine.
Breakpoint:
Here the program being debugged is interrupted only at specific points selected by the user.
An instruction called Trap (or software interrupt) is usually provided for this purpose.
While debugging, the user may request that program execution be interrupted after instruction i. When execution reaches that point, the program is interrupted and the user can examine the memory and register contents.
DIRECT MEMORY ACCESS (DMA):
To initiate the transfer of a block of words, the processor sends the DMA controller the following information:
Starting address
Number of words in the block
Direction of transfer.
When a block of data is transferred, the DMA controller increments the memory address for successive words and keeps track of the number of words transferred; when the transfer is complete, it informs the processor by raising an interrupt signal.
While the DMA transfer is taking place, the program that requested the transfer cannot continue, but the processor can be used to execute another program.
After the DMA transfer is completed, the processor returns to the program that requested the transfer.
R/W - Determines the direction of transfer.
When R/W = 1, the DMA controller performs a Read operation (it transfers data from the memory to the I/O device).
When R/W = 0, the DMA controller performs a Write operation.
Done flag = 1: the controller has completed transferring a block of data and is ready to receive another command.
IE = 1: the controller raises an interrupt (Interrupt Enable) after it has completed transferring the block of data.
IRQ = 1: indicates that the controller has requested an interrupt.
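These registers and flags can be pictured as a small per-channel register file. A conceptual C sketch, with bit positions chosen arbitrarily for illustration (they do not correspond to any real controller):

```c
#include <stdint.h>

/* Status/control bit positions (illustrative, not from a real device). */
#define DMA_RW   (1u << 0)  /* 1 = read from memory, 0 = write to memory */
#define DMA_DONE (1u << 1)  /* block transfer complete                   */
#define DMA_IE   (1u << 2)  /* raise an interrupt when done              */
#define DMA_IRQ  (1u << 3)  /* interrupt has been requested              */

/* The registers the processor writes to set up one DMA channel. */
struct dma_channel {
    uint32_t start_addr;    /* starting memory address      */
    uint32_t word_count;    /* number of words in the block */
    uint32_t status_ctrl;   /* R/W, Done, IE, IRQ bits      */
};

/* Program a channel for a memory-to-device (read) transfer. */
void dma_start_read(struct dma_channel *ch, uint32_t addr, uint32_t count)
{
    ch->start_addr  = addr;
    ch->word_count  = count;
    ch->status_ctrl = DMA_RW | DMA_IE;  /* read direction, interrupt on completion */
}
```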
A DMA controller connects a high-speed network to the computer bus. The disk controller, which controls two disks, also has DMA capability and provides two DMA channels.
To start a DMA transfer of a block of data from the main memory to one of the disks, the program writes the address and word-count information into the registers of the corresponding channel of the disk controller.
When the DMA transfer is completed, this is recorded in the status and control register of the DMA channel, i.e., Done bit = IRQ = IE = 1.
Cycle Stealing:
Requests by DMA devices for using the bus have higher priority than processor requests.
Top priority is given to high-speed peripherals such as:
Disk
High-speed network interface and graphics display devices.
Since the processor originates most memory access cycles, the DMA controller can be said to steal memory cycles from the processor.
This interleaving technique is called cycle stealing.
Burst Mode:
The DMA controller may be given exclusive access to the main memory to transfer a block of data without interruption. This is known as Burst (or Block) mode.
Bus Master:
The device that is allowed to initiate data transfers on the bus at any given time is
called the bus master.
Bus Arbitration:
It is the process by which the next device to become the bus master is selected and
the bus mastership is transferred to it.
Types:
There are two approaches to bus arbitration: centralized arbitration and distributed arbitration.
Centralized Arbitration:
Here the processor is the bus master, and it may grant bus mastership to one of the DMA controllers.
A DMA controller indicates that it needs to become the bus master by activating the Bus Request line (BR), which is an open-drain line.
The signal on BR is the logical OR of the bus requests from all devices connected to it.
When BR is activated, the processor activates the Bus Grant signal (BG1), indicating to the DMA controllers that they may use the bus when it becomes free.
This signal is connected to all devices in a daisy-chain arrangement.
If DMA controller 1 is requesting the bus, it blocks the propagation of the grant signal to the other devices, and it indicates to all devices that it is using the bus by activating the open-collector line Bus Busy (BBSY).
Distributed Arbitration:
It means that all devices waiting to use the bus have equal responsibility in carrying out the arbitration process, without using a central arbiter.
Example:
Assume two devices A & B, with IDs 5 (0101) and 6 (0110), are requesting the use of the bus.
Device A transmits the pattern 0101 and B transmits 0110. The code seen by both devices is 0111, the bit-wise OR of the two patterns.
Each device compares the pattern on the arbitration lines to its own ID, starting from the MSB.
If it detects a difference at any bit position, it disables its drivers at that bit position and for all lower-order bits. It does this by placing 0 at the input of these drivers.
In our example, A detects a difference on line ARB1; hence it disables its drivers on lines ARB1 & ARB0.
This causes the pattern on the arbitration lines to change to 0110, which means that B has won the contention.
Note that since the code on the arbitration lines is 0111 for a short period, device B may temporarily disable its driver on line ARB0. However, it will enable this driver again once it sees a 0 on line ARB1 resulting from the action of device A.
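The contention process above can be simulated in a few lines of C. This is a sketch under stated assumptions (4-bit IDs, the open-collector lines modeled as a bit-wise OR, highest ID wins), not a hardware description:

```c
#include <stdio.h>

/* Simulate one round of distributed arbitration for 4-bit device IDs.
 * Each device drives its ID onto the lines; the open-collector bus
 * forms the bit-wise OR. Scanning from the MSB, a device that sees a
 * 1 where its own bit is 0 disables its drivers for that bit and all
 * lower-order bits. The device whose ID survives wins.               */
unsigned arbitrate(const unsigned id[], int n)
{
    unsigned drive[16];
    for (int i = 0; i < n; i++) drive[i] = id[i];

    for (int bit = 3; bit >= 0; bit--) {
        unsigned bus = 0;                     /* wired-OR of all drivers */
        for (int i = 0; i < n; i++) bus |= drive[i];
        for (int i = 0; i < n; i++)
            if ((bus & (1u << bit)) && !(id[i] & (1u << bit)))
                drive[i] &= ~((2u << bit) - 1);  /* drop this bit and below */
    }
    unsigned bus = 0;
    for (int i = 0; i < n; i++) bus |= drive[i];
    return bus;                               /* the winner's ID remains */
}

int main(void)
{
    unsigned id[] = { 0x5, 0x6 };             /* devices A (0101) and B (0110) */
    printf("winner: %u\n", arbitrate(id, 2)); /* prints 6, as in the example   */
    return 0;
}
```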
Advantages:
Highly reliable, because the operation of the bus is not dependent on any single device.
MEMORY SYSTEM - INTRODUCTION
Programs and the data they operate on reside in the memory of the computer. The execution speed of a program depends on how fast data and instructions can be transferred between the memory and the processor. There are three major types of memory: cache, primary, and secondary memory.
A good memory would be fast, large and inexpensive. Unfortunately, it is impossible to meet all
three of these requirements simultaneously. Increased speed and size are achieved at increased cost.
BASIC CONCEPTS:
A memory unit can be considered as a collection of cells, each capable of storing a bit of information. It stores information in groups of bits called bytes or words. The maximum size of the memory that can be used in any computer is determined by the addressing scheme.
Word length is the number of bits that can be transferred to or from the memory at a time. It can be determined from the width of the data bus: if the data bus is n bits wide, the word length of that computer system is n bits.
Memory access time: the time that elapses between the initiation of an operation and the completion of that operation.
Memory cycle time: the minimum time delay required between the initiation of two successive memory operations.
Compared to the processor, the main memory unit is very slow, so a transfer between memory and processor takes a long time and the processor has to wait. To bridge this speed gap between memory and processor, a faster memory called cache memory is placed between the main memory and the processor.
In the memory hierarchy, speed will decrease and size will increase from top to bottom level.
An important design issue is to provide a computer system with as large and fast a memory as possible,
within a given cost target.
Random Access Memory (RAM) is a memory system in which any location can be accessed for a Read or Write operation in some fixed amount of time that is independent of the location's address.
Two techniques are used to increase the effective size and speed of the memory: cache memory (to increase the effective speed) and virtual memory (to increase the effective size).
STATIC RAM (SRAM):
Most static RAMs are built using MOS (Metal Oxide Semiconductor) technology, but some are built using bipolar technology. If the cell is in state 1/0, the signal on bit line b is high/low and the signal on bit line b′ is low/high.
Read operation: In order to read the state of the SRAM cell, the word line is activated to close switches T1 and T2. The Sense/Write circuits at the bottom monitor the states of b and b′.
Write operation: During a Write operation, the state of the cell is set by placing the appropriate value on bit line b and its complement on b′, and then activating the word line. This forces the cell into the corresponding state. The major advantage of SRAM is that it can be accessed very quickly by the processor. The major disadvantages are that SRAMs are expensive and volatile: if the power is interrupted, the cell's contents are lost, since continuous power is needed for the cell to retain its state.
DYNAMIC RAM (DRAM):
In the DRAM chip diagram, there are two extra elements with two extra lines attached to them: the Row Address Latch, controlled by the RAS (Row Address Strobe) pin, and the Column Address Latch, controlled by the CAS (Column Address Strobe) pin.
Read Operation:
1. The row address is placed on the address pins via the address bus.
2. The RAS pin is activated, which places the row address onto the Row Address Latch.
3. The Row Address Decoder selects the proper row to be sent to the sense amps.
4. The Write Enable (not pictured) is deactivated, so the DRAM knows that it's not being
written to.
5. The column address is placed on the address pins via the address bus.
6. The CAS pin is activated, which places the column address on the Column Address Latch.
7. The CAS pin also serves as the Output Enable, so once the CAS signal has stabilized the
sense amps, it places the data from the selected row and column on the Data Out pin so that it
can travel the data bus back out into the system.
8. RAS and CAS are both deactivated so that the cycle can begin again.
Write Operation:
1. In the Write operation, the information on the data lines is transferred to the selected circuits. For this, the Write Enable signal is activated.
Fast Page Mode
Suppose we want to access consecutive bytes in the selected row. This can be done without having to reselect the row: a latch is added at the output of the sense circuit in each column. All the latches are loaded when the row is selected; different column addresses can then be applied to select and place different bytes on the data lines. A consecutive sequence of column addresses can be applied under the control of the CAS signal, without reselecting the row.
This methodology allows a block of data to be transferred at a much faster rate than random accesses. A small collection/group of bytes is usually referred to as a block. This transfer capability is referred to as the fast page mode feature. This mode of operation is useful when fast transfer of data is required (e.g., for graphical terminals).
Synchronous DRAM’s
Operation is directly synchronized with the processor clock signal. The outputs of the sense circuits are connected to a latch. During a Read operation, the contents of the cells in a row are loaded onto the latches. During a refresh operation, the contents of the cells are refreshed without changing the contents of the latches.
Data held in the latches that correspond to the selected columns are transferred to the output.
For a burst mode of operation, successive columns are selected using a column address counter and the clock, so the CAS signal need not be generated externally. A new data item is placed on the data lines during each rising edge of the clock.
Memory latency is the time it takes to transfer a word of data to or from memory.
Memory bandwidth is the number of bits or bytes that can be transferred in one second.
Double Data Rate SDRAM
DDR SDRAM is a faster version of SDRAM. The standard SDRAM performs all actions on the rising edge of the clock signal; DDR SDRAM accesses the cell array in the same way, but transfers data on both edges of the clock. So the bandwidth is essentially doubled for long burst transfers.
To make it possible to access the data at a high enough rate, the cell array is organized in two banks. Each bank can be accessed separately. Consecutive words of a given block are stored in different banks. Such interleaving of words allows simultaneous access to two words, which are transferred on successive edges of the clock.
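The interleaving rule described above is simple enough to state in code. A sketch in C, assuming two banks and word-granular addresses:

```c
/* With two interleaved banks, consecutive words alternate banks:
 * even word addresses live in bank 0, odd ones in bank 1, and the
 * index within a bank is the word address divided by two.         */
typedef struct { unsigned bank, index; } bank_addr;

bank_addr map_word(unsigned word_addr)
{
    bank_addr a;
    a.bank  = word_addr & 1;   /* low bit selects the bank      */
    a.index = word_addr >> 1;  /* remaining bits index the bank */
    return a;
}
```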
Static RAM                | Dynamic RAM
More expensive            | Less expensive
No refresh needed         | Must be refreshed periodically
High power                | Less power
Less storage capacity     | Higher storage capacity
MOS transistors only      | One transistor & one capacitor per cell
Faster                    | Slower
More reliable             | Less reliable
The choice of a RAM chip for a given application depends on several factors: cost, speed, power, size, etc. SRAMs are faster, more expensive, and smaller. DRAMs, in contrast, are slower, cheaper, and larger.
If speed is the primary requirement, static RAMs are the most appropriate; they are mostly used in cache memories. If cost is the prioritized factor, dynamic RAMs are chosen; they are used for implementing computer main memories.
Refresh overhead:
All dynamic memories have to be refreshed. In a DRAM, the period for refreshing all rows is 16 ms, whereas it is 64 ms in an SDRAM.
Eg: Consider an SDRAM whose cells are arranged in 8K (8192) rows, where 4 clock cycles are needed to access (read) each row. It then takes 8192 × 4 = 32,768 cycles to refresh all rows. If the clock rate is 133 MHz, this takes 32,768 / (133 × 10^6) ≈ 246 × 10^-6 seconds. With a typical refreshing period of 64 ms, the refresh overhead is 0.246 / 64 ≈ 0.0038, i.e., less than 0.4% of the total time available for accessing the memory.
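As a check on the arithmetic, the same computation can be written in C (the numbers are those of the example above):

```c
#include <stdio.h>

int main(void)
{
    double rows = 8192, cycles_per_row = 4, clock_hz = 133e6;
    double refresh_s = rows * cycles_per_row / clock_hz;  /* ~246e-6 s */
    double overhead  = refresh_s / 64e-3;                 /* ~0.0038   */
    printf("refresh time = %.0f us, overhead = %.2f%%\n",
           refresh_s * 1e6, overhead * 100);
    return 0;
}
```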
Memory Controller
Dynamic memory chips use multiplexed address inputs to reduce the number of pins. The address is divided into two parts: the high-order address bits and the low-order address bits. The high-order bits select a row in the cell array, and the low-order bits select a column. The address selection is done under the control of the RAS and CAS signals, respectively, for the high-order and low-order address bits.
READ-ONLY MEMORY (ROM):
At logic value 0: the transistor T is connected to the ground point P. The transistor switch is closed and the voltage on the bit line drops to nearly zero. At logic value 1: the transistor switch is open and the bit line remains at a high voltage.
To read the state of the cell, the word line is activated. A sense circuit at the end of the bit line generates the proper output value.
Types of ROM
Different types of non-volatile memory are
PROM
EPROM
EEPROM
Flash Memory
Programmable Read-Only Memory (PROM):
PROM allows the data to be loaded by the user. Programmability is achieved by inserting a fuse at point P in a ROM cell. Before it is programmed, the memory contains all 0s. The user can insert 1s at the required locations by burning out the fuses at those locations using high-current pulses. This process is irreversible.
PROMs provide flexibility and faster data access. They are less expensive because they can be programmed directly by the user.
Erasable Reprogrammable Read-Only Memory (EPROM):
EPROM allows the stored data to be erased and new data to be loaded. In an EPROM cell, a connection to ground is always made at P, and a special transistor is used which can function either as a normal transistor or as a disabled transistor that is always turned off.
During programming, an electrical charge is trapped in an insulated gate region. The charge is retained for more than 10 years because it has no leakage path. To erase this charge, ultraviolet light is passed through a quartz crystal window (lid); this exposure dissipates the charge. During normal use, the quartz lid is sealed with a sticker.
An EPROM can be erased by exposing it to ultraviolet light for a duration of up to 40 minutes. Usually, an EPROM eraser performs this function.
Merits: It provides flexibility during the development phase of a digital system. It is capable of retaining the stored information for a long time.
Demerits: The chip must be physically removed from the circuit for reprogramming, and its entire contents are erased by the UV light.
CACHE MEMORIES
The processor is much faster than the main memory. As a result, the processor has to spend much of its time waiting while instructions and data are fetched from the main memory. This creates a major obstacle to achieving good performance, and the speed of the main memory cannot be increased beyond a certain point.
Cache memory is a special, very high-speed memory used to speed up the system and keep pace with a high-speed CPU. Cache memory is costlier than main memory or disk memory, but more economical than CPU registers. It is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
Cache memory is used to reduce the average time to access data from the main memory. The cache is a smaller and faster memory which stores copies of the data from frequently used main memory locations. There are various independent caches in a CPU, which store instructions and data.
Cache memory is based on a property of computer programs known as locality of reference. Data can be prefetched into the cache before the processor needs it; this requires predicting the processor's future access pattern, which locality of reference makes possible.
Locality of Reference
Analysis of programs indicates that many instructions in localized areas of a program are executed repeatedly during some period of time, while the others are accessed relatively less frequently. These instructions may be the ones in a loop, a nested loop, or a few procedures calling each other repeatedly. This is called locality of reference.
Temporal locality of reference:
A recently executed instruction is likely to be executed again very soon.
Spatial locality of reference:
Instructions and data located close to a recently accessed location are likely to be accessed soon.
MAPPING FUNCTIONS
The mapping functions are used to map a particular block of main memory to a particular block
of cache. This mapping function is used to transfer the block from main memory to cache memory.
Mapping functions determine how memory blocks are placed in the cache.
Three mapping functions:
Direct mapping.
Associative mapping.
Set-associative mapping.
Direct Mapping
A particular block of main memory can be brought only to a particular block of cache memory, so this scheme is not flexible.
The simplest way of associating main memory blocks with cache blocks is the direct mapping technique. In this technique, block k of the main memory maps into block (k modulo m) of the cache, where m is the total number of blocks in the cache. In this example, the value of m is 128.
Example: Block j of the main memory maps to block (j modulo 128) of the cache, i.e., blocks 0, 128, 256, ... of the main memory map to block 0 of the cache; blocks 1, 129, 257, ... map to block 1; and so on.
More than one memory block is mapped onto the same position in the cache. This may lead to contention for cache blocks even if the cache is not full. The contention is resolved by allowing the new block to replace the old block, leading to a trivial replacement algorithm.
The memory address is divided into three fields. The low-order 4 bits determine one of the 16 words in a block. When a new block is brought into the cache, the next 7 bits determine which cache block this new block is placed in. The high-order 5 bits determine which of the possible 32 blocks mapping to that position is currently present in the cache; these are the tag bits.
This mapping methodology is simple to implement but not very flexible.
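The three address fields of this example (4 word bits, 7 block bits, 5 tag bits) can be extracted as in the following C sketch; the struct and function names are illustrative:

```c
#include <stdint.h>

/* Decompose a 16-bit address for the direct-mapped example:
 * 16 words/block (4 bits), 128 cache blocks (7 bits), 5 tag bits. */
struct dm_fields { unsigned word, block, tag; };

struct dm_fields direct_map(uint16_t addr)
{
    struct dm_fields f;
    f.word  =  addr        & 0xF;    /* low-order 4 bits: word in block */
    f.block = (addr >> 4)  & 0x7F;   /* next 7 bits: cache block number */
    f.tag   = (addr >> 11) & 0x1F;   /* high-order 5 bits: tag          */
    return f;
}
```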
Associative mapping
In the associative mapping technique, a main memory block can potentially reside in any cache block position. In this case, the main memory address is divided into two groups: the low-order bits identify the location of a word within a block, and the high-order bits identify the block.
In the example here, 11 bits are required to identify a main memory block when it is resident in the cache: the high-order 11 bits are used as TAG bits, and the low-order 5 bits identify a word within a block. The TAG bits of an address received from the CPU must be compared to the TAG bits of each block of the cache to see if the desired block is present.
In associative mapping, any block of main memory can go to any block of the cache, so it offers complete flexibility, and a proper replacement policy must be used to replace a block in the cache when the currently accessed block of main memory is not present there.
It might not be practical to use this complete flexibility of the associative mapping technique, due to the searching overhead: the TAG field of the main memory address has to be compared with the TAG fields of all the cache blocks.
In this example, there are 128 blocks in the cache and the size of the TAG is 11 bits. The whole arrangement of the associative mapping technique is shown in the figure below.
Set-Associative mapping
This mapping technique is intermediate between the previous two. Blocks of the cache are grouped into sets, and the mapping allows a block of main memory to reside in any block of a specific set. Therefore, the flexibility of associative mapping is reduced from full freedom to a set of specific blocks.
This also reduces the searching overhead, because the search is restricted to the number of sets instead of the number of blocks. Also, the contention problem of direct mapping is eased by having a few choices for block replacement.
Consider the same cache memory and main memory organization as in the previous example, with the cache organized as 4 blocks in each set. The TAG field of the associative mapping technique is divided into two groups, one termed the SET bits and the other the TAG bits. Since each set contains 4 blocks, the total number of sets is 32. The main memory address is grouped into three parts: the low-order 5 bits identify a word within a block; since there are 32 sets in total, the next 5 bits identify the set; and the high-order 6 bits are used as TAG bits.
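The corresponding field split for this 4-way set-associative example (5 word bits, 5 set bits, 6 tag bits) can be sketched the same way:

```c
#include <stdint.h>

/* Decompose a 16-bit address for the 4-way set-associative example:
 * 32 words/block (5 bits), 32 sets (5 bits), 6 tag bits.           */
struct sa_fields { unsigned word, set, tag; };

struct sa_fields set_assoc_map(uint16_t addr)
{
    struct sa_fields f;
    f.word =  addr        & 0x1F;   /* low-order 5 bits: word in block */
    f.set  = (addr >> 5)  & 0x1F;   /* next 5 bits: set number         */
    f.tag  = (addr >> 10) & 0x3F;   /* high-order 6 bits: tag          */
    return f;
}
```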
Replacement Algorithms
When the cache is full, a replacement algorithm is needed to choose which cache block to replace with a new block. To achieve high speed, such algorithms are implemented in hardware.
For cache memory, three types of replacement algorithm are commonly used:
Random replacement policy.
First-in first-out (FIFO) replacement policy.
Least recently used (LRU) replacement policy.
Random replacement policy
This very simple algorithm chooses the block to be overwritten at random: any cache line may be replaced by random selection. The algorithm is simple and has been found to be very effective in practice.
First in first out (FIFO)
This algorithm replaces the cache block that has been resident in the cache the longest. With this technique there is no need for updating when a hit occurs; when a miss occurs, the incoming block is placed in an empty block and the counter values are incremented by one.
Least recently used (LRU)
LRU replaces the cache block that has gone unreferenced for the longest time. Counters can track this: when a hit occurs, the counter of the referenced block is set to 0 and lower counters are incremented; when a miss occurs, either the new block goes into an empty block with its counter set to 0, or the block with the highest counter value is replaced, and all other counters are incremented by 1.
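A minimal C sketch of the counter-based LRU policy described above, for a single cache set; the 4-way set size matches the earlier example, while the data layout and linear scan are illustrative choices:

```c
#define WAYS 4   /* blocks per set, as in the 4-way example */

/* Per-block state for one cache set. */
struct block { int valid; unsigned tag; unsigned age; };

/* Access a tag in one set using counter-based LRU.
 * Returns 1 on a hit, 0 on a miss (after installing the block). */
int lru_access(struct block set[WAYS], unsigned tag)
{
    int victim = 0;
    for (int i = 0; i < WAYS; i++) {
        if (set[i].valid && set[i].tag == tag) {   /* hit */
            for (int j = 0; j < WAYS; j++)
                if (set[j].valid && set[j].age < set[i].age)
                    set[j].age++;                  /* age the younger blocks */
            set[i].age = 0;                        /* most recently used     */
            return 1;
        }
        if (!set[i].valid) victim = i;             /* prefer an empty block  */
        else if (set[victim].valid && set[i].age > set[victim].age)
            victim = i;                            /* else the oldest block  */
    }
    for (int j = 0; j < WAYS; j++)                 /* miss: age all blocks   */
        if (set[j].valid) set[j].age++;
    set[victim].valid = 1;
    set[victim].tag   = tag;
    set[victim].age   = 0;
    return 0;
}
```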
ASSOCIATIVE MEMORY - HARDWARE ORGANIZATION
The block diagram of an associative memory consists of a memory array and logic for m words with n bits per word. The argument register A and the key register K each have n bits, one for each bit of a word.
To illustrate with a numerical example, suppose that the argument register A and the key register K have the bit configuration shown below. Only the three leftmost bits of A are compared with memory words, because K has 1s only in those positions.
Word 2 matches the unmasked argument field because the three leftmost bits of the argument and the word are equal.
The relation between the memory array and the external registers in an associative memory is shown in the figure below.
The cells in the array are marked by the letter C with two subscripts. The first subscript gives the word number and the second specifies the bit position in the word; thus cell Cij is the cell for bit j in word i. A bit Aj in the argument register is compared with all the bits in column j of the array, provided that Kj = 1. This is done for all columns j = 1, 2, ..., n. If a match occurs between all the unmasked bits of the argument and the bits in word i, the corresponding bit Mi in the match register is set to 1. If one or more unmasked bits of the argument and the word do not match, Mi is cleared to 0.
Each cell consists of a flip-flop storage element Fij and the circuits for reading, writing, and matching the cell. The input bit is transferred into the storage cell during a write operation, and the stored bit is read out during a read operation. The match logic compares the content of the storage cell with the corresponding unmasked bit of the argument and provides an output for the decision logic that sets the bit in Mi.
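The matching rule can be expressed compactly in C. A sketch assuming each stored word, the argument A, and the key K are packed into unsigned integers:

```c
#include <stdint.h>

/* Set match bit M[i] for each word: M[i] = 1 iff every unmasked bit
 * of word i equals the corresponding bit of the argument register A.
 * K selects which bit positions participate in the comparison.      */
void match(uint32_t A, uint32_t K, const uint32_t word[],
           int m, int M[])
{
    for (int i = 0; i < m; i++) {
        /* XOR gives 1 where bits differ; AND with K keeps only the
         * unmasked positions. A zero result means word i matches.   */
        M[i] = ((word[i] ^ A) & K) == 0;
    }
}
```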
READ OPERATION
The matched words are read in sequence by applying a read signal to each word line whose corresponding Mi bit is 1. In most applications, the associative memory stores a table with no two identical items under a given key. In this case, only one word may match the unmasked argument field. By connecting output Mi directly to the read line in the same word position (instead of to the M register), the content of the matched word is presented automatically at the output lines, and no special read command signal is needed. Furthermore, if we exclude words having an all-zero content, an all-zero output will indicate that no match occurred and that the searched item is not available in memory.
WRITE OPERATION
If the entire memory is loaded with new information at once prior to a search operation, the writing can be done by addressing each location in sequence. This makes the device a random-access memory for writing and a content-addressable memory for reading. The advantage here is that the address for input can be decoded as in a random-access memory: instead of having m address lines, one for each word in memory, the number of address lines can be reduced by the decoder to d lines, where m = 2^d.
If unwanted words have to be deleted and new words inserted one at a time, there is a need for a special register to distinguish between active and inactive words. This register, sometimes called a tag register, has as many bits as there are words in the memory. For every active word stored in memory, the corresponding bit in the tag register is set to 1. A word is deleted from memory by clearing its tag bit to 0. Words are stored in memory by scanning the tag register until the first 0 bit is encountered; this gives the first available inactive word and a position for writing a new word. After the new word is stored in memory, it is made active by setting its tag bit to 1. An unwanted word, when deleted from memory, can be cleared to all 0s if this value is used to specify an empty location.