Computer Architecture & Organization Unit 4


UNIT - 4

By: Namrata Singh


Introduction to Memory:
A memory unit is a collection of storage units or devices. The memory unit stores binary
information in the form of bits.
Generally, memory/storage is classified into 2 categories:
1. Volatile Memory: This loses its data when power is switched off.
2. Non-Volatile Memory: This is permanent storage and does not lose any data when power is
switched off.

Memory Hierarchy:

The total memory capacity of a computer can be visualized as a hierarchy of components. The memory hierarchy
system consists of all storage devices in a computer system, from the slow auxiliary memory to the faster
main memory and the smaller, still faster cache memory.
Auxiliary Memory:-
Auxiliary memory access time is generally 1000 times that of the main memory; hence it is at the bottom of the
hierarchy.

Main Memory:-
The main memory occupies a central position because it is equipped to communicate directly with the CPU
and with auxiliary memory devices through the Input/Output processor (I/O). When a program not residing in main
memory is needed by the CPU, it is brought in from auxiliary memory. Programs not currently needed in
main memory are transferred into auxiliary memory to provide space in main memory for other programs that are
currently in use. The cache memory is used to store program data which is currently being executed in the CPU.
The approximate access time ratio between cache memory and main memory is about 1 to 7-10.

Memory Access Methods:-

Each memory is a collection of numerous memory locations. To access data from any memory, the data must first be
located, and then read from that memory location. The following are the methods to access information
from memory locations:
1. Random Access: Main memories are random access memories, in which each memory location has a
unique address. Using this unique address, any memory location can be reached in the same amount of
time, in any order.
2. Sequential Access: This method allows memory access only in sequence, i.e. in a fixed order.
3. Direct Access: In this mode, information is stored in tracks, with each track having a separate read/write
head.

Main Memory:-
The memory unit that communicates directly with the CPU, auxiliary memory and cache memory is called
main memory. It is the central storage unit of the computer system: a large and fast memory used to store
data during computer operations. Main memory is made up of RAM and ROM, with RAM integrated circuit
chips holding the major share.
1. RAM: Random Access Memory
a) DRAM: Dynamic RAM is made of capacitors and transistors, and must be refreshed every
10~100 ms. It is slower and cheaper than SRAM.
b) SRAM: Static RAM has a six-transistor circuit in each cell and retains data as long as power is supplied.
c) NVRAM: Non-Volatile RAM retains its data even when turned off. Example: Flash memory.

2. ROM: Read Only Memory is non-volatile and serves as permanent storage for information. It also
stores the bootstrap loader program, used to load and start the operating system when the computer is turned
on.
Types of ROM
a) PROM (Programmable ROM)
b) EPROM (Erasable PROM) and
c) EEPROM (Electrically Erasable PROM)

Auxiliary Memory:-

Devices that provide backup storage are called auxiliary memory. For example, magnetic disks and tapes are
commonly used auxiliary devices. Other devices used as auxiliary memory are magnetic drums, magnetic bubble
memory and optical disks. Auxiliary memory is not directly accessible to the CPU; it is accessed through the Input/Output
channels.

Cache Memory:-
The data or contents of the main memory that are used again and again by the CPU are stored in the cache memory
so that the data can be accessed in a shorter time. Whenever the CPU needs to access memory, it first checks
the cache. If the data is not found in the cache, the CPU moves on to the main memory. It also
transfers blocks of recently used data into the cache, deleting old data in the cache to accommodate the new
data.
Hit Ratio
The performance of cache memory is measured in terms of a quantity called the hit ratio. When the CPU refers to
memory and finds the word in the cache, it is said to produce a hit. If the word is not found in the cache but is in main
memory, it counts as a miss. The ratio of the number of hits to the total number of CPU references to memory is the
hit ratio.



Hit Ratio = Hit/(Hit + Miss)
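The formula above can be sketched in a few lines of Python (a minimal illustration; the function name is ours):

```python
def hit_ratio(hits, misses):
    """Hit ratio = hits / total CPU references to memory."""
    total = hits + misses
    return hits / total if total else 0.0

# e.g. 950 cache hits out of 1000 total references:
print(hit_ratio(950, 50))  # 0.95
```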

Random Access Memory:-


In a random-access memory (RAM) the memory cells can be accessed for information transfer from any
desired random location. That is, the process of locating a word in memory is the same and requires an equal
amount of time no matter where the cells are located physically in memory. Communication between a memory
and its environment is achieved through data input and output lines, address selection lines, and control lines that
specify the direction of transfer.

A block diagram of a RAM unit is shown below:

The n data input lines provide the information to be stored in memory, and the n data output lines supply the
information coming out of the particular word chosen from among the 2^k words available inside the memory. The two control
inputs specify the direction of transfer desired.

Write and Read Operations:-


The two operations that a random access memory can perform are the write and read operations. The write
signal specifies a transfer-in operation and the read signal specifies a transfer-out operation. On accepting one of
these control signals, the internal circuits inside the memory provide the desired function. The steps that must be
taken to transfer a new word to be stored into memory are as follows:
1. Apply the binary address of the desired word to the address lines.
2. Apply the data bits that must be stored in memory to the data input lines.
3. Activate the write input.

The memory unit will then take the bits presently available on the input data lines and store them in the word specified
by the address lines. The steps that must be taken to transfer a stored word out of
memory are as follows:

1. Apply the binary address of the desired word to the address lines.
2. Activate the read input.

The memory unit will then take the bits from the word that has been selected by the address and apply them to
the output data lines. The content of the selected word does not change after reading.
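The write and read steps above can be modeled with a toy RAM class in Python (a sketch, not a hardware description; the class and method names are ours):

```python
class RAM:
    """Minimal model of a random-access memory with 2**k words of n bits."""
    def __init__(self, k, n):
        self.n = n
        self.words = [0] * (2 ** k)   # 2**k words, all initially zero

    def write(self, address, data):
        # Steps: apply the address, apply the data bits, activate the write input.
        self.words[address] = data & ((1 << self.n) - 1)  # keep only n bits

    def read(self, address):
        # Steps: apply the address, activate the read input.
        # Reading does not change the stored word.
        return self.words[address]

ram = RAM(k=4, n=8)        # 16 words of 8 bits each
ram.write(5, 0b10110011)
print(ram.read(5))         # 179
```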

Serial Access Memories:-


Sequential access is a process used for retrieving data from a storage device. It is also known as serial
access. In sequential access, the storage device moves through all information up to the point it is attempting to
read or write. An example of a sequential access drive is a tape drive, where the drive moves the tape forward or
backward until the destination is reached. A sequential access memory can also be described as a storage system: the
data is stored and read in a fixed sequential order. Sequential access is mostly used for
permanent storage, whereas random access memory is used for temporary storage.

Serial Access Devices:-


Magnetic tapes are the classic example of sequential access storage. Optical media such as CDs and DVDs, and
hard drives, support direct access to a track, although the data on each track is read serially. Examples of random access memory include
memory chips and flash memory (such as memory sticks or memory cards).

Difference between Sequential Access and Random Access:-


Comparing sequential versus random disk operations helps to assess system efficiency. Accessing data
sequentially is faster than random operations because it involves fewer seek operations. A seek
occurs when the disk head positions itself over the right disk cylinder to access the data requested.
Moreover, random access delivers lower throughput. If the disk access pattern is random, it is advisable to
monitor for the emergence of any bottleneck. For workloads of either random or sequential
input/output, it is advisable to use drives with faster rotational speeds. For workloads that are predominantly
random input/output, it is advisable to use a drive with a faster seek time.

Disadvantages of Sequential Access:-


The number of records that are affected when updating a file is referred to as its hit rate. Consider a file with 5000
records; if a delete or an update operation affects only 50 records, the hit rate is very low. If
4500 records are affected by update or delete operations, the hit rate is high. Sequential access is
slow when the hit rate is low, because sequential access has to pass over all the records in
a particular order. To overcome the problem of low hit rate, sequential files are usually processed in batched transactions.
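The hit-rate figures from the example above can be computed directly (a trivial sketch; the function name is ours):

```python
def file_hit_rate(records_affected, total_records):
    """Fraction of a file's records touched by an update/delete operation."""
    return records_affected / total_records

print(file_hit_rate(50, 5000))    # 0.01 -> low hit rate: sequential access is slow
print(file_hit_rate(4500, 5000))  # 0.9  -> high hit rate
```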

RAM interfaces:-
Data RAM:-
The data RAM shown below is organized as 8 ways of 256-bit wide contiguous memories. It supports the following
accesses:

1. 8-word data reads
2. n x 8-bit data writes with byte enable controls
3. 8-word data writes for linefills.

Data RAM organization:-



Dirty RAM:-
The dirty RAM shown below is organized as a 16-bit wide memory, 2 bits per 8-word cache line. The dirty
RAM address is the same as the tag RAM address bus. It supports the following accesses:
- 16-bit dirty reads for write-back eviction on a linefill
- 16-bit dirty reads for cache maintenance operations
- 1- or 2-bit dirty writes for writes and allocations.
Dirty RAM organization:-



The above figure shows the dirty RAM connectivity.

Dirty RAM connectivity:-



Tag RAM:-
There is one tag RAM for each way of the L2 cache. A tag RAM is organized as a 21-bit wide memory: 18 bits
are dedicated to the address tag, 1 bit to security information (NS), 1 bit to valid information, and optionally 1 bit to
parity. The tag RAM address bus is also the address bus for the dirty RAM. The tag RAM supports the following
accesses:
- 20-bit tag reads for tag lookup
- 20-bit tag writes for allocations.

The NS (non-secure) bit takes the value 1 for non-secure data, and 0 for secure data.
Note:- You require a 21-bit wide memory to support the parity option.

Tag RAM organization:-

Cache lookup:-
The tag RAM format:



Tag RAM format:-

Each line is marked as secure or non-secure (NS) depending on the value of AWPROT[1] or ARPROT[1] on the
original transaction. The security setting of the access, AWPROT[1] or ARPROT[1], is used for the cache lookup
and compared with the NS attribute in the tag. The tag RAM contains a field to hold the NS attribute bit
for each cache line. This is required so that the NS attribute bit for all cache ways is compared to
generate the cache hit.
Note

- The cache is not automatically flushed when the processor changes security state.
- If an access is performed with an AWPROT[1]/ARPROT[1] value of 1'b1, the NS attribute
must be HIGH. Cache lookups are performed on lines marked as non-secure (NS cache line attribute = 1)
according to the Physical Address (PA).
- If an access is performed in the secure state, and the transaction has an AWPROT[1]/ARPROT[1] value of
1'b0, the NS attribute must be LOW. Cache lookups are performed on lines marked as secure (NS
cache line attribute = 0) according to the PA. A secure access only hits on tags with a secure NS attribute.
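The secure/non-secure lookup rule can be illustrated with a simplified tag compare in Python (a sketch only; it ignores the valid and parity bits, and the names are ours):

```python
def tag_lookup(ways, address_tag, ns):
    """For each cache way holding an (address_tag, NS) pair, a lookup hits only
    when BOTH the address tag and the NS attribute match: a secure access
    (ns=0) never hits a non-secure line, and vice versa."""
    return [way == (address_tag, ns) for way in ways]

# (tag, NS) pairs stored in four cache ways; a non-secure access to tag 0x12
# hits only the line that is both tag 0x12 and marked NS:
ways = [(0x12, 1), (0x12, 0), (0x34, 1), (0x34, 0)]
print(tag_lookup(ways, 0x12, 1))  # [True, False, False, False]
```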

RAM sizes:-
The table below shows the different RAM sizes.

L2 cache size | Data RAM                       | Tag RAM                 | Dirty RAM
128KB         | 1 × (256 + 32) × (ways × 512)  | ways × (20 + 1) × 512   | 1 × (2 × ways) × 512
256KB         | 1 × (256 + 32) × (ways × 1024) | ways × (19 + 1) × 1,024 | 1 × (2 × ways) × 1,024
512KB         | 1 × (256 + 32) × (ways × 2048) | ways × (18 + 1) × 2,048 | 1 × (2 × ways) × 2,048
1MB           | 1 × (256 + 32) × (ways × 4096) | ways × (17 + 1) × 4,096 | 1 × (2 × ways) × 4,096
2MB           | 1 × (256 + 32) × (ways × 8192) | ways × (16 + 1) × 8,192 | 1 × (2 × ways) × 8,192



Note:-
1. The format for the RAM sizes is:
   Number of RAMs × (width + parity) × number of address locations.
2. The dirty RAM does not have parity. The width of the tag RAM consists of Valid + NS + address.
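The size formula in the note can be evaluated for one table row (a sketch; the function name is ours, and 8 ways is an assumed configuration):

```python
def ram_bits(num_rams, width, parity, locations):
    """Total bits = number of RAMs x (width + parity) x address locations."""
    return num_rams * (width + parity) * locations

ways = 8  # assumed number of cache ways
# 128KB row of the table above:
data  = ram_bits(1, 256, 32, ways * 512)   # data RAM: width 256 + 32 parity
tag   = ram_bits(ways, 20, 1, 512)         # tag RAM: Valid + NS + address = 20, +1 parity
dirty = ram_bits(1, 2 * ways, 0, 512)      # dirty RAM: 2 bits per way, no parity
print(data, tag, dirty)                    # 1179648 86016 8192
```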

Magnetic Surface Recording:


A disk is a circular platter constructed of nonmagnetic material, called the substrate, coated with a magnetizable
material. Traditionally, the substrate has been an aluminum or aluminum alloy material. More recently, glass
substrates have been introduced. The glass substrate has a number of benefits, including the following:
1. Improvement in the uniformity of the magnetic film surface to increase disk reliability;
2. A significant reduction in overall surface defects to help reduce read/write errors;
3. Ability to support lower fly heights (described subsequently);
4. Better stiffness to reduce disk dynamics; and
5. Greater ability to withstand shock and damage.

Magnetic Read and Write Memory:-


Magnetic disks remain the most important component of external memory. Both removable and fixed (hard)
disks are used in systems ranging from personal computers to mainframes and supercomputers. Data are recorded
on and later retrieved from the disk via a conducting coil named the head. In many systems, there are two heads,
a read head and a write head. During a read or write operation, the head is stationary while the platter rotates
beneath it.

Inductive Write/Magnetoresistive Read Head



The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric
pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with
different patterns for positive and negative currents. The traditional read mechanism exploits the fact that a
magnetic field moving relative to a coil produces an electrical current in the coil. When the surface of the disk
passes under the head, it generates a current of the same polarity as the one already recorded. The structure of the
head for reading is in this case essentially the same as for writing and therefore the same head can be used for
both. Such single heads are used in floppy disk systems and in older rigid disk systems. The read head consists of
a partially shielded magnetoresistive (MR) sensor. The MR material has an electrical resistance that depends on
the direction of the magnetization of the medium moving under it.

Optical Memories:
Optical memories are used for large-scale storage of data and offer a variety of storage options. These can store
up to 20 GB of information. The data is read or written using a laser beam. Due to
their low cost and high storage capacity, these memories are widely used; apart from low cost, they
also have a long life. Their drawback is a comparatively slow access time.

Some Examples of Optical Memory:


CD-ROM: CD-ROM, or Compact Disc Read Only Memory, is an optical storage device which can be read
by a computer but not written. CD-ROMs are stamped by the vendor, and once stamped, they cannot be erased and
filled with new data. To read a CD, a CD-ROM player is needed. All CD-ROMs conform to a standard size and
format, so any CD-ROM can be loaded into any CD-ROM player. In addition, CD-ROM players are
capable of playing audio CDs, which share the same technology. CD-ROMs are particularly well suited to
information that requires large storage capacity. This includes large software applications that support colour,
graphics, sound and especially video.

Advantages of CD ROM:
1. Storage capacity is high.
2. Data storage cost per bit is reasonable.
3. Easy to carry.
4. Can store variety of data.

Disadvantages of CD ROM:-

1. CD ROMs are read only.


2. Access time is longer than that of a hard disk.



WORM:

WORM (Write Once Read Many) discs, also called CD-R or CD-Recordable, are a kind of optical device which gives the
user the liberty to write once to the CD-R, using a CD-R disk drive unit. This
data or information cannot be overwritten or changed: CD-R does not allow re-writing, though reading can be
done many times.

Advantages of WORM:-
1. Storage capacity is high.
2. Can be recorded once.
3. Reliable.
4. Runs longer.
5. Access time is good.
Disadvantages or limitations of WORM:-
1. Can be written only once.

Erasable Optical Disk:-

Erasable optical disks are also called CD-RW or CD-Rewritable. They give the user the liberty to erase data
already written by burning the microscopic points on the disk surface. The disk can be reused.

Advantages of CD RW:-
1. Storage capacity is very high.
2. Reliability is high.
3. Runs longer.
4. Easy to rewrite.

Limitations of CD RW:-
 Access time is high.

DVD-ROM, DVD-R and DVD-RAM:-

DVD, or Digital Versatile Disk, is another form of optical storage, higher in capacity than CDs. Pre-
recorded DVDs are mass-produced using molding machines that physically stamp data onto the DVD. Such disks
are known as DVD-ROM, because data can only be read, not written or erased. DVD-Rs are blank recordable
DVDs which can be recorded once, using optical disk recording technologies and DVD recorders, and
then function as a DVD-ROM. Rewritable DVDs (DVD-RAM) can be recorded many times.

Multilevel memories:

Memory Hierarchy



Memory System Organization
No matter how big the main memory is, the question is how to organize the memory system effectively so that it can
store more information than the main memory alone can hold. The traditional solution to storing a great deal of data is a
memory hierarchy.

Major design objective of any memory system:

1. To provide adequate storage capacity


2. An acceptable level of performance
3. At a reasonable cost

Four interrelated ways to meet this goal

1. Use a hierarchy of storage devices.


2. Develop automatic space allocation methods for efficient use of the memory.
3. Through the use of virtual memory techniques, free the user from memory management tasks.
4. Design the memory and its related interconnection structure so that the processor can operate at or near its maximum speed.

Multilevel Memories Organization:-

Three key characteristics vary across the levels of a memory hierarchy: the access time, the storage capacity and the
cost per bit. The memory hierarchy is illustrated in figure 9.1.

The memory hierarchy


We can see the memory hierarchy with six levels. At the top there are the CPU registers, which can be accessed at
full CPU speed. Next comes the cache memory, which is currently on the order of 32 KB to a few MB. The
main memory is next, with sizes currently ranging from 16 MB for entry-level systems to tens of gigabytes. After
that come magnetic disks, the current workhorse for permanent storage. Finally we have magnetic tape and
optical disks for archival storage.



Basis of the memory hierarchy
1. Registers internal to the CPU for temporary data storage (small in number but very fast)
2. External storage for data and programs (relatively large and fast)
3. External permanent storage (much larger and much slower)

Typical Memory Parameters


Characteristics of the memory hierarchy
1. Consists of distinct “levels” of memory components
2. Each level characterized by its size, access time, and cost per bit
3. Each increasing level in the hierarchy consists of modules of larger capacity, slower access time,
and lower cost/bit

Memory Performance

Goal of the memory hierarchy: try to match the processor speed with the rate of information transfer
from the lowest element in the hierarchy.
The memory hierarchy speeds up memory performance.
The memory hierarchy works because of locality of reference.
– Memory references made by the processor, for both instructions and data, tend to cluster together
+ Instruction loops, subroutines
+ Data arrays, tables
– Keeping these clusters in high-speed memory reduces the average delay in accessing data
– Over time, the clusters being referenced will change -- memory management must deal with this
 Performance of a two level memory



Example: Suppose that the processor has access to two levels of memory:
– Two-level memory system
– Level 1 access time of 1 us
– Level 2 access time of 10 us
– Average access time = H(1) + (1 - H)(10) us

where H is the fraction of all memory accesses that are found in the faster memory (e.g. the cache)
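The two-level formula above can be checked with a short sketch (the function name is ours):

```python
def average_access_time(hit_fraction, t1, t2):
    """T_avg = H*t1 + (1 - H)*t2, with both times in the same unit."""
    return hit_fraction * t1 + (1 - hit_fraction) * t2

# Level 1 = 1 us, level 2 = 10 us, 95% of accesses found in level 1:
print(round(average_access_time(0.95, 1, 10), 3))  # 1.45 (us)
```

Note how strongly the average depends on H: at H = 0.95 the system runs close to level-1 speed, while at H = 0.5 the average jumps to 5.5 us.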

Performance of a two level memory

Cache & virtual memory:-

Cache memory:-

A cache memory is a fast random access memory where the computer hardware stores copies of information
currently used by programs (data and instructions), loaded from the main memory. The cache has a significantly
shorter access time than the main memory due to its faster but more expensive implementation
technology. The cache also has a limited capacity, which results from the properties of the applied technology. If
information fetched into the cache memory is used again, the access time to it will be much shorter than if
this information were stored in the main memory, and the program will execute faster.
The time efficiency of using cache memories results from the locality of access to data that is observed during
program execution.
We observe here temporal and spatial locality:



1. Temporal locality is the tendency to use the same instructions and data many times during
neighbouring time intervals,
2. Spatial locality is the tendency to store instructions and data used within short intervals of time under
neighbouring addresses in the main memory.

Due to these localities, the information loaded into the cache memory is used several times and the execution time
of programs is much reduced. A cache can be implemented as a multi-level memory; contemporary computers
usually have two levels of caches. In older computer models, the cache memory was installed outside the processor
(in separate integrated circuits). Access to it was organized over the processor's
external system bus. In today's computers, the first level of the cache memory is in the same integrated
circuit as the processor, which significantly speeds up the processor's co-operation with the cache. Some microprocessors
have the second level of cache placed in the processor's integrated circuit as well. The capacity of the first-level
cache memory is from several thousand to several tens of thousands of bytes; the second-level cache
memory has a capacity of several hundred thousand bytes. A cache memory is maintained by a special processor
subsystem called the cache controller. If there is a cache memory in a computer system, then at each access to a
main memory address, in order to fetch data or instructions, the processor hardware sends the address first to the
cache memory. The cache control unit checks if the requested information resides in the cache. If so, we have a
"hit" and the requested information is fetched from the cache. The actions concerned with a read hit are
shown in the figure below.

Read implementation in cache memory on hit


If the requested information does not reside in the cache, we have a "miss" and the necessary information is
fetched from the main memory to the cache and to the requesting processor unit.



In today's computers, caches and main memories are byte-addressed, so we will refer to byte-addressed
organization in the sections on cache memories that follow.
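The hit/miss behaviour described above can be sketched as a toy fully-associative cache in Python (an illustration only; real caches work on fixed-size lines and use hardware replacement policies, and all names here are ours):

```python
class Cache:
    """Toy fully-associative cache backed by a 'main memory' mapping."""
    def __init__(self, main_memory, capacity=4):
        self.main = main_memory
        self.capacity = capacity
        self.lines = {}                    # address -> data, insertion-ordered

    def read(self, address):
        if address in self.lines:          # hit: serve directly from the cache
            return self.lines[address], "hit"
        data = self.main[address]          # miss: fetch from main memory
        if len(self.lines) >= self.capacity:
            # evict the oldest entry to make room (simple FIFO replacement)
            self.lines.pop(next(iter(self.lines)))
        self.lines[address] = data         # copy the fetched data into the cache
        return data, "miss"

memory = {addr: addr * 10 for addr in range(100)}
cache = Cache(memory)
print(cache.read(7))   # (70, 'miss') -- first access goes to main memory
print(cache.read(7))   # (70, 'hit')  -- repeat access is served by the cache
```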

Virtual memory organization:-


In early computers, freedom of programming was seriously restricted by the limited volume of main memory
compared to program sizes. A small main memory made the execution of large programs very troublesome
and did not enable flexible management of memory space when many programs co-existed. It was very
inconvenient, since programmers were forced to spend much time designing a correct scheme for data and
code distribution between the main memory and auxiliary storage. The solution to this problem was supplied by the
introduction of the virtual memory concept. This concept was introduced in the early 1960s under the
name of one-level storage in the British Atlas computer. Only much later, with the application of this
idea in the computers of the IBM System/370 series, was the term virtual memory introduced. Virtual memory provides a
computer programmer with an addressing space many times larger than the physically available addressing space
of the main memory. Data and instructions are placed in this space with the use of virtual addresses, which can
be treated as artificial in some way. In reality, data and instructions are stored both in the main memory and in
the auxiliary memory (usually disk memory), under the supervision of the virtual memory control system,
which governs the real current placement of the data determined by virtual addresses. This system automatically (i.e.
without any programmer's actions) fetches into the main memory the data and instructions requested by currently
executing programs. The general organization scheme of the virtual memory is shown in the figure below.



General scheme of the virtual memory:-
Virtual memory address space is divided into fragments that have pre-determined sizes and identifiers that are
consecutive numbers of these fragments in the set of fragments of the virtual memory. The sequences of virtual
memory locations that correspond to these fragments are called pages or segments, depending on the type of the
virtual memory applied. A virtual memory address is composed of the number of the respective fragment of the
virtual memory address space and the word or byte number in the given fragment.
We distinguish the following solutions for contemporary virtual memory systems:
- paged (virtual) memory
- segmented (virtual) memory
- segmented (virtual) memory with paging.

When accessing data stored under a virtual address, the virtual address has to be converted into a physical
memory address by means of address translation. Before translation, the virtual memory system checks whether the
segment or page which contains the requested word or byte resides in the main memory. This is done by testing
the page or segment descriptors in respective tables residing in the main memory. If the test result is negative, a
physical address sub-space in the main memory is assigned to the requested page or segment, which is then loaded
into this address sub-space from the auxiliary store. Next, the virtual memory system updates the page or
segment descriptors in the descriptor tables and opens access to the requested word or byte for the processor
instruction which emitted the virtual address.
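For the paged case, the address-split-and-translate step can be sketched in Python (a simplified illustration; the page size and names are assumptions, and a real system keeps descriptor tables in hardware-assisted structures):

```python
PAGE_SIZE = 4096   # bytes per page (assumed)

def translate(virtual_address, page_table):
    """Split a virtual address into (page number, offset) and map it to a
    physical address via the page table; a missing page is a page fault."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page_number not in page_table:
        raise KeyError(f"page fault: page {page_number} not in main memory")
    frame = page_table[page_number]          # physical frame holding the page
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}                    # virtual page -> physical frame
print(translate(4100, page_table))           # page 1, offset 4 -> 2*4096+4 = 8196
```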

The virtual memory control system is implemented today partly in hardware and partly in software. Accessing
descriptor tables and virtual-to-physical address translation are done by the computer hardware. Fetching missing
pages or segments and updating their descriptors is done by the operating system, which, however, is strongly
supported by special memory management hardware. This hardware usually constitutes a special functional unit
for virtual memory management and special functional blocks designed to perform calculations concerned with
virtual address translation.

Memory allocation:-

Memory allocation is
a process by which computer programs and services are assigned physical or virtual memory space: the process of
reserving a partial or complete portion of computer memory for the execution of
programs and processes. Memory allocation is achieved through a process known as memory management. It
is primarily a computer hardware operation but is managed through the operating system and
software applications. The memory allocation process is quite similar in physical and virtual memory management.



Programs and services are assigned a specific memory as per their requirements when they are executed.
Once a program has finished its operation or is idle, the memory is released and allocated to another program or
merged into the primary memory.
Memory allocation has two core types:
- Static Memory Allocation: The program is allocated memory at compile time.
- Dynamic Memory Allocation: The program is allocated memory at run time.

Static memory allocation:-


In static memory allocation, the size of memory required must be defined before the program is
loaded and executed.
Dynamic memory allocation:
There are two methods which are used for dynamic memory allocation:
1. Non-Preemptive Allocation
2. Preemptive Allocation

Non Preemptive allocation:-


Consider M1 as main memory and M2 as secondary memory, and suppose a block K of n words is to be
transferred from M2 to M1. For such memory allocation it is necessary to find or create an available
region of n or more words to accommodate K. This process is known as non-preemptive allocation.

First Fit
In this algorithm, searching is started either at the beginning of the memory or where the previous first search
ended.

Best fit
In this algorithm, all free memory blocks are searched and smallest free memory block which is large enough to
accommodate desired block K is used to allocate K.
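Both placement strategies can be sketched over a list of free-block sizes (an illustration; the names are ours, and a real allocator tracks addresses as well as sizes):

```python
def first_fit(free_blocks, n):
    """Index of the first free block of at least n words, or None."""
    for i, size in enumerate(free_blocks):
        if size >= n:
            return i
    return None

def best_fit(free_blocks, n):
    """Index of the smallest free block of at least n words, or None."""
    candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= n]
    return min(candidates)[1] if candidates else None

free = [100, 30, 60, 45]
print(first_fit(free, 40))  # 0 (the block of 100 words is found first)
print(best_fit(free, 40))   # 3 (the block of 45 words is the tightest fit)
```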



Preemptive allocation:-

Non-preemptive allocation cannot make efficient use of memory in all situations. Due to scattered free memory blocks,
larger free memory blocks may not be available. Much more efficient use of the available memory space is
possible if the occupied space can be relocated to make room for incoming blocks, by a method called
compaction.
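Compaction can be sketched on a toy word-array model of memory (an illustration only; real compaction must also update every reference to the moved blocks):

```python
def compact(memory, free_marker=None):
    """Slide all occupied words to the front, merging free space at the end."""
    occupied = [word for word in memory if word is not free_marker]
    return occupied + [free_marker] * (len(memory) - len(occupied))

# 'A', 'B', 'C' are occupied words; None marks free space.
# After compaction the scattered free words form one contiguous region:
print(compact(['A', None, 'B', None, None, 'C']))
# ['A', 'B', 'C', None, None, None]
```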

Associative Memory:-
A memory unit accessed by content is called an associative memory, content addressable memory (CAM),
associative storage or associative array. Such a memory is also capable of finding an empty, unused location in which to store a word.

To search for particular data in a conventional memory, data is read from a certain address and compared; if a match is not found,
the content of the next address is accessed and compared. This goes on until the required data is found. The number of
accesses depends on the location of the data and the efficiency of the searching algorithm.

Hardware Organization of Associative Memory

Hardware Organization
Associative Memory is organized in such a way.



Argument register (A): It contains the word to be searched. It has n bits (one for each bit of the word).
Key register (K): This specifies which part of the argument word needs to be compared with words in memory.
If all bits in the register are 1, the entire word is compared. Otherwise, only the bits whose corresponding key bit is set to 1 will
be compared.
Associative memory array: It contains the words which are to be compared with the argument word.
Match register (M): It has m bits, one bit corresponding to each word in the memory array. After the matching
process, the bits corresponding to matching words in the match register are set to 1.
The key register provides the mask for choosing a particular field in the A register. The entire content of the A register is
compared if the key register contains all 1s; otherwise, only the bits that are 1 in the key register are compared. If the
compared data matches, the corresponding bits in the match register are set. Reading is accomplished by sequential
access in memory for those words whose match bits are set.

Match Logic:-

Let us include the key register. If Kj = 0 then there is no need to compare Aj and Fij.
1. Only when Kj = 1 is a comparison needed.
2. This is achieved by ORing each term with the complement of Kj.
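The masked comparison performed in parallel by the hardware can be sketched word-by-word in Python (an illustration; hardware compares all words simultaneously, and the function name is ours):

```python
def cam_search(words, argument, key):
    """Return the match register: bit i is 1 if word i equals the argument
    in every bit position where the key register has a 1."""
    match = []
    for word in words:
        # Mask both the stored word and the argument with the key,
        # so only the selected field takes part in the comparison.
        match.append(1 if (word & key) == (argument & key) else 0)
    return match

words = [0b1010, 0b1100, 0b1011]
# Compare only the two high bits (key = 1100):
print(cam_search(words, 0b1000, 0b1100))  # [1, 0, 1]
```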



Read Operation:-
When a word is to be read from an associative memory, the contents of the word, or a part of the word, is specified.

Write operations:-
If the entire memory is loaded with new information at once, prior to the search operation, then writing can be done by
addressing each location in sequence. A tag register contains as many bits as there are words in memory: it contains
1 for an active word and 0 for an inactive word. If a word is to be inserted, the tag register is scanned until a 0 is found, the
word is written at that position, and the bit is changed to 1.
Advantages:-
Associative memory is suitable for parallel searches, and is used where search time needs to be short.
1. Associative memory is often used to speed up databases, in neural networks, and in the page tables
used by the virtual memory of modern computers.
2. A CAM design challenge is to reduce the power consumption associated with the large amount
of parallel active circuitry, without sacrificing speed or memory density.
Disadvantages:-
1. An associative memory is more expensive than a random access memory, because each cell must have
extra storage capability as well as logic circuits for matching its content with an external argument.
2. For this reason, associative memories are used only in applications where the search time is very critical and must be
very short.

