
CHAPTER 3: MEMORY
3.1 Memory
An embedded system's functionality consists of three aspects:
1. Processing: transformation of data
2. Storage: retention of data for later use
3. Communication: transfer of data

A memory stores a large number of items, commonly referred to as words. Each word consists of a specific number of bits, e.g. 8 bits. Therefore, a memory can be viewed as m words of n bits each, for a total of m x n bits, as shown in the figure below:

If a memory has k address inputs, it can have up to 2^k words (m = 2^k), which implies k = log2(m). This means that log2(m) address input signals are required to identify a particular word, and n data signals are required to output a selected word.
For example, a 4096 x 8 memory can store 32,768 bits and requires k = log2(4096) = 12 address signals and eight data I/O signals.
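
A minimal C sketch (not part of the original chapter) of this arithmetic; the 4096 x 8 figures are taken from the example above, everything else is illustrative.

    #include <stdio.h>

    int main(void) {
        unsigned long words = 4096;  /* m: number of words */
        unsigned int  width = 8;     /* n: bits per word   */

        /* k = log2(m) address inputs are needed to select one word */
        unsigned int k = 0;
        while ((1UL << k) < words)
            k++;

        printf("address lines k = %u\n", k);              /* 12    */
        printf("data lines      = %u\n", width);          /* 8     */
        printf("total bits      = %lu\n", words * width); /* 32768 */
        return 0;
    }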

3.2 Memory Write Ability and Storage Permanence


Write Ability:
Write ability refers to the manner and speed in which a particular memory can be written, i.e. the process of putting bits into specific locations of the memory and the ease and speed with which that process can be completed. The writing process may be time-consuming, as in ROMs, or fast, as in registers. A basic write involves providing the address on the address lines, placing the data on the data lines, and selecting the write function; however, different memory types use different methods to actually get the data into the memory.
A ROM can be built from combinational logic whose inputs act as address lines and whose outputs act as data lines. The circuit has to be designed so that each combination of inputs gives out the data stored at the address specified by the input lines. Once implemented in hardware, the stored values cannot be changed; another circuit has to be designed for another set of values. The write time is therefore comparatively large.


A register is made up of flip-flops. Writing is easily accomplished: just put the data on the data lines and enable the load signal. Similarly, RAMs, which are built around basic transistor storage cells, also have fast write ability. These memories are rewritable.

On the basis of write ability, memories fall into the following ranges:


• High end: Memory with high-end write ability is the easiest and quickest to write data into. These are flip-flop-based memories that the processor can write to directly. Examples: registers, RAM. These memories are used in the computation process of an embedded system.
• Middle range: Memories in this range are somewhat more difficult to write and are slower than high-end ones. The processor can still write to them, but at a slower speed. These memories are not accessed frequently and are used for storing data for a longer period of time. Examples: flash, EEPROM. These memories are often used in the testing phase of an embedded system.
• Lower range: A special programmer device is required to write data into this type of memory, and writing is slow. Examples: UV EPROM, in which data is written using a voltage higher than that of normal operation, and one-time programmable (OTP) ROM, in which data is written by blowing fuse connections that represent bit values. An OTP ROM can be programmed only once.
• Low end: In a low-end memory device, data is written only during manufacturing; the device is manufactured with the data already in it, and the data writing process starts with the design of the chip. Once manufactured, the data cannot be rewritten. The mask-programmed ROM is the example. In embedded systems, such memories can be used to hold a program or data that is used frequently, but the values to be stored must be final.

Storage Permanence:
Storage permanence is the ability of a memory to hold its stored bits; this ability may be temporary or permanent.
The ranges of storage permanence are:
• High end: Essentially never loses bits, e.g., mask-programmed ROM, OTP ROM. Embedded system programs are placed in this range of memory.
• Middle range: Holds bits for a certain period of time (days, months, or years) after the memory's power source is turned off, e.g., NVRAM such as battery-backed RAM, or flash memory.
• Lower range: Holds bits as long as power is supplied to the memory, e.g., SRAM.
• Low end: Begins to lose bits almost immediately after they are written, so a refresh circuit is needed to hold the data correctly, e.g., DRAM.

3.3 Types of Memory

3.3.1 Read Only Memory (ROM)


This is a nonvolatile memory. It can only be read from, not written to, by a processor in an embedded system. Traditionally it is written to ("programmed") before being inserted into the embedded system.
Uses:
• To store the software program of a general-purpose processor (program instructions can occupy one or more ROM words)
• To store constant data needed by the system
• To implement combinational circuits


• To store the initial program, called the bootstrap loader


Example:
The figure shows the structure of a ROM. The horizontal lines represent the words, and the vertical lines give out the data; these lines are connected only at the circles. If the address input is 010, the decoder sets word line 2 to 1. Data lines Q3 and Q1 are set to 1 because there is a "programmed" connection with word 2's line, while word 2 is not connected to data lines Q2 and Q0. Thus, the output is 1010.
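
As an aside, the decode just described can be modelled in a few lines of C (not from the chapter); the contents of word 2 (1010) come from the example above, while the other words and the 8 x 4 dimensions are assumptions inferred from the 3-bit address and the four data lines Q3..Q0.

    #include <stdio.h>

    /* 8 words x 4 bits: array indexing stands in for the address decoder. */
    /* Only word 2 (Q3..Q0 = 1010) is taken from the example; the other    */
    /* entries are arbitrary placeholders.                                 */
    static const unsigned char rom[8] = {
        0x0, 0x0, 0xA /* 1010 */, 0x0, 0x0, 0x0, 0x0, 0x0
    };

    int main(void) {
        unsigned addr = 2;                      /* address input 010   */
        printf("data out = %X\n", rom[addr]);   /* prints A, i.e. 1010 */
        return 0;
    }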

The programming of the programmable connections depends on the type of ROM being used. Common ROM types are:
1. Mask-programmed ROM
2. One-time programmable ROM (OTP ROM)
3. Erasable programmable ROM (EPROM)
4. Electrically erasable programmable ROM (EEPROM)
5. Flash memory

1. Mask-programmed ROM
The connections of this ROM are programmed at fabrication time by creating an appropriate set of masks. It can be written only once (in the factory), so it has the lowest write ability, but it stores its data forever, giving it the highest storage permanence: the bits never change unless the device is damaged. Mask-programmed ROMs are typically used for the final design of high-volume systems.

2. OTP ROM:
OTP ROM is one-time programmable ROM. Its connections are programmed after manufacture by the user, who provides a file of the desired ROM contents to a ROM programmer machine. Each programmable connection is a fuse, and the ROM programmer blows the fuses where connections should not exist by passing a large current through them. General characteristics are:
• Very low write ability: typically written only once and requires a ROM programmer device


• Very high storage permanence: bits don't change unless the device is reconnected to the programmer and more fuses are blown
• Commonly used in final products: cheaper and harder to inadvertently modify

3. EPROM (Erasable Programmable ROM):

This is an erasable programmable read-only memory. The programmable component is a MOS transistor with a "floating" gate surrounded by an insulator. Negative charges forming a channel between source and drain store a logic 1. A large positive voltage at the gate causes negative charges to move out of the channel and become trapped in the floating gate, storing a logic 0. To erase, UV light is shone on the surface of the floating gate, causing the negative charges to return to the channel from the floating gate and restoring the logic 1. The EPROM package has a quartz window through which the UV light can pass. The EPROM has:
• Better write ability: it can be erased and reprogrammed thousands of times
• Reduced storage permanence: a program lasts about 10 years but is susceptible to radiation and electrical noise
• Typically used during design development


4. EEPROM (Electrically Erasable and Programmable Read Only Memory):


It is erased electrically, typically by using a higher-than-normal voltage, and it can program and erase individual words, unlike EPROMs where exposure to UV light erases everything. It has:
• Better write ability: it can be in-system programmable, with a built-in circuit to provide the higher-than-normal voltage and a built-in memory controller commonly used to hide the write details from the memory user
• Writes are very slow due to erasing and programming; a "busy" pin indicates to the processor that the EEPROM is still writing
• Can be erased and programmed tens of thousands of times
• Similar storage permanence to EPROM (about 10 years)
• Far more convenient than EPROM, but more expensive

5. Flash Memory:
Flash memory is an extension of EEPROM: it uses the same floating-gate principle and has similar write ability and storage permanence. It can be erased at a faster rate because large blocks of memory are erased at once rather than one word at a time; the blocks are typically several thousand bytes in size.
• Writes to single words may be slower, since the entire block must be read, the word updated, and then the entire block written back
• Used in embedded systems that store large data items in nonvolatile memory, e.g., digital cameras, TV set-top boxes, cell phones

3.3.2 RAM (Random-access memory):


RAM is a memory that can be both read and written easily; writing to RAM is about as fast as reading from it, unlike in-system programmable ROM. RAM is typically volatile: bits are not held without the power supply. Unlike ROM, RAM never contains data when inserted into an embedded system; the system writes data to and reads data from the RAM during its execution.

• The internal structure of RAM is more complex than that of ROM. Each word consists of several memory cells, each storing one bit. Each input and output data line connects to every cell in its column, and the rd/wr line connects to every cell.
• Each word-enable line from the decoder connects to every cell in its row.
• When a row is enabled by the decoder, each cell has logic that stores the input data bit when rd/wr indicates a write, or outputs the stored bit when rd/wr indicates a read.


Basic types of RAM


There are two basic types of RAM: static and dynamic. SRAM is faster but larger than DRAM. Also,
SRAM is easily implemented on the same chip as processors, whereas DRAM is usually implemented
on a separate IC.

1. SRAM (Static RAM):


A memory circuit is said to be static if the stored data can be retained indefinitely, as long as the power supply is on, without any need for a periodic refresh operation.
The data storage cell, i.e. the one-bit memory cell in an SRAM array, invariably consists of a simple flip-flop circuit with two stable operating points. Depending on the state preserved in the flip-flop, the data held in the memory cell is interpreted as either logic 0 or logic 1. To access the data contained in the memory cell via a bit line, we need at least one switch, which is controlled by the corresponding word line, as shown in the figure below.

A low-power SRAM cell may be designed using cross-coupled CMOS inverters. The most important advantage of this circuit topology is that its static power dissipation is very small; essentially, it is limited by a small leakage current. Other advantages are high noise immunity, due to larger noise margins, and the ability to operate at a lower power supply voltage. The major disadvantage of this topology is its larger cell size. The circuit structure of the full CMOS static RAM cell is shown in the figure above.
The memory cell consists of two CMOS inverters connected back to back and two access transistors. The access transistors are turned on whenever a word line is activated for a read or write operation, connecting the cell to the complementary bit-line columns.
To select a cell, the two access transistors must be on so that the elementary cell (the flip-flop) can be connected to the internal SRAM circuitry. These two access transistors are connected to the word line (also called the row or X address); the selected row is set to Vcc. The two sides of the flip-flop are connected to a pair of lines, data and data' (also called the columns or Y address).


2. DRAM (Dynamic RAM):


• The memory cell uses a MOS transistor and a capacitor to store a bit
• More compact than SRAM
• "Refresh" is required because the capacitor leaks; a word's cells are refreshed when the word is read
• Typical row refresh interval: 15.625 microseconds
• Slower to access than SRAM

The figure above shows the internal structure of a DRAM memory cell. DRAM stores each bit in a storage cell consisting of a capacitor and a transistor. Capacitors tend to lose their charge rather quickly, hence the need for recharging or refreshing.
The presence or absence of charge in the capacitor determines whether the cell contains a 1 or a 0. In a typical DRAM, a row is refreshed once every 15.625 microseconds. Usually the refresh circuitry consists of a refresh counter, which holds the address of the row to be refreshed and is applied to the chip's row address lines, and a timer that increments the counter to step through the rows; this counter may be part of the memory controller circuitry or on the memory chip itself. Two refreshing strategies may be employed: burst and distributed.
• Burst refresh: a series of refresh cycles is performed one after another until all the rows have been refreshed.
• Distributed refresh: refresh cycles are performed at regular intervals, interspersed with memory accesses.
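
The 15.625 µs figure is consistent with a common DRAM arrangement in which a 64 ms refresh period is spread over 4096 rows; a minimal sketch of that arithmetic follows (both figures are assumptions for illustration, not taken from the chapter).

    #include <stdio.h>

    int main(void) {
        double refresh_period_ms = 64.0;   /* assumed total refresh period       */
        int    rows              = 4096;   /* assumed number of rows in the chip */

        /* Distributed refresh: one row is refreshed every (period / rows). */
        double per_row_us = refresh_period_ms * 1000.0 / rows;
        printf("row refresh interval = %.3f us\n", per_row_us);   /* 15.625 */
        return 0;
    }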

Differences between SRAM and DRAM:


RAM variation
1. Pseudo Static RAM (PSRAM):
These are RAMs with a built-in memory refresh controller, so a PSRAM appears to behave much like an SRAM. In contrast to a true SRAM, a PSRAM may be busy refreshing itself when accessed, which can slow the access time and adds some system complexity. It is a popular low-cost, high-density alternative to SRAM.

2. Non-Volatile RAM (NVRAM):


It is a special RAM variation able to hold its data even after external power is removed. Two common types exist:
a) Battery-backed RAM:
It contains a static RAM along with a permanently connected battery. When external power is removed or drops below a threshold, the internal battery maintains power to the SRAM, so the memory continues to store its bits. It is far more writable than other forms of NVRAM: no special programming is necessary, writes are done in nanoseconds (as fast as reads), and there is no limit on the number of writes, unlike nonvolatile ROM-based memory. Storage permanence is better than that of plain SRAM or DRAM, with many NVRAMs having batteries that can last for 10 years. The disadvantage of NVRAM is that it is more susceptible than EEPROM or flash to having bits changed inadvertently by noise.

b) SRAM with EEPROM or Flash:


It contains a static RAM as well as an EEPROM or flash of the same capacity as the static RAM. It stores its complete RAM contents in the EEPROM or flash before power is turned off, or whenever instructed to do so, and then reloads that data from the EEPROM or flash into the RAM after power is turned back on.

3.4 Memory Hierarchy and Cache


Three key characteristics of memory place it in a hierarchical order: capacity, access time, and cost. The following relationships hold for memories:

• The faster the access time, the greater the cost per bit
• The greater the storage capacity, the smaller the cost per bit
• The greater the storage capacity, the slower the access time

To meet performance requirements, the designer would like to use expensive, relatively low-capacity memories with short access times, but that drives the cost up. Instead, the designer should employ a memory hierarchy. A typical hierarchy is illustrated in the figure below:


As one goes down the hierarchy, the following occur:


a) Decreasing cost per bit
b) Increasing capacity
c) Increasing access time
d) Decreasing frequency of access of the memory by the processor

Cache Memory
• Cache memory is a small-sized type of volatile computer memory that provides high-speed
data access to a processor and stores frequently used computer programs, applications and
data.
• It is the fastest memory in a computer, and is typically integrated directly into the processor chip or placed between the processor and main random-access memory (RAM).
• Cache is usually designed using static RAM rather than dynamic RAM, which is one reason
that cache is more expensive but faster than main memory.
• When the processor attempts to read a word of memory, a check is made to determine whether the word is in the cache. If so, the word is delivered to the processor. If not, a block of main memory, consisting of a fixed number of words, is read into the cache and then the word is delivered to the processor.
• The cache connects to the processor via data, control and address lines.
• The data and address lines also attach to data and address buffers, which attach to the system bus from which main memory is reached.
• When a cache hit occurs, the data and address buffers are disabled and communication is only between processor and cache, with no system bus traffic.
• When a cache miss occurs, the desired word is first read into the cache and then transferred from cache to processor. In the latter case, the cache is physically interposed between the processor and main memory for all data, address and control lines.
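
A minimal toy sketch of this read flow in C (not from the chapter); it uses a single cache line holding one block, and the block size, memory size and stored values are illustrative assumptions.

    #include <stdio.h>

    #define BLOCK_WORDS 4
    #define MEM_WORDS   64

    /* Toy model: a single cache line holding one block, plus main memory. */
    static unsigned main_mem[MEM_WORDS];
    static unsigned cache_line[BLOCK_WORDS];
    static int      cached_block = -1;     /* no block cached yet */

    static unsigned read_word(unsigned addr)
    {
        unsigned block  = addr / BLOCK_WORDS;
        unsigned offset = addr % BLOCK_WORDS;

        if ((int)block != cached_block) {   /* miss: fetch the whole block */
            for (unsigned i = 0; i < BLOCK_WORDS; i++)
                cache_line[i] = main_mem[block * BLOCK_WORDS + i];
            cached_block = (int)block;
            printf("miss on address %u\n", addr);
        } else {
            printf("hit on address %u\n", addr);   /* hit: no bus traffic */
        }
        return cache_line[offset];
    }

    int main(void)
    {
        for (unsigned i = 0; i < MEM_WORDS; i++) main_mem[i] = i * 10;
        printf("word 5 = %u\n", read_word(5));   /* miss: block 1 loaded */
        printf("word 6 = %u\n", read_word(6));   /* hit                  */
        return 0;
    }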

Structure of Cache and Main Memory:


• Main memory consists of 2^n addressable words, each word having a unique n-bit address.
• This memory is considered to consist of a number of fixed-length blocks of K words each, i.e. M = 2^n / K blocks.
• The cache consists of C lines of K words each.
• Each line contains K words plus a tag of a few bits; the tag is usually a portion of the main memory address.
• The number of cache lines C is much less than the number of main memory blocks M.
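
A small worked instance of these quantities (all numbers are assumptions chosen for illustration, not taken from the chapter):

    #include <stdio.h>

    int main(void) {
        unsigned n = 16;                    /* address width in bits (assumed) */
        unsigned K = 4;                     /* words per block (assumed)       */
        unsigned C = 128;                   /* cache lines (assumed)           */

        unsigned long words  = 1UL << n;    /* 2^n addressable words           */
        unsigned long blocks = words / K;   /* M = 2^n / K                     */

        printf("words = %lu, blocks M = %lu, cache lines C = %u\n",
               words, blocks, C);           /* 65536, 16384, 128               */
        return 0;
    }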

Cache Mapping techniques:


• The transformation of data from main memory to cache memory is referred to as the memory mapping process.
• Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines.
• Three mapping functions are in common use: direct, associative and set-associative. (A small address-decomposition sketch follows the set-associative figure below.)

1. Direct Mapping
• It is the simplest mapping technique.
• It maps each block of main memory into only one possible cache line, i.e. a given main memory block can be placed in one and only one place in the cache.
• The mapping is many-to-one: many different blocks of memory map to the same cache line (though only one can occupy it at a time).
• The cache line number can be calculated as:
      i = j MOD m
  where i = cache line number, j = main memory block number, and m = number of lines in the cache.


For example, suppose there are 16 blocks of main memory and 4 cache lines, and we need to identify which cache line block number 14 of main memory maps to.
Here, i = 14 MOD 4 = 2.
So block number 14 of main memory is mapped to cache line number 2.

• Each physical address generated by the CPU is viewed as three fields:

      Cache tag bits   Cache line no   Word

For example, if the CPU generates the 6-bit address 000101, it is viewed as:

      Tag bits   Line no   Word
         00         01       01

This says: look at cache line number 1 and check whether it is tagged 00; if so, the desired word is at offset 01 (the second word) of that line.
A 2-bit tag is enough to identify the block of memory in this case.
• If the search matches, it is a cache hit; if it does not, it is a miss.
• Advantage: easy to implement.
• Disadvantage: not flexible; there is a contention problem when two frequently used blocks map to the same line.
• The figure below represents direct cache mapping.

[Figure: a 4-line cache (Line 0 to Line 3, each line with a tag) alongside a main memory of 16 blocks (Block 0 to Block 15), each block holding 4 words (words 0-3, 4-7, ..., 60-63).]

Figure: Direct cache mapping

Here, memory consists of 16 blocks, each containing 4 words (Block 0 has words 0, 1, 2, 3; Block 1 has words 4, 5, 6, 7; ...; Block 15 has words 60, 61, 62, 63). Similarly, each cache line maps 4 blocks of memory.

2. Associative Mapping
• This is a much more flexible mapping technique.
• Any main memory block can be loaded into any cache line position.
• It implements a many-to-many function: any block may occupy any line.
• Each physical address generated by the CPU is viewed as two fields:

      Cache tag no   Word


For example, if the CPU generates the 6-bit address 000101, it is viewed as:

      Tag no   Word
       0001     01

This says: search the cache for a line tagged 0001; if one is found, the desired word is at offset 01 of that line. The tag is simply the block number of main memory, so a 4-bit tag is required to identify the 16 blocks.
• Here, the tags of all the cache lines must be compared (in parallel) against the address tag to determine whether a given block is in the cache, so the hardware cost of implementation (i.e. the comparators) is higher.
• Advantage: flexibility.
• Disadvantage: increased hardware cost, increased access time.
• The figure below represents associative mapping:
[Figure: a 4-line cache (Line 0 to Line 3, each line with a tag) alongside a 16-block main memory, 4 words per block; any block may be loaded into any line.]

Figure: Associative mapping

3. Set-associative Mapping
• This is a combination of the direct and associative techniques; it overcomes the contention limitation of direct mapping and the increased hardware cost of associative mapping.
• The cache lines are grouped into sets; a given memory block maps to exactly one set but may reside in any line of that set.
• Each physical address generated by the CPU is viewed as three fields:

      Cache tag no   Set no   Word

For example, with the 4-line cache organized as 2 sets of 2 lines each, if the CPU generates the 6-bit address 000101, it is viewed as:

      Tag no   Set no   Word
       000        1       01
The cache first selects set 1 (block number MOD number of sets = 0001 MOD 2 = 1), then compares the tag 000 against the tags of both lines in that set; if one matches, the desired word is at offset 01 of that line.
• In this case a 3-bit tag is required: the tag and the set bit together identify each of the 16 blocks of main memory.

[Figure: a 4-line cache organized as Set 0 (Line 0, Line 1) and Set 1 (Line 2, Line 3), each line with a tag, alongside a 16-block main memory, 4 words per block.]

Figure: Set Associative cache mapping
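
To tie the three schemes together, here is a minimal C sketch (not from the chapter) that decomposes the 6-bit example address 000101 under each mapping, assuming the 16-block memory, 4-word blocks, 4-line cache and 2-set organization used above.

    #include <stdio.h>

    int main(void)
    {
        unsigned addr  = 0x05;            /* the 6-bit address 000101        */
        unsigned word  = addr & 0x3;      /* low 2 bits: word within a block */
        unsigned block = addr >> 2;       /* block number = 0001             */

        /* Direct: line = block mod 4 (2 bits), tag = block / 4 (2 bits).    */
        printf("direct:      tag=%u line=%u word=%u\n",
               block / 4, block % 4, word);

        /* Fully associative: the whole 4-bit block number is the tag.       */
        printf("associative: tag=%u word=%u\n", block, word);

        /* 2-way set associative, 2 sets: set = block mod 2 (1 bit),         */
        /* tag = block / 2 (3 bits).                                         */
        printf("set-assoc:   tag=%u set=%u word=%u\n",
               block / 2, block % 2, word);
        return 0;
    }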

Cache Replacement Policy:


The cache replacement policy is the technique for choosing which cache block to replace when a fully associative cache is full, or when the selected set of a set-associative cache is full. Note that there is no choice in a direct-mapped cache: a main memory address always maps to the same cache line and thus replaces whatever block is already there.
There are three common replacement policies.

i) Random
• This policy chooses the block to replace at random.
• It is simple to implement.
• However, it does not prevent replacing a block that is likely to be used again soon.

ii) Least Recently Used (LRU)
• This policy replaces the block that has not been accessed for the longest time, on the assumption that it is the least likely to be accessed in the near future.
• It provides an excellent hit/miss ratio but requires extra hardware to keep track of when blocks are accessed.

iii) First In First Out (FIFO)
• This policy replaces the block in the set that has been in the cache the longest.
• It can be implemented with a queue: each block address is pushed onto the queue when the block is brought into the cache, and the block to be replaced is chosen by popping the queue.
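
A minimal FIFO replacement sketch in C (not from the chapter), assuming a small 4-line fully associative cache; a ring-buffer index stands in for the queue described above.

    #include <stdio.h>

    #define LINES 4

    static int tags[LINES];      /* block number stored in each line (-1 = empty) */
    static int next_victim = 0;  /* FIFO: the oldest line is replaced first       */

    /* Returns 1 on hit, 0 on miss (after loading the block). */
    static int access_block(int block)
    {
        for (int i = 0; i < LINES; i++)
            if (tags[i] == block) return 1;          /* hit                       */

        tags[next_victim] = block;                   /* miss: replace oldest line */
        next_victim = (next_victim + 1) % LINES;
        return 0;
    }

    int main(void)
    {
        for (int i = 0; i < LINES; i++) tags[i] = -1;
        int trace[] = {0, 1, 2, 3, 0, 4, 0};
        for (int i = 0; i < 7; i++)
            printf("block %d: %s\n", trace[i],
                   access_block(trace[i]) ? "hit" : "miss");
        return 0;
    }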


Cache Writing technique/policy:


A cache temporarily stores a copy of data in rapidly accessible storage. It keeps the most recently used words in a small memory to increase the speed at which data is accessed; it acts as a buffer between RAM and the CPU and thus increases the speed at which data is available to the processor.
Whenever the processor wants to write a word, it checks whether the address it wants to write to is present in the cache. If the address is present in the cache, it is a write hit.
We can then update the value in the cache and avoid an expensive main memory access, but this creates an inconsistent-data problem: the cache and main memory now hold different data, which causes problems when two or more devices share the main memory (as in a multiprocessor system).
This is where write-through and write-back come into the picture.

Write Through:
In write-through, data is updated in the cache and in main memory simultaneously. This process is simpler and more reliable, and it is used when there are not many writes to the cache (the number of write operations is low).
It helps in data recovery (in case of a power outage or system failure). A data write experiences latency (delay), since we have to write to two locations (both memory and cache). Write-through solves the inconsistency problem, but it undermines the advantage of having a cache for write operations (the whole point of using a cache is to avoid multiple accesses to main memory).

Write Back:
In write-back, data is updated only in the cache and written to memory at a later time. Data is updated in memory only when the cache line is about to be replaced (cache line replacement is done using least recently used, FIFO or other algorithms, depending on the application).
Writes are done only to the cache. An UPDATE bit (called the dirty bit) is set whenever the line is written. Before a cache line is replaced, if its UPDATE bit is set, its contents are written back to main memory. The problem is that portions of main memory are then invalid for a certain period of time, and if other devices access those locations they will get the wrong contents. Therefore, access to main memory by I/O modules can only be allowed through the cache, which makes the circuitry complex and creates a potential bottleneck.
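
A minimal single-line toy in C (not from the chapter) contrasting the two policies; the block numbers, values and one-line cache are illustrative assumptions.

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy single-line cache illustrating the two write policies. */
    static unsigned main_mem[16];
    static unsigned cache_data;
    static int      cache_block = -1;
    static bool     dirty       = false;

    static void write_through(int block, unsigned value)
    {
        cache_block = block;
        cache_data  = value;
        main_mem[block] = value;          /* memory updated on every write */
    }

    static void write_back(int block, unsigned value)
    {
        if (cache_block != block) {
            if (dirty)                    /* evict: flush old block first  */
                main_mem[cache_block] = cache_data;
            cache_block = block;
            dirty = false;
        }
        cache_data = value;               /* only the cache is updated     */
        dirty = true;                     /* mark the line as modified     */
    }

    int main(void)
    {
        write_through(3, 7);
        printf("write-through: mem[3]=%u\n", main_mem[3]);                 /* 7       */

        write_back(5, 9);
        printf("write-back (before eviction): mem[5]=%u\n", main_mem[5]);  /* still 0 */
        write_back(6, 1);                 /* evicting block 5 flushes it   */
        printf("write-back (after eviction):  mem[5]=%u\n", main_mem[5]);  /* 9       */
        return 0;
    }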


Memory Fragmentation:
In a computer system, processes and resources are continuously loaded into and released from memory; because of this, free memory space becomes broken into small pieces. This creates small, unused, inefficient memory regions that are so small that normal processes cannot fit into them, degrading system capacity and performance. This problem is known as fragmentation. The extent of fragmentation depends on the memory allocation scheme. In most cases memory space is wasted, which is what is known as memory fragmentation.

Types of memory fragmentation:

There are two types of memory fragmentation.

Internal Fragmentation
When a process is assigned to a memory block and the process is smaller than the allocated block, the unused space left inside the assigned block is wasted. The difference between the assigned and requested memory is called internal fragmentation. It commonly arises when memory is divided into fixed-size blocks.
Solution: Memory should be partitioned into variable-size blocks, and the most suitable block should be assigned to the requesting process. In simple terms, internal fragmentation can be decreased by allocating the smallest partition that is still sufficient for the process. The issue is not solved completely, but it can be reduced to some extent.
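
A small sketch of the arithmetic in C (the 4 KB block size and 10,000-byte request are assumptions chosen for illustration):

    #include <stdio.h>

    int main(void)
    {
        unsigned block_size = 4096;    /* fixed allocation block size (assumed) */
        unsigned request    = 10000;   /* process request in bytes (assumed)    */

        unsigned blocks_used = (request + block_size - 1) / block_size;   /* 3     */
        unsigned allocated   = blocks_used * block_size;                  /* 12288 */

        /* The unused remainder of the last block is internal fragmentation. */
        printf("internal fragmentation = %u bytes\n", allocated - request); /* 2288 */
        return 0;
    }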


External Fragmentation
External fragmentation typically occurs with dynamic or variable-size allocation. Although the total free space in memory is sufficient to hold the process, that free space is not contiguous, which prevents the process from being loaded.
The solution to external fragmentation is compaction of memory contents: all the contents of memory are moved so that all the free memory is gathered together into one big block.

3.5 Composing Memory


An embedded system designer is often faced with needing a particular size of memory (RAM/ROM) while having readily available memories of a different size: the designer may need a 4K x 16 ROM but have only 2K x 8 ROMs available for use.

The memory size needed often differs from the sizes that are readily available.

When the available memory is larger than needed, we can simply ignore the unneeded high-order address bits and higher data lines. When the available memory is smaller, we need to compose several smaller memories into one larger memory.

Below are the cases where composing memory is needed:

a) To increase the number of bits per word (the word width), simply connect the available memories side by side, as shown in the figure.

b) To increase the number of words (the number of rows), simply connect the available memories top to bottom, as shown in the figure below:


c) If the available memories have a smaller word width as well as fewer words than required, we combine the above two techniques:
• first creating the number of columns of memories necessary to achieve the needed word width, and
• then creating the number of rows of memories necessary, along with a decoder, to achieve the needed number of words, as shown in the figure below:


Steps:

1. Find the number of given ROMs required:
      Number of required ROMs = (size of required ROM) / (size of given ROM)
2. Find the number of address lines of the required ROM:
      Number of address lines = log2(number of words in the required ROM)
3. Find the number of data lines of the required ROM:
      Number of data lines = number of bits in a word
4. If the word width (the multiplier) differs, align the given ROMs horizontally.
5. If the number of words (the multiplicand) differs, align the given ROMs vertically.
6. If both differ, align the given ROMs both horizontally and vertically (in rows and columns); a small sketch of this arithmetic follows.
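
A minimal C sketch of steps 1-6 (not from the chapter), using the 1K x 8 to 2K x 16 case of sample 4 below; the numbers can be changed for the other samples.

    #include <stdio.h>

    int main(void)
    {
        /* Given ROM: 1K x 8; required ROM: 2K x 16 (sample 4 below). */
        unsigned given_words = 1024, given_bits = 8;
        unsigned req_words   = 2048, req_bits   = 16;

        unsigned columns = req_bits  / given_bits;   /* side-by-side chips:  2 */
        unsigned rows    = req_words / given_words;  /* stacked chips:       2 */
        unsigned chips   = rows * columns;           /* total ROMs needed:   4 */

        unsigned addr_lines = 0;
        while ((1u << addr_lines) < req_words) addr_lines++;   /* log2(2048) = 11 */

        printf("chips=%u rows=%u columns=%u address lines=%u data lines=%u\n",
               chips, rows, columns, addr_lines, req_bits);
        return 0;
    }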

Samples:

1. Compose 1K x 8 ROM into 1K x 32 ROM


Here;
1K x 8 ------> 1K x 32

Now:
• No. of required given ROMs = (size of required ROM) / (size of given ROM) = (1K x 32) / (1K x 8) = 4

• No. of required address lines = log2(no. of words in required ROM) = log2(1024) = 10

• No. of required data lines = no. of bits in a word = 32

• Here the word width (multiplier) differs (8 vs 32), so we align the ROMs horizontally.

Hence, the required memory is:


2. Compose 1K x 8 ROM into 4K x 8 ROM.


Here;
1K x 8 ------> 4K x 8

Now:
• No. of required given ROMs = (size of required ROM) / (size of given ROM) = (4K x 8) / (1K x 8) = 4

• No. of required address lines = log2(no. of words in required ROM) = log2(4096) = 12

• No. of required data lines = no. of bits in a word = 8

• Here the number of words (multiplicand) differs (1K vs 4K), so we align the ROMs vertically.

Hence, the required memory is:

3. Compose 1K x 8 ROM into 8K x 8 ROM.


Here;
1K x 8 ------> 8K x 8

Now:
• No. of required given ROMs = (size of required ROM) / (size of given ROM) = (8K x 8) / (1K x 8) = 8

• No. of required address lines = log2(no. of words in required ROM) = log2(8192) = 13

• No. of required data lines = no. of bits in a word = 8

• Here the number of words (multiplicand) differs (1K vs 8K), so we align the ROMs vertically.

Hence, the required memory is:



4. Compose 1K x 8 ROM into 2K x 16 ROM.


Here;
1K x 8 ------> 2K x 16

Now:
• No. of required given ROMs = (size of required ROM) / (size of given ROM) = (2K x 16) / (1K x 8) = 4


• No. of required address lines = log2(no. of words in required ROM) = log2(2048) = 11

• No. of required data lines = no. of bits in a word = 16

• Here both the number of words and the word width differ (1K vs 2K and 8 vs 16), so we align the ROMs both horizontally and vertically.

• No. of rows = 2K / 1K = 2

• No. of columns = 16 / 8 = 2

Hence, the required memory is:


5. Construct 2^(k+1) x n and 2^k x 4n memories using 2^k x n memory modules.

Here;
First part: 2^k x n ------> 2^(k+1) x n
Now:
• No. of required given ROMs = (size of required ROM) / (size of given ROM) = (2^(k+1) x n) / (2^k x n) = 2
• No. of required address lines = log2(no. of words in required ROM) = log2(2^(k+1)) = k+1
• No. of required data lines = no. of bits in a word = n
• Here the number of words (multiplicand) differs (2^k vs 2^(k+1)), so we align the ROMs vertically.
• Hence, the required memory is:

Second part: 2^k x n ------> 2^k x 4n

• No. of required given ROMs = (size of required ROM) / (size of given ROM) = (2^k x 4n) / (2^k x n) = 4
• No. of required address lines = log2(no. of words in required ROM) = log2(2^k) = k
• No. of required data lines = no. of bits in a word = 4n
• Here the word width (multiplier) differs (n vs 4n), so we align the ROMs horizontally.

Now the required ROM is:

Reference: https://www.youtube.com/watch?v=ldIpuFnFgZw

Exam questions.

1. Combine 2K x 4 ROM to get 6K x 8 ROM. [2021 Fall]


2. Compose a 2K x 16 ROM using 1K x 8 ROMs. [2020 Fall]
3. What do you understand by cache memory? Discuss about cache replacement policy and cache
writing techniques. [2020 Fall]
4. Why is cache memory needed? Explain the principle of operation of EPROM with necessary
illustrations. [2019 spring]
5. Describe how the memory modules can be composed. Use a suitable example describing all the
three cases. [2019 spring]
6. Explain different types of RAM with RAM variation. [2019 Fall]
7. What is cache mapping? Explain cache mapping techniques. [2018 spring, 2017 Fall, 2014 Spring]
8. How can you compose memory to increase number and width of words? Design 2K x 16 ROMs
using 1K x 8 ROMs (1K=1024 words) [2018 Spring]
9. Describe a way to fulfill a requirement of 18 memory locations each 8 bit wide using 16x4
memory chips. [2018 Fall]
10. Why do we need DMA? Explain the working principle of DMA. [2018 Fall]
11. Explain memory fragmentation? How problem related to memory fragmentation can be solved
in ES? [2018 Fall]
12. Compose 1K x 8 ROMs into a 2K x 16 ROM. [2017 spring]
13. What is Direct Memory Access? Why such circuitry is needed? Explain with its block diagram.
[2017 spring]
14. Design 4 K x 8 ROMs using 1K x 8 ROMs. (1K=1024 words). [2016 Fall]
15. Compose a 2^(K+1) x 2n memory using 2^K x n memory modules. [2016 spring]
16. How can you increase the memory capacity and word length of a standard ROM? [2015 spring]
17. Sketch the internal design of 8x4 ROM memory. Explain different types of ROM. [2015 fall]
18. Why DMA is used? Explain in detail [2015 Fall]
19. What is Memory hierarchy? Explain the write ability and storage performance of Memory
devices. [2015 Fall]
20. Explain the operations of storing and erasing the data in UV-EPROM. [2014 Spring]
21. Briefly define each of the following: Masked-programmed ROM, PROM, EPROM, flashed
EEPROM. [2014 fall]
22. Define cache mapping. Explain set-associative mapping with figure. [2014 fall]
23. Write short notes on:
a) Set Associative cache mapping [2021 Fall, 2018 Fall]
b) DMA [2019 spring, 2016 Fall]
c) Types of Memory [2019 spring, 2017 spring]
d) Memory Management [2019 Fall]
e) Memory write and storage permanence [2017 Fall]
f) Cache memory [2015 spring]
g) Direct mapping [2015 Fall]

***
