Chapter 9 - Memory Organization and Addressing
We now give an overview of RAM – Random Access Memory. This is the memory called
“primary memory” or “core memory”. The term “core” is a reference to an earlier memory
technology in which magnetic cores were used for the computer’s memory. This discussion
will pull material from a number of chapters in the textbook.
Primary computer memory is best considered as an array of addressable units. Addressable
units are the smallest units of memory that have independent addresses. In a byte-
addressable memory unit, each byte (8 bits) has an independent address, although the
computer often groups the bytes into larger units (words, long words, etc.) and retrieves that
group. Most modern computers manipulate integers as 32-bit (4-byte) entities, so retrieve the
integers four bytes at a time.
In this author’s opinion, byte addressing in computers became important as the result of the
use of 8–bit character codes. Many applications involve the movement of large numbers of
characters (coded as ASCII or EBCDIC) and thus profit from the ability to address single
characters. Some computers, such as the CDC–6400, CDC–7600, and all Cray models, use
word addressing. This is a result of a design decision made when considering the main goal
of such computers – large computations involving integers and floating point numbers. The
word size in the CDC machines is 60 bits (why not 64? – I don’t know), yielding good precision
for numeric simulations such as fluid flow and weather prediction.
At the time of this writing (Summer 2011), computer memory as a technology is only about
sixty years old. The MIT Whirlwind (see Chapter 1 of this book), which became operational
in 1952, was the first computer to use anything that would today be recognized as a memory.
It is obvious that computer memory technology has changed drastically in the sixty years
since it was first developed as magnetic core memory. From the architectural viewpoint
(how memory interacts with other system components), the change may have been minimal.
However from the organizational (how the components are put together) and implementation
(what basic devices are used), the change has been astonishing. We may say that the two
major organizational changes have been the introduction of cache memory and the use of
multiple banks of single–bit chips to implement the memory. Implementation of memory
has gone through fewer phases, the major ones being: magnetic core memory and
semiconductor memory. While there are isolated cases of computer memory being
implemented with discrete transistors, most such memory was built using integrated chips of
varying complexity. It is fair to say that the astonishing evolution of modern computer
memory is due mostly to the ability to manufacture VLSI chips of increasing transistor count.
One of the standard ways to illustrate the progress of memory technology is to give a table
showing the price of a standard amount of memory, sometimes extrapolated from the price of
a much smaller component. The following table, found through Google and adapted from
[R74], shows a history of computer memory from 1957 through the end of 2010. The table
shows price (in US Dollars) per megabyte of memory, the access time (time to retrieve data
on a read operation), and the basic technology. We shall revisit these data when we discuss
the RISC vs. CISC controversy, and the early attempts to maximize use of memory.
We can use a truth table to specify the actions for a RAM.

Select  R/W   Action
  1      0    Memory contents are not changed.
  1      1    Memory contents are not changed.
  0      0    CPU writes data to the memory.
  0      1    CPU reads data from the memory.

Note that when Select = 1, nothing is happening to the memory: it is not being accessed
by the CPU and the contents do not change. When Select = 0, the memory is active and
something happens.
Consider now a ROM (Read Only Memory). From the viewpoint of the CPU there are only
two tasks for the memory:
CPU reads data from the memory.
CPU does not access the memory.
We need only one control signal to specify these two options. The natural choice is the
Select control signal as the R / W signal does not make sense if the memory cannot be
written by the CPU. The truth table for the ROM should be obvious:
Select Action
1 CPU is not accessing the memory.
0 CPU reads data from the memory.
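These two control conventions can be captured in a few lines of code. The following Python sketch is illustrative only (it is not part of the original text); it simply restates the two truth tables, using the active-low Select convention shown above.

    # Illustrative restatement of the RAM and ROM truth tables (active-low Select).
    def ram_action(select, read_write):
        if select == 1:
            return "memory contents are not changed"
        return "CPU reads data from the memory" if read_write == 1 else "CPU writes data to the memory"

    def rom_action(select):
        return "CPU is not accessing the memory" if select == 1 else "CPU reads data from the memory"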
In discussing memory, we make two definitions relating to the speed of the memory.
Memory access time is the time required for the memory to access the data; specifically, it is
the time between the instant that the memory address is stable in the MAR and the data are
available in the MBR. Note that the table above has many access times of 70 or 80 ns. The
unit “ns” stands for “nanoseconds”, one–billionth of a second.
Memory cycle time is the minimum time between two independent memory accesses. It
should be clear that the cycle time is at least as great as the access time, because the memory
cannot process an independent access while it is in the process of placing data in the MBR.
Suppose a byte-addressable computer with a 32-bit address space. The highest byte address
is 2^32 – 1. From this fact and the address allocation to multi-byte words, we conclude
the highest address for a 16-bit word is (2^32 – 2), and
the highest address for a 32-bit word is (2^32 – 4), because the 32-bit word addressed at
(2^32 – 4) comprises bytes at addresses (2^32 – 4), (2^32 – 3), (2^32 – 2), and (2^32 – 1).
As a 32-bit signed integer, the number 0x01020304 can be represented in decimal notation as
1·16^6 + 0·16^5 + 2·16^4 + 0·16^3 + 3·16^2 + 0·16^1 + 4·16^0 = 16,777,216 + 131,072 + 768 + 4 =
16,909,060. For those who like to think in bytes, this is (01)·16^6 + (02)·16^4 + (03)·16^2 + (04),
arriving at the same result. Note that the number can be viewed as having a “big end” and a
“little end”, as in the following figure.
The “big end” contains the most significant digits of the number and the “little end” contains
the least significant digits of the number. We now consider how these bytes are stored in a
byte-addressable memory. Recall that each byte, comprising two hexadecimal digits, has a
unique address in a byte-addressable memory, and that a 32-bit (four-byte) entry at address Z
occupies the bytes at addresses Z, (Z + 1), (Z + 2), and (Z + 3). The hexadecimal values
stored in these four byte addresses are shown below.
Address Big-Endian Little-Endian
Z 01 04
Z+1 02 03
Z+2 03 02
Z+3 04 01
Just to be complete, consider the 16–bit number represented by the four hex digits 0A0B.
Suppose that the 16-bit word is at location W; i.e., its bytes are at locations W and (W + 1).
The most significant byte is 0x0A and the least significant byte is 0x0B. The values in the
two addresses are shown below.
Address Big-Endian Little-Endian
W 0A 0B
W+1 0B 0A
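The two orderings can be checked with the standard Python struct module. This short sketch (not part of the original text) packs the example values 0x01020304 and 0x0A0B under each byte order and prints the resulting byte sequences.

    import struct

    # '>' requests big-endian packing, '<' requests little-endian packing.
    print(struct.pack(">I", 0x01020304).hex())  # 01020304 : big end stored first
    print(struct.pack("<I", 0x01020304).hex())  # 04030201 : little end stored first
    print(struct.pack(">H", 0x0A0B).hex())      # 0a0b
    print(struct.pack("<H", 0x0A0B).hex())      # 0b0a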
The figure below shows a graphical way to view these two options for ordering the bytes
copied from a register into memory. We suppose a 32-bit register with bits numbered from
31 through 0. Which end is placed first in the memory – at address Z? For big-endian, the
“big end” or most significant byte is first written. For little-endian, the “little end” or least
significant byte is written first.
There seems to be no advantage of one system over the other. Big–endian seems more
natural to most people and facilitates reading hex dumps (listings of a sequence of memory
locations), although a good debugger will remove that burden from all but the unlucky.
Big-endian computers include the IBM 360 series, Motorola 68xxx, and SPARC by Sun.
Little-endian computers include the Intel Pentium and related computers.
The big-endian vs. little-endian debate is one that does not concern most of us directly. Let
the computer handle its bytes in any order desired as long as it produces good results. The
only direct impact on most of us will come when trying to port data from one computer to a
computer of another type. Transfer over computer networks is facilitated by the fact that the
network interfaces for computers will translate to and from the network standard, which is
big-endian. The major difficulty will come when trying to read different file types.
The big-endian vs. little-endian debate shows in file structures when computer data are
“serialized” – that is written out a byte at a time. This causes different byte orders for the
same data in the same way as the ordering stored in memory. The orientation of the file
structure often depends on the machine upon which the software was first developed.
The following is a partial list of file types taken from a textbook once used by this author.
Little-endian Windows BMP, MS Paintbrush, MS RTF, GIF
Big-endian Adobe Photoshop, JPEG, MacPaint
Some applications support both orientations, with a flag in the header record indicating
which is the ordering used in writing the file.
Any student who is interested in the literary antecedents of the terms “big-endian” and “little-
endian” may find a quotation at the end of this chapter.
The linear view of memory is a way to think logically about the organization of the memory.
This view has the advantage of being rather simple, but has the disadvantage of describing
accurately only technologies that have long been obsolete. However, it is a consistent model
that is worth mention. The following diagram illustrates the linear model.
There are two problems with the above model, a minor nuisance and a “show–stopper”.
The minor problem is the speed of the memory; its access time will be exactly that of plain
variety DRAM (dynamic random access memory), which is at best 50 nanoseconds. We
must have better performance than that, so we go to other memory organizations.
The “show–stopper” problem is the design of the memory decoder. Consider two examples
for common memory sizes: 1MB (2^20 bytes) and 4GB (2^32 bytes) in a byte–oriented memory.
A 1MB memory would use a 20–to–1,048,576 decoder, as 2^20 = 1,048,576.
A 4GB memory would use a 32–to–4,294,967,296 decoder, as 2^32 = 4,294,967,296.
Neither of these decoders can be manufactured at acceptable cost using current technology.
At this point, it will be helpful to divert from the main narrative and spend some time in
reviewing the structure of decoders. We shall use this to illustrate the problems found when
attempting to construct large decoders. In particular, we note that larger decoders tend to be
slower than smaller ones. As a result, larger memories tend to be slower than smaller ones.
We shall see why this is the case, and how that impacts cache design, in particular.
Interlude: The Structure and Use of Decoders
For the sake of simplicity (and mainly because the figure has already been drawn, and
appears in an earlier chapter), we use a 2–to–4 enabled high, active–high decoder as an
example. The inferences from this figure can be shown to apply to larger decoders, both
active–high and active–low, though the particulars of active–low decoders differ a bit.
An N–to–2^N active–high decoder has N inputs, 2^N outputs, and 2^N N–input AND gates. The
corresponding active–low decoder would have 2^N N–input OR gates. Each of the N inputs to
either design will drive 2^(N–1) + 1 output gates. As noted above, a 1MB memory would require a
20–to–1,048,576 decoder, with 20–input output gates and each input driving 524,289 gates.
This seems to present a significant stretch of the technology. On the positive side, the output
is available after two gate delays.
Here, each level of decoder adds two gate delays to the total delay in placing the output. For
this example, the output is available 4 gate delays after the input is stable. We now
investigate the generalization of this design strategy to building large decoders.
Suppose that 8–to–256 (8–to–2^8) decoders, with output delays of 2 gate delays, were stock
items. A 1MB memory, using a 20–to–1,048,576 (20–to–2^20) decoder, would require three
layers of decoders: one 4–to–16 (4–to–2^4) decoder and two 8–to–256 (8–to–2^8) decoders.
For this circuit, the output is stable six gate delays after the input is stable.
A 4GB memory using a 32–to–4,294,967,296 (32–to–2^32) decoder, would require four levels
of 8–to–256 (8–to–2^8) decoders. For this circuit, the output is stable eight gate delays after
the input is stable. While seemingly fast, this does slow a memory.
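The delay argument can be summarized with a small calculation. The sketch below is an illustration (not from the original text); it assumes that each decoder level handles at most eight address bits and contributes two gate delays, matching the two examples just given.

    from math import ceil

    def decoder_gate_delays(address_bits, bits_per_level=8):
        # Each level of decoding adds two gate delays.
        return 2 * ceil(address_bits / bits_per_level)

    print(decoder_gate_delays(20))  # 6 gate delays for a 1MB (20-bit) address
    print(decoder_gate_delays(32))  # 8 gate delays for a 4GB (32-bit) address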
There is a slight variant of the decoder that suggests a usage found in modern memory
designs. It is presented here just to show that this author knows about it. This figure
generalizes to fabrication of an N–to–2^N decoder from two (N/2)–to–2^(N/2) decoders. In this design, a
1MB memory, using a 20–to–1,048,576 (20–to–2^20) decoder, would require two decoders,
each being 10–to–1,024 (10–to–2^10), and a 4GB memory, using a 32–to–4,294,967,296
(32–to–2^32) decoder, would require two decoders, each being 16–to–65,536 (16–to–2^16).
Here is a depiction of a 32–bit address, in which the lower order 28 bits are used to reference
addresses in physical memory.
The memory of all modern computers comprises a number of chips, which are combined to
cover the range of acceptable addresses. Suppose, in our example, that the basic memory
chips are 4MB chips. The 256 MB memory would be built from 64 chips and the address
space divided as follows:
6 bits to select the memory chip, as 2^6 = 64, and
22 bits to select the byte within the chip, as 2^22 = 4·2^20 = 4M.
The question is which bits select the chip and which are sent to the chip. Two options
commonly used are high-order memory interleaving and low-order memory interleaving.
Other options exist, but the resulting designs would be truly bizarre. We shall consider only
low-order memory interleaving in which the low-order address bits are used to select the chip
and the higher-order bits select the byte. The advantage of low–order interleaving over
high–order interleaving will be seen when we consider the principle of locality.
This low-order interleaving has a number of performance-related advantages. These are due
to the fact that consecutive bytes are stored in different chips, thus byte 0 is in chip 0, byte 1
is in chip 1, etc. In our example
Chip 0 contains bytes 0, 64, 128, 192, etc., and
Chip 1 contains bytes 1, 65, 129, 193, etc., and
Chip 63 contains bytes 63, 127, 191, 255, etc.
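The chip-selection rule for this low-order interleaved design is easy to state in code. The following sketch (illustrative, not from the original text) splits a byte address into a chip number (low 6 bits) and an address within the chip (high 22 bits).

    def split_low_order_interleaved(byte_address):
        chip = byte_address & 0x3F          # low 6 bits select one of 64 chips
        within_chip = byte_address >> 6     # remaining 22 bits address the byte within the chip
        return chip, within_chip

    print(split_low_order_interleaved(65))    # (1, 1) : byte 65 is in chip 1
    print(split_low_order_interleaved(192))   # (0, 3) : byte 192 is in chip 0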
Suppose that the computer has a 64 bit–data bus from the memory to the CPU. With the
above low-order interleaved memory it would be possible to read or write eight bytes at a
time, thus giving rise to a memory that is close to 8 times faster. Note that there are two
constraints on the memory performance increase for such an arrangement.
1) The number of chips in the memory – here it is 64.
2) The width of the data bus – here it is 8, or 64 bits.
In this design, the chip count matches the bus width; it is a balanced design.
To anticipate a later discussion, consider the above memory as connected to a cache memory
that transfers data to and from the main memory in 64–bit blocks. When the CPU first
accesses an address, all of the words (bytes, for a byte addressable memory) in that block are
copied into the cache. Given the fact that there is a 64–bit data bus between the main DRAM
and the cache, the cache can be very efficiently loaded. We shall have a great deal to say
about cache memory later in this chapter.
A design implementing the address scheme just discussed might use a 6–to–64 decoder, or a
pair of 3–to–8 decoders to select the chip. The high order bits are sent to each chip and
determine the location within the chip. The next figure suggests the design.
Think of the 4,194,304 (2^22) bits in the 4Mb chip as being arranged in a two dimensional array
of 2048 rows (numbered 0 to 2047), each of 2048 columns (also numbered 0 to 2047). What
we have can be shown in the figure below.
Here is a picture of a SIMM card. It has 60 connectors, arranged in two rows of 30. It
appears to be parity memory, as we see nine chips on each side of the card. That is one chip
for each of the data bits, and a ninth chip for the parity bit for each 8–bit byte.
Here is a picture of a DIMM card. It appears to be an older card, with only 256 MB
capacity. Note the eight chips; this has no parity memory.
Suppose that the byte with address 0x124A is requested and found not to be in the L2 cache.
A cache line in the L2 cache would be filled with the 16 bytes with addresses ranging from
0x1240 through 0x124F. This might be done in two transfers of 8 bytes each.
We close this part of the discussion by examining some specifications of a memory chip that
as of July 2011 seemed to be state-of-the-art. This is the Micron DDR2 SDRAM, available in three models:
MT46H512M4 64 MEG x 4 x 8 banks
MT47H256M8 32 MEG x 8 x 8 banks
MT47H128M16 16 MEG x 16 x 8 banks
Collectively, the memories are described by Micron [R89] as “high-speed dynamic random–
access memory that uses a 4n–prefetch architecture with an interface designed to transfer
two data words per clock cycle at the I/O bond pads.” But what is “prefetch architecture”?
According to Wikipedia [R90]
“The prefetch buffer takes advantage of the specific characteristics of memory
accesses to a DRAM. Typical DRAM memory operations involve three phases
(line precharge, row access, column access). Row access is … the long and slow
phase of memory operation. However once a row is read, subsequent column
accesses to that same row can be very quick, as the sense amplifiers also act as
latches. For reference, a row of a 1Gb DDR3 device is 2,048 bits wide, so that
internally 2,048 bits are read into 2,048 separate sense amplifiers during the row
access phase. Row accesses might take 50 ns depending on the speed of the
DRAM, whereas column accesses off an open row are less than 10 ns.”
“In a prefetch buffer architecture, when a memory access occurs to a row the
buffer grabs a set of adjacent datawords on the row and reads them out ("bursts"
them) in rapid-fire sequence on the IO pins, without the need for individual
column address requests. This assumes the CPU wants adjacent datawords in
memory which in practice is very often the case. For instance when a 64 bit CPU
accesses a 16 bit wide DRAM chip, it will need 4 adjacent 16 bit datawords to
make up the full 64 bits. A 4n prefetch buffer would accomplish this exactly ("n"
refers to the IO width of the memory chip; it is multiplied by the burst depth "4"
to give the size in bits of the full burst sequence).”
“The prefetch buffer depth can also be thought of as the ratio between the core
memory frequency and the IO frequency. In an 8n prefetch architecture (such as
DDR3), the IOs will operate 8 times faster than the memory core (each memory
access results in a burst of 8 datawords on the IOs). Thus a 200 MHz memory
core is combined with IOs that each operate eight times faster (1600
megabits/second). If the memory has 16 IOs, the total read bandwidth would be
200 MHz x 8 datawords/access x 16 IOs = 25.6 gigabits/second (Gbps), or 3.2
gigabytes/second (GBps). Modules with multiple DRAM chips can provide
correspondingly higher bandwidth.”
Each is compatible with 1066 MHz synchronous operation at double data rate. For the
MT47H128M16 (16 MEG x 16 x 8 banks, or 128 MEG x 16), the memory bus can
apparently be operated at four times the speed of the internal memory core; hence the 1066 MHz.
Here is a functional block diagram of the 128 Meg x 16 configuration, taken from the Micron
reference [R91]. Note that there is a lot going on inside that chip.
Here are the important data and address lines to the memory chip.
A[13:0] The address inputs; either row address or column address.
DQ[15:0] Bidirectional data input/output lines for the memory chip.
A few of these control signals are worth mention. Note that most of the control signals are
active–low; this is denoted in the modern notation by the sharp sign.
CS# Chip Select. This is active low, hence the “#” at the end of the signal name.
When low, this enables the memory chip command decoder.
When high, it disables the command decoder, and the chip is idle.
RAS# Row Address Strobe. When enabled, the address refers to the row number.
CAS# Column Address Strobe. When enabled, the address refers to the column number.
WE# Write Enable. When enabled, the CPU is writing to the memory.
The following truth table explains the operation of the chip.
CS# RAS# CAS# WE# Command / Action
1 d d d Deselect / Continue previous operation
0 1 1 1 NOP / Continue previous operation
0 0 1 1 Select and activate row
0 1 0 1 Select column and start READ burst
0 1 0 0 Select column and start WRITE burst
In a standard memory system, an addressable item is referenced by its address. In a two level
memory system, the primary memory is first checked for the address. If the addressed item
is present in the primary memory, we have a hit, otherwise we have a miss. The hit ratio is
defined as the number of hits divided by the total number of memory accesses; 0.0 ≤ h ≤ 1.0.
Given a faster primary memory with an access time TP and a slower secondary memory with
access time TS, we compute the effective access time as a function of the hit ratio. The
applicable formula is TE = hTP + (1.0 – h)TS.
RULE: In this formula we must have TP < TS. This inequality defines the terms
“primary” and “secondary”. In this course TP always refers to the cache memory.
For our first example, we consider cache memory, with a fast cache acting as a front-end for
primary memory. In this scenario, we speak of cache hits and cache misses. The hit ratio is
also called the cache hit ratio in these circumstances. For example, consider TP = 10
nanoseconds and TS = 80 nanoseconds. The formula for effective access time becomes
TE = h·10 + (1.0 – h)·80. For sample values of the hit ratio:
Hit Ratio Access Time
0.5 45.0
0.9 17.0
0.99 10.7
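The table above is easy to reproduce. Here is a minimal Python sketch (not part of the original text) of the formula TE = h·TP + (1 – h)·TS with TP = 10 ns and TS = 80 ns.

    def effective_access_time(h, t_primary, t_secondary):
        return h * t_primary + (1.0 - h) * t_secondary

    for h in (0.5, 0.9, 0.99):
        print(h, effective_access_time(h, 10.0, 80.0))   # 45.0, 17.0, 10.7 nanoseconds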
The reason that cache memory works is that the principle of locality enables high values of
the hit ratio; in fact h ≥ 0.90 is a reasonable value. For this reason, a multi-level memory
structure behaves almost as if it were a very large memory with the access time of the smaller
and faster memory. Having come up with a technique for speeding up our large monolithic
memory, we now investigate techniques that allow us to fabricate such a large main memory.
At system start–up, the faster cache contains no valid data, which are copied as needed from
the slower secondary memory. Each block would have three fields associated with it
The tag field identifying the memory addresses contained
Valid bit set to 0 at system start–up.
set to 1 when valid data have been copied into the block
Dirty bit set to 0 at system start–up.
set to 1 whenever the CPU writes to the faster memory
set to 0 whenever the contents are copied to the slower memory.
The basic unit of a cache is called a “cache line”, which comprises the data copied from the
slower secondary memory and the required ID fields. A 16–KB cache might contain 1,024
cache lines with the following structure.
D bit V Bit Tag 16 indexed entries (16 bytes total)
0 1 0xAB712 M[0xAB7120] … M[0xAB712F]
We now face a problem that is unique to cache memories. How do we find an addressed
item? In the primary memory, the answer is simple; just go to the address and access the
item. The cache has far fewer addressable entities than the secondary memory. For
example, this cache has 16 kilobytes set aside to store a selection of data from a 16 MB
memory. It is not possible to assign a unique address for each possible memory item.
The choice of where in the cache to put a memory block is called the placement problem.
The method of finding a block in the cache might be called the location problem. We begin
with the simplest placement strategy. When a memory block is copied into a cache line, just
place it in the first available cache line. In that case, the memory block can be in any given
cache line. We now have to find it when the CPU references a location in that block.
The Associative Cache
The most efficient search strategy is based on associative memory, also called content
addressable memory. Unlike sequential searches or binary search on an array, the contents
of an associative memory are all searched at the same time. In terminology from the class on
algorithm analysis, it takes one step to search an associative memory.
Consider an array of 256 entries, indexed from 0 to 255 (or 0x0 to 0xFF). Suppose that we
are searching the memory for entry 0xAB712. Normal memory would be searched using a
standard search algorithm, as learned in beginning programming classes. If the memory is
unordered, it would take on average 128 searches to find an item. If the memory is ordered,
binary search would find it in 8 searches.
Associative memory would find the item in one search. Think of the control circuitry as
“broadcasting” the data value (here 0xAB712) to all memory cells at the same time. If one of
the memory cells has the value, it raises a Boolean flag and the item is found.
We do not consider duplicate entries in the associative memory. This can be handled by
some rather straightforward circuitry, but is not done in associative caches. We now focus
on the use of associative memory in a cache design, called an “associative cache”.
Assume a number of cache lines, each holding 16 bytes. Assume a 24–bit address. The
simplest arrangement is an associative cache. It is also the hardest to implement.
Divide the 24–bit address into two parts: a 20–bit tag and a 4–bit offset. The 4–bit offset is
used to select the position of the data item in the cache line.
Bits 23 – 4 3–0
Fields Tag Offset
A cache line in this arrangement would have the following format.
D bit V Bit Tag 16 indexed entries
0 1 0xAB712 M[0xAB7120] … M[0xAB712F]
The placement of the 16 byte block of memory into the cache would be determined by a
cache line replacement policy. The policy would probably be as follows:
1. First, look for a cache line with V = 0. If one is found, then it is “empty”
and available, as nothing is lost by writing into it.
2. If all cache lines have V = 1, look for one with D = 0. Such a cache line
can be overwritten without first copying its contents back to main memory.
When the CPU issues an address for memory access, the cache logic determines the part that
is to be used for the cache line tag (here 0xAB712) and performs an associative search on the
tag part of the cache memory. Only the tag memory in an associative cache is set up as true
associative memory; the rest is standard SRAM. One might consider the associative cache as
two parallel memories, if that helps.
After one clock cycle, the tag is either found or not found. If found, the byte is retrieved. If
not, the byte and all of its block are fetched from the secondary memory.
The Direct Mapped Cache
This strategy is simplest to implement, as the cache line index is determined by the address.
Assume 256 cache lines, each holding 16 bytes. Assume a 24–bit address. Recall that 256 =
2^8, so that we need eight bits to select the cache line.
Divide the 24–bit address into three fields: a 12–bit explicit tag, an 8–bit line number, and a
4–bit offset within the cache line. Note that the 20–bit memory tag is divided between the
12–bit cache tag and 8–bit line number.
Bits 23 – 12 11 – 4 3–0
Cache View Tag Line Offset
Address View Block Number Offset
Consider the address 0xAB7129. It would have
Tag = 0xAB7
Line = 0x12
Offset = 0x9
Again, the cache line would contain M[0xAB7120] through M[0xAB712F]. The cache line
would also have a V bit and a D bit (Valid and Dirty bits). This simple implementation often
works, but it is a bit rigid. Each memory block has one, and only one, cache line into which
it might be placed. A design that is a blend of the associative cache and the direct mapped
cache might be useful.
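The three fields can be extracted with simple shifts and masks. This sketch (illustrative only, not part of the original text) splits a 24-bit address for the direct mapped cache just described: 4 offset bits, 8 line bits, 12 tag bits.

    def split_direct_mapped(address):
        offset = address & 0xF           # low 4 bits
        line = (address >> 4) & 0xFF     # next 8 bits select one of 256 cache lines
        tag = address >> 12              # remaining 12 bits are the cache tag
        return tag, line, offset

    print([hex(f) for f in split_direct_mapped(0xAB7129)])   # ['0xab7', '0x12', '0x9']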
An N–way set–associative cache uses direct mapping, but allows a set of N memory blocks
to be stored in the line. This allows some of the flexibility of a fully associative cache,
without the complexity of a large associative memory for searching the cache.
Suppose a 2–way set–associative implementation of the same cache memory. Again assume
256 cache lines, each holding 16 bytes. Assume a 24–bit address. Recall that 256 = 2^8, so
that we need eight bits to select the cache line. Consider addresses 0xCD4128 and
0xAB7129. Each would be stored in cache line 0x12. Set 0 of this cache line would have
one block, and set 1 would have the other.
Set 0:  D = 1  V = 1  Tag = 0xCD4  Contents = M[0xCD4120] … M[0xCD412F]
Set 1:  D = 0  V = 1  Tag = 0xAB7  Contents = M[0xAB7120] … M[0xAB712F]
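The point of the example is that both addresses produce the same line number, so they can coexist only because the line holds two sets. A short sketch (not from the original text) confirms the line computation.

    def cache_line_number(address):
        return (address >> 4) & 0xFF      # discard the 4-bit offset, keep the 8-bit line number

    print(hex(cache_line_number(0xCD4128)))   # 0x12
    print(hex(cache_line_number(0xAB7129)))   # 0x12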
Examples of Cache Memory
We need to review cache memory and work some specific examples. The idea is simple, but
fairly abstract. We must make it clear and obvious. To review, we consider the main
memory of a computer. This memory might have a size of 384 MB, 512 MB, 1GB, etc. It is
divided into blocks of size 2^K bytes, with K > 2.
In general, the N–bit address is broken into two parts, a block tag and an offset.
The most significant (N – K) bits of the address are the block tag
The least significant K bits represent the offset within the block.
We use a specific example for clarity.
We have a byte addressable memory, with a 24–bit address.
The cache block size is 16 bytes, so the offset part of the address is K = 4 bits.
In our example, the address layout for main memory is as follows:
Divide the 24–bit address into two parts: a 20–bit tag and a 4–bit offset.
Bits 23 – 4 3–0
Fields Tag Offset
Let’s examine the sample address, 0xAB7129, in terms of the bit divisions above.
Bits: 23 – 20 19 – 16 15 – 12 11 – 8 7–4 3–0
Hex Digit A B 7 1 2 9
Field 0xAB712 0x09
So, the tag field for this block contains the value 0xAB712. The tag field of the cache line
must also contain this value, either explicitly or implicitly. It is the cache line size that
determines the size of the blocks in main memory. They must be the same size, here 16
bytes.
All cache memories are divided into a number of cache lines. This number is also a power of
two. Our example has 256 cache lines. Where in the cache is the memory block placed?
Associative Cache
As a memory block can go into any available cache line, the cache tag must represent
the memory tag explicitly: Cache Tag = Block Tag. In our example, it is 0xAB712.
The cache line is written back only when it is replaced. The advantage of this is that it is a
faster strategy. Writes always proceed at cache speed. Furthermore, this plays on the
locality theme. Suppose each entry in the cache is written, a total of 16 cache writes. At the
end of this sequence, the cache line will eventually be written to the slower memory. This is
one slow memory write for 16 cache writes. The disadvantage of this strategy is that it is
more complex, requiring the use of a dirty bit.
Cache Line Replacement
Assume that memory block 0xAB712 is present in cache line 0x12. We now get a memory
reference to address 0x895123. This is found in memory block 0x89512, which must be
placed in cache line 0x12. The following holds for both a memory read from and a memory
write to 0x895123. The process is as follows.
1. The valid bit for cache line 0x12 is examined. If (Valid = 0), there is nothing
in the cache line, so go to Step 5.
2. The memory tag for cache line 0x12 is examined and compared to the desired
tag 0x895. If (Cache Tag = 0x895) go to Step 6.
3. The cache tag does not hold the required value. Check the dirty bit.
If (Dirty = 0) go to Step 5.
4. Here, we have (Dirty = 1). Write the cache line back to memory block 0xAB712.
5. Read memory block 0x89512 into cache line 0x12. Set Valid = 1 and Dirty = 0.
6. With the desired block in the cache line, perform the memory operation.
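The six steps above translate directly into code. The following Python sketch is an illustration under assumed names (CacheLine, memory); it is not the textbook's implementation, but it follows the same write-back logic.

    from dataclasses import dataclass

    @dataclass
    class CacheLine:
        valid: bool = False
        dirty: bool = False
        tag: int = 0
        data: bytes = bytes(16)

    def access_block(line, block_tag, memory):
        # Steps 1 and 2: a hit needs a valid line whose tag matches the desired tag.
        if line.valid and line.tag == block_tag:
            return line                          # Step 6: use the line as it is
        # Steps 3 and 4: write back the old block only if it has been modified.
        if line.valid and line.dirty:
            memory[line.tag] = line.data
        # Step 5: read the requested block, mark the line valid and clean.
        line.data = memory[block_tag]
        line.tag = block_tag
        line.valid = True
        line.dirty = False
        return line                              # Step 6: perform the memory operation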
We have three different major strategies for cache mapping.
Direct Mapping is the simplest strategy, but it is rather rigid. One can devise “almost
realistic” programs that defeat this mapping. It is possible to have considerable block
replacement with a cache that is mostly empty.
Fully Associative offers the most flexibility, in that all cache lines can be used. This is also
the most complex, because it uses a larger associative memory, which is complex and costly.
N–Way Set Associative is a mix of the two strategies. It uses a smaller (and simpler)
associative memory. Each cache line holds N = 2^K sets, each the size of a memory block.
Each cache line has N cache tags, one for each set.
Consider variations of mappings to store 256 memory blocks.
Direct Mapped Cache 256 cache lines
“1–Way Set Associative” 256 cache lines 1 set per line
2–Way Set Associative 128 cache lines 2 sets per line
4–Way Set Associative 64 cache lines 4 sets per line
8–Way Set Associative 32 cache lines 8 sets per line
16–Way Set Associative 16 cache lines 16 sets per line
32–Way Set Associative 8 cache lines 32 sets per line
64–Way Set Associative 4 cache lines 64 sets per line
128–Way Set Associative 2 cache lines 128 sets per line
256–Way Set Associative 1 cache line 256 sets per line
Fully Associative Cache 256 sets
N–Way Set Associative caches can be seen as a hybrid of the Direct Mapped Caches
and Fully Associative Caches. As N goes up, the performance of an N–Way Set Associative
cache improves. After about N = 8, the improvement is so slight as not to be worth the
additional cost.
We now address two questions for this design before addressing the utility of a third level in
the cache. The first question is why the L1 cache is split into two parts. The second question
is why the cache has two levels. Suffice it to say that each design decision has been well
validated by empirical studies; we just give a rationale.
There are several reasons to have a split cache, either between the CPU and main memory or
between the CPU and a higher level of cache. One advantage is the “one way” nature of the
L1 Instruction Cache; the CPU cannot write to it. This means that the I–Cache is simpler and
faster than the D–Cache; faster is always better. In addition, having the I–Cache provides
some security against self modifying code; it is difficult to change an instruction just fetched
and write it back to main memory. There is also slight security against execution of data;
nothing read through the D–Cache can be executed as an instruction.
The primary advantage of the split level–1 cache is support of a modern pipelined CPU. A
pipeline is akin to a modern assembly line. Consider an assembly line in an auto plant.
There are many cars in various stages of completion on the same line. In the CPU pipeline,
there are many instructions (generally 5 to 12) in various stages of execution. Even in the
simplest design, it is almost always the case that the CPU will try to fetch an instruction in
the same clock cycle as it attempts to read data from memory or write data to memory.
Here is a schematic of the pipelined CPU for the MIPS computer.
Virtual Memory
We now turn to the next example of a memory hierarchy, one in which a magnetic disk
normally serves as a “backing store” for primary core memory. This is virtual memory.
While many of the details differ, the design strategy for virtual memory has much in common
with that of cache memory. In particular, VM is based on the idea of program locality.
Virtual memory has a precise definition and a definition implied by common usage. We
discuss both. Precisely speaking, virtual memory is a mechanism for translating logical
addresses (as issued by an executing program) into actual physical memory addresses. The
address translation circuitry is called a MMU (Memory Management Unit).
This definition alone provides a great advantage to an Operating System, which can then
allocate processes to distinct physical memory locations according to some optimization.
This has implications for security; individual programs do not have direct access to physical
memory. This allows the OS to protect specific areas of memory from unauthorized access.
The invention of time–sharing operating systems introduced another variant of VM, now
part of the common definition. A program and its data could be “swapped out” to the disk to
allow another program to run, and then “swapped in” later to resume.
Virtual memory allows the program to have a logical address space much larger than the
computer's physical address space. It maps logical addresses onto physical addresses and
moves “pages” of memory between disk and main memory to keep the program running.
An address space is the range of addresses, considered as unsigned integers, that can be
generated. An N–bit address can access 2^N items, with addresses 0 … 2^N – 1.
16–bit address    2^16 items    0 to 65,535
20–bit address    2^20 items    0 to 1,048,575
32–bit address    2^32 items    0 to 4,294,967,295
In all modern applications, the physical address space is no larger than the logical address
space. It is often somewhat smaller than the logical address space. As examples, we use a
number of machines with 32–bit logical address spaces.
Machine Physical Memory Logical Address Space
VAX–11/780 16 MB 4 GB (4,096 MB)
Pentium (2004) 128 MB 4 GB
Desktop Pentium 512 MB 4 GB
Server Pentium 4 GB 4 GB
IBM z/10 Mainframe 384 GB 2^64 bytes = 2^34 GB
Organization of Virtual Memory
Virtual memory is organized very much in the same way as cache memory. In particular, the
formula for effective access time for a two–level memory system (pages 381 and 382 of this
text) still applies. The dirty bit and valid bit are still used, with the same meaning. The
names are different, and the timings are quite different. When we speak of virtual memory,
we use the terms “page” and “page frame” rather than “memory block” and “cache line”.
In the virtual memory scenario, a page of the address space is copied from the disk and
placed into an equally sized page frame in main memory.
Another minor difference between standard cache memory and virtual memory is the way in
which the memory blocks are stored. In cache memory, both the tags and the data are stored
in a single fast memory called the cache. In virtual memory, each page is stored in main
memory in a place selected by the operating system, and the address recorded in a page table
for use of the program.
Here is an example based on a configuration that runs through this textbook. Consider a
computer with a 32–bit address space. This means that it can generate 32–bit logical
addresses. Suppose that the memory is byte addressable, and that there are 2^24 bytes of
physical memory, requiring 24 bits to address. The logical address is divided as follows:
Bits    31 – 12        11 – 0
Field   Page Number    Offset in Page
The physical address associated with the page frame in main memory is organized as follows:
Bits    23 – 12        11 – 0
Field   Address Tag    Offset in Page Frame
Virtual memory uses the page table to translate virtual addresses into physical addresses. In
most systems, there is one page table per process. Conceptually, the page table is an array,
indexed by logical page number, of the address tags associated with each process. But note that such
an array can be larger than the main memory itself. In our example, each address tag is a
12–bit value, requiring two bytes to store, as the architecture cannot access fractional bytes.
The page number is a 20–bit number, from 0 through 1,048,575. The full page table would
require two megabytes of memory to store.
Each process on a computer will be allocated a small page table containing mappings for the
most recently used logical addresses. Each table entry contains the following information:
1. The valid bit, which indicates whether or not there is a valid address tag (physical
page number) present in that entry of the page table.
2. The dirty bit, indicating whether or not the data in the referenced page frame
has been altered by the CPU. This is important for page replacement policies.
3. The 20–bit page number from the logical address, indicating what logical page
is being stored in the referenced page frame.
4. The 12–bit unsigned number representing the address tag (physical page number).
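For the example above (20-bit page number, 12-bit offset, 12-bit address tag), the translation itself is a simple split and lookup. This is a sketch only; the page_table dictionary is a stand-in for the real structure maintained by the operating system, and the sample frame number is hypothetical.

    def translate(logical_address, page_table):
        page = logical_address >> 12         # 20-bit page number
        offset = logical_address & 0xFFF     # 12-bit offset within the page
        frame = page_table[page]             # 12-bit address tag; a KeyError models a page fault
        return (frame << 12) | offset        # 24-bit physical address

    print(hex(translate(0xAB712345, {0xAB712: 0x3F5})))   # 0x3f5345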
More on Virtual Memory: Can It Work?
Consider again the virtual memory system just discussed. Each memory reference is based
on a logical address, and must access the page table for translation.
But wait! The page table is in memory.
Does this imply two memory accesses for each memory reference?
This is where the TLB (Translation Look–aside Buffer) comes in. It is a cache for a page
table, more accurately called the “Translation Cache”.
The TLB is usually implemented as a split associative cache.
One associative cache for instruction pages, and
One associative cache for data pages.
A page table entry in main memory is accessed only if the TLB has a miss.
Solved Problems
Here are some solved problems related to byte ordering in memory.
1. Suppose one has the following memory map as a result of a core dump.
The memory is byte addressable.
Address 0x200 0x201 0x202 0x203
Contents 02 04 06 08
What is the value of the 32–bit long integer stored at address 0x200?
This is stored in the four bytes at addresses 0x200, 0x201, 0x202, and 0x203.
Big Endian: The number is 0x02040608, or 0204 0608. Its decimal value is
2·256^3 + 4·256^2 + 6·256^1 + 8·256^0 = 33,818,120.
Little Endian: The number is 0x08060402, or 0806 0402. Its decimal value is
8·256^3 + 6·256^2 + 4·256^1 + 2·256^0 = 134,611,970.
NOTE: Read the bytes backwards, not the hexadecimal digits.
Powers of 256 are 256^0 = 1, 256^1 = 256, 256^2 = 65,536, 256^3 = 16,777,216.
2. Suppose one has the following memory map as a result of a core dump.
The memory is byte addressable.
Address 0x200 0x201 0x202 0x203
Contents 02 04 06 08
What is the value of the 16–bit integer stored at address 0x200?
This is stored in the two bytes at addresses 0x200 and 0x201.
Big Endian The value is 0x0204.
The decimal value is 2·256 + 4 = 516.
Little Endian: The value is 0x0402.
The decimal value is 4·256 + 2 = 1,026.
Note: The bytes at addresses 0x202 and 0x203 are not part of this 16–bit integer.
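Both solved problems can be checked with Python's int.from_bytes, as in this short sketch (not part of the original solutions).

    dump = bytes([0x02, 0x04, 0x06, 0x08])        # contents of 0x200 through 0x203

    print(int.from_bytes(dump, "big"))            # 33818120   (problem 1, big-endian)
    print(int.from_bytes(dump, "little"))         # 134611970  (problem 1, little-endian)
    print(int.from_bytes(dump[:2], "big"))        # 516        (problem 2, big-endian)
    print(int.from_bytes(dump[:2], "little"))     # 1026       (problem 2, little-endian)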
3. You are asked to implement a 128M by 32 memory (1M = 220), using only
16M by 8 memory chips.
a) What is the minimum size of the MAR?
b) What is the size of the MBR?
c) How many 16M by 8 chips are required for this design?
Answer: a) 128M = 2^7·2^20 = 2^27, so the minimum MAR size is 27 bits.
b) The MBR size is 32 bits.
c) (128M × 32) / (16M × 8) = 8 × 4 = 32 chips.
4. Complete the following table describing the memory and chip count needed to fabricate each memory system.
ANSWER: We begin by showing the general formulae and then giving a few specific
answers. First, we must define some variables, so that we may state some equations.
Let N1 be the number of addressable units in the memory system
M1 be the number of bits for each entry in the memory system
N2 be the number of addressable units in the memory chip
M2 be the number of bits for each entry in the memory chip.
So that for making a 64K by 4 memory from a 1K by 8 chip, we have
N1 = 64K = 2^6·2^10 = 2^16, as 1K = 2^10 = 1,024.
M1 = 4
N2 = 1K = 2^10.
M2 = 8.
Note that in most modern computers, the actual number of bits in the MAR is set at design
time and does not reflect the actual memory size. Thus, all computers in the Pentium™ class
have 32-bit MAR’s, even if the memory is 256MB = 256·1MB = 2^8·2^20 B = 2^28 bytes.
N1 = 32K = 2^5·2^10 = 2^15. Solve 2^(P–1) < 2^15 ≤ 2^P to get P = 15, or 15 bits in the MAR.
N1 = 64K = 2^6·2^10 = 2^16. Solve 2^(P–1) < 2^16 ≤ 2^P to get P = 16, or 16 bits in the MAR.
N1 = 10K = 5·2K = 5·2^11 = 1.25·2^13, not a power of 2.
Solve 2^(P–1) < 1.25·2^13 ≤ 2^P to get P = 14. Note that 2^13 = 8K and 2^14 = 16K.
For most of the table, one may compute the number of chips needed by the following
formula: Chips = (N1 × M1) / (N2 × M2), or the total number of bits in the memory system
divided by the total number of bits in the memory chip. In actual fact, this works only when
one of the two following conditions holds:
either M1 / M2 is a whole number (as M1 = 4 and M2 = 1),
or M2 / M1 is a whole number (as M1 = 4 and M2 = 8).
The analysis in the 10K-by-10 case, in which neither of these conditions holds, is a bit more
complicated. Here we present a detailed discussion of the 64K-by-4 case, followed by the
answers to all but the 10K-by-10 case, which we also discuss in detail.
For 64K-by-4 fabricated from 1K-by-4, it is obvious that each 4-bit entry in the memory
system is stored in one 4-bit memory chip, so that the total number of chips required is
simply 64, or (64K × 4) / (1K × 4).
For 64K-by-4 fabricated from 2K-by-1 chips, it should be obvious that four entries in the
2K-by-1 chip are required to store each of the 4-bit entries in the memory system. The
easiest way to achieve this goal is to arrange the memory chips in “banks”, with four of the
chips to each bank. The number of banks required is 64K / 2K = 32, for a total of 128 chips.
Note that this agrees with the result (64K × 4) / (2K × 1) = 256K / 2K = 128.
For 64K-by-4 fabricated from 1K-by-8 chips, it should be obvious that the 8-bit entries in the
chip can store two of the 4-bit entries in the memory system. For this reason, each 1K-by-8
chip can store 2K entries in the main memory and the number of chips needed is 64K / 2K or
32. This answer is the same as (64K × 4) / (1K × 8) = 256K / 8K = 32.
From this point until the 10K-by-8 entry we may just argue relative sizes of the memories, so
that the 64K-by-8 memory is twice the size of the 64K-by-4, the 32K-by-4 memory is half
the size of the 64K-by-4 memory, etc.
10K-by-8 memory
From 1K-by-4
(N1 × M1) / (N2 × M2) = (10K × 8) / (1K × 4) = 80K / 4K = 20
From 2K-by-1
(N1 × M1) / (N2 × M2) = (10K × 8) / (2K × 1) = 80K / 2K = 40
From 1K-by-8
(N1 × M1) / (N2 × M2) = (10K × 8) / (1K × 8) = 80K / 8K = 10
10K-by-10 memory
From 2K-by-1
(N1 × M1) / (N2 × M2) = (10K × 10) / (2K × 1) = 100K / 2K = 50
We run into trouble with the 1K-by-4 chips because 10/4 = 2.5; thus 4 does not divide 10.
The problem is how to spread two entries (20 bits) over five chips and retrieve the 10-bit
words efficiently. It cannot be done; one must allocate three chip entries per 10-bit word.
From 1K-by-4
We solve (N1 / N2) × ⌈M1 / M2⌉ = (10K / 1K) × ⌈10 / 4⌉ = 10 × 3 = 30.
From 1K-by-8
We solve (N1 / N2) × ⌈M1 / M2⌉ = (10K / 1K) × ⌈10 / 8⌉ = 10 × 2 = 20.
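The whole table can be generated by one small function. The sketch below is an illustration (not the text's own formula): it builds banks of ⌈M1/M2⌉ chips when the chip is narrower than the system word, and lets one chip hold several system entries when the chip is at least as wide; it reproduces every count computed above, including the 10K-by-10 cases.

    from math import ceil

    def chips_needed(n1, m1, n2, m2):
        # Chip at least as wide as the system word: one chip row holds m2 // m1 entries.
        if m2 % m1 == 0:
            entries_per_chip = n2 * (m2 // m1)
            return ceil(n1 / entries_per_chip)
        # Otherwise group ceil(m1 / m2) chips side by side into a bank m1 bits wide.
        return ceil(n1 / n2) * ceil(m1 / m2)

    K = 1024
    print(chips_needed(64 * K, 4, 2 * K, 1))    # 128
    print(chips_needed(64 * K, 4, 1 * K, 8))    # 32
    print(chips_needed(10 * K, 10, 1 * K, 4))   # 30
    print(chips_needed(10 * K, 10, 1 * K, 8))   # 20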
5. A 256-word memory has its words numbered from 0 to 255. Define the address bits for
each of the following. Each address bit should be specified as 0, 1, or d (don’t-care).
a) Word 48
b) Lower half of memory (words 0 through 127)
c) Upper half of memory (words 128 through 255)
d) Even memory words (0, 2, 4, etc.)
e) Any of the eight words 48 through 55.
ANSWER: Note that 2^8 = 256, so we need an 8-bit address for this memory.
Recall that memory addresses are unsigned integers, so that we are dealing with unsigned
eight-bit integers here. The bit values are given in this table.
Bit B7 B6 B5 B4 B3 B2 B1 B0
Value 128 64 32 16 8 4 2 1
So, in eight-bit unsigned binary we have decimal 128 = 1000 0000 in binary, and decimal
127 = 128 – 1 = 0111 1111 in binary.
a) Word 48.
The answer here is just the binary representation of decimal 48.
Since 48 = 32 + 16, the answer is 0011 0000.
b) Lower half of memory (words 0 through 127): B7 must be 0 and the other bits are
don’t-cares, giving 0ddd dddd.
c) Upper half of memory (words 128 through 255): B7 must be 1, giving 1ddd dddd.
d) Even memory words: B0 must be 0, giving dddd ddd0.
e) Words 48 through 55: these range from 0011 0000 to 0011 0111, so the answer is 0011 0ddd.
6 Design a 16K-by-8 memory using 2K-by-4 memory chips. Arrange the chips in
an 8 by 2 array (8 rows of 2 columns) and design the decoding circuitry.
ANSWER: The first part of the solution is to define what we are given and what we are
asked to build. From that, we can develop the solution, introducing a new term.
We are given a collection of 2K-by-4 memory chips. As 2K = 2^11, each of these chips has
eleven address lines, denoted A10A9A8A7A6A5A4A3A2A1A0, four data lines, D3D2D1D0, and
the required control and select lines. We are asked to design a 16K-by-8 memory.
As 16K = 2^14, this memory has 14 address lines, denoted A13 through A0. The memory to be
designed has eight data lines, denoted D7 through D0, and necessary control lines.
Because we are using 4-bit chips to create an 8-bit memory, we shall arrange the chips in
rows of two, with one chip holding the high-order four bits and the other the low-order four
bits of the eight-bit entry. Each such row is called a bank of memory. The end result of this
design is the arrangement of memory into a number of rows and two columns. The figure
below shows the use of the columns. The drawing on the left shows four data lines from
each chip feeding four bits in the eight-bit MBR. The drawing at the right shows a common
short-hand for expressing the same drawing.
The design suggested above divides the memory into banks of 2K-by-8, so that we need
16K / 2K = 8 banks for the memory to be designed. This suggests that we need 8 banks of 2
chips each for a total of 16 chips. Using the analysis of problem 3.1, we establish the number
of chips needed as (16K × 8) / (2K × 4) = 128K / 8K = 16, as expected.
If we have 8 banks, each bank containing 2K addresses, we need 3 bits (8 = 2^3) to select the
bank and 11 bits (2K = 2^11) to address each chip in the bank. We have stipulated that the
16K memory requires 14 address lines, as 16K = 2^14, so we have exactly the right bit count.
There are two standard ways to split the 14-bit address into a bank select and chip address.
The observant student will note that the number of ways to split the 14-bit address into a
3-bit bank number and an 11-bit chip address is exactly 14! / (3!·11!) = 364,
but that only the two methods mentioned above make any sense.
The solution presented in these notes will be based on the interleaved memory model. The
basis of the design will be a set of memory banks, each bank arranged as follows.
Note that the eleven high-order bits of the system MAR are sent to each chip in every
memory bank. Note that in each memory bank, both chips are selected at the same time. We
now use a 3-to-8 decoder to select the memory bank to be accessed.
Here is the complete design of the 16K-by-8 memory as fabricated from 2K-by-4 chips.
The chip in the lower left explains the notation used in the main drawing.
For each of the sixteen memory chips, we have the following
A the eleven address inputs A10–0.
D the four data I/O lines D3D2D1D0.
S the chip select – active high.
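The address split in this interleaved design can be stated in two lines. This sketch is illustrative only: the low three bits of the 14-bit address drive the 3-to-8 decoder that selects a bank, and the remaining eleven bits go to every chip.

    def split_16k_address(address):
        bank = address & 0x7          # low 3 bits feed the 3-to-8 bank decoder
        chip_address = address >> 3   # high 11 bits go to every 2K-by-4 chip
        return bank, chip_address

    print(split_16k_address(11))      # (3, 1): address 0b1011 is entry 1 of bank 3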
7 You are given a 16K-by-4 ROM unit. Convert it to a 64K-by-1 ROM. Treat the
16K-by-4 ROM as one unit that you cannot alter. Only logic external to this
16K-by-4 ROM unit is to be used or shown.
ANSWER: This problem is almost opposite the previous problem. We are given a ROM
(Read-Only Memory) that is 16K-by-4. As 16K = 2^14, the address register for this chip must
have 14 bits: A13-0. The data buffer of this memory has four bits: D3D2D1D0.
The problem calls for the design of a 64K-by-1 ROM. As 64K = 2^16, this memory has a
16-bit address for each bit. Of the 16-bit address that goes to this memory, 14 bits must go to
the 16K-by-4 memory chip and 2 bits used to select one of the four bits output from the chip.
The only other task is to use a 4-to-1 multiplexer to select which of the 4 bits from the
16K-by-4 chip is to be used.
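One concrete way to wire this is sketched below (illustrative only; the problem statement does not fix which two address bits feed the multiplexer, so this sketch assumes the low two bits do, and the sample ROM contents are hypothetical).

    def read_64k_by_1(address, rom_read):
        # rom_read: a function taking a 14-bit address and returning a 4-bit word.
        word = rom_read(address >> 2)         # high 14 bits go to the 16K-by-4 ROM
        return (word >> (address & 0x3)) & 1  # low 2 bits drive the 4-to-1 multiplexer

    # Example with a hypothetical ROM that stores the pattern 0b1010 everywhere.
    print([read_64k_by_1(a, lambda addr: 0b1010) for a in range(4)])   # [0, 1, 0, 1]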
8 A given computer design calls for an 18-bit MAR and an 8-bit MBR.
a) How many addressable memory elements can the memory contain?
b) How many bits does each addressable unit contain?
c) What is the size of the address space in bytes?
NOTE: The answer can be given in many ways. If the answer were 16K, you
might say either 16K, 2^14, or 16,384.
ANSWER: a) An 18-bit MAR can address 2^18 = 256K = 262,144 memory elements.
b) Each addressable unit contains 8 bits, the size of the MBR.
c) The address space is 2^18 bytes = 256 KB.
10 You are given a number of chips and asked to design a 64KB memory.
You immediately realize that this requires 16 address lines, as 64K = 2^16. The memory is
interleaved, with 12 address bits (A15 – A4) going to each byte-addressable memory chip and
4 address bits (A3A2A1A0) going to select the memory chip.
a) What is the size of each chip in bytes (or KB)?
b) How many chips are found in this design?
ANSWER: The memory chip is byte-addressable, implying that each byte in the chip has a
distinct address and that the number of addresses in the chip equals the number of bytes.
a) There are 12 address bits for each of the memory chips,
so each memory chip has 2^12 bytes = 2^2·2^10 bytes = 4·2^10 bytes = 4KB.
b) There are 4 address lines used to select the memory chips,
so there must be 2^4 = 16 chips.
NOTE: The key idea for this and the previous question is that P bits will select 2^P different
items; alternately, a P-bit unsigned binary number can represent decimal numbers in the
range 0 to 2^P – 1 inclusive, often written as the closed interval [0, 2^P – 1]. As an example, an
8-bit binary number will represent unsigned decimal numbers in the range 0 to 2^8 – 1, or
0 to 255, while a 16-bit binary number will represent unsigned decimal numbers in the range
[0, 2^16 – 1] or [0, 65535].
Another way of saying the same thing is that the representation of a positive integer N
requires P bits where 2^(P–1) < N ≤ 2^P. This simple idea is important in many areas of computer
science, but seems to elude some students.
ANSWER: Remember that 128M = 2^7 × 2^20 = 2^27 and that 16M = 2^24.
a) To address 128M = 2^27 entries, we need a 27-bit MAR.
b) Since each entry is 32-bits, we have a 32-bit MBR.
c) The number of chips is (128M × 32) / (16M × 8), which evaluates to
(2^27 × 2^5) / (2^24 × 2^3) = 2^32 / 2^27 = 2^5 = 32.
15 A computer has a cache memory set–up, with the cache memory having an access time of
6 nanoseconds and the main memory having an access time of 80 nanoseconds. This
question focuses on the effective access time of the cache.
a) What is the minimum effective access time for this cache memory?
b) If the effective access time of this memory is 13.4 nanoseconds, what is the hit ratio?
ANSWER:
a) The effective access time cannot be less than the access time of the cache, so 6 nsec.
b) The equation of interest here is TE = hTP + (1 – h)TS. Using the numbers, we have
13.4 = 6·h + (1 – h)·80, or 13.4 = 80 – 74·h, or 74·h = 66.6, or h = 0.9.
16. (20 points) Suppose a computer using direct mapped cache has 2^32 words of main
memory and a cache of 1024 blocks, where each cache block contains 32 words.
a) How many blocks of main memory are there?
b) What is the format of a memory address as seen by the cache, that is, what are the
sizes of the tag, block, and word fields?
c) To which cache block will the memory reference 0000 63FA map?
ANSWER: Recall that 1024 = 2^10 and 32 = 2^5, from which we may conclude 1024 = 32 × 32.
a) The number of blocks is 2^32 / 2^5 = 2^(32 – 5) = 2^27 = 2^7 × 2^20 = 128 M = 134,217,728.
b) 32 words per cache line, so the offset field is 5 bits.
1024 blocks in the cache, so the block number field is 10 bits.
2^32 words give a 32-bit address, so the tag field is 32 – (5 + 10) = 32 – 15 = 17 bits.
Bits      31 – 15    14 – 5          4 – 0
Contents  Tag        Block Number    Offset within block
c) The number of bits allocated to the block number and offset together is 15. We examine
the last four hex digits of 0000 63FA, as they correspond to 16 binary bits.
Bit      15 14 13 12   11 10 9 8   7 6 5 4   3 2 1 0
Hex           6            3          F         A
Binary    0  1  1  0    0  0 1 1   1 1 1 1   1 0 1 0
The block number is bits 14 – 5, or 11 0001 1111 = 0x31F; the offset is bits 4 – 0, or
1 1010 = 0x1A = 26. The reference maps to cache block 0x31F (decimal 799).
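Part (c) can be verified with shifts and masks, as in this short sketch (not part of the original answer).

    address = 0x000063FA
    offset = address & 0x1F             # 5-bit offset within the block
    block = (address >> 5) & 0x3FF      # 10-bit cache block number
    tag = address >> 15                 # 17-bit tag

    print(hex(block), offset, tag)      # 0x31f 26 0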
18 A computer memory system uses a primary memory with 100 nanosecond access time,
fronted by a cache memory with 8 nanosecond access time. What is the effective access
time if a) The hit ratio is 0.7?
b) The hit ratio is 0.9?
c) The hit ratio is 0.95?
d) The hit ratio is 0.99?
ANSWER: There is a problem with names here. For the cache scenario, we have
TP as the access time of the cache, and
TS as the access time of the primary memory.
Thus TE = h·8 + (1 – h)·100, and we compute:
a) h = 0.70: TE = 5.6 + 30.0 = 35.6 nanoseconds.
b) h = 0.90: TE = 7.2 + 10.0 = 17.2 nanoseconds.
c) h = 0.95: TE = 7.6 + 5.0 = 12.6 nanoseconds.
d) h = 0.99: TE = 7.92 + 1.0 = 8.92 nanoseconds.
19 (20 points) You are given a 16K x 4 ROM unit. Convert it to a 32K x 2 ROM.
Use only AND, OR, and NOT gates and possible D flip–flops in your design.
Treat the 16K x 4 ROM as one unit you cannot alter. Include only logic
gates external to the ROM unit.
ANSWER:
The 16K by 4 ROM has 16K = 2^14 addressable entries, each of 4 bits. We are to convert this
to a 32K by 2 ROM, with 32K = 2^15 addressable entries, each of 2 bits. The 15–bit address
for the latter must be broken into a 14–bit address for the former and a selector.
21 Suppose a computer using fully associative cache has 2^20 words of main memory and a
cache of 512 blocks, where each cache block contains 32 bytes.
The memory is byte addressable; each byte has a distinct address.
a) How many blocks of main memory are there?
b) What is the format of a memory address as seen by the cache, that is, what are the
sizes of the tag and word fields?
Answer: Recall that 32 = 2^5.
a) The number of blocks is 2^20 / 2^5 = 2^(20 – 5) = 2^15 = 2^5 × 2^10 = 32K = 32,768.
b) For associative mapping the address has only a tag and a word offset in the block.
A 32-byte block gives a 5-bit offset.
A 20-bit address gives a (20 – 5) = 15-bit tag.
Bits      19 – 5    4 – 0
Contents  Tag       Offset in block
22 A SDRAM memory unit is connected to a CPU via a 500 MHz memory bus.
If the memory bus is 128 bits wide and the SDRAM is operated at Double Data Rate
(it is a DDR–SDRAM), what is the maximum burst transfer rate in bytes per second?
ANSWER: The bus is 128 bits (16 bytes) wide, so it delivers 16 bytes at a time.
It is DDR so it transfers 32 bytes per clock pulse.
It is DDR at 500 MHz, so it delivers 500×10^6 collections of bytes per second.
The data rate is 32 × 500×10^6 bytes per second, or 16,000×10^6 bytes per second.
This is 16×10^9 bytes per second, which is about 14.9 GB/sec.
Note: I shall accept 16.0 GB/sec, though it is technically incorrect.
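The arithmetic is summarized in this sketch (illustrative only), which also shows where the 14.9 GB/sec figure comes from: dividing by 2^30 rather than by 10^9.

    bus_bytes = 128 // 8            # 16 bytes per transfer
    per_clock = 2 * bus_bytes       # DDR: two transfers per clock pulse
    rate = 500e6 * per_clock        # 16,000,000,000 bytes per second

    print(rate / 10**9)             # 16.0  (decimal gigabytes per second)
    print(rate / 2**30)             # about 14.9 (GB/sec, using 1 GB = 2^30 bytes)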
23 You are given a D flip–flop, such as shown in the left diagram. You are to
design a memory cell as shown in the right diagram. The memory cell I/O line must be
connected directly to the bus line, and not pass through any other gates.
The memory cell is controlled by two inputs: Select and R/W.
If Select = 0, the cell is inactive. If Select = 1, it is either being read or written to.
If R/W = 0, the cell is being written to, and changes state according to its D input.
If R/W = 1, the output of the cell (Q) is being placed on the bus line. The complemented
output Q' is not used.
Design the circuit using only gates that will be internal to the Memory Cell. Recall that the
D flip–flop control signal called “clock” is just a pulse that causes the flip–flop to change
states.
ANSWER: Here we note that the control signals to the cell are as follows:
The cell is being written to if and only if Select = 1 and R/W = 0.
The cell is being read if and only if Select = 1 and R/W = 1.
The cell receives data only when its clock input goes high. Remember that this is an input
to the flip–flop that activates it and enables it to receive data. The name is standard, but a
bit misleading. While this might be connected to the system clock, it need not be.
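The two conditions above amount to a pair of Boolean expressions, shown in this sketch (illustrative only; the actual circuit uses gates and a tri-state connection to the bus line).

    def memory_cell_controls(select, r_w):
        write_enable = select and not r_w    # clock the D flip-flop: Select = 1 and R/W = 0
        output_enable = select and r_w       # drive Q onto the bus:  Select = 1 and R/W = 1
        return write_enable, output_enable

    print(memory_cell_controls(True, False))   # (True, False): write
    print(memory_cell_controls(True, True))    # (False, True): read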
Here is the circuit.
[The two great Empires of Lilliput and Blefuscu] have, as I was going to tell you,
been engaged in a most obstinate War for six and thirty Moons past. It began
upon the following Occasion. It is allowed on all Hands, that the primitive Way
of breaking Eggs before we eat them, was upon the larger End: But his present
Majesty’s Grand-father, while he was a Boy, going to eat an Egg, and breaking it
according to the ancient Practice, happened to cut one of his Fingers.
Whereupon the Emperor his Father, published an Edict, commanding all his
Subjects, upon great Penalties, to break the smaller End of their Eggs. The
People so highly resented this Law, that our Histories tell us, there have been six
Rebellions raised on that Account; wherein one Emperor lost his Life, and
another his Crown. These civil Commotions were constantly fomented by the
Monarchs of Blefuscu; and when they were quelled, the Exiles always fled for
Refuge to that Empire. It is computed, that eleven Thousand Persons have, at
several Times, suffered Death, rather than submit to break their Eggs at the
smaller end. Many hundred large Volumes have been published upon this
Controversy: But the Books of the Big–Endians have been long forbidden, and
the whole Party rendered incapable by Law of holding Employments.”
Jonathan Swift was born in Ireland in 1667 of English parents. He took a B.A. at Trinity
College in Dublin and some time later was ordained an Anglican priest, serving briefly in a
parish church, and became Dean of St. Patrick’s in Dublin in 1713. Contemporary critics
consider the Big–Endians and Little–Endians to represent Roman Catholics and Protestants
respectively. In the 16th century, England made several shifts between Catholicism and
Protestantism. When the Protestants were in control, the Catholics fled to France; when the
Catholics were in control; the Protestants fled to Holland and Switzerland.
Lilliput seems to represent England, and its enemy Blefuscu is variously considered to
represent either France or Ireland. Note that the phrase “little–endian” seems not to appear
explicitly in the text of Gulliver’s Travels.