CDA 3103 Final Exam Practice Fall 2024
Name: ________________ UID: ________________
Instructions: This exam takes 75 minutes. It is closed book and closed notes; no electronic devices are permitted.
Answer all questions completely. Show all work to receive partial credit, except on yes/no problems. Some questions may
have multiple parts. Write clearly and identify your final answer. Good luck!
Important Reminder on Academic Honesty: Using unauthorized information or notes on an exam, peeking at others'
work, or altering graded exams to claim more credit are severe violations of academic honesty. Detected cases will
receive a failing grade in the course.
Problem 1 Multiple Choice Questions
Review concepts
1. Hierarchical memory
The memory hierarchy arranges different kinds of storage devices in a computer based on their size, cost, access
speed, and the roles they play in application processing. The main purpose is to achieve efficient operation by
organizing memory so that average access time is reduced.
2. SRAM vs DRAM
SRAM is faster but more expensive than DRAM; DRAM is slower but cheaper than SRAM. DRAM requires periodic
refreshing to maintain its data.
3. The functions of the tag, block index and offset field
Cache placement policies are policies that determine where a particular memory block can be placed when it goes
into a CPU cache.
The tag bits derived from the memory block address are compared with the tag bits associated with the cache block.
If the tags match, there is a cache hit; otherwise there is a cache miss, and the memory block is fetched from main
memory and stored in the cache.
The offset corresponds to the bits used to determine which byte of the block is to be accessed.
The index corresponds to the bits used to select the block (or set) of the cache.
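For a direct-mapped cache, the widths of the three fields follow directly from the cache geometry. A minimal sketch of the arithmetic, using hypothetical sizes (not tied to any problem on this exam):

```python
import math

def cache_fields(addr_bits, cache_bytes, block_bytes):
    """Return (tag, index, offset) bit widths for a direct-mapped cache."""
    offset = int(math.log2(block_bytes))                 # byte within a block
    index = int(math.log2(cache_bytes // block_bytes))   # which cache block
    tag = addr_bits - index - offset                     # remaining high-order bits
    return tag, index, offset

# Hypothetical example: 32-bit addresses, 16 KiB cache, 64-byte blocks
print(cache_fields(32, 16 * 1024, 64))  # -> (18, 8, 6)
```

For an n-way set-associative cache, the index would select a set instead of a block, so the index shrinks by log2(n) bits and the tag grows by the same amount.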
4. Page table vs TLB
A page table is a data structure used by a virtual memory system in a computer to store mappings between virtual
addresses and physical addresses.
A translation lookaside buffer (TLB) is a type of memory cache that stores recent translations of virtual addresses
to physical addresses to enable faster retrieval.
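The relationship between the two can be sketched in a few lines. The page-table contents below are hypothetical, and a real TLB is a small hardware cache with a limited size and an eviction policy, which this sketch omits:

```python
# Hypothetical sketch: a TLB acting as a small cache in front of the page table.
page_table = {0: 7, 1: 3, 2: 9}   # virtual page number -> physical frame (made up)
tlb = {}                          # recently used translations

def translate(vpn):
    if vpn in tlb:                # TLB hit: fast path, no page-table walk
        return tlb[vpn]
    frame = page_table[vpn]       # TLB miss: consult the full page table
    tlb[vpn] = frame              # cache the translation for next time
    return frame
```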
5. Volatile vs non-volatile memory
At a high level, the biggest difference between volatile and non-volatile memory is that volatile memory stores data
only while the computer is powered on and loses it as soon as the computer is switched off, whereas non-volatile
memory retains its data even after the system shuts off.
6. Spatial locality vs temporal locality
1) The principle of spatial locality says that if a program accesses one memory address, there is a good chance
that it will also access other nearby addresses.
2) The principle of temporal locality says that if a program accesses one memory address, there is a good chance
that it will access the same address again.
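Both principles can be seen in a simple array-summing loop; this is only an illustrative sketch:

```python
# Hypothetical illustration of the two locality principles.
data = list(range(16))

total = 0
for i in range(len(data)):  # spatial locality: data[0], data[1], ... occupy
    total += data[i]        # adjacent addresses and are accessed in sequence
    # temporal locality: `total` and `i` are re-accessed on every iteration
print(total)  # -> 120
```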
7. Structure hazard, data hazard, branch hazard
1) A structural hazard in pipelining occurs when two or more instructions in a pipeline require the same hardware
resource at the same time.
2) Data hazards occur when an instruction depends on the result of a previous instruction, and that result has not
yet been computed.
3) Control hazard occurs whenever the pipeline makes incorrect branch prediction decisions, resulting in
instructions entering the pipeline that must be discarded. A control hazard is often referred to as a branch hazard.
8. Definition of fragmentation, internal fragmentation and external fragmentation
In computer memory management, "paging" refers to dividing memory into fixed-size blocks called "pages," while
"segmentation" divides memory into variable-size blocks called "segments." "Fragmentation" describes the wasted
memory space that occurs when these blocks are not fully utilized, leading to inefficient memory allocation. Paging
eliminates external fragmentation by using uniform page sizes, but it can waste space inside a partially filled page
(internal fragmentation); segmentation allows for logical grouping of data but can lead to more external
fragmentation because of its varying segment sizes.
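Internal fragmentation under paging is simple arithmetic: the waste is whatever is left unused in the last page of an allocation. A sketch with hypothetical numbers:

```python
def internal_fragmentation(request_bytes, page_bytes):
    """Bytes wasted in the last page when a request is rounded up to whole pages."""
    remainder = request_bytes % page_bytes
    return 0 if remainder == 0 else page_bytes - remainder

# Hypothetical example: a 10,300-byte allocation with 4 KiB pages needs
# 3 pages (12,288 bytes), wasting 1,988 bytes in the last page.
print(internal_fragmentation(10_300, 4096))  # -> 1988
```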
9. Consider a set-associative cache of size 8 KiB with cache block size of 16 bytes. Assume a byte-addressable 2 GiB
main memory. If the tag field is 24 bits, this cache is a _____-way set-associative cache.
10. Consider a direct-mapped cache with 32 blocks and a block size of 32 bytes. To what block number does byte
address 2370 map? Block ______.
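The general mapping arithmetic for a direct-mapped cache can be sketched as follows; the address below is hypothetical, so the exam problem itself is left as an exercise:

```python
def map_block(byte_address, block_bytes, num_blocks):
    """Return (memory block number, cache block number) for a direct-mapped cache."""
    block_number = byte_address // block_bytes   # which memory block holds the byte
    cache_block = block_number % num_blocks      # where that block lands in the cache
    return block_number, cache_block

# Hypothetical example: byte address 1200, 32-byte blocks, 32-block cache
print(map_block(1200, 32, 32))  # -> (37, 5)
```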
Multiple Choice Question Answer Sheet: fill in the entire circle that corresponds to your answer to each question. Erase
marks completely to make a change.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
1. A nonpipelined system takes 200 ns to process a task. The same task can be processed in a 5-segment
pipeline with a clock cycle of 40 ns.
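The usual way to compare the two designs is the pipeline speedup formula, which assumes every stage takes one clock cycle and the pipeline never stalls; a minimal sketch:

```python
def pipeline_speedup(nonpipelined_ns, stages, cycle_ns, n_tasks):
    """Speedup of a k-stage pipeline over a nonpipelined unit for n tasks."""
    t_serial = n_tasks * nonpipelined_ns
    t_pipelined = (stages + n_tasks - 1) * cycle_ns  # fill time, then 1 result/cycle
    return t_serial / t_pipelined

# With the numbers above (200 ns nonpipelined, 5 stages, 40 ns clock),
# the speedup approaches 200 / 40 = 5 as the number of tasks grows.
print(round(pipeline_speedup(200, 5, 40, 1000), 2))
```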
2. Consider a RISC-V assembly function func1. func1 has three arguments passed in registers
a0, a1, and a2; it uses temporary registers t0-t3 and saved registers s4-s10. func1 needs to call
func2, and other functions may call func1 as well. func2 has two arguments passed in registers
a0 and a1, respectively. In func1, after the program returns from func2, the code needs the
original values that were stored in registers t1 and a0 before the call to func2.
3. Suppose a byte-addressable computer with 1 GiB of main memory and a 64 KiB direct-mapped
cache. Each block contains 32 bytes.
4. A 2-way set-associative cache memory unit with a capacity of 8 KiB is built using a block size of 4 words.
The word length is 32 bits. The size of the physical address space is 2 GiB.
5. Suppose the cache access time is 10 ns, main memory access time is 200 ns, and the cache hit rate is
84%. Assuming non-overlapped access.
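Under the non-overlapped assumption, a miss pays the cache access time plus the full memory access time. A sketch of the arithmetic with hypothetical numbers (not the ones in the problem):

```python
def avg_access_time(hit_rate, cache_ns, memory_ns):
    """Average access time when a miss pays cache time plus memory time (non-overlapped)."""
    miss_rate = 1.0 - hit_rate
    return hit_rate * cache_ns + miss_rate * (cache_ns + memory_ns)

# Hypothetical numbers: 75% hit rate, 8 ns cache, 100 ns memory
print(avg_access_time(0.75, 8, 100))  # -> 33.0 (ns)
```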
6. Consider a virtual memory system that can address a total of 2 GiB. The physical memory has 256 MiB
of address space. Each page is 1 KiB.
How many bits will be used for:
Page offset
Frame Number
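These widths are logarithms of the relevant sizes: the page offset comes from the page size, and the frame number is the physical address width minus the offset. A minimal sketch with hypothetical sizes (not the ones in the problem):

```python
import math

def vm_field_bits(virtual_bytes, physical_bytes, page_bytes):
    """Return (page offset, virtual page number, frame number) bit widths."""
    offset = int(math.log2(page_bytes))               # page offset bits
    vpn = int(math.log2(virtual_bytes)) - offset      # virtual page number bits
    frame = int(math.log2(physical_bytes)) - offset   # frame number bits
    return offset, vpn, frame

# Hypothetical example: 4 GiB virtual space, 1 GiB physical memory, 4 KiB pages
print(vm_field_bits(2**32, 2**30, 4096))  # -> (12, 20, 18)
```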
7. A direct-mapped cache memory of 1 MB has a block size of 256 bytes. The cache has an access time of
3 ns and a hit rate of 94%. During a cache miss, it takes 20 ns to bring the first word of a block from the
main memory, while each subsequent word takes 5 ns. The word size is 64 bits.
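The miss penalty for fetching a whole block word by word is the first-word time plus the per-word time for each remaining word. A sketch with hypothetical numbers (not the ones in the problem):

```python
def miss_penalty(block_bytes, word_bytes, first_word_ns, next_word_ns):
    """Time to bring a whole block from memory, one word at a time."""
    words = block_bytes // word_bytes
    return first_word_ns + (words - 1) * next_word_ns

# Hypothetical numbers: 128-byte block, 8-byte words,
# 10 ns for the first word, 2 ns per subsequent word
print(miss_penalty(128, 8, 10, 2))  # -> 40
```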