Cache Memory Project Proposal


Project Proposal

Group Information
1. Tharish Kumar Pagadala: [Team Member 1], [email protected]

2. Bhargavi Setlem: [Team Member 2], [email protected]

3. Vamsi Emani: [Team Member 3], [email protected]

Project Description
Goal:

The goal of this project is to enhance cache memory performance by investigating and
implementing two key techniques: the Victim Cache and the Memory Bank Technique. These
methods aim to reduce cache miss rates and improve memory access times, bridging the
performance gap between the CPU and main memory. We expect the final system to demonstrate
a measurable reduction in cache misses and faster memory accesses than a traditional cache
architecture.

Issues Investigated:

1. Cache Misses: Direct-mapped caches suffer from conflict misses. The Victim Cache will
mitigate this by storing recently evicted blocks and swapping them back into the L1 cache if
accessed again.

2. Memory Access Time: High access time in main memory slows down overall system
performance. The Memory Bank Technique partitions memory into multiple banks to allow
parallel access, reducing the total access time.
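The victim-cache behavior described in point 1 can be illustrated with a short simulation. This is a minimal sketch, not the project's actual implementation: the cache sizes, the FIFO replacement policy, and the use of bare block tags in place of full addresses are all illustrative assumptions.

```python
# Hypothetical sketch: a direct-mapped L1 cache backed by a small
# fully-associative victim cache. Sizes and policy are illustrative.
from collections import deque

L1_SETS = 8          # direct-mapped: one block per set
VICTIM_SLOTS = 4     # small fully-associative victim cache

l1 = [None] * L1_SETS                 # l1[index] holds a block tag
victim = deque(maxlen=VICTIM_SLOTS)   # FIFO of recently evicted tags

def access(tag):
    """Return 'L1 hit', 'victim hit', or 'miss' for a block tag."""
    index = tag % L1_SETS
    if l1[index] == tag:
        return "L1 hit"
    if tag in victim:
        # Swap: promote the block back into L1, demote the current occupant.
        victim.remove(tag)
        if l1[index] is not None:
            victim.append(l1[index])
        l1[index] = tag
        return "victim hit"
    # Miss: fill L1 and spill the displaced block into the victim cache.
    if l1[index] is not None:
        victim.append(l1[index])
    l1[index] = tag
    return "miss"

# Two tags that collide in the same direct-mapped set (index 0):
print(access(0))   # miss
print(access(8))   # miss (evicts tag 0 into the victim cache)
print(access(0))   # victim hit instead of a second conflict miss
```

The third access shows the intended effect: a reference that would have been a conflict miss in a plain direct-mapped cache is instead serviced by the victim cache.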

Expected Results:

- Reduction in conflict miss rate with the Victim Cache.

- Improved memory access time using the Memory Bank technique.

Evaluation Metrics:

1. Miss Rate: The number of cache misses divided by the total number of memory accesses. A
lower miss rate indicates better performance.

2. Access Time: The average time taken to fetch data from the memory. This will be evaluated
with and without the Memory Bank technique.

3. Energy Consumption: The additional hardware (victim cache and memory banks) may increase
power usage, and we will compare energy efficiency alongside performance improvements.
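The first two metrics can be computed directly from simulation counters. The sketch below uses made-up example numbers (not measured results), and estimates access time with the standard average-memory-access-time (AMAT) formula:

```python
# Illustrative metric calculations; the counts and latencies below are
# example values, not results from our simulations.
def miss_rate(misses, total_accesses):
    """Miss rate = cache misses / total memory accesses."""
    return misses / total_accesses

def avg_access_time(hit_time, mr, miss_penalty):
    """AMAT = hit time + miss rate * miss penalty (in cycles)."""
    return hit_time + mr * miss_penalty

mr = miss_rate(50, 1000)               # 50 misses in 1000 accesses -> 0.05
print(mr)                              # 0.05
print(avg_access_time(1.0, mr, 100.0)) # 1.0 + 0.05 * 100 = 6.0 cycles
```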
Work Plan
Components to be Implemented:

1. Victim Cache Implementation:

- Design and implement a victim cache for the L1 cache. It will hold recently evicted blocks,
reducing the miss rate of the direct-mapped cache.

- Simulate this using the CACTI tool, adjusting parameters for cache size, associativity, and block
size.

2. Memory Bank Technique:

- Implement the memory bank technique in the L1 cache. Memory banks allow parallel access to
different memory locations, reducing memory access time.

- Configure CACTI to simulate this partitioning and evaluate the performance.

3. System Integration:

- Integrate the victim cache and memory bank techniques into a unified cache architecture.

- Simulate the combined system using CACTI and measure overall cache performance.
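The memory-bank partitioning in step 2 depends on mapping consecutive blocks to different banks so they can be serviced in parallel. A minimal low-order-interleaving sketch follows; the bank count and block size are illustrative assumptions, not the project's final configuration:

```python
# Illustrative low-order bank interleaving: parameters are assumptions.
NUM_BANKS = 4
BLOCK_SIZE = 64  # bytes

def bank_of(address):
    """Map a byte address to a bank: consecutive blocks hit consecutive banks."""
    block = address // BLOCK_SIZE
    return block % NUM_BANKS

# Four consecutive 64-byte blocks land in four different banks,
# so their accesses can overlap rather than serialize.
addrs = [0, 64, 128, 192]
print([bank_of(a) for a in addrs])   # [0, 1, 2, 3]
```

Because each of the four blocks maps to a distinct bank, all four fetches can proceed concurrently, which is the source of the access-time reduction we aim to measure with CACTI.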

Tools Used:

1. CACTI: Cache modeling tool for simulating memory hierarchies and evaluating performance
metrics such as miss rate, access time, and power consumption.

2. Python/C++: Code implementation of the victim cache and memory bank logic for testing and
simulation.

Milestones

Week 1:
- Research and finalize the design for both the Victim Cache and the Memory Bank.
- Study CACTI configuration for simulations.

Week 2:
- Implement the Victim Cache and Memory Bank techniques.
- Set up initial CACTI simulations for the individual techniques.

Week 3:
- Finalize simulation results for the Victim Cache and Memory Bank.
- Begin integration of the two techniques into a unified system.

Week 4:
- Complete final system integration.
- Run CACTI simulations for the integrated system.
- Compile the final report with results and analysis.

Related Work
Literature Review:

Several studies have explored various methods to improve cache performance:

1. Victim Cache: Jouppi [1] introduced the victim cache as a small, fully
associative cache that reduces conflict misses in direct-mapped caches. It
was shown to improve cache performance significantly.

2. Memory Bank Techniques: Multiple works, including the textbook by Hennessy and
Patterson [2], have discussed memory bank partitioning to improve memory throughput by
allowing simultaneous access to different banks.

3. Cache Optimization: Studies such as Chaplot [3] have explored other cache optimization
techniques, including associativity improvements and prefetching mechanisms.

Gaps:

- While victim cache and memory bank techniques have been studied
individually, their combined impact on modern cache architectures
(especially in deep submicron technologies) requires further exploration.
Collected Resources:

1. Jouppi, Norman P. 'Improving Direct-Mapped Cache Performance by the Addition of a Small
Fully-Associative Cache and Prefetch Buffers.' ISCA 1990.

2. Hennessy, John L., and David A. Patterson. 'Computer Architecture: A Quantitative
Approach.' Morgan Kaufmann Publishers, 2011.

3. Chaplot, V. 'Cache Memory Optimization Techniques: An Overview.' Journal of Advance
Research in Computer Science, 2016.
