
PRECISION VS. PERFORMANCE WITH APPROXIMATE CACHE COHERENCE

FINAL PROJECT CSCE-430

Roxana Shajarian

Spring 2024
PRESENTATION OUTLINE

Introduction

Motivation

Workloads

Implementation

Comparison

Future Direction

Summary
INTRODUCTION
• What is Cache?
o Cache is a small, fast storage layer
that resides between the main memory
(RAM) and the processor (CPU).

Copi, Abhay. "Memory Hierarchy." Abhay's Blog. April 14, 2014.


https://abhaycopi.blogspot.com/2014/04/memory-hierarchy.html.

INTRODUCTION

• Why is Cache necessary?


o Modern CPUs can execute billions of instructions per second, but accessing data directly from main memory is slow relative to the CPU, so the processor often stalls while waiting for data. A cache keeps recently used data close to the CPU to bridge this gap.

INTRODUCTION

• What is approximate cache coherence?


o Approximate cache coherence refers to techniques that reduce the overhead of maintaining coherence by relaxing some of the strict consistency requirements, especially in contexts where absolute consistency across all processors at all times is not critical (a rough software analogy is sketched below).
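
The slides do not show the concrete mechanism used in the project, so the following is only a rough, software-level analogy rather than the project's implementation: a minimal C11 sketch in which a reader accepts possibly stale values through relaxed atomic accesses, trading strict consistency for lower synchronization overhead. The names shared_estimate, publish_estimate, and read_estimate are hypothetical.

#include <stdatomic.h>

/* Shared value that readers are allowed to observe slightly stale. */
static _Atomic double shared_estimate;

/* Writer: publishes updates with relaxed ordering, so no extra
 * synchronization is imposed on other cores for every store. */
void publish_estimate(double v) {
    atomic_store_explicit(&shared_estimate, v, memory_order_relaxed);
}

/* Reader: may return an older value than the most recent publish, which is
 * acceptable when absolute consistency at all times is not critical. */
double read_estimate(void) {
    return atomic_load_explicit(&shared_estimate, memory_order_relaxed);
}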

MOTIVATION

• The core idea is to implement approximate cache coherence mechanisms and study their impact on different types of programs, particularly those with varying tolerance for data incoherence.
• Cachegrind:
o A cache profiler from the Valgrind tool suite, used here for performance analysis.
o Simulates how a program interacts with the machine's cache hierarchy (example invocation below).
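
A typical invocation looks like the following (the binary name ./matmul is only a placeholder for a workload under test):

valgrind --tool=cachegrind ./matmul      # run the workload under Cachegrind's cache simulator
cg_annotate cachegrind.out.<pid>         # report per-function instruction and data cache miss counts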

WORKLOADS

• Matrix Multiplication
o Loop Tiling
• Matrix Multiplication Transpose
o Loop Tiling
• Linked List with Random Access
o Prefetching

IMPLEMENTATION

• Matrix Multiplication
o Spatial and Temporal Locality (see the tiling sketch below)
o Impact
• Matrix Multiplication Transpose
o Access Pattern Complexity
o Impact
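
The project's source is not reproduced in the slides; the sketch below is a minimal loop-tiled (blocked) matrix multiplication in C, assuming square N x N row-major matrices, a tile size B that divides N, and a zero-initialized output matrix. N, B, and the function name are illustrative, not the project's actual parameters.

#define N 512   /* matrix dimension (illustrative) */
#define B 32    /* tile size, small enough that a few B x B tiles fit in the L1 data cache (illustrative) */

/* C = A * Bm, with the loops blocked into B x B tiles. Each tile of A and Bm
 * is reused many times while it is still cache-resident, improving temporal
 * locality; row-major traversal within a tile preserves spatial locality. */
void matmul_tiled(const double *A, const double *Bm, double *C) {
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = ii; i < ii + B; i++)
                    for (int j = jj; j < jj + B; j++) {
                        double sum = C[i * N + j];   /* partial sum from earlier kk blocks */
                        for (int k = kk; k < kk + B; k++)
                            sum += A[i * N + k] * Bm[k * N + j];
                        C[i * N + j] = sum;
                    }
}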

IMPLEMENTATION

• Linked List with Random Access


o Poor Cache Utilization (see the prefetching sketch below)
o Impact
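
The slides name prefetching without showing how it is applied; the sketch below assumes software prefetching with GCC/Clang's __builtin_prefetch during list traversal. The node layout and function name are illustrative.

#include <stddef.h>

struct node {
    int          value;
    struct node *next;
};

/* Traverse a list whose nodes are scattered across memory. The prefetch hint
 * asks the hardware to start fetching the next node while the current one is
 * processed, hiding part of the miss latency caused by the random access
 * pattern. */
long sum_list(const struct node *head) {
    long sum = 0;
    for (const struct node *p = head; p != NULL; p = p->next) {
        if (p->next != NULL)
            __builtin_prefetch(p->next, 0 /* read */, 1 /* low temporal locality */);
        sum += p->value;
    }
    return sum;
}

Because only one node of lookahead is available through the next pointer, this hides just part of the latency; prefetching further ahead would require restructuring the list or keeping an array of upcoming node addresses.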

RESULTS &
COMPARISON
• Matrix Multiplication

RESULTS &
COMPARISON
• Matrix Multiplication Transpose

RESULTS &
COMPARISON
• Linked List with Random
Access

FUTURE PLAN

• Fault Tolerance in Advanced Computing Systems


o Goal: Extend the concept of approximate cache coherence to enhance fault tolerance in
computing systems

THANK YOU!

