Low Power Cache Memory Architecture Using Bandwidth Scalable Controller
N. MALLIKA1, A. MUTHUMANICCKAM2, R. SORNALATHA3
1 PG Student [VLSI Design], Dept. of ECE, Shanmuganathan Engineering College, Pudukkottai.
2 Assistant Professor, Dept. of ECE, Shanmuganathan Engineering College, Pudukkottai.
3 Assistant Professor, Dept. of ECE, Shanmuganathan Engineering College, Pudukkottai.
Email: [email protected], [email protected], [email protected]
ABSTRACT:
This paper presents a new cache design technique, referred to as the Early Tag Access (ETA) cache, to improve the energy efficiency of data caches in embedded processors. The proposed technique performs ETAs to determine the destination ways of memory instructions before the actual cache accesses. Thus, only the destination way needs to be accessed whenever a hit occurs during the ETA. The proposed ETA cache can be configured under two operation modes to exploit the trade-off between energy efficiency and performance. The technique is shown to be very effective in reducing the number of ways accessed during cache accesses, enabling significant energy reduction with negligible performance overhead. Simulation results show that the proposed ETA cache consumes 203 mW of power on average in the L1 data cache and translation lookaside buffer (TLB). Compared with existing cache design techniques, the ETA cache is more effective in reducing energy while maintaining better performance.
The Bandwidth Scalable Controller (BSC) is a second technique used in this work to reduce power consumption. The BSC allocates bus bandwidth according to the size of the input data; by scaling the bandwidth to match the data actually being transferred, the power consumption is reduced.
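As a rough illustration of this idea, the C sketch below shows one way such a controller might select a bus width from the size of an incoming transfer. The width levels and the selection function are our own assumptions for illustration, not details taken from the paper.

#include <stddef.h>

/* Hypothetical bus-width levels for the BSC; the actual controller
 * in the paper may use different widths. */
typedef enum {
    BSC_WIDTH_8  = 8,
    BSC_WIDTH_16 = 16,
    BSC_WIDTH_32 = 32
} bsc_width_t;

/* Pick the narrowest bus width that still covers the input data size.
 * Narrower widths leave more bus lines idle, reducing switching power. */
static bsc_width_t bsc_select_width(size_t data_bits)
{
    if (data_bits <= 8)  return BSC_WIDTH_8;
    if (data_bits <= 16) return BSC_WIDTH_16;
    return BSC_WIDTH_32;
}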
I. INTRODUCTION
Multi-level on-chip cache systems are widely adopted in high-performance microprocessors. To keep data consistent throughout the memory hierarchy, write-through and write-back policies are commonly used. Consider a two-level (i.e., Level-1 and Level-2) cache system as an example. If the L1 data cache implements the write-back policy, a write hit in the L1 cache does not need to access the L2 cache. In contrast, if the L1 cache is write-through, then both the L1 and L2 caches need to be accessed for every write operation. Obviously, the write-through policy incurs many more write accesses in the L2 cache, which in turn increases the energy consumption of the cache system. Power dissipation is now considered one of the critical issues in cache design. Studies have shown that on-chip caches can consume about 50% of the total power in high-performance microprocessors.
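To make the energy argument concrete, the toy model below (our own illustration, not from the paper) counts the L2 accesses caused by a stream of L1 write hits under each policy: under write-through every write hit also touches L2, while under write-back a write hit is absorbed by L1.

#include <stdio.h>

/* Toy model: L2 accesses caused by n_write_hits write hits in L1.
 * write_through = 1 models the write-through policy, 0 write-back. */
static unsigned long l2_write_accesses(unsigned long n_write_hits,
                                       int write_through)
{
    /* Write-through: every L1 write hit is propagated to L2.
     * Write-back: the hit only sets a dirty bit in L1, so it costs
     * no L2 access at the time of the write. */
    return write_through ? n_write_hits : 0;
}

int main(void)
{
    unsigned long hits = 1000000UL;
    printf("write-through: %lu L2 accesses\n", l2_write_accesses(hits, 1));
    printf("write-back:    %lu L2 accesses\n", l2_write_accesses(hits, 0));
    return 0;
}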
In this paper, we propose a new cache technique, referred to as the early tag access (ETA) cache, to improve the energy efficiency of L1 data caches. In a physically tagged, virtually indexed cache, a portion of the physical address is stored in the tag arrays, while the translation between the virtual address and the physical address is performed by the TLB. By accessing the tag arrays and the TLB during the load/store queue (LSQ) stage, the destination ways of most memory instructions can be determined before the L1 data cache is accessed. As a result, only one way in the L1 data cache needs to be accessed for these instructions, thereby reducing the energy consumption considerably. Note that the physical addresses generated by the TLB at the LSQ stage can be reused for the subsequent cache accesses. Therefore, for most memory instructions, the energy overhead of way determination at the LSQ stage is compensated for by skipping the TLB accesses during the cache access stage. For memory instructions whose destination ways cannot be determined at the LSQ stage, an enhanced mode of the ETA cache is proposed to reduce the number of ways accessed at the cache access stage. Note that in many high-end processors, accessing the L2 tags is done in parallel with the accesses to the L1 cache. Our technique is fundamentally different, as ETAs are performed at the L1 cache.
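A minimal sketch of the early tag access follows; the structure names and field layout are our own assumptions for illustration. At the LSQ stage the tag arrays are probed with the physical tag obtained from the TLB, and if one way matches, only that way's data array needs to be enabled at the cache access stage.

#include <stdint.h>

#define NUM_WAYS 4

/* One cache set: the stored physical tags and valid bits per way.
 * Field names are illustrative, not from the paper. */
struct cache_set {
    uint32_t tag[NUM_WAYS];
    uint8_t  valid[NUM_WAYS];
};

/* Early tag access at the LSQ stage: probe the tag arrays with the
 * physical tag obtained from the TLB.  On an early hit, return the
 * destination way so that only this way is enabled at the cache
 * access stage; return -1 on an early miss. */
static int early_tag_access(const struct cache_set *set, uint32_t phys_tag)
{
    for (int way = 0; way < NUM_WAYS; way++)
        if (set->valid[way] && set->tag[way] == phys_tag)
            return way;   /* early hit: destination way is known */
    return -1;            /* early miss: way unknown at LSQ stage */
}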
II. PROPOSED ETA CACHE
FIG. 3. Data Buffer
The data buffer has separate write and read ports to support parallel write and read operations. The write operations of the data buffer always start one clock cycle later than the corresponding write operations in the LSQ. This is because the accesses to the LSQ, the LSQ tag arrays, and the LSQ TLB occur simultaneously. Since the way information becomes available only after the write operations in the LSQ, this data is written into the data buffer one clock cycle later than the corresponding write operation in the LSQ.
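The cycle-level sketch below (our own approximation; names are assumptions) models this one-cycle delay: way information produced in cycle t is committed to the data buffer in cycle t+1, while reads proceed through a separate port.

#include <stdint.h>

#define BUF_ENTRIES 32

/* Toy model of the data buffer: separate write and read ports, with
 * the write of the way information delayed by one clock cycle
 * relative to the corresponding LSQ write. */
struct data_buffer {
    int8_t way[BUF_ENTRIES];  /* destination way per entry, -1 = unknown */
    int    pending_idx;       /* entry whose way info commits next cycle */
    int8_t pending_way;
};

static void buf_init(struct data_buffer *b)
{
    for (int i = 0; i < BUF_ENTRIES; i++)
        b->way[i] = -1;
    b->pending_idx = -1;      /* no write in flight */
}

/* Called in the same cycle as the LSQ write: the way information is
 * only scheduled here and becomes visible one cycle later. */
static void buf_schedule_write(struct data_buffer *b, int idx, int8_t way)
{
    b->pending_idx = idx;
    b->pending_way = way;
}

static void buf_clock_tick(struct data_buffer *b)
{
    if (b->pending_idx >= 0) {
        b->way[b->pending_idx] = b->pending_way;  /* delayed write commits */
        b->pending_idx = -1;
    }
}

/* Read port, usable in parallel with the write port. */
static int8_t buf_read(const struct data_buffer *b, int idx)
{
    return b->way[idx];
}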
III. WAY HIT/MISS DECODER

FIG. 5. Way Decoder
In the proposed ETA cache, way-enabling signals are required to control the accesses to the ways in the data arrays. Fig. 5 shows the implementation of the way decoder that generates these signals. When an instruction is associated with an early hit (e.g., the early hit flag is 1), the data arrays need to be accessed according to the early destination way. If the instruction experiences an early tag miss or an early TLB miss, the configuration bit shown in the figure determines which ways in the data arrays of the L1 data cache need to be accessed. Specifically, by setting the configuration bit to 1, the ETA cache operates under the basic mode.
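A minimal combinational sketch of this decoder in C follows (signal names and bit layout are our own assumptions): on an early hit only the recorded destination way is enabled; on an early tag or TLB miss, the configuration bit selects between enabling all ways (basic mode) and the reduced subset used by the enhanced mode, whose derivation is not modeled here.

#include <stdint.h>

#define NUM_WAYS 4
#define ALL_WAYS ((1u << NUM_WAYS) - 1u)

/* Way decoder sketch: produce a multi-hot enable mask for the L1 data
 * arrays.  'reduced_mask' stands in for whatever subset of ways the
 * enhanced mode determines for early misses. */
static uint8_t way_enable(int early_hit, int dest_way,
                          int config_bit, uint8_t reduced_mask)
{
    if (early_hit)
        return (uint8_t)(1u << dest_way);  /* enable only the destination way */
    if (config_bit)
        return ALL_WAYS;                   /* basic mode: enable every way */
    return reduced_mask;                   /* enhanced mode: enable a subset */
}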
IV. RESULTS

Power analysis report: the proposed ETA cache consumes 203 mW of power on average in the L1 data cache and TLB.

Latency report: the proposed design incurs negligible performance overhead.
V. CONCLUSION
This paper presents a new energy-efficient cache technique for high-performance microprocessors using the write-through policy. The proposed technique attaches a way tag to each way in the L2 cache. This way tag is sent to the way-tag arrays in the L1 cache when the data is loaded from the L2 cache to the L1 cache. Utilizing the way tags stored in the way-tag arrays, the L2 cache can be accessed as a direct-mapped cache during the subsequent write hits, thereby reducing cache energy consumption. Simulation results demonstrate a considerable reduction in cache energy consumption.
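To illustrate the mechanism summarized above, the sketch below (our own illustration; names are assumptions) uses a way tag remembered alongside an L1 line to address the L2 cache as if it were direct-mapped on a subsequent write hit.

#include <stdint.h>

#define L2_WAYS 8

/* When a line is brought from L2 into L1, the L2 way it came from is
 * remembered in a small way-tag array.  On a later write hit, L2 can
 * then be accessed like a direct-mapped cache: only the remembered
 * way is enabled. */
struct l1_line_meta {
    uint8_t l2_way_tag;     /* which L2 way this line resides in */
    uint8_t way_tag_valid;
};

static uint8_t l2_write_enable_mask(const struct l1_line_meta *m)
{
    if (m->way_tag_valid)
        return (uint8_t)(1u << m->l2_way_tag);  /* single way enabled */
    return (uint8_t)((1u << L2_WAYS) - 1u);     /* fall back: all ways */
}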