CAO Assignment


Submitted By: Lakhbir Singh (SG 18329, CSE 4th Sem.)
Submitted To: Mrs. Hina Sood
1. Explain in detail SRAM and DRAM.
SRAM
SRAM stands for Static Random Access Memory.

Static RAM chip from a Nintendo Entertainment System.

SRAM is a type of semiconductor random-access
memory that uses bistable latching circuitry (a flip-flop)
to store each bit. SRAM exhibits data remanence, but
it is still volatile in the conventional sense: data is
eventually lost when the memory is not powered. The
term static differentiates SRAM from DRAM
(dynamic random-access memory), which must be
periodically refreshed. SRAM is faster and more
expensive than DRAM; it is typically used for CPU
cache, while DRAM is used for a computer's main
memory.
History
In 1965, Arnold Farber and Eugene Schlig, working
for IBM, created a hard-wired memory cell, using a
transistor gate and tunnel diode latch. They replaced
the latch with two transistors and two resistors, a
configuration that became known as the Farber-Schlig
cell. In 1965, Benjamin Agusta and his team
at IBM created a 16-bit silicon memory chip based on
the Farber-Schlig cell, with 80 transistors, 64
resistors, and 4 diodes.
The first commercial DRAM (built from discrete
transistors and capacitors) was produced the same
year, 1965.
Characteristics
Advantages and Disadvantages of SRAM
Advantages:
● Simplicity – a refresh circuit is not needed
● Performance
● Reliability
● Low idle power consumption
Disadvantages:
● Price
● Density
● High operational power consumption
USE OF SRAM
Embedded use

Many categories of industrial and scientific
subsystems, automotive electronics, and similar devices
contain static RAM which, in this context, may be
referred to as ESRAM. Some amount (kilobytes or
less) is also embedded in practically all modern
appliances, toys, etc. that implement an electronic user
interface. Several megabytes may be used in complex
products such as digital cameras, cell phones,
synthesizers, game consoles, etc.
SRAM in its dual-ported form is sometimes used for
real-time digital signal processing circuits.
In computers

SRAM is also used in personal computers,
workstations, routers and peripheral equipment: CPU
register files, internal CPU caches and external burst
mode SRAM caches, hard disk buffers, router buffers,
etc. LCD screens and printers also normally employ
static RAM to hold the image displayed (or to be
printed). Static RAM was used for the main memory of
some early personal computers.
Design of SRAM

A typical SRAM cell is made up of six MOSFETs. Each
bit in an SRAM is stored on four transistors (M1, M2,
M3, M4) that form two cross-coupled inverters. This
storage cell has two stable states which are used to
denote 0 and 1. Two additional access transistors
serve to control the access to a storage cell during
read and write operations. In addition to such
six-transistor (6T) SRAM, other kinds of SRAM chips use
4, 8, 10 (4T, 8T, 10T SRAM), or more transistors per
bit. Four-transistor SRAM is quite common in
stand-alone SRAM devices (as opposed to SRAM used for
CPU caches), implemented in special processes with
an extra layer of polysilicon, allowing for very
high-resistance pull-up resistors. The principal drawback of
using 4T SRAM is increased static power due to the
constant current flow through one of the pull-down
transistors.
SRAM Operation
An SRAM cell has three different states: standby (the
circuit is idle), reading (the data has been requested) or
writing (updating the contents). An SRAM cell in read
and write mode should have "readability" and "write
stability", respectively. The three different
states work as follows:
Standby
If the word line is not asserted, the access transistors
M5 and M6 disconnect the cell from the bit lines. The
two cross-coupled inverters formed by M1 – M4 will
continue to reinforce each other as long as they are
connected to the supply.
Reading
In theory, reading only requires asserting the word line
WL and reading the SRAM cell state through a single
access transistor and bit line, e.g. M6 and BL. However, bit
lines are relatively long and have large parasitic
capacitance. To speed up reading, a more complex
process is used in practice: the read cycle is started
by precharging both bit lines, BL and its complement
/BL, to a high (logic 1) voltage. Asserting the word line
WL then enables both access transistors M5 and M6,
which causes the voltage on one bit line to drop
slightly. The BL and /BL lines will then have a small
voltage difference between them. A sense amplifier
senses which line has the higher voltage and thus
determines whether a 1 or a 0 was stored. The higher
the sensitivity of the sense amplifier, the faster the
read operation. Because NMOS transistors are
stronger, pulling down is easier; therefore, bit lines are
traditionally precharged to a high voltage. Many
researchers are also trying to precharge at a slightly
lower voltage to reduce power consumption.
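The read sequence above can be sketched as a toy model. The voltage values are illustrative only; a real sense amplifier resolves an analog differential, not floats in software.

```python
# Toy model of the SRAM read sequence: precharge both bit lines high,
# assert WL so the cell pulls one line slightly down, then let the sense
# amplifier decide which line is higher. Values are illustrative.

VDD = 1.0     # precharge (logic-high) voltage
DROP = 0.1    # small droop caused by the cell pulling one line down

def read_cell(stored_bit):
    bl, bl_bar = VDD, VDD        # precharge BL and /BL to high
    if stored_bit == 1:
        bl_bar -= DROP           # cell pulls /BL down when storing a 1
    else:
        bl -= DROP               # cell pulls BL down when storing a 0
    # The sense amplifier only compares which line is higher.
    return 1 if bl > bl_bar else 0

assert read_cell(1) == 1
assert read_cell(0) == 0
```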
Writing

The write cycle begins by applying the value to be
written to the bit lines. If we wish to write a 0, we
apply a 0 to the bit lines, i.e. setting /BL to 1 and BL to
0. This is similar to applying a reset pulse to an SR
latch, which causes the flip-flop to change state. A 1 is
written by inverting the values of the bit lines. WL is
then asserted and the value that is to be stored is
latched in. This works because the bit-line input drivers
are designed to be much stronger than the relatively
weak transistors in the cell itself, so they can easily
override the previous state of the cross-coupled
inverters. In practice, the access NMOS transistors M5
and M6 have to be stronger than either the bottom
NMOS (M1, M3) or the top PMOS (M2, M4) transistors.
This is easily achieved, as PMOS transistors are much
weaker than NMOS transistors of the same size.
Consequently, when one transistor pair (e.g. M3 and
M4) is only slightly overridden by the write process,
the gate voltage of the opposite transistor pair (M1
and M2) is also changed. This means that the M1 and
M2 transistors can be overridden more easily, and so
on. Thus, the cross-coupled inverters magnify the
writing process.
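The three operating states of a cell can be summed up in a minimal sketch. The cross-coupled inverters are modelled as a single held bit; the word line simply gates whether the bit-line drivers may override the stored state.

```python
# Minimal sketch of a 6T SRAM cell's three states described above.
# The cross-coupled inverter pair is modelled as one held bit `q`.

class SRAMCell:
    def __init__(self):
        self.q = 0            # state held by the cross-coupled inverters

    def standby(self):
        # WL deasserted: M5/M6 off, the inverters simply reinforce q.
        return self.q

    def read(self):
        # WL asserted; the sense amplifier resolves the bit-line difference.
        return self.q

    def write(self, bit):
        # Strong bit-line drivers override the weak cell transistors.
        self.q = bit

cell = SRAMCell()
cell.write(1)
assert cell.read() == 1
cell.write(0)
assert cell.standby() == 0
```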

DRAM
DRAM stands for Dynamic Random Access Memory.
Dynamic random-access memory (DRAM) is a type
of random access semiconductor memory that stores
each bit of data in a memory cell consisting of a tiny
capacitor and a transistor, both typically based on
metal-oxide-semiconductor (MOS) technology. The
capacitor can either be charged or discharged; these
two states are taken to represent the two values of a
bit, conventionally called 0 and 1. The electric charge
on the capacitors slowly leaks off, so without
intervention the data on the chip would soon be lost. To
prevent this, DRAM requires an external memory
refresh circuit which periodically rewrites the data in the
capacitors, restoring them to their original charge. This
refresh process is the defining characteristic of
dynamic random-access memory, in contrast to static
random-access memory (SRAM) which does not
require data to be refreshed. Unlike flash memory,
DRAM is volatile memory (vs. non-volatile memory),
since it loses its data quickly when power is removed.
However, DRAM does exhibit limited data remanence.
DRAM typically takes the form of an integrated circuit
chip, which can consist of dozens to billions of DRAM
memory cells. DRAM chips are widely used in digital
electronics where low-cost and high-capacity computer
memory is required. One of the largest applications for
DRAM is the main memory (colloquially called the
"RAM") in modern computers and graphics cards
(where the "main memory" is called the graphics
memory). It is also used in many portable devices and
video game consoles. In contrast, SRAM, which is
faster and more expensive than DRAM, is typically
used where speed is of greater concern than cost and
size, such as the cache memories in processors.
History
The cryptanalytic machine code-named "Aquarius"
used at Bletchley Park during World War II
incorporated a hard-wired dynamic memory. Paper
tape was read and the characters on it "were
remembered in a dynamic store. ... The store used a
large bank of capacitors, which were either charged or
not, a charged capacitor representing cross (1) and an
uncharged capacitor dot (0). Since the charge
gradually leaked away, a periodic pulse was applied to
top up those still charged (hence the term 'dynamic')".

Characteristics
Advantages and Disadvantages of DRAM
Advantages
➨DRAM memory can be deleted and refreshed
while running the program.
➨It is cheaper compared to SRAM.
➨It is smaller in size.
➨It has a higher storage capacity. Hence it is
used to create larger RAM space in a system.
➨It is simpler in structure than SRAM.
Disadvantages
➨It is comparatively slower than SRAM. Hence
it takes more time to access data or
information.
➨It loses data when power is off.
➨It has higher power consumption compared to
SRAM.
USE OF DRAM
Dynamic Random Access Memory (DRAM) is a type
of memory used for the temporary storage of
information in computer systems. DRAM has dozens
of applications, many of which have evolved over the
past ten years. As technology advances and next-
generation devices are in development, the
applications for various forms of DRAM are
becoming broader.
Personal Computers and Mobile Devices
Early DRAM was mostly used in personal computers
and related components. With the great demand for
mobile technology, DRAM is now needed for many
handheld and mobile devices as well. For example,
special DRAM is designed for certain devices that
require low power consumption and long battery life.
Below is a list of PC and mobile related equipment in
which DRAM is critical:
● Cell Phones
● Desktop Computers
● Digital Signal Controller (DSC)
● Global Positioning System (GPS)
● Personal Digital Assistant (PDA)
● Smartphones
● Tablets and Pads
Consumer Electronics

With items like smart watches and mobile
gadgets expected to be big sellers in the coming
year, DRAM in consumer electronics is still a hot
year, DRAM in consumer electronics is still a hot
commodity. Some of these devices also require low
power consumption:

● Digital Cameras
● Portable Media Players
● Set-top Boxes
● Smart TVs
● Video Cards

DESIGN of DRAM
A DRAM cell consists of a capacitor connected by a
pass transistor to the bit line (or digit line or column
line). The digit line (or column line) is connected to a
multitude of cells arranged in a column. The word line
(or row line) is also connected to a multitude of cells,
but arranged in a row. (See Figure 2.) If the word line is
asserted, then the pass transistor T1 in Figure 1 is
turned on and the capacitor C1 is connected to the bit
line.
The DRAM memory cell stores binary information in
the form of a stored charge on the capacitor. The
capacitor's common node is biased approximately at
VCC/2. The cell therefore contains a charge of Q =
±VCC/2 • Ccell, if the capacitance of the capacitor is Ccell.
The charge is Q = +VCC/2 • Ccell if the cell stores a 1,
otherwise the charge is Q = -VCC/2 • Ccell. Various leak
currents will slowly remove the charge, making a
refresh operation necessary.
If we open the pass transistor by asserting the word
line, then the charge will dissipate over the digit line,
leading to a voltage change given by the following
(where Vsignal is the observed voltage change on the
digit line, Ccell the capacitance of the DRAM cell
capacitor, and Cline the capacitance of the digit line):
Vsignal = Vcell · Ccell / (Ccell + Cline)
For example, if VCC is 3.3V, then Vcell is 1.65V. Typical
values for the capacitances are Cline = 300fF and Ccell =
50fF. This leads to a signal strength of about 236 mV. When
a DRAM cell is accessed, it shares its charge with the
digit line.
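Plugging the example numbers from the text into the signal formula confirms the quoted magnitude:

```python
# V_signal = V_cell * C_cell / (C_cell + C_line), using the values above.

VCC = 3.3            # supply voltage in volts
v_cell = VCC / 2     # 1.65 V stored on the cell capacitor
c_cell = 50e-15      # 50 fF cell capacitance
c_line = 300e-15     # 300 fF digit-line capacitance

v_signal = v_cell * c_cell / (c_cell + c_line)
print(round(v_signal * 1000, 1))   # → 235.7 (mV)
```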
DRAM OPERATIONS

Dynamic RAM read / write operation
One of the critical issues within the dynamic RAM is to
ensure that the read and write functions are carried out
effectively. As voltages on the charge capacitors are
small, noise immunity is a key issue.
There are several lines that are used in the read and
write operations:
● /CAS, the Column Address Strobe: This line
selects the column to be addressed. The address
inputs are captured on the falling edge of /CAS. It
enables a column to be selected from the open row
for read or write operations.
● /OE, Output Enable: The /OE signal is typically
used when controlling multiple memory chips in
parallel. It controls the output to the data I/O pins.
The data pins are driven by the DRAM chip if /RAS
and /CAS are low, /WE is high, and /OE is low. In
many applications, /OE can be permanently
tied low, i.e. the output is always enabled, if it is not
required, for example when chips are not wired in
parallel.
● /RAS, the Row Address Strobe: As the name
implies, the /RAS line strobes the row to be
addressed. The address inputs are captured on the
falling edge of the /RAS line. The row is held open
as long as /RAS remains low.
● /WE, Write Enable: This signal determines
whether a given falling edge of /CAS is a read or
write. Low enables the write action, while high
enables a read action. If low (write), the data inputs
are also captured on the falling edge of /CAS.
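The data-pin enable rule quoted above (/RAS and /CAS low, /WE high, /OE low) can be checked with a small truth-table sketch, modelling each active-low signal as the boolean level on the pin (True = high):

```python
# Sketch of the rule: the DRAM drives its data I/O pins only when
# /RAS and /CAS are low, /WE is high, and /OE is low (a read cycle).

def data_pins_driven(ras_n, cas_n, we_n, oe_n):
    """Return True if the chip drives the data I/O pins."""
    return (not ras_n) and (not cas_n) and we_n and (not oe_n)

# A read cycle: row and column strobed, write disabled, output enabled.
assert data_pins_driven(ras_n=False, cas_n=False, we_n=True, oe_n=False)
# A write cycle (/WE low): the chip does not drive the pins.
assert not data_pins_driven(ras_n=False, cas_n=False, we_n=False, oe_n=False)
```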

Dynamic RAM refresh operation
One of the problems with this arrangement is that the
capacitors do not hold their charge indefinitely as there
is some leakage across the capacitor. It would not be
acceptable for the memory to lose its data, and to
overcome this problem the data is refreshed
periodically. The data is sensed and written and this
then ensures that any leakage is overcome, and the
data is re-instated.
One of the key elements of DRAM memory is the fact
that the data is refreshed periodically to overcome the
fact that charge on the storage capacitor leaks away
and the data would disappear after a short while.
Typically manufacturers specify that each row should
be refreshed every 64 ms. This time interval falls in line
with the JEDEC standards for dynamic RAM refresh
periods.
There are a number of ways in which the refresh
activity can be accomplished. Some processor systems
refresh every row together once every 64 ms. Other
systems refresh one row at a time, but this has the
disadvantage that for large memories the refresh rate
becomes very fast. Some other systems (especially
real-time systems where speed is of the essence)
adopt an approach whereby a portion of the
semiconductor memory is refreshed at a time, based
on an external timer that governs the operation of the
rest of the system. In this way the refresh does not
interfere with the operation of the system.
Whatever method is used, a counter is needed to
track which row of the DRAM memory is to be
refreshed next. Some DRAM chips include such a
counter; otherwise it is necessary to include an
additional counter for this purpose.
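The bookkeeping above can be sketched in a few lines: with a 64 ms window and distributed refresh, one row is visited every 64 ms / num_rows, and a counter wraps around the rows. The row count of 8192 is an illustrative value, not taken from the text.

```python
# Row-refresh sketch: every row must be refreshed within 64 ms, so a
# distributed-refresh controller visits one row every 64 ms / num_rows.

REFRESH_WINDOW_MS = 64.0
NUM_ROWS = 8192          # illustrative; depends on the actual chip

per_row_interval_us = REFRESH_WINDOW_MS * 1000.0 / NUM_ROWS
print(per_row_interval_us)    # → 7.8125 (µs between row refreshes)

class RefreshCounter:
    """Tracks which row is to be refreshed next, wrapping around."""
    def __init__(self, num_rows):
        self.num_rows = num_rows
        self.row = 0

    def next_row(self):
        r = self.row
        self.row = (self.row + 1) % self.num_rows
        return r

rc = RefreshCounter(4)
assert [rc.next_row() for _ in range(5)] == [0, 1, 2, 3, 0]
```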
It may appear that the refresh circuitry required for
DRAM memory would overcomplicate the overall
memory circuit, making it more expensive. However, it
is found that the additional circuitry is not a major
concern if it can be integrated into the memory chip
itself. It is also found that DRAM memory is much
cheaper and has a much greater capacity than the
other major contender, static RAM (SRAM).
CACHE MEMORY

Cache memory is a special, very high-speed memory.
It is used to speed up operation and keep pace with a
high-speed CPU. Cache memory is costlier than main
memory or disk memory but more economical than
CPU registers. Cache memory is an extremely fast
memory type that acts as a buffer between RAM and
the CPU. It holds frequently requested data and
instructions so that they are immediately available to
the CPU when needed.
Cache memory is used to reduce the average time to
access data from the Main memory. The cache is a
smaller and faster memory which stores copies of the
data from frequently used main memory locations.
There are several independent caches in a CPU,
which store instructions and data.
Levels of memory:
● Level 1 or Registers –
Registers hold data that is immediately processed
by the CPU. Commonly used registers include the
accumulator, program counter, address register, etc.
● Level 2 or Cache memory –
It is the fastest memory, with faster access time,
where data is temporarily stored for faster access.
● Level 3 or Main Memory –
It is the memory on which the computer currently
works. It is small in size, and once power is off the
data no longer stays in this memory.
● Level 4 or Secondary Memory –
It is external memory which is not as fast as main
memory, but data stays permanently in this memory.
Cache Performance:
When the processor needs to read or write a location in
main memory, it first checks for a corresponding entry
in the cache.
● If the processor finds that the memory location is in
the cache, a cache hit has occurred and data is
read from cache
● If the processor does not find the memory location
in the cache, a cache miss has occurred. For a
cache miss, the cache allocates a new entry and
copies in data from main memory, then the request
is fulfilled from the contents of the cache.
The performance of cache memory is frequently
measured in terms of a quantity called Hit ratio.
Hit ratio = hits / (hits + misses) = no. of hits / total accesses
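The hit-ratio formula above, applied to a made-up access trace:

```python
# Hit ratio = hits / (hits + misses), as defined in the text.

def hit_ratio(hits, misses):
    return hits / (hits + misses)

# e.g. 90 hits and 10 misses out of 100 total accesses:
assert hit_ratio(90, 10) == 0.9
```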
Cache Mapping:
There are three different types of mapping used for the
purpose of cache memory which are as follows: Direct
mapping, Associative mapping, and Set-Associative
mapping. These are explained below.
Direct Mapping –
The simplest technique, known as direct mapping,
maps each block of main memory into only one
possible cache line. or
In direct mapping, each memory block is assigned to a
specific line in the cache. If a line is already occupied
by a memory block when a new block needs to be
loaded, the old block is discarded. An address is split
into two parts, an index field and a tag field. The
cache stores the tag field, while the index selects the
cache line. Direct mapping's performance is directly
proportional to the hit ratio.
Graphical representation of Direct Mapping.
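The address split used by direct mapping can be sketched as follows. The sizes (16-byte blocks, 16 cache lines, 16-bit addresses) are assumptions chosen for illustration, not taken from the text:

```python
# Direct-mapped address split, assuming 16-byte blocks (4 offset bits),
# 16 cache lines (4 index bits), and 16-bit addresses (8 tag bits).

OFFSET_BITS = 4
INDEX_BITS = 4

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# 0x1234 -> tag 0x12, index 0x3, offset 0x4: the index fixes which single
# cache line this block may occupy; the tag disambiguates blocks there.
assert split_address(0x1234) == (0x12, 0x3, 0x4)
```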

Associative Mapping –
In this type of mapping, associative memory is used
to store both the content and the address of the
memory word. Any block can go into any line of the
cache. This means that the word ID bits are used to
identify which word in the block is needed, while the
tag becomes all of the remaining bits. This enables
the placement of any word at any place in the cache
memory. It is considered to be the fastest and the
most flexible mapping form.
Set-Associative Mapping –
This mapping is a compromise between the two
above: the cache is divided into sets, each block of
main memory maps to exactly one set, but within that
set the block may be placed in any line.

Application of Cache Memory –
1. Usually, the cache memory can store a
reasonable number of blocks at any given time, but
this number is small compared to the total number
of blocks in the main memory.
2. The correspondence between the main memory
blocks and those in the cache is specified by a
mapping function.

Locality of reference –
Since the size of cache memory is small compared to
main memory, deciding which part of main memory
should be given priority and loaded into the cache is
based on locality of reference.
Types of locality of reference:
1. Spatial locality of reference –
This says that there is a good chance that a word
close to the referenced word will be needed soon.
For this reason, on a miss the complete block
containing the referenced word is loaded into the
cache, so that neighbouring words are already
present when they are referenced.
2. Temporal locality of reference –
This says that a recently used word is likely to be
used again soon, so recently used blocks should be
kept in the cache. A least recently used (LRU)
replacement algorithm exploits this: when a block
must be evicted, the one unused for the longest
time is removed.
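Temporal locality and LRU replacement can be sketched with a tiny two-block cache; this is an illustrative model, not a hardware implementation:

```python
# Sketch of LRU replacement: a recently used block is kept, and the
# block unused for the longest time is evicted on a capacity miss.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # keys ordered oldest -> newest

    def access(self, key):
        """Return True on a hit, False on a miss (which loads the block)."""
        if key in self.blocks:
            self.blocks.move_to_end(key)     # hit: mark most recently used
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[key] = None
        return False

c = LRUCache(2)
c.access("A"); c.access("B")   # two misses fill the cache
c.access("A")                  # touch A, so B becomes least recently used
c.access("C")                  # capacity miss: evicts B, not A
assert c.access("A") and not c.access("B")
```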
CACHE UPDATION SCHEMES

A big challenge of caching is to keep the data stored
in the cache and the data stored in the remote
system in sync, meaning that the data is the same.
This is known as cache updation. Depending on how
your system is structured, there are different ways of
keeping the data in sync. I will cover some of the
possible techniques in the following sections.
Write-through Caching
A write-through cache is a cache which allows both
reading and writing to it. If the computer keeping the
cache writes new data to the cache, that data is also
written to the remote system. That is why it is called a
"write-through" cache. The writes are written through to
the remote system.
Write-through caching works if the remote system can
only be updated via the computer keeping the cache. If
all data writes go through the computer with the
cache, it is easy to forward the writes to the remote
system and update the cache correspondingly.
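A write-through cache can be sketched as below. `RemoteStore` is a stand-in for whatever backing system you actually use; the point is only that every write updates both places:

```python
# Write-through caching sketch: every write goes to the local cache AND
# is forwarded to the remote system, so the two stay in sync.

class RemoteStore:
    """Stand-in for the remote system (database, service, etc.)."""
    def __init__(self):
        self.data = {}

class WriteThroughCache:
    def __init__(self, remote):
        self.remote = remote
        self.local = {}

    def write(self, key, value):
        self.local[key] = value          # update the cache ...
        self.remote.data[key] = value    # ... and write through immediately

    def read(self, key):
        if key not in self.local:        # miss: fetch from the remote system
            self.local[key] = self.remote.data[key]
        return self.local[key]

remote = RemoteStore()
cache = WriteThroughCache(remote)
cache.write("user:1", "Lakhbir")
assert remote.data["user:1"] == "Lakhbir"   # remote updated on every write
assert cache.read("user:1") == "Lakhbir"
```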
Time Based Expiry
If the remote system can be updated independently of
the computer keeping the cache, then it can be a
problem to keep the cache and the remote system in
sync.
One way to keep the data in sync is to let the data in
the cache expire after a certain time interval. When the
data has expired it will be removed from the cache.
When the data is needed again, a fresh version of the
data is read from the remote system and inserted into
the cache.
How long the expiration time should be depends on
your needs. Some types of data (like an article) may
not need to be fully up-to-date at all times. Maybe you
can live with a 1 hour expiration time. For some articles
you might even be able to live with a 24 hour expiration
time.
Keep in mind that a short expiration time will result in
more reads from the remote system, thus reducing the
benefit of the cache.
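Time-based expiry can be sketched as follows. The clock is injected as a function so the example is deterministic; in real code you would pass `time.time`:

```python
# Time-based expiry sketch: each entry remembers when it was stored and
# is discarded once it is older than ttl_seconds.

class ExpiringCache:
    def __init__(self, ttl_seconds, clock):
        self.ttl = ttl_seconds
        self.clock = clock            # e.g. time.time in real code
        self.entries = {}             # key -> (value, stored_at)

    def put(self, key, value):
        self.entries[key] = (value, self.clock())

    def get(self, key):
        if key in self.entries:
            value, stored_at = self.entries[key]
            if self.clock() - stored_at < self.ttl:
                return value
            del self.entries[key]     # expired: drop the stale entry
        return None                   # caller re-reads the remote system

now = [0.0]
cache = ExpiringCache(ttl_seconds=3600, clock=lambda: now[0])
cache.put("article:42", "cached body")
now[0] = 1800.0
assert cache.get("article:42") == "cached body"   # still fresh at 30 min
now[0] = 4000.0
assert cache.get("article:42") is None            # expired after 1 hour
```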
Active Expiry
An alternative to time based expiration is active
expiration. By active expiration I mean that you actively
expire the cached data. For instance, if your remote
system is updated you may send a message to the
computer keeping the cache, instructing it to expire the
data that was updated.
Active expiry has the advantage that the data in the
cache is made up-to-date as fast as possible after the
update in the remote system. Additionally, you don't
have any unnecessary expirations for data that has not
changed, as you may have with time based expiration.
The disadvantage of active expiration is that you need
to be able to detect changes to the remote system. If
your remote system is a relational database, and this
database can be updated through different
mechanisms, each of these mechanisms needs to be
able to report what data they have updated. Otherwise
you cannot send an expiration message to the
computer keeping the cache.
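Active expiry can be sketched as an invalidation handler: when the remote system reports which keys changed, the cache drops exactly those entries and nothing else. The message format here is an assumption for illustration:

```python
# Active-expiry sketch: the remote system sends a message naming the
# updated keys, and the cache expires only those entries.

class ActivelyExpiredCache:
    def __init__(self):
        self.entries = {}

    def put(self, key, value):
        self.entries[key] = value

    def get(self, key):
        return self.entries.get(key)

    def on_update_message(self, updated_keys):
        # Called when the remote system reports which data it changed.
        for key in updated_keys:
            self.entries.pop(key, None)

cache = ActivelyExpiredCache()
cache.put("price:7", 100)
cache.put("price:8", 200)
cache.on_update_message(["price:7"])    # only the changed key is expired
assert cache.get("price:7") is None
assert cache.get("price:8") == 200      # unchanged data stays cached
```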
