
Other uses of RAM

[Image: A SO-DIMM stick of laptop RAM, roughly half the size of desktop RAM.]

In addition to serving as temporary storage and working space for the operating system and
applications, RAM is used in numerous other ways.

Virtual memory

Main article: Virtual memory

Most modern operating systems employ a method of extending RAM capacity known as "virtual
memory". A portion of the computer's hard drive is set aside for a paging file or a scratch
partition, and the combination of physical RAM and the paging file forms the system's total
memory. (For example, if a computer has 2 GB (2 × 1024³ bytes) of RAM and a 1 GB page file, the
operating system has 3 GB of total memory available to it.) When the system runs low on physical
memory, it can "swap" portions of RAM to the paging file to make room for new data, and read
previously swapped data back into RAM when it is needed again. Excessive use of this mechanism
results in thrashing and generally hampers overall system performance, mainly because hard
drives are far slower than RAM.
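
As an illustration, the split between physical RAM and the paging file can be inspected
programmatically. The following Python sketch uses the third-party psutil library (an
assumption: psutil must be installed separately) to report physical and swap totals,
mirroring the RAM-plus-page-file arithmetic above.

    # Sketch: inspect physical RAM and swap/page-file sizes with psutil.
    # Assumes the third-party psutil package is installed (pip install psutil).
    import psutil

    ram = psutil.virtual_memory()   # physical RAM statistics
    swap = psutil.swap_memory()     # paging file / swap partition statistics

    gib = 1024 ** 3
    print(f"Physical RAM:   {ram.total / gib:.1f} GiB")
    print(f"Swap/page file: {swap.total / gib:.1f} GiB")
    # Total memory available to the OS, as in the 2 GB + 1 GB example above:
    print(f"Combined:       {(ram.total + swap.total) / gib:.1f} GiB")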

RAM disk

Main article: RAM drive

Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard
drive that is called a RAM disk. A RAM disk loses the stored data when the computer is shut
down, unless memory is arranged to have a standby battery source, or changes to the RAM disk
are written out to a nonvolatile disk. The RAM disk is reloaded from the physical disk upon
RAM disk initialization.
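
On Linux, a RAM disk is commonly provided by a tmpfs mount (for example,
mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk). The Python sketch below, which assumes such
a mount already exists at the hypothetical path /mnt/ramdisk, times the same write against
the RAM disk and an ordinary disk-backed directory to show the speed difference.

    # Sketch: compare write speed of a RAM disk vs. a disk-backed path.
    # Assumes a tmpfs RAM disk is already mounted at /mnt/ramdisk (hypothetical path).
    import os
    import time

    def time_write(path, size=128 * 1024 * 1024):
        """Write `size` bytes to `path`, flush to the backing store, return seconds."""
        data = os.urandom(size)
        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the write out of OS buffers
        return time.perf_counter() - start

    print("RAM disk:", time_write("/mnt/ramdisk/test.bin"), "s")
    print("Disk:    ", time_write("/var/tmp/test.bin"), "s")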

Shadow RAM

Sometimes, the contents of a relatively slow ROM chip are copied to read/write memory to allow
for shorter access times. The ROM chip is then disabled while the initialized memory locations
are switched in on the same block of addresses (often write-protected). This process, sometimes
called shadowing, is fairly common in both computers and embedded systems.
As a common example, the BIOS in typical personal computers often has an option called "use
shadow BIOS" or similar. When enabled, functions that rely on data from the BIOS's ROM
instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM
sections). Depending on the system, this may not result in increased performance, and may cause
incompatibilities. For example, some hardware may be inaccessible to the operating system if
shadow RAM is used. On some systems the benefit may be hypothetical because the BIOS is not
used after booting in favor of direct hardware access. Free memory is reduced by the size of the
shadowed ROMs.[35]

Memory wall
The "memory wall" is the growing disparity between CPU speed and the response time of
memory (known as memory latency) outside the CPU chip. An important reason for this
disparity is the limited communication bandwidth beyond chip boundaries, also referred to
as the bandwidth wall. From 1986 to 2000, CPU speed improved at an annual rate of 55%,
while off-chip memory response time improved at only 10% per year. Given these trends, it
was expected that memory latency would become an overwhelming bottleneck in computer
performance.[36]
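
To see why those growth rates were alarming, compound them over the period cited: at 55%
versus 10% annual improvement, the relative gap multiplies by roughly 1.55/1.10 ≈ 1.41
each year. A quick Python calculation:

    # Compound the 1986-2000 trends cited above: CPU speed +55%/yr, memory +10%/yr.
    years = 2000 - 1986
    cpu_growth = 1.55 ** years   # relative CPU speed after 14 years (~460x)
    mem_growth = 1.10 ** years   # relative memory speed after 14 years (~3.8x)
    print(f"CPU: {cpu_growth:.0f}x, memory: {mem_growth:.0f}x")
    print(f"Processor-memory gap: {cpu_growth / mem_growth:.0f}x wider")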

Another reason for the disparity is the enormous increase in the size of memory since the start of
the PC revolution in the 1980s. Originally, PCs contained less than 1 mebibyte of RAM, which
often had a response time of 1 CPU clock cycle, meaning that it required 0 wait states. Larger
memory units are inherently slower than smaller ones of the same type, simply because it takes
longer for signals to traverse a larger circuit. Constructing a memory unit of many gibibytes with
a response time of one clock cycle is difficult or impossible. Today's CPUs often still have a
mebibyte of 0 wait state cache memory, but it resides on the same chip as the CPU cores due to
the bandwidth limitations of chip-to-chip communication. It must also be constructed from static
RAM, which is far more expensive than the dynamic RAM used for larger memories. Static
RAM also consumes far more power.

CPU speed improvements slowed significantly, partly due to major physical barriers and
partly because current CPU designs have already hit the memory wall in some sense. Intel
summarized these causes in a 2005 document.[37]

First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current
increases, leading to excess power consumption and heat... Secondly, the advantages of higher
clock speeds are in part negated by memory latency, since memory access times have not been
able to keep pace with increasing clock frequencies. Third, for certain applications, traditional
serial architectures are becoming less efficient as processors get faster (due to the so-called von
Neumann bottleneck), further undercutting any gains that frequency increases might otherwise
buy. In addition, partly due to limitations in the means of producing inductance within solid state
devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes
shrink, imposing an additional bottleneck that frequency increases don't address.

The RC delays in signal transmission were also noted in "Clock Rate versus IPC: The End of the
Road for Conventional Microarchitectures"[38] which projected a maximum of 12.5% average
annual CPU performance improvement between 2000 and 2014.
A different concept is the processor-memory performance gap, which can be addressed by 3D
integrated circuits that reduce the distance between the logic and memory elements, which
are farther apart in a 2D chip.[39] Memory subsystem design requires a focus on this gap,
which widens over time.[40] The main method of bridging the gap is the use of caches: small
amounts of high-speed memory near the processor that hold recently used data and
instructions, speeding up their execution when they are needed frequently. Multiple levels
of caching have been developed to deal with the widening gap, and the performance of
high-speed modern computers relies on evolving caching techniques.[41] The growth rate of
processor speed can exceed that of main memory access by as much as 53%.[42]
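
The benefit of caching can be glimpsed even from Python: walking a large array sequentially
keeps the hardware caches hot, while touching the same elements in random order forces far
more trips to main memory. A rough, noisy sketch (assuming NumPy is available; absolute
times vary by machine, the ratio is the point):

    # Rough sketch of cache effects: sequential vs. random access over one buffer.
    import time
    import numpy as np

    n = 20_000_000                    # ~160 MB of float64, far larger than any cache
    data = np.zeros(n)
    seq = np.arange(n)                # cache-friendly, sequential indices
    rnd = np.random.permutation(n)    # cache-hostile, random indices

    for name, idx in [("sequential", seq), ("random", rnd)]:
        start = time.perf_counter()
        data[idx] += 1.0              # fancy indexing touches every element once
        print(name, f"{time.perf_counter() - start:.2f} s")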

Solid-state drives have continued to increase in speed, from roughly 400 Mbit/s via SATA3
in 2012 up to roughly 7 GB/s via NVMe/PCIe in 2024, narrowing the gap between RAM and
persistent storage speeds, although RAM remains an order of magnitude faster: a
dual-channel DDR5-8000 configuration can deliver 128 GB/s, and modern GDDR is faster still.
Fast, cheap, non-volatile solid-state drives have taken over some functions formerly
performed by RAM, such as holding certain data for immediate availability in server farms:
1 terabyte of SSD storage can be had for $200, while 1 TB of RAM would cost thousands of
dollars.[43][44]
