Computer Memory
Packaging
Memory chips are called DIPs, which stands for Dual Inline Packages. They are black
chips with pins on both sides; some say they look like black bugs. To make memory
installation easier than it was in the past, these DIP chips were placed on modules.
There are two main module types that memory comes packaged on today.
1. SIMM - Single Inline Memory Module. They may have DIPs on one or both
sides and will have 30 or 72 pins. Today, they are normally available in the
72-pin size, which supports a 32-bit data transfer between the microprocessor
and the memory.
2. DIMM - Double Inline Memory Module. The modules have 168 pins and
support a 64 bit data transfer between the microprocessor and the memory.
Synchronous Dynamic Random Access Memory (SDRAM) is the type of memory
found on DIMM packages. The term SDRAM describes the memory type, and
the term DIMM describes the package. These modules are available in 3.3-volt
or 5-volt types and in buffered or unbuffered versions, which allows four
choices of DIMM types. You should check your motherboard manual to determine
the type of memory required; you should be able to find this information on the
motherboard manufacturer's website before buying the motherboard. The most
common choice for today's motherboards is 3.3-volt unbuffered DIMMs.
To install these packages, you press them into the socket on the motherboard and secure
them with a plastic latch on each side. Normally, as the memory module is pressed
into place, the latches will automatically snap over the module. This is the essential
knowledge required to buy and install memory on your motherboard. The following
sections give further technical details.
DRAM Access
DRAM memory is accessed in chunks called cells. Every cell contains a certain
number of bits or bytes. A row-and-column scheme is used to specify the section being
accessed. The cells are arranged similar to the following table.
ROW 1, COL 1   ROW 1, COL 2   ROW 1, COL 3   ROW 1, COL 4
ROW 2, COL 1   ROW 2, COL 2   ROW 2, COL 3   ROW 2, COL 4
ROW 3, COL 1   ROW 3, COL 2   ROW 3, COL 3   ROW 3, COL 4
ROW 4, COL 1   ROW 4, COL 2   ROW 4, COL 3   ROW 4, COL 4
When the DRAM is accessed, the row, then the column address is specified. A page in
memory is considered to be the memory available in the row.
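The row-then-column addressing described above can be sketched in a few lines. This is an illustrative sketch only; the 4x4 grid and the `split_address` helper are assumptions matching the table above, not how a real memory controller is programmed.

```python
# Hypothetical sketch: splitting a linear cell address into a DRAM row and
# column, assuming the 4x4 cell grid shown in the table above.
NUM_COLS = 4

def split_address(addr):
    """Return (row, col) for a linear cell address, numbered from 1."""
    row = addr // NUM_COLS + 1
    col = addr % NUM_COLS + 1
    return row, col

for addr in range(16):
    row, col = split_address(addr)
    # The controller strobes the row first (RAS), then the column (CAS).
    print(f"address {addr:2d} -> ROW {row}, COL {col}")
```

All sixteen cells in one row share the same row strobe, which is what makes a "page" of memory cheap to access repeatedly.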
Types of DRAM
The term DRAM stands for Dynamic Random Access Memory. There are three
common types of DRAM today.
1. FPM DRAM - Fast Page Mode DRAM. When the first memory access is done,
the row (or page) of the memory is specified. Once this is done, FPM DRAM
allows any other column in that row to be accessed without specifying the row
number again. This speeds up access time.
2. EDO DRAM - Extended Data Out DRAM. This works like FPM DRAM, but it
holds the data valid even after the strobe signals have gone inactive. This allows
the microprocessor to request memory without waiting for the memory to
become valid; it can do other tasks, then come back later to get the data.
3. SDRAM - Synchronous DRAM inputs and outputs its data synchronized to a
clock that runs at some fraction of the microprocessor speed. SDRAM is the
fastest of these three types of DRAM. There is a newer SDRAM called DDR
(Double Data Rate) SDRAM which allows data transfers on both the rising and
falling edges of the synchronized clock.
Another newer type of DRAM is RDRAM, developed by Rambus, Inc. It uses a
high-bandwidth channel to transmit data at very high rates, attempting to minimize
the time it takes to access memory. SyncLink DRAM (SLDRAM) competes with
RDRAM and uses a 16-bank architecture rather than a 4-bank one, along with other
performance-enhancing improvements.
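The DDR idea mentioned above (transfers on both clock edges) is easy to see as arithmetic. The clock speed and bus width below are assumed figures for illustration, not a specification of any particular module.

```python
# Illustrative arithmetic (assumed figures): plain SDRAM performs one
# transfer per clock cycle, while DDR SDRAM transfers on both the rising
# and falling edges, doubling the data rate at the same clock speed.
clock_mhz = 100          # assumed bus clock
bus_bytes = 8            # a 64-bit DIMM moves 8 bytes per transfer

sdram_rate = clock_mhz * 1_000_000 * bus_bytes      # one transfer per clock
ddr_rate = clock_mhz * 1_000_000 * bus_bytes * 2    # two transfers per clock

print(f"SDRAM: {sdram_rate // 1_000_000} MB/s")     # SDRAM: 800 MB/s
print(f"DDR:   {ddr_rate // 1_000_000} MB/s")       # DDR:   1600 MB/s
```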
Cache Memory
Cache memory is special memory that operates much faster than SDRAM memory. It is
also more expensive. It would be impractical to use this memory for the entire system
both for reasons of expense and physical board and bus channel design requirements.
Cache memory lies between the microprocessor and the system RAM. It is used as a
buffer to reduce the time of memory access. There are two levels to this memory called
L1 (level 1) and L2 (level 2). The level 1 memory is a part of the microprocessor, and the
level 2 memory is just outside the microprocessor.
The CPU accesses memory according to a distinct hierarchy. Whether it comes
from permanent storage (the hard drive) or input (the keyboard), most data goes
into random access memory (RAM) first. The CPU
then stores pieces of data it will need to access, often in a cache, and maintains
certain special instructions in the register. We'll talk about cache and registers later.
All of the components in your computer, such as the CPU, the hard drive and
the operating system, work together as a team, and memory is one of the most
essential parts of this team. From the moment you turn your computer on until the
time you shut it down, your CPU is constantly using memory. Let's take a look at a
typical scenario:
• You turn the computer on.
• The computer loads data from read-only memory (ROM) and performs
a power-on self-test (POST) to make sure all the major components are
functioning properly. As part of this test, the memory controller checks all of
the memory addresses with a quick read/write operation to ensure that there
are no errors in the memory chips. Read/write means that data is written to
a bit and then read from that bit.
• The computer loads the basic input/output system (BIOS) from ROM. The
BIOS provides the most basic information about storage devices, boot
sequence, security, Plug and Play (auto device recognition) capability and a
few other items.
• The computer loads the operating system (OS) from the hard drive into the
system's RAM. Generally, the critical parts of the operating system are
maintained in RAM as long as the computer is on. This allows the CPU to
have immediate access to the operating system, which enhances the
performance and functionality of the overall system.
• When you open an application, it is loaded into RAM. To conserve RAM
usage, many applications load only the essential parts of the program initially
and then load other pieces as needed.
• After an application is loaded, any files that are opened for use in that
application are loaded into RAM.
• When you save a file and close the application, the file is written to the
specified storage device, and then it and the application are purged from RAM.
In the list above, every time something is loaded or opened, it is placed into RAM.
This simply means that it has been put in the computer's temporary storage
area so that the CPU can access that information more easily. The CPU requests
the data it needs from RAM, processes it and writes new data back to RAM in
a continuous cycle. In most computers, this shuffling of data between the CPU and
RAM happens millions of times every second. When an application is closed, it and
any accompanying files are usually purged (deleted) from RAM to make room for
new data. If the changed files are not saved to a permanent storage device before
being purged, they are lost.
One common question about desktop computers that comes up all the time is, "Why
does a computer need so many memory systems?"

Types of Computer Memory

A typical computer has:
• Level 1 and level 2 caches
• Normal system RAM
• Virtual memory
• A hard disk
Why so many? The answer to this question can teach you a lot about memory!
Fast, powerful CPUs need quick and easy access to large amounts of data in order
to maximize their performance. If the CPU cannot get to the data it needs, it literally
stops and waits for it. Modern CPUs running at speeds of about 1 gigahertz can
consume massive amounts of data -- potentially billions of bytes per second. The
problem that computer designers face is that memory that can keep up with a 1-
gigahertz CPU is extremely expensive -- much more expensive than anyone can
afford in large quantities.
Computer designers have solved the cost problem by "tiering" memory -- using
expensive memory in small quantities and then backing it up with larger quantities of
less expensive memory.
The cheapest form of read/write memory in wide use today is the hard disk. Hard
disks provide large quantities of inexpensive, permanent storage. You can buy hard
disk space for pennies per megabyte, but it can take a good bit of time (approaching
a second) to read a megabyte off a hard disk. Because storage space on a hard disk
is so cheap and plentiful, it forms the final stage of a CPU's memory hierarchy,
called virtual memory.
The next level of the hierarchy is RAM. We discuss RAM in detail in How RAM
Works, but several points about RAM are important here.
The bit size of a CPU tells you how many bits of information it can access from
RAM at the same time. For example, a 16-bit CPU can process 2 bytes at a time (1
byte = 8 bits, so 16 bits = 2 bytes), and a 64-bit CPU can process 8 bytes at a time.
Megahertz (MHz) is a measure of a CPU's processing speed, or clock cycle, in
millions per second. So, a 32-bit 800-MHz Pentium III can potentially process 4
bytes simultaneously, 800 million times per second (possibly more based on
pipelining)! The goal of the memory system is to meet those requirements.
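The Pentium III figure in the paragraph above is just multiplication, shown here as code for clarity. The numbers come directly from the text; everything else is a naming choice for this sketch.

```python
# The paragraph's arithmetic as code: a 32-bit CPU moves 4 bytes per cycle,
# so at 800 MHz it can touch up to 4 * 800 million bytes each second
# (ignoring pipelining, which can raise this further).
bit_size = 32
clock_hz = 800_000_000

bytes_per_cycle = bit_size // 8
peak_bytes_per_second = bytes_per_cycle * clock_hz
print(peak_bytes_per_second)  # prints 3200000000, i.e. 3.2 billion bytes/s
```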
A computer's system RAM alone is not fast enough to match the speed of the CPU.
That is why you need a cache (discussed later). However, the faster RAM is, the
better. Most chips today operate with a cycle rate of 50 to 70 nanoseconds. The
read/write speed is typically a function of the type of RAM used, such as DRAM,
SDRAM, RAMBUS. We will talk about these various types of memory later.
First, let's talk about system RAM.
System RAM
System RAM speed is controlled by bus width and bus speed. Bus width refers to
the number of bits that can be sent to the CPU simultaneously, and bus speed refers
to the number of times a group of bits can be sent each second. A bus cycle occurs
every time data travels from memory to the CPU. For example, a 100-MHz 32-bit
bus is theoretically capable of sending 4 bytes (32 bits divided by 8 = 4 bytes) of
data to the CPU 100 million times per second, while a 66-MHz 16-bit bus can send 2
bytes of data 66 million times per second. If you do the math, you'll find that simply
changing the bus width from 16 bits to 32 bits and the speed from 66 MHz to 100
MHz in our example allows for three times as much data (400 million bytes versus
132 million bytes) to pass through to the CPU every second.
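The bus-bandwidth comparison worked through above can be captured in a small helper. The formula and both example buses come from the text; the function name is invented for this sketch.

```python
# Theoretical bus bandwidth = (width in bytes) * (cycles per second),
# matching the worked example in the text.
def bus_bytes_per_second(width_bits, speed_mhz):
    """Bytes the bus can move to the CPU each second, in theory."""
    return (width_bits // 8) * speed_mhz * 1_000_000

fast = bus_bytes_per_second(32, 100)   # 100-MHz 32-bit bus
slow = bus_bytes_per_second(16, 66)    # 66-MHz 16-bit bus
print(fast, slow, fast / slow)         # 400M vs 132M bytes/s, roughly 3x
```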
Cache and Registers
Caches are designed to alleviate this bottleneck by making the data used most often
by the CPU instantly available. This is accomplished by building a small amount of
memory, known as primary or level 1 cache, right into the CPU. Level 1 cache is
very small, normally ranging between 2 kilobytes (KB) and 64 KB.
The secondary or level 2 cache typically resides on a memory card located near
the CPU. The level 2 cache has a direct connection to the CPU. A dedicated
integrated circuit on the motherboard, the L2 controller, regulates the use of the
level 2 cache by the CPU. Depending on the CPU, the size of the level 2 cache
ranges from 256 KB to 2 megabytes (MB). In most systems, data needed by the
CPU is accessed from the cache approximately 95 percent of the time, greatly
reducing the overhead needed when the CPU has to wait for data from the main
memory.
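The benefit of that ~95 percent hit rate can be quantified with the standard effective-access-time formula. The hit rate is from the text; the latency figures below are assumptions for illustration only.

```python
# Effective memory access time given the text's ~95% cache hit rate.
# Latency figures are assumed for illustration, not measured values.
hit_rate = 0.95
cache_ns = 5      # assumed cache latency
ram_ns = 60       # assumed main-memory latency (within the 50-70 ns range)

effective_ns = hit_rate * cache_ns + (1 - hit_rate) * ram_ns
print(f"{effective_ns:.2f} ns")  # 7.75 ns, far closer to cache than to RAM
```

Even a small miss rate matters: each percentage point of misses adds the full main-memory latency for that fraction of accesses.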
Volatility
Memory can be split into two main categories: volatile and nonvolatile. Volatile
memory loses any data as soon as the system is turned off; it requires constant
power to remain viable. Most types of RAM fall into this category.
A particular type of RAM, static random access memory (SRAM), is used primarily
for cache. SRAM uses multiple transistors, typically four to six, for each memory cell.
It has an external gate array known as a bistable multivibrator that switches, or
flip-flops, between two states. This means that it does not have to be continually
refreshed like DRAM.
Each cell will maintain its data as long as it has power. Without the need for
constant refreshing, SRAM can operate extremely quickly. But the complexity of
each cell makes it prohibitively expensive for use as standard RAM.
The final step in memory is the registers. These are memory cells built right into the
CPU that contain specific data needed by the CPU, particularly the arithmetic and
logic unit (ALU). An integral part of the CPU itself, they are controlled directly by the
compiler that sends information for the CPU to process. See How Microprocessors
Work for details on registers.
Computer.howstuffworks.com
Computer data storage
From Wikipedia, the free encyclopedia
1 GB of SDRAM mounted in a personal computer. An example of primary storage.
160 GB SDLT tape cartridge, an example of off-line storage. When used within a
robotic tape library, it is classified as tertiary storage instead.
Computer data storage, often called storage or memory, refers
to computer components and recording media that retain digital data used for
computing for some interval of time. Computer data storage provides one of the core
functions of the modern computer, that of information retention. It is one of the
fundamental components of all modern computers, and coupled with a central
processing unit (CPU, a processor), implements the basic computer model used
since the 1940s.
The contemporary distinctions are helpful, because they are also fundamental to the
architecture of computers in general. The distinctions also reflect an important and
significant technical difference between memory and mass storage devices, which
has been blurred by the historical usage of the term storage. Nevertheless, this
article uses the traditional nomenclature.
Contents
• 1 Purpose of storage
• 2 Hierarchy of storage
○ 2.1 Primary storage
○ 2.2 Secondary storage
○ 2.3 Tertiary storage
○ 2.4 Off-line storage
• 3 Characteristics of storage
○ 3.1 Volatility
○ 3.2 Differentiation
○ 3.3 Mutability
○ 3.4 Accessibility
○ 3.5 Addressability
○ 3.6 Capacity
○ 3.7 Performance
○ 3.8 Energy use
• 4 Fundamental storage technologies
○ 4.1 Semiconductor
○ 4.2 Magnetic
○ 4.3 Optical
○ 4.4 Paper
○ 4.5 Uncommon
• 5 Related technologies
○ 5.1 Network connectivity
○ 5.2 Robotic storage
• 6 See also
○ 6.1 Primary storage topics
○ 6.2 Secondary, tertiary and off-
line storage topics
○ 6.3 Data storage conferences
• 7 References
Purpose of storage
Many different forms of storage, based on various natural phenomena, have been
invented. So far, no practical universal storage medium exists, and all forms of
storage have some drawbacks. Therefore a computer system usually contains
several kinds of storage, each with an individual purpose.
A digital computer represents data using the binary numeral system. Text, numbers,
pictures, audio, and nearly any other form of information can be converted into a
string of bits, or binary digits, each of which has a value of 1 or 0. The most common
unit of storage is the byte, equal to 8 bits. A piece of information can be handled by
any computer whose storage space is large enough to accommodate the binary
representation of the piece of information, or simply data. For example, using eight
million bits, or about one megabyte, a typical computer could store a short novel.
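The bits-to-bytes conversion in that example is shown below as code. The figures are from the text; the one-byte-per-character reading is an assumption for plain text.

```python
# The byte arithmetic from the paragraph: 8 bits per byte, so eight million
# bits is about one megabyte.
bits = 8_000_000
bytes_ = bits // 8
print(bytes_)  # prints 1000000, about one megabyte
# At roughly one byte per character (an assumption for plain text),
# that is about a million characters, i.e. a short novel.
```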
Traditionally the most important part of every computer is the central processing
unit (CPU, or simply a processor), because it actually operates on data, performs
any calculations, and controls all the other components.
Various forms of storage, divided according to their distance from the central
processing unit. The fundamental components of a general-purpose computer
are arithmetic and logic unit, control circuitry, storage space,
and input/output devices. Technology and capacity as in common home
computers around 2005.
Primary storage
Primary storage (or main memory or internal memory), often referred to
simply as memory, is the only one directly accessible to the CPU. The CPU
continuously reads instructions stored there and executes them as required.
Any data actively operated on is also stored there in uniform manner.
Historically, early computers used delay lines, Williams tubes, or
rotating magnetic drums as primary storage. By 1954, those unreliable methods
were mostly replaced by magnetic core memory. Core memory remained
dominant until the 1970s, when advances in integrated circuit technology
allowed semiconductor memory to become economically competitive.
As shown in the diagram, traditionally there are two more sub-layers of the
primary storage, besides main large-capacity RAM:
Processor registers are located inside the processor. Each register typically
holds a word of data (often 32 or 64 bits). CPU instructions instruct
the arithmetic and logic unit to perform various calculations or other
operations on this data (or with the help of it). Registers are the fastest of all
forms of computer data storage.
Processor cache is an intermediate stage between ultra-fast registers and
much slower main memory. It is introduced solely to increase the performance of
the computer. The most actively used information in the main memory is
duplicated in the cache memory, which is faster but of much lesser capacity. On
the other hand, the cache is much slower but much larger than the processor
registers. A multi-level hierarchical cache setup is also commonly used: the
primary cache is smallest, fastest, and located inside the processor; the
secondary cache is somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via
a memory bus. It is actually two buses (not on the diagram): an address
bus and a data bus. The CPU first sends a number through the address bus, a
number called the memory address, that indicates the desired location of data.
Then it reads or writes the data itself using the data bus. Additionally, a memory
management unit (MMU) is a small device between CPU and RAM recalculating
the actual memory address, for example to provide an abstraction of virtual
memory or other tasks.
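The MMU's address recalculation can be sketched in miniature. This is a toy model under assumed parameters (4 KB pages, a made-up page table); real MMUs are hardware with TLBs, permission bits, and fault handling.

```python
# A minimal sketch of what an MMU does: split a virtual address into a page
# number and an offset, look the page up in a page table, and emit the
# physical address. The page size and table contents are invented.
PAGE_SIZE = 4096                     # assumed 4 KB pages

page_table = {0: 7, 1: 3, 2: 9}      # virtual page -> physical frame (made up)

def translate(virtual_addr):
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table[page]          # a missing entry would be a "page fault"
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 12292
```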
As the RAM types used for primary storage are volatile (cleared at start up), a
computer containing only such storage would not have a source to read
instructions from, in order to start the computer. Hence, non-volatile primary
storage containing a small startup program (BIOS) is used to bootstrap the
computer, that is, to read a larger program from non-volatile secondary storage
to RAM and start to execute it. A non-volatile technology used for this purpose
is called ROM, for read-only memory (the terminology may be somewhat
confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates are possible;
however it is slow and memory must be erased in large portions before it can be
re-written. Some embedded systems run programs directly from ROM (or
similar), because such programs are rarely changed. Standard computers do
not store non-rudimentary programs in ROM; rather, they use large capacities of
secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what
was historically called, respectively, secondary storage and tertiary storage.[2]
Secondary storage
In modern computers, hard disk drives are usually used as secondary storage.
The time taken to access a given byte of information stored on a hard disk is
typically a few thousandths of a second, or milliseconds. By contrast, the time
taken to access a given byte of information stored in random access memory is
measured in billionths of a second, or nanoseconds. This illustrates the
significant access-time difference which distinguishes solid-state memory from
rotating magnetic storage devices: hard disks are typically about a million times
slower than memory. Rotating optical storage devices, such
as CD and DVD drives, have even longer access times. With disk drives, once
the disk read/write head reaches the proper placement and the data of interest
rotates under it, subsequent data on the track are very fast to access. As a
result, in order to hide the initial seek time and rotational latency, data are
transferred to and from disks in large contiguous blocks.
When data reside on disk, block access to hide latency offers a ray of hope in
designing efficient external memory algorithms. Sequential or block access on
disks is orders of magnitude faster than random access, and many
sophisticated paradigms have been developed to design efficient algorithms
based upon sequential and block access. Another way to reduce the I/O
bottleneck is to use multiple disks in parallel in order to increase the bandwidth
between primary and secondary memory.[3]
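Why large contiguous blocks hide seek time is simple arithmetic: the fixed per-access latency dominates small reads. The timing figures below are assumptions chosen only to make the effect visible.

```python
# Why disks transfer large contiguous blocks: the fixed seek + rotational
# latency dominates small reads. All timing figures are assumed.
seek_ms = 10.0            # assumed average seek + rotational latency
transfer_mb_per_s = 50.0  # assumed sequential transfer rate

def read_time_ms(total_mb, block_mb):
    """Total time to read total_mb using separate reads of block_mb each."""
    blocks = total_mb / block_mb
    return blocks * (seek_ms + (block_mb / transfer_mb_per_s) * 1000)

# Reading 100 MB in 4 KB pieces vs one 100 MB sequential read:
print(read_time_ms(100, 0.004))  # ~25,000 seeks dominate the total
print(read_time_ms(100, 100))    # one seek plus the transfer itself
```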
Most computer operating systems use the concept of virtual memory, allowing
utilization of more primary storage capacity than is physically available in the
system. As the primary memory fills up, the system moves the least-used
chunks (pages) to secondary storage devices (to a swap file or page file),
retrieving them later when they are needed. The more of these retrievals from
slower secondary storage that are necessary, the more overall system
performance is degraded.
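The paging behaviour described above can be sketched as a toy simulation. The capacity, page names, and least-recently-used eviction policy are assumptions for illustration; real operating systems use more sophisticated replacement algorithms.

```python
# A toy sketch of paging: when RAM is full, the least-recently-used page is
# moved to a swap area, and swapped pages are pulled back in on access.
from collections import OrderedDict

RAM_CAPACITY = 3
ram = OrderedDict()   # page -> data, ordered by recency of use
swap = {}

def access(page):
    if page in ram:
        ram.move_to_end(page)            # mark as most recently used
        return
    if page in swap:                     # retrieve from slower storage
        ram[page] = swap.pop(page)
    else:
        ram[page] = f"data-{page}"
    if len(ram) > RAM_CAPACITY:          # evict the least-used page
        victim, data = ram.popitem(last=False)
        swap[victim] = data

for p in [1, 2, 3, 1, 4]:   # accessing page 4 evicts page 2
    access(p)
print(sorted(ram), sorted(swap))  # [1, 3, 4] [2]
```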
Tertiary storage
Large tape library. Tape cartridges placed on shelves in the front, robotic arm
moving in the back. Visible height of the library is about 180 cm.
When a computer needs to read information from the tertiary storage, it will first
consult a catalog database to determine which tape or disc contains the
information. Next, the computer will instruct a robotic arm to fetch the medium
and place it in a drive. When the computer has finished reading the information,
the robotic arm will return the medium to its place in the library.
Off-line storage
Off-line storage is computer data storage on a medium or a device that is not
under the control of a processing unit.[5] The medium is recorded, usually in a
secondary or tertiary storage device, and then physically removed or
disconnected. It must be inserted or connected by a human operator before a
computer can access it again. Unlike tertiary storage, it cannot be accessed
without human interaction.
Off-line storage is used to transfer information, since the detached medium can
be easily physically transported. Additionally, if a disaster, for example a
fire, destroys the original data, a medium in a remote location will probably be
unaffected, enabling disaster recovery. Off-line storage increases general
information security, since it is physically inaccessible from a computer,
and data confidentiality or integrity cannot be affected by computer-based attack
techniques. Also, if information stored for archival purposes is rarely or never
accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are
also used for off-line storage. Optical discs and flash memory devices are most
popular, and to a much lesser extent removable hard disk drives. In enterprise
uses, magnetic tape is predominant. Older examples are floppy disks, Zip disks,
or punched cards.
Fundamental storage technologies
Semiconductor
Semiconductor memory uses semiconductor-based integrated circuits to store
information. A semiconductor memory chip may contain millions of
tiny transistors or capacitors. Both volatile and non-volatile forms of semiconductor
memory exist. In modern computers, primary storage almost exclusively consists of
dynamic volatile semiconductor memory or dynamic random access memory. Since
the turn of the century, a type of non-volatile semiconductor memory known as flash
memory has steadily gained share as off-line storage for home computers. Non-
volatile semiconductor memory is also used for secondary storage in various
advanced electronic devices and specialized computers.
Magnetic
Magnetic storage uses different patterns of magnetization on a magnetically coated
surface to store information. Magnetic storage is non-volatile. The information is
accessed using one or more read/write heads which may contain one or more
recording transducers. A read/write head only covers a part of the surface so that
the head or medium or both must be moved relative to another in order to access
data. In modern computers, magnetic storage takes these forms:
• Magnetic disk
○ Floppy disk, used for off-line storage
○ Hard disk drive, used for secondary storage
• Magnetic tape data storage, used for tertiary and off-line storage
In early computers, magnetic storage was also used for primary storage in a form
of magnetic drum, or core memory, core rope memory, thin-film memory, twistor
memory or bubble memory. Also unlike today, magnetic tape was often used for
secondary storage.
Optical
Optical storage, the typical optical disc, stores information in deformities on the
surface of a circular disc and reads this information by illuminating the surface with
a laser diode and observing the reflection. Optical disc storage is non-volatile. The
deformities may be permanent (read only media), formed once (write once media)
or reversible (recordable or read/write media). The following forms are currently in
common use:[12]
• CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of
digital information (music, video, computer programs)
• CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line
storage
• CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage,
used for tertiary and off-line storage
• Ultra Density Optical (UDO): similar in capacity to BD-R or BD-RE; slow write,
fast read storage used for tertiary and off-line storage
Magneto-optical disc storage is optical disc storage where the magnetic state on
a ferromagnetic surface stores information. The information is read optically and
written by combining magnetic and optical methods. Magneto-optical disc storage
is non-volatile, sequential access, slow write, fast read storage used for tertiary
and off-line storage.

3D optical data storage has also been proposed.
Paper
Paper data storage, typically in the form of paper tape or punched cards, has
long been used to store information for automatic processing, particularly before
general-purpose computers existed. Information was recorded by punching holes
into the paper or cardboard medium and was read mechanically (or later optically)
to determine whether a particular location on the medium was solid or contained
a hole. A few technologies allow people to make marks on paper that are easily
read by machine; these are widely used for tabulating votes and grading
standardized tests. Barcodes made it possible for any object that was to be sold
or transported to have some computer-readable information securely attached to it.
Uncommon
Vacuum tube memory
A Williams tube used a cathode ray tube, and a Selectron tube used a
large vacuum tube to store information. These primary storage devices were
short-lived in the market, since the Williams tube was unreliable and the
Selectron tube was expensive.
Electro-acoustic memory
Delay line memory used sound waves in a substance such as mercury to
store information. Delay line memory was dynamic volatile, cycle sequential
read/write storage, and was used for primary storage.
Optical tape
Optical tape is a medium for optical storage generally consisting of a long and narrow strip
of plastic onto which patterns can be written and from which the patterns can
be read back. It shares some technologies with cinema film stock and optical
discs, but is compatible with neither. The motivation behind developing this
technology was the possibility of far greater storage capacities than either
magnetic tape or optical discs.
Phase-change memory
Phase-change memory uses different mechanical phases of a phase-change material to store
information in an X-Y addressable matrix, and reads the information by
observing the varying electrical resistance of the material. Phase-change
memory would be non-volatile, random access read/write storage, and might
be used for primary, secondary and off-line storage. Most rewritable and
many write once optical disks already use phase change material to store
information.
Holographic data storage
Holographic data storage stores information optically inside crystals or photopolymers. Holographic
storage can utilize the whole volume of the storage medium, unlike optical
disc storage which is limited to a small number of surface layers. Holographic
storage would be non-volatile, sequential access, and either write once or
read/write storage. It might be used for secondary and off-line storage.
See Holographic Versatile Disc (HVD).
Molecular memory
Molecular memory stores information in a polymer that can store electric charge. Molecular
memory might be especially suited for primary storage. The theoretical
storage capacity of molecular memory is 10 terabits per square inch.[13]
Related technologies
Network connectivity
A secondary or tertiary storage may connect to a computer utilizing computer
networks. This concept does not pertain to primary storage, which is shared
between multiple processors to a much lesser degree.
Direct-attached storage (DAS) is traditional mass storage that does not use any
network. This is still the most popular approach. The term was coined recently,
together with NAS and SAN.

Network-attached storage (NAS) is mass storage attached to a computer which
another computer can access at file level over a local area network, a private
wide area network, or in the case of online file storage, over the Internet. NAS
is commonly associated with the NFS and CIFS/SMB protocols.

Storage area network (SAN) is a specialized network that provides other
computers with storage capacity. The crucial difference between NAS and SAN is
that the former presents and manages file systems to client computers, whilst the
latter provides access at block-addressing (raw) level, leaving it to attaching
systems to manage data or file systems within the provided capacity. SAN is
commonly associated with Fibre Channel networks.
Robotic storage
Large quantities of individual magnetic tapes, and optical or magneto-optical
discs may be stored in robotic tertiary storage devices. In the tape storage field
they are known as tape libraries, and in the optical storage field optical
jukeboxes, or optical disk libraries per analogy. The smallest forms of either
technology, containing just one drive device, are referred to as autoloaders or
autochangers.

Robotic-access storage devices may have a number of slots, each holding
individual media, and usually one or more picking robots that traverse the slots
and load media to built-in drives. The arrangement of the slots and picking
devices affects performance. Important characteristics of such storage are the
possible expansion options: adding slots, modules, drives, robots. Tape libraries
may have from 10 to more than 100,000 slots, and provide terabytes or petabytes
of near-line information. Optical jukeboxes are somewhat smaller solutions, up to
1,000 slots.

Robotic storage is used for backups and for high-capacity archives in imaging,
medical, and video industries. Hierarchical storage management is the best-known
archiving strategy of automatically migrating long-unused files from fast hard
disk storage to libraries or jukeboxes. If the files are needed, they are
retrieved back to disk.