FOC Project
Roll number:- 04
Semester:- 1st
CERTIFICATE
List of Contents
>History
>MOS Memory
>Volatile memory
>Non-Volatile memory
>Semi-Volatile memory
>Management
>Virtual Memory
Computer memory operates at a high speed compared to storage that is slower but less
expensive and higher in capacity. Besides storing opened programs, computer memory
serves as disk cache and write buffer to improve both reading and writing performance.
Operating systems borrow RAM capacity for caching as long as it is not needed by running
software. If needed, contents of the computer memory can be transferred to storage; a
common way of doing this is through a memory management technique called virtual
memory.
In the early 1940s, memory technology often permitted a capacity of a few bytes. The first
electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes,
could perform simple calculations involving 20 numbers of ten decimal digits stored in the
vacuum tubes.
The next significant advance in computer memory came with acoustic delay-line memory,
developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube
filled with mercury and plugged at each end with a quartz crystal, delay lines could store
bits of information in the form of sound waves propagating through the mercury, with the
quartz crystals acting as transducers to read and write bits. Delay-line memory was limited
to a capacity of up to a few thousand bits.
Two alternatives to the delay line, the Williams tube and selectron tube, originated in 1946,
both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred
Williams invented the Williams tube, which was the first random-access computer
memory. The Williams tube was able to store more information than the Selectron tube
(the Selectron was limited to 256 bits, while the Williams tube could store thousands) and
was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental
disturbances.
Efforts began in the late 1940s to find non-volatile memory. Magnetic-core
memory allowed for recall of memory after power loss. It was developed by Frederick W.
Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in
the early 1950s, before being commercialised with the Whirlwind computer in
1953. Magnetic-core memory was the dominant form of memory until the development
of MOS semiconductor memory in the 1960s.
The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s
using bipolar transistors.[8] Semiconductor memory made from discrete devices was first
shipped by Texas Instruments to the United States Air Force in 1961. The same year, the
concept of solid-state memory on an integrated circuit (IC) chip was proposed
by applications engineer Bob Norman at Fairchild Semiconductor.[9] The first bipolar
semiconductor memory IC chip was the SP95 introduced by IBM in 1965.[8] While
semiconductor memory offered improved performance over magnetic-core memory, it
remained larger and more expensive and did not displace magnetic-core memory until the
late 1960s.[8][10]
MOS MEMORY
VOLATILE MEMORY
Volatile memory is a type of memory that maintains its data only while the device is
powered. If the power is interrupted for any reason, the data is lost. Volatile memory is
used extensively in computers -- ranging from servers to laptops -- as well as in other
devices, such as printers, LCD displays, routers, cell phones, wearables and medical
equipment.
In a computer, volatile memory is typically used for the system's random access memory
(RAM), both the main memory and the processor's L1, L2 and L3 cache. It is distinguished
from nonvolatile storage -- such as solid-state drives (SSDs), hard disk drives (HDDs)
or optical disks -- by the fact that nonvolatile devices retain their data even when their
power is cut off.
Volatile memory is used for a computer's RAM because it is much faster to read from and
write to than today's nonvolatile memory devices. Even the latest storage class memory
(SCM) devices such as Intel Optane can't match the performance of the current RAM
modules, especially the processor cache. However, the data in RAM stays there only while
the computer is running; when the computer is shut off, RAM loses its data.
For this reason, RAM is typically used along with nonvolatile memory, which does not lose
its data when the computer's power is turned off or the storage device is disconnected from
a power source. Nonvolatile memory also does not need to have its memory content
periodically refreshed like some volatile memory. In addition, nonvolatile storage is
cheaper and can hold much more data. Even so, today's computers require the fastest
memory and cache possible, which means sticking with volatile memory until a better
technology comes along.
Most of today's computers use dynamic RAM (DRAM) for the main memory and static
RAM (SRAM) for processor cache. DRAM supports greater densities than SRAM, and it is
cheaper. However, DRAM also requires more power and does not perform as well as
SRAM. One of the biggest challenges with DRAM is that the capacitors used for storing the
data tend to leak electrons and lose their charge. This means that DRAM memory devices
need to be refreshed periodically to retain their data, which can affect access speeds and
increase power usage.
NON-VOLATILE MEMORY
Non-volatile memory (NVM) is a type of computer memory that has the capability to hold
saved data even if the power is turned off. Unlike volatile memory, NVM does not require
its memory data to be periodically refreshed. It is commonly used for secondary storage or
long-term consistent storage.
Non-volatile memory is highly popular among digital media; it is widely used in memory
chips for USB memory sticks and digital cameras. Non-volatile memory can eliminate the
need for relatively slow types of secondary storage systems, such as hard disks.
Mechanically addressed systems make use of a contact structure to write and read
on a selected storage medium. The amount of data stored this way is much larger
than what's possible in electrically addressed systems. A few examples of
mechanically addressed systems are optical disks, hard disks, holographic memory
and magnetic tapes.
Electrically addressed systems are categorized based on the write mechanism. They are
costly but faster than mechanically addressed systems, which are affordable but slow. A
few examples of electrically addressed systems are flash memory, FRAM and MRAM.
SEMI-VOLATILE MEMORY
A third category of memory is semi-volatile. The term is used to describe a memory that
has some limited non-volatile duration after power is removed, but then data is ultimately
lost. A typical goal when using a semi-volatile memory is to provide the high performance
and durability associated with volatile memories while providing some benefits of non-
volatile memory.
For example, some non-volatile memory types experience wear when written. A worn cell
has increased volatility but otherwise continues to work. Data locations which are written
frequently can thus be directed to use worn circuits. As long as the location is updated
within some known retention time, the data stays valid. After a period of time without
update, the value is copied to a less-worn circuit with longer retention. Writing first to the
worn area allows a high write rate while avoiding wear on the not-worn circuits.[37]
The term semi-volatile is also used to describe semi-volatile behavior constructed from
other memory types. For example, a volatile and a non-volatile memory may be combined,
where an external signal copies data from the volatile memory to the non-volatile memory,
but if power is removed before the copy occurs, the data is lost.
Management
The process address space is the set of logical addresses that a process references in its
code. For example, when 32-bit addressing is in use, addresses can range from 0 to
0x7fffffff; that is, 2^31 possible numbers, for a total theoretical size of 2 gigabytes.
The operating system takes care of mapping the logical addresses to physical addresses at
the time of memory allocation to the program. There are three types of addresses used in a
program before and after memory is allocated:
1 Symbolic addresses
The addresses used in a source code. The variable names, constants, and instruction labels are
the basic elements of the symbolic address space.
2 Relative addresses
At the time of compilation, a compiler converts symbolic addresses into relative addresses.
3 Physical addresses
The loader generates these addresses at the time when a program is loaded into main memory.
Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes. Virtual and physical addresses differ in the execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred
to as a physical address space.
The runtime mapping from virtual to physical address is done by the memory management
unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert
virtual addresses to physical addresses.
The value in the base register is added to every address generated by a user
process, which is treated as an offset, at the time it is sent to memory. For
example, if the base register value is 10000, then an attempt by the user to use
address location 100 will be dynamically relocated to location 10100.
The user program deals with virtual addresses; it never sees the real physical
addresses.
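As a rough illustration of this base-register scheme, here is a minimal C sketch. The BASE_REGISTER and LIMIT_REGISTER values are illustrative (the limit check mirrors the relocation-register protection described under memory allocation below); a real MMU performs this translation in hardware on every memory reference:

#include <stdio.h>

#define BASE_REGISTER  10000u  /* value loaded by the OS for this process */
#define LIMIT_REGISTER 2000u   /* illustrative size of the logical address space */

/* Translate a logical (virtual) address to a physical address. */
unsigned translate(unsigned logical)
{
    if (logical >= LIMIT_REGISTER) {
        /* Hardware would raise an addressing-error trap here. */
        fprintf(stderr, "trap: address %u out of range\n", logical);
        return 0;
    }
    return BASE_REGISTER + logical;
}

int main(void)
{
    printf("logical 100 -> physical %u\n", translate(100)); /* prints 10100 */
    return 0;
}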
The choice between static and dynamic loading is made at the time the computer
program is being developed. If you have to load your program statically, then at the time of
compilation the complete program will be compiled and linked without leaving any
external program or module dependency. The linker combines the object program with
other necessary object modules into an absolute program, which also includes logical
addresses.
If you are writing a dynamically loaded program, then your compiler will compile the
program, and for all the modules which you want to include dynamically, only references
will be provided; the rest of the work is done at the time of execution.
At the time of loading, with static loading, the absolute program (and data) is loaded into
memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are stored on a disk in
relocatable form and are loaded into memory only when they are needed by the program.
As explained above, when static linking is used, the linker combines all other modules
needed by a program into a single executable program to avoid any runtime dependency.
When dynamic linking is used, it is not required to link the actual module or library with
the program, rather a reference to the dynamic module is provided at the time of
compilation and linking. Dynamic Link Libraries (DLL) in Windows and Shared Objects
in Unix are good examples of dynamic libraries.
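To make dynamic loading and linking concrete, here is a hedged POSIX sketch using the dlfcn API (dlopen/dlsym/dlclose). The library name libm.so.6 and the symbol cos are simply convenient real examples, the code assumes a Unix-like system, and on Windows the analogous calls would be LoadLibrary and GetProcAddress:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load the shared object only now, at run time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve a symbol from the library and call it through the pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}

(Compile with -ldl on glibc-based systems.)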
Memory Allocation
1 Single-partition allocation
In this type of allocation, the relocation-register scheme is used to protect user
processes from each other, and from changing operating-system code and data. The
relocation register contains the value of the smallest physical address, whereas the
limit register contains the range of logical addresses. Each logical address must be
less than the limit register.
2 Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized
partitions where each partition should contain only one process. When a partition is
free, a process is selected from the input queue and is loaded into the free partition.
When the process terminates, the partition becomes available for another process.
VIRTUAL MEMORY
Ideally, the data needed to run applications is stored in RAM, where it can be
accessed quickly by the CPU. But when large applications are being run, or when
many applications are running at once, the system’s RAM may become full.
To get around this problem, some data stored in RAM that is not actively being used
can be temporarily moved to virtual memory (which is physically located on a hard
drive or other storage device). This frees up space in RAM, which can then be used to
accommodate data which the system needs to access imminently.
By swapping data between RAM and virtual memory when it is not needed and
back from virtual memory to RAM when it is needed, a system can continue to work
smoothly with far less physical RAM than it would otherwise require.
Virtual memory enables a system to run larger applications or run more applications
at the same time without running out of RAM. Specifically, the system can operate
as if its total RAM resources were equal to the amount of physical RAM, plus the
amount of virtual memory.
Virtual memory was developed when physical RAM was very expensive, and RAM
is still more expensive per gigabyte than storage media such as hard disks and solid
state drives. For that reason it is much less costly to use a combination of physical
RAM and virtual memory than to equip a computer system with more RAM.
Since using virtual memory (or increasing virtual memory) has no extra financial cost
(because it uses existing storage space) it offers a way for a computer to use more
memory than is physically available on the system.
Another key driver for the use of virtual memory is that all computer systems have a
limit (dictated by hardware and software) on the amount of physical RAM that can
be installed. Using virtual memory allows the system to continue to operate beyond
those physical RAM limits.
Virtual Memory vs. Physical Memory
Since RAM is more expensive than virtual memory, it would seem – all things being
equal – that computers should be equipped with as little RAM and as much virtual
memory as possible.
But in fact the characteristics of virtual memory are different than those of physical
memory. The key difference between virtual memory and physical memory is that
RAM is very much faster than virtual memory.
So a system with 2 GB of physical RAM and 2 GB of virtual memory will not offer
the same performance as a similar system with 4 GB of physical RAM. To understand
why, it is necessary to understand how virtual memory works.
The responsibility for keeping track of all this data as it is swapped between physical
and virtual memory falls to the computer’s memory manager. The memory manager
maintains a table which maps virtual addresses used by the operating system and
applications to the physical addresses that data is actually stored in. When data is
swapped between RAM and virtual memory, the table is updated so that a given
virtual address always points to the correct physical location.
A computer can only run threads and manipulate data that is stored in RAM rather
than virtual memory. And it takes a non-negligible amount of time to swap data that
is needed into RAM. Consequently, it follows that using virtual memory involves a
performance hit.
Put another way, a system with 4 GB RAM will generally offer higher performance
than a system with 2 GB RAM and 2 GB virtual memory because of the performance
hit caused by swapping, and for that reason it is said that virtual memory is slower
than RAM.
One potential problem with virtual memory is that if the amount of RAM present is
too small compared to the amount of virtual memory then a system can end up
spending a large proportion of its CPU resources swapping data back and forth.
Meanwhile, performance of useful work grinds to a near halt – a process known
as thrashing.
To minimize the performance hit caused by swapping between physical and virtual
memory, it is best to use the fastest storage device connected to the system to host the
virtual memory, and to locate the virtual memory storage area on its own partition.
Virtual memory can act in concert with a computer’s main memory to enable faster,
more fluid operations.
How to Increase Virtual Memory in a System
Most operating systems allow users to increase virtual memory from a configuration
page.
In Windows, users can also allow the system to manage the amount of
virtual memory provided dynamically.
Similarly, in the Mac OS, users can use the preferences panel to allot virtual
memory.
During the normal course of operations, pages (i.e. memory blocks of 4K in size) are
swapped between RAM and a page file, which represents the virtual memory.
Over time, this swapping can fragment memory: as small chunks build up, fewer and
fewer segments of useful size can be allocated. And if the OS does start using these small
segments, there is a huge number of them to keep track of, and each process needs to use
many different segments, which is inefficient and can reduce performance.
Even though RAM is now relatively inexpensive compared to its cost when virtual
memory was first developed, it is still extremely useful and it is still employed in
many, perhaps most, computer systems. The key problem with virtual memory
relates to performance.
• Provides a way to increase memory which is less costly than buying more RAM.
• Takes up storage space which could otherwise be used for long-term data storage.
Protected Memory
The MPU register might look complicated, but as long as you have a clear idea of the
memory regions that are required for your application, it should not be difficult. Typically,
you need to have the following memory regions:
• Program code for privileged programs (for example, OS kernel and exception
handlers)
• Program code for user programs
• Data memory for privileged and user programs in various memory regions (e.g.,
data and stack of the application situated in the SRAM (Static Random Access
Memory) region, 0x20000000 to 0x3FFFFFFF)
• Other peripherals
It is not necessary to set up a region for the memory in the private peripheral bus range.
The MPU automatically recognizes the private peripheral bus memory addresses and
allows privileged software to perform data accesses in this region.
For Cortex-M3 products, most memory regions can be set up with TEX = b000, C = 1, B =
1. System devices such as the Nested Vectored Interrupt Controller (NVIC) should be
strongly ordered, and peripheral regions can be programmed as shared devices (TEX =
b000, C = 0, B = 1). However, if you want to make sure that any bus faults occurring in the
region are precise bus faults, you should use a strongly ordered memory attribute (TEX =
b000, C = 0, B = 0) so that write buffering is disabled. However, doing so can reduce system
performance.
For users of a Cortex Microcontroller Software Interface Standard (CMSIS) compliant
device driver, the MPU registers can be accessed using CMSIS register names such as
MPU->CTRL, MPU->RNR, MPU->RBAR, and MPU->RASR.
Before the MPU is enabled and if the vector table is relocated to RAM, remember to set up
the fault handler for the memory management fault in the vector table, and enable the
memory management fault in the System Handler Control and State register. They are
needed to allow the memory management fault handler to be executed if an MPU violation
takes place.
For a simple case of only four required regions, the MPU setup code (without the region
checking and enabling) looks like this:
MPU->RNR  = 0;          // MPU Region Number Register: select region 0
MPU->RBAR = 0x00000000; // MPU Region Base Address Register: base of region 0
MPU->RASR = (3UL << 24) // MPU Region Attribute and Size Register: AP = full access,
          | (19UL << 1) // SIZE field 19 gives a 2^(19+1) = 1 MB region
          | 1UL;        // region enable bit (values are illustrative)
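The fragment above covers only the start of region 0. As a rough, non-authoritative sketch of what a full four-region setup might look like using the CMSIS register names, where all base addresses, region sizes, and attribute values are illustrative assumptions rather than values from the text:

#include "core_cm3.h"  /* CMSIS-Core header for Cortex-M3 (assumed available) */

/* Build a RASR value from AP (access permissions), TEX/C/B (memory type),
   and SIZE (region size = 2^(size+1) bytes); bit 0 is the region enable. */
#define REGION_RASR(ap, tex, c, b, size) \
    (((ap) << 24) | ((tex) << 19) | ((c) << 17) | ((b) << 16) | \
     ((size) << 1) | 1UL)

void mpu_setup(void)
{
    MPU->RNR  = 0;                                /* region 0: privileged code */
    MPU->RBAR = 0x00000000;                       /* flash base (illustrative) */
    MPU->RASR = REGION_RASR(3UL, 0UL, 1UL, 1UL, 17UL); /* 256 KB, C=1, B=1 */

    MPU->RNR  = 1;                                /* region 1: user code */
    MPU->RBAR = 0x00040000;
    MPU->RASR = REGION_RASR(3UL, 0UL, 1UL, 1UL, 17UL);

    MPU->RNR  = 2;                                /* region 2: SRAM data/stack */
    MPU->RBAR = 0x20000000;
    MPU->RASR = REGION_RASR(3UL, 0UL, 1UL, 1UL, 16UL); /* 128 KB */

    MPU->RNR  = 3;                                /* region 3: peripherals as a
                                                     shared device (C=0, B=1) */
    MPU->RBAR = 0x40000000;
    MPU->RASR = REGION_RASR(3UL, 0UL, 0UL, 1UL, 28UL); /* 512 MB */

    MPU->CTRL = MPU_CTRL_ENABLE_Msk;              /* enable the MPU */
    __DSB();                                      /* ensure the new settings */
    __ISB();                                      /* take effect immediately */
}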
The operating system manages the page tables based on process creation and underlying
kernel service calls. Each process has an active page table in context when the process is
executing. The CPU contains a process ID register (CR3 on Intel architecture) that is used
to select the appropriate tree within the page table hierarchy. One of the tasks the
operating system performs is to update the CR3 value in the CPU during context switches.
The mapping between process address space and physical pages is often highly
fragmented; there is no mathematical correlation between the virtual and physical address
(it must be identified via page table lookup). On the other hand, in many systems, the page
tables configured to map kernel space usually map a large, virtually contiguous area within
the kernel to a large, physically contiguous area in physical memory. This simplifies the
calculation/translation between kernel virtual and physical address for the kernel
mappings. The addition/subtraction of an address offset can be used to convert between
kernel virtual and physical addresses. However, this should not be relied upon. You must
always use OS-provided functions to translate addresses. This attribute is a system
optimization to make the translation as efficient as possible.
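As a purely illustrative sketch of that constant-offset relationship, where the PAGE_OFFSET value is a made-up placeholder and, as stated above, real code should always use the OS-provided helpers (such as Linux's virt_to_phys/phys_to_virt):

#include <stdint.h>

#define PAGE_OFFSET 0xC0000000u  /* hypothetical base of the kernel's direct mapping */

/* Valid only for addresses inside the kernel's direct-mapped region. */
static inline uint32_t kernel_virt_to_phys(uint32_t vaddr)
{
    return vaddr - PAGE_OFFSET;
}

static inline uint32_t kernel_phys_to_virt(uint32_t paddr)
{
    return paddr + PAGE_OFFSET;
}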
In full-featured operating systems, the physical memory pages may be copied to a storage
device. This process is known as swapping. The virtual to physical mapping for the process
physical page will be removed. When an application attempts to access the virtual address
that previously had a physical page mapped, the MMU will generate a page fault. The page
fault handler copies the page from the drive into memory (not likely to be the same
physical memory used previously), sets up the correct page mapping, and returns from the
page fault exception, at which point the instruction that generated the fault is re-executed,
this time without faulting. In many embedded systems, there is no swap storage and
swapping is disabled.
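A small user-space sketch can make the fault-and-retry sequence visible: mmap() below only reserves address space, and the kernel maps a physical frame to each page on its first touch. This demonstrates demand paging on a POSIX system; it does not exercise swap storage itself:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 * 4096;  /* sixteen 4-KB pages */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* The first touch of each page faults; the kernel allocates a frame,
       fixes the page table, and restarts the faulting instruction. */
    for (size_t i = 0; i < len; i += 4096)
        p[i] = 1;

    munmap(p, len);
    return 0;
}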
Some embedded systems have no memory management unit. In this case the virtual and
linear address spaces are mapped 1:1. Each process must be created and linked to a target
physical address range in which it will reside during runtime. There is far less protection
between processes in this case (code from one process can access/destroy memory
belonging to another process). Memory backing the malloc() call is only allocated to the
process that allocates it. There is also a series of calls that can be used to allocate memory
that is shared between processes, such as a 4-K page shared between two processes; the
calls must be coordinated by both processes.
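The exact call sequence is OS-specific, but a hedged POSIX sketch of sharing one 4-KB page between two cooperating processes could look like this. The object name "/demo_page" is invented for the illustration; a second process would call shm_open with the same name, without O_CREAT, and perform the same mmap:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Create and size a named shared-memory object (one 4-KB page). */
    int fd = shm_open("/demo_page", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    /* Map the object; the kernel backs every process that maps the same
       name with the same physical page. */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "visible to both processes");

    munmap(p, 4096);
    close(fd);
    shm_unlink("/demo_page");  /* cleanup by the creating process */
    return 0;
}

(Link with -lrt on older glibc versions.)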
EXAMPLES
READ-ONLY MEMORY (ROM)
Data stored in ROM can only be read and cannot be modified. ROM is used to store
the firmware of a device, which is the basic software that is required to run the
device. ROM is also used to store the operating system of a computer or other
type of electronic device. ROM is typically stored on a chip that is located on the
motherboard of a computer or other type of electronic device. ROM chips are
usually soldered onto the motherboard, which makes them difficult to replace.
ROM is a type of non-volatile memory, which means that the data stored in ROM
is not lost when the power is turned off. This is in contrast to RAM, which is a
type of volatile memory that is erased when the power is turned off. ROM is a key
part of the firmware of a device and is essential for the proper functioning of the
device.
RANDOM ACCESS MEMORY (RAM)
RAM, which stands for Random Access Memory, is a hardware device generally located
on the motherboard of a computer and acts as the internal memory of the CPU. It allows
the CPU to store data, programs, and program results when the computer is switched on. It is
the read and write memory of a computer, which means the information can be written
to it as well as read from it.
RAM is a volatile memory, which means it does not store data or instructions
permanently. When you switch on the computer, data and instructions from the hard
disk are stored in RAM; for example, when the computer is rebooted, or when you open a
program, the operating system (OS) and the program are loaded into RAM, generally
from an HDD or SSD. The CPU utilizes this data to perform the required tasks. As soon as
you shut down the computer, the RAM loses the data. So, the data remains in the RAM
as long as the computer is on and lost when the computer is turned off. The benefit of
loading data into RAM is that reading data from the RAM is much faster than reading
from the hard drive.
In simple words, we can say that RAM is like a person's short-term memory, and hard
drive storage is like a person's long-term memory. Short-term memory remembers
things for a short duration, whereas long-term memory remembers for a long duration.
Short-term memory can be refreshed with information stored in the brain's long-term
memory. A computer also works like this; when the RAM fills up, the processor goes to
the hard disk to overwrite the old data in RAM with new data. It is like a reusable scratch
paper on which you can write notes, numbers, etc., with a pencil. If you run out of space
on the paper, you may erase what you no longer need; RAM also behaves like this, the
unnecessary data on the RAM is deleted when it fills up, and it is replaced with new data
from the hard disk which is required for the current operations.
RAM comes in the form of a chip that is individually mounted on the motherboard or in
the form of several chips on a small board connected to the motherboard. It is the main
memory of a computer. It is faster to write to and read from as compared to other
memories such as a hard disk drive (HDD), solid-state drive (SSD), optical drive, etc.
A computer's performance mainly depends on the size or storage capacity of the RAM.
If it does not have sufficient RAM (random access memory) to run the OS and software
programs, it will result in slower performance. So, the more RAM a computer has, the
faster it will work. Information stored in RAM is accessed randomly, not in a sequence as
on a CD or hard drive. So, its access time is much faster.
History of RAM:
o The first type of RAM was introduced in 1947 with the Williams tube. It was used in CRT
(cathode ray tube), and the data was stored as electrically charged spots on the face.
o The second type of RAM was a magnetic-core memory, invented in 1947. It was made of
tiny metal rings and wires connecting to each ring. A ring stored one bit of data, and it
could be accessed at any time.
o The RAM which we know today, as solid-state memory, was invented by Robert Dennard
in 1968 at IBM Thomas J Watson Research Centre. It is specifically known as dynamic
random access memory (DRAM) and stores each bit of data using a transistor and
capacitor pair. A constant supply of power is required to maintain the state of each cell.
o In October 1970, Intel introduced the Intel 1103, its first commercially available
DRAM.
o In 1993, Samsung introduced the KM48SL2000 synchronous DRAM (SDRAM).
o In 1996, DDR SDRAM was commercially available.
o In 1999, RDRAM was available for computers.
o In 2003, DDR2 SDRAM began being sold.
o In June 2007, DDR3 SDRAM started being sold.
o In September 2014, DDR4 became available in the market.
Types of RAM:
Integrated RAM chips can be of two types:
1) Static RAM:
Static RAM (SRAM) is a type of random access memory that retains its state for data bits,
or holds data, as long as it receives power. It is made up of memory cells and is
called static RAM because it does not need to be refreshed on a regular basis: unlike
dynamic RAM, it has no capacitors that leak charge and need topping up. So, it is faster
than DRAM.
It has a special arrangement of transistors that makes a flip-flop, a type of memory cell.
One memory cell stores one bit of data. Most modern SRAM memory cells are
made of six CMOS transistors and have no capacitors. The access time in SRAM chips can be
as low as 10 nanoseconds, whereas the access time in DRAM usually remains above 50
nanoseconds.
Furthermore, its cycle time is much shorter than that of DRAM as it does not pause
between accesses. Due to these advantages, SRAM is
primarily used for system cache memory and high-speed registers, and for small memory
banks such as a frame buffer on graphics cards.
The Static RAM is fast because the six-transistor configuration of its circuit maintains the
flow of current in one direction or the other (0 or 1). The 0 or 1 state can be written and
read instantly without waiting for the capacitor to fill up or drain. The early
asynchronous static RAM chips performed read and write operations sequentially, but
the modern synchronous static RAM chips overlap read and write operations.
The drawback with Static RAM is that its memory cells occupy more space on a chip
than the DRAM memory cells for the same amount of storage space (memory) as it has
more parts than a DRAM. So, it offers less memory per chip.
2) Dynamic RAM:
Dynamic RAM (DRAM) is also made up of memory cells. It is an integrated circuit (IC)
made of millions of transistors and capacitors which are extremely small in size and each
transistor is lined up with a capacitor to create a very compact memory cell so that
millions of them can fit on a single memory chip. So, a memory cell of a DRAM has one
transistor and one capacitor and each cell represents or stores a single bit of data in its
capacitor within an integrated circuit.
The capacitor holds this bit of information or data, either as 0 or as 1. The transistor,
which is also present in the cell, acts as a switch that allows the electric circuit on the
memory chip to read the capacitor and change its state.
The capacitor needs to be refreshed after regular intervals to maintain the charge in the
capacitor. This is the reason it is called dynamic RAM as it needs to be refreshed
continuously to maintain its data or it would forget what it is holding. This is achieved
by placing the memory on a refresh circuit that rewrites the data several hundred times
per second. The access time in DRAM is around 60 nanoseconds.
We can say that a capacitor is like a box that stores electrons. To store a "1" in the
memory cell, the box is filled with electrons. Whereas, to store a "0", it is emptied. The
drawback is that the box has a leak. In just a few milliseconds the full box becomes
empty. So, to make dynamic memory work, the CPU or Memory controller has to
recharge all the capacitors before they discharge. To achieve this, the memory controller
reads the memory and then writes it right back. This is called refreshing the memory and
this process continues automatically thousands of times per second. So, this type of
RAM needs to be dynamically refreshed all the time.
Types of DRAM:
i) Asynchronous DRAM:
This type of DRAM is not synchronized with the CPU clock. The drawback of this
RAM is that the CPU cannot know the exact timing at which the data will be available
from the RAM on the input-output bus. This limitation was overcome by the next
generation of RAM, which is known as synchronous DRAM (SDRAM).
The next generation of the synchronous DRAM is known as the DDR RAM. It was
developed to overcome the limitations of SDRAM and was used in PC memory at the
beginning of the year 2000. In DDR SDRAM (DDR RAM), the data is transferred twice
during each clock cycle; during the positive edge (rising edge) and the negative edge
(falling edge) of the cycle. So, it is known as the double data rate SDRAM.
There are different generations of DDR SDRAM which include DDR1, DDR2, DDR3, and
DDR4. Today, the memory that we use inside the desktop, laptop, mobile, etc., is mostly
either DDR3 or DDR4 RAM.
Types of DDR SDRAM:
a) DDR1 SDRAM:
DDR1 SDRAM is the first advanced version of SDRAM. In this RAM, the voltage was
reduced from 3.3 V to 2.5 V. The data is transferred during both the rising as well as the
falling edge of the clock cycle. So, in each clock cycle, instead of 1 bit, 2 bits are being
pre-fetched, which is commonly known as 2-bit pre-fetch. It mostly operates in the
range of 133 MHz to 200 MHz.
Furthermore, the data rate at the input-output bus is double the clock frequency
because the data is transferred during both the rising as well as falling edge. So, if a
DDR1 RAM is operating at 133 MHz, the data rate would be double, 266 Mega transfer
per second.
b) DDR2 SDRAM:
DDR2 is an advanced version of DDR1. It operates at 1.8 V instead of 2.5 V. Its data rate is
double the data rate of the previous generation due to the increase in the number of
bits that are pre-fetched during each cycle; 4 bits are pre-fetched instead of 2 bits. The
internal bus width of this RAM has been doubled. For example, if the input-output bus is
64 bits wide, the internal bus width of it will be equal to 128 bits. So, a single cycle can
handle double the amount of data.
c) DDR3 SDRAM:
In this version, the voltage is further reduced from 1.8 V to 1.5 V. The data rate is
double that of the previous generation, as the number of bits pre-fetched during each
cycle has been increased from 4 to 8. We can say that the internal data bus width of the
RAM has been doubled compared to the last generation.
SRAM vs. DRAM comparison:
SRAM: It is a static memory, as it does not need to be refreshed repeatedly.
DRAM: It is a dynamic memory, as it needs to be refreshed continuously or it will lose the data.
SRAM: Its memory cell is made of 6 transistors, so its cells occupy more space on a chip and offer less storage capacity (memory) than a DRAM of the same physical size.
DRAM: Its memory cell is made of one transistor and one capacitor, so its cells occupy less space on a chip and provide more memory than an SRAM of the same physical size.
SRAM: It is more expensive than DRAM and is located on processors or between a processor and main memory.
DRAM: It is less expensive than SRAM and is mostly located on the motherboard.
SRAM: It has a lower access time, e.g. 10 nanoseconds, so it is faster than DRAM.
DRAM: It has a higher access time, e.g. more than 50 nanoseconds, so it is slower than SRAM.
SRAM: It stores information in a bistable latching circuit. It requires a regular power supply, so it consumes more power.
DRAM: The information, or each bit of data, is stored in a separate capacitor within an integrated circuit, so it consumes less power.
SRAM: It is faster than DRAM, as its memory cells don't need to be refreshed and are always available; so it is mostly used in registers in the CPU and cache memory of various devices.
DRAM: It is not as fast as SRAM, as its memory cells are refreshed continuously; but it is still used as main memory because it is cheaper to manufacture and requires less space.
SRAM: Its cycle time is shorter, as it does not need to pause between accesses and refreshes.
DRAM: Its cycle time is longer than the SRAM's cycle time.
SRAM examples: L2 and L3 cache in a CPU.
DRAM examples: DDR3, DDR4 in mobile phones, computers, etc.
SRAM size ranges from 1 MB to 16 MB.
DRAM size ranges from 1 GB to 3 GB in smartphones and 4 GB to 16 GB in laptops.
FLASH MEMORY
Flash memory is a type of non-volatile memory used in devices such as memory cards,
USB flash drives, and solid-state drives. Flash memory is based on an electric charge.
When a charge is applied to certain areas of the chip, it creates an electric field. This
field can change the state of the transistors, which are used to store data. Flash
memory is faster than traditional hard drives, and it does not require a power source to
maintain the data. However, it is more expensive per gigabyte than hard drives, and it
has a limited number of erase-rewrite cycles.
CACHE MEMORY
Cache memory is a small, high-speed memory that holds a
copy of only the most frequently used information or program codes stored in the
main memory. The smaller capacity of the cache reduces the time required to locate
data within it and provide it to the CPU for processing.
When a computer’s CPU accesses its internal memory, it first checks to see if the
information it needs is stored in the cache. If it is, the cache returns the data to the
CPU. If the information is not in the cache, the CPU retrieves it from the main
memory. Disk cache memory operates similarly, but the cache is used to hold data
that has recently been written to, or retrieved from, a magnetic disk or other
external storage device.
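To make this check-then-fetch flow concrete, here is a toy direct-mapped cache lookup in C. The line count, the one-word line size, and the memory[] backing array are all invented for the illustration; real caches implement this logic in hardware:

#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 256  /* toy cache: 256 lines, one 32-bit word per line */

struct line { bool valid; uint32_t tag; uint32_t data; };
static struct line cache[NUM_LINES];
static uint32_t memory[1 << 20];  /* stand-in for main memory (word-addressed) */

uint32_t read_word(uint32_t addr)
{
    uint32_t index = addr % NUM_LINES;  /* which cache line to check */
    uint32_t tag   = addr / NUM_LINES;  /* identifies which block occupies it */

    if (cache[index].valid && cache[index].tag == tag)
        return cache[index].data;       /* hit: serve the CPU from the cache */

    /* Miss: fetch from main memory and fill the line for next time. */
    cache[index].valid = true;
    cache[index].tag   = tag;
    cache[index].data  = memory[addr];
    return cache[index].data;
}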
APPLICATION OF PRIMARY MEMORY
Advantages:
1. Primary memory is much faster than secondary memory, which means that it can
provide quick access to data and programs.
2. Primary memory is directly accessible by the CPU, which makes it the only practical
place to hold running programs and their working data.
3. Primary memory is more reliable than mechanical secondary storage because it has
no moving parts, so data is less likely to be corrupted during access.
4. Having no moving parts also makes primary memory better able to withstand shock
and vibration than devices such as hard disk drives.
Disadvantages:
1. Limited capacity: Primary memory is often limited in terms of capacity, meaning that
it can only store a certain amount of data.
2. Volatile: Primary memory is volatile, meaning that it is prone to data loss in the event
of a power outage or other type of interruption.
3. Expensive: Primary memory is more expensive per gigabyte than secondary storage,
such as hard drives or SSDs.
4. Still not the fastest tier: Primary memory is slower than processor registers and
on-chip cache, so the CPU can still stall while waiting on main memory.
CONCLUSION
Primary memory plays a critical role in the execution of programs. Any
improvement in memory operations will lead to faster execution of programs and, in
turn, enhance business operation efficiency.
REFERENCES
You can refer to Wikipedia for the articles; here is the website link:
https://en.wikipedia.org/wiki/Computer_memory
Here are some books for learning about primary memory clearly and in detail:
01. https://books.google.com/books?id=rDe7ygAACAAJ&dq=primary+memory&hl=en&newbks=1&newbks_redir=1&sa=X&ved=2ahUKEwiA8sXz1If7AhWxx3MBHWdMBIAQ6AF6BAgCEAE
02. https://books.google.co.in/books?id=zm2b44hVKjoC&pg=PA393&dq=primary+memory&hl=en&newbks=1&newbks_redir=1&sa=X&ved=2ahUKEwiA8sXz1If7AhWxx3MBHWdMBIAQ6AF6BAgKEAI