FOC Project

1) The document is a certificate for a student named Akash Bhanadari for completing an activity on primary memory. It provides details of the student such as name, roll number, department, and semester. 2) The activity was for the course "Fundamentals of Computer" and was assessed on collecting data, fulfilling roles and duties, quality of work, and a presentation. 3) Primary memory, also called main memory or RAM, is the memory used to store information for immediate use by the computer's processor. It operates at high speed compared to secondary storage such as hard disks.

305-DAYANANDA SAGAR INSTITUTE OF TECHNOLOGY (POLYTECHNIC)

Shavige Malleshwara Hills, Kumarswamy Layout, Bangalore-78

Academic year 2022-23

Name of Student:- Akash Bhanadari G.

Activity Name:- Primary Memory

Roll number:- 04

Department:- Computer Science Engineering

Semester:- 1st

Course:- Fundamentals of Computer


Assessment criteria:
- Collection of Data / Submission (5)
- Fulfilment of Roles and Duties (5)
- Quality of Work (5)
- Presentation (5)
- Total Marks (20)

Signature of Course Co-ordinator          Signature of Programme Co-ordinator

CERTIFICATE
List of Tables and Diagrams

>History

>MOS Memory

>Volatile Memory

>Non-Volatile Memory

>Semi-Volatile Memory

>Management

>Virtual Memory

>Diagram of primary memory

>Image of primary memory


INTRODUCTION TO PRIMARY MEMORY

In computing, memory is a device or system that is used to store information for immediate use in a computer or related computer hardware and digital electronics devices. The term memory is often synonymous with the term primary storage or main memory. An archaic synonym for memory is store.

Computer memory operates at high speed compared with storage, which is slower but less expensive and higher in capacity. Besides storing opened programs, computer memory serves as a disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching as long as it is not needed by running software. If needed, the contents of computer memory can be transferred to storage; a common way of doing this is through a memory management technique called virtual memory.

Modern memory is implemented as semiconductor memory, where data is stored within memory cells built from MOS transistors and other components on an integrated circuit. There are two main kinds of semiconductor memory, volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Examples of volatile memory are dynamic random-access memory (DRAM) used for primary storage, and static random-access memory (SRAM) used for CPU cache.

Most semiconductor memory is organized into memory cells, each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and multi-level cells capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory.
HISTORY

In the early 1940s, memory technology often permitted a capacity of a few bytes. The first
electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes,
could perform simple calculations involving 20 numbers of ten decimal digits stored in the
vacuum tubes.
The next significant advance in computer memory came with acoustic delay-line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits.
Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as a means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances.
Efforts began in the late 1940s to find non-volatile memory. Magnetic-core memory allowed for recall of memory after power loss. It was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialised with the Whirlwind computer in 1953. Magnetic-core memory was the dominant form of memory until the development of MOS semiconductor memory in the 1960s.
The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s using bipolar transistors.[8] Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. The same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor.[9] The first bipolar semiconductor memory IC chip was the SP95, introduced by IBM in 1965.[8] While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive, and did not displace magnetic-core memory until the late 1960s.[8][10]
MOS memory

Main article: MOS memory


The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled
the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage
elements. MOS memory was developed by John Schmidt at Fairchild Semiconductor in
1964.[11] In addition to higher performance, MOS semiconductor memory was cheaper and
consumed less power than magnetic core memory.[12] In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage.[13] The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips.[14] NMOS memory was commercialized by IBM in the early 1970s.[15] MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s.[12]
The two main types of volatile random-access memory (RAM) are static random-access
memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was
invented by Robert Norman at Fairchild Semiconductor in 1963,[8] followed by the
development of MOS SRAM by John Schmidt at Fairchild in 1964.[12] SRAM became an
alternative to magnetic-core memory, but requires six transistors for each bit of data.[16] Commercial use of SRAM began in 1965, when IBM introduced its SP95 SRAM chip for the System/360 Model 95.[8]
Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic
calculator in 1965.[17][18] While it offered improved performance, bipolar DRAM could not
compete with the lower price of the then dominant magnetic-core memory.[19] MOS
technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM
Thomas J. Watson Research Center was working on MOS memory. While examining the
characteristics of MOS technology, he found it was possible to build capacitors, and that
storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit,
while the MOS transistor could control writing the charge to the capacitor. This led to his
development of a single-transistor DRAM memory cell.[16] In 1967, Dennard filed a patent
for a single-transistor DRAM memory cell based on MOS technology.[20] This led to the
first commercial DRAM IC chip, the Intel 1103 in October 1970.[21][22][23] Synchronous
dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000
chip in 1992.[24][25]

The term memory is also often used to refer to non-volatile memory, including read-only memory (ROM) through modern flash memory. Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation.[26][27] In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971.[28] EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972.[29] Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s.[30][31] Masuoka and colleagues presented the invention of NOR flash in 1984,[32] and then NAND flash in 1987.[33] Toshiba commercialized NAND flash memory in 1987.[34][35][36]
Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers.[36]


VOLATILE MEMORY

Volatile memory is a type of memory that maintains its data only while the device is
powered. If the power is interrupted for any reason, the data is lost. Volatile memory is
used extensively in computers -- ranging from servers to laptops -- as well as in other
devices, such as printers, LCD displays, routers, cell phones, wearables and medical
equipment.

In a computer, volatile memory is typically used for the system's random access memory (RAM), both the main memory and the processor's L1, L2 and L3 caches. It is distinguished from nonvolatile storage -- such as solid-state drives (SSDs), hard disk drives (HDDs) or optical disks -- by the fact that nonvolatile devices retain their data even when their power is cut off.

A computer's volatile memory is sometimes referred to as primary storage, as opposed to secondary storage, which is typically made up of nonvolatile storage devices. However, the meanings of primary and secondary storage have evolved over the years, and the terms are now often used when describing tiered storage, although the original usage still persists.

Volatile memory is used for a computer's RAM because it is much faster to read from and
write to than today's nonvolatile memory devices. Even the latest storage class memory
(SCM) devices such as Intel Optane can't match the performance of the current RAM
modules, especially the processor cache. However, the data in RAM stays there only while
the computer is running; when the computer is shut off, RAM loses its data.

For this reason, RAM is typically used along with nonvolatile memory, which does not lose
its data when the computer's power is turned off or the storage device is disconnected from
a power source. Nonvolatile memory also does not need to have its memory content
periodically refreshed like some volatile memory. In addition, nonvolatile storage is
cheaper and can hold much more data. Even so, today's computers require the fastest
memory and cache possible, which means sticking with volatile memory until a better
technology comes along.
Most of today's computers use dynamic RAM (DRAM) for the main memory and static
RAM (SRAM) for processor cache. DRAM supports greater densities than SRAM, and it is
cheaper. However, DRAM also requires more power and does not perform as well as
SRAM. One of the biggest challenges with DRAM is that the capacitors used for storing the
data tend to leak electrons and lose their charge. This means that DRAM memory devices
need to be refreshed periodically to retain their data, which can affect access speeds and
increase power usage.
NON-VOLATILE MEMORY

Non-volatile memory (NVM) is a type of computer memory that has the capability to hold
saved data even if the power is turned off. Unlike volatile memory, NVM does not require
its memory data to be periodically refreshed. It is commonly used for secondary storage or
long-term consistent storage.

Non-volatile memory is highly popular among digital media; it is widely used in memory
chips for USB memory sticks and digital cameras. Non-volatile memory eradicates the need
for relatively slow types of secondary storage systems, including hard disks.

Non-volatile memory is also known as non-volatile storage.

Techopedia Explains Non-Volatile Memory (NVM)

Non-volatile data storage can be classified into two types:

 Mechanically addressed systems


 Electrically addressed systems

Mechanically addressed systems make use of a contact structure to write and read
on a selected storage medium. The amount of data stored this way is much larger
than what's possible in electrically addressed systems. A few examples of
mechanically addressed systems are optical disks, hard disks, holographic memory
and magnetic tapes.

Electrically addressed systems are categorized based on the write mechanism. They are
costly but faster than mechanically addressed systems, which are affordable but slow. A
few examples of electrically addressed systems are flash memory, FRAM and MRAM.

Some examples of NVM include:

 All types of read-only memory


 Flash memory
 Most magnetic storage devices, such as hard disks, magnetic tape and floppy disks
 Earlier computer storage solutions, including punched cards and paper tape
 Optical disks
SEMI-VOLATILE MEMORY

A third category of memory is semi-volatile. The term is used to describe a memory that
has some limited non-volatile duration after power is removed, but then data is ultimately
lost. A typical goal when using a semi-volatile memory is to provide the high performance
and durability associated with volatile memories while providing some benefits of non-
volatile memory.

For example, some non-volatile memory types experience wear when written. A worn cell
has increased volatility but otherwise continues to work. Data locations which are written
frequently can thus be directed to use worn circuits. As long as the location is updated
within some known retention time, the data stays valid. After a period of time without
update, the value is copied to a less-worn circuit with longer retention. Writing first to the
worn area allows a high write rate while avoiding wear on the not-worn circuits.[37]

As a second example, an STT-RAM can be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. In some applications, the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost; or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold.[38]

The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types. For example, a volatile and a non-volatile memory may be combined, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed before the copy occurs, the data is lost.
Management

Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes. It decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and updates the status accordingly.
This tutorial will teach you basic concepts related to Memory Management.

Process Address Space

The process address space is the set of logical addresses that a process references in its
code. For example, when 32-bit addressing is in use, addresses can range from 0 to
0x7fffffff; that is, 2^31 possible numbers, for a total theoretical size of 2 gigabytes.
The operating system takes care of mapping the logical addresses to physical addresses at
the time of memory allocation to the program. There are three types of addresses used in a
program before and after memory is allocated −

S.N. Memory Addresses & Description

1. Symbolic addresses − The addresses used in a source code. The variable names, constants, and instruction labels are the basic elements of the symbolic address space.

2. Relative addresses − At the time of compilation, a compiler converts symbolic addresses into relative addresses.

3. Physical addresses − The loader generates these addresses at the time when a program is loaded into main memory.

Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes. Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred
to as a physical address space.
The runtime mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a physical address.

 The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.

 The user program deals with virtual addresses; it never sees the real physical addresses.

Static vs Dynamic Loading

The choice between static and dynamic loading is to be made at the time the computer program is being developed. If you have to load your program statically, then at the time of compilation the complete program will be compiled and linked without leaving any external program or module dependency. The linker combines the object program with other necessary object modules into an absolute program, which also includes logical addresses.

If you are writing a dynamically loaded program, then your compiler will compile the program, and for all the modules which you want to include dynamically only references will be provided; the rest of the work will be done at the time of execution.
At the time of loading, with static loading, the absolute program (and data) is loaded into
memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are stored on a disk in
relocatable form and are loaded into memory only when they are needed by the program.

Static vs Dynamic Linking

As explained above, when static linking is used, the linker combines all other modules
needed by a program into a single executable program to avoid any runtime dependency.
When dynamic linking is used, it is not required to link the actual module or library with
the program, rather a reference to the dynamic module is provided at the time of
compilation and linking. Dynamic Link Libraries (DLL) in Windows and Shared Objects
in Unix are good examples of dynamic libraries.

Swapping

Swapping is a mechanism in which a process can be swapped temporarily out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage to main memory.

Though performance is usually affected by the swapping process, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.

The total time taken by the swapping process includes the time it takes to move the entire process to secondary disk and then copy it back to memory, as well as the time the process takes to regain main memory.
Let us assume that the user process is of size 2048 KB and the standard hard disk where swapping will take place has a data transfer rate of around 1 MB (1024 KB) per second. The actual transfer of the 2048 KB process to or from memory will take

2048 KB / 1024 KB per second
= 2 seconds
= 2000 milliseconds

Now considering swap-in and swap-out time, it will take a complete 4000 milliseconds plus other overhead while the process competes to regain main memory.
Memory Allocation

Main memory usually has two partitions −


 Low Memory − Operating system resides in this memory.
 High Memory − User processes are held in high memory.
The operating system uses the following memory allocation mechanisms.

S.N. Memory Allocation & Description

1. Single-partition allocation − In this type of allocation, a relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.

2. Multiple-partition allocation − In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
VIRTUAL MEMORY

Virtual memory is an area of a computer system’s secondary memory storage space (such as a hard disk or solid state drive) which acts as if it were a part of the system’s RAM or primary memory.

Ideally, the data needed to run applications is stored in RAM, where they can be
accessed quickly by the CPU. But when large applications are being run, or when
many applications are running at once, the system’s RAM may become full.

To get around this problem, some data stored in RAM that is not actively being used
can be temporarily moved to virtual memory (which is physically located on a hard
drive or other storage device). This frees up space in RAM, which can then be used to
accommodate data which the system needs to access imminently.

By swapping data between RAM and virtual memory when it is not needed and
back  from virtual memory to RAM when it is needed, a system can continue to work
smoothly with far less physical RAM than it would otherwise require.

Virtual memory enables a system to run larger applications or run more applications
at the same time without running out of RAM.  Specifically, the system can operate
as if its total RAM resources were equal to the amount of physical RAM, plus the
amount of virtual RAM.

Why is There a Need for Virtual Memory?

Virtual memory was developed when physical RAM was very expensive, and RAM is still more expensive per gigabyte than storage media such as hard disks and solid state drives. For that reason it is much less costly to use a combination of physical RAM and virtual memory than to equip a computer system with more RAM.

Since using virtual memory (or increasing virtual memory) has no extra financial cost
(because it uses existing storage space) it offers a way for a computer to use more
memory than is physically available on the system.

Another key driver for the use of virtual memory is that all computer systems have a
limit (dictated by hardware and software) on the amount of physical RAM that can
be installed. Using virtual memory allows the system to continue to operate beyond
those physical RAM limits.
Virtual Memory vs. Physical Memory

Since RAM is more expensive than virtual memory, it would seem – all things being
equal – that computers should be equipped with as little RAM and as much virtual
memory as possible.

But in fact the characteristics of virtual memory are different than those of physical
memory. The key difference between virtual memory and physical memory is that
RAM is very much faster than virtual memory.

So a system with 2 GB of physical RAM and 2 GB of virtual memory will not offer
the same performance as a similar system with 4 GB of physical RAM. To understand
why, it is necessary to understand how virtual memory works.

How Does Virtual Memory Work?

When an application (including the operating system) is running, it stores the location of program threads and other data at a virtual address, while the data is actually stored at a physical address in RAM. If that RAM space is later needed more urgently by another process, then the data may be swapped out of RAM and into virtual memory.

The responsibility for keeping track of all this data as it is swapped between physical
and virtual memory falls to the computer’s memory manager. The memory manager
maintains a table which maps virtual addresses used by the operating system and
applications to the physical addresses that data is actually stored in. When data is
swapped between RAM and virtual memory, the table is updated so that a given
virtual address always points to the correct physical location.

A computer can only run threads and manipulate data that is stored in RAM rather
than virtual memory. And it takes a non-negligible amount of time to swap data that
is needed into RAM. Consequently, it follows that using virtual memory involves a
performance hit.

Put another way, a system with 4 GB RAM will generally offer higher performance
than a system with 2 GB RAM and 2 GB virtual memory because of the performance
hit caused by swapping, and for that reason it is said that virtual memory is slower
than RAM.
One potential problem with virtual memory is that if the amount of RAM present is
too small compared to the amount of virtual memory then a system can end up
spending a large proportion of its CPU resources swapping data back and forth.
Meanwhile, performance of useful work grinds to a near halt – a process known
as thrashing.

To prevent thrashing it is usually necessary to reduce the number of applications being run simultaneously, or simply to increase the amount of RAM in the system.

Operating systems, such as most versions of Windows, generally recommend that users do not increase virtual memory beyond 1.5 times the amount of physical RAM present. So a system with 4 GB RAM should have virtual memory of no more than 6 GB.

To minimize the performance hit caused by swapping between physical and virtual memory, it is best to use the fastest storage device connected to the system to host the virtual memory, and to locate the virtual memory storage area on its own partition.

Virtual memory can act in concert with a computer’s main memory to enable faster,
more fluid operations.
How to Increase Virtual Memory in a System

Most operating systems allow users to increase virtual memory from a configuration
page.

 In Windows, users can also allow the system to manage the amount of
virtual memory provided dynamically.

 Similarly, in the Mac OS, users can use the preferences panel to allot virtual
memory.

Types of virtual memory: Paging and Segmentation

Virtual memory can be managed in a number of different ways by a system’s operating system, and the two most common approaches are paging and segmentation.

Virtual Memory Paging


In a system which uses paging, RAM is divided into a number of blocks, usually 4 KB in size, called pages. Processes are then allocated just enough pages to meet their memory requirements. This means that there will always be a small amount of memory wasted, except in the unusual case where a process requires exactly a whole number of pages.

During the normal course of operations, pages (i.e. memory blocks of 4 KB in size) are swapped between RAM and a page file, which represents the virtual memory.

Virtual Memory Segmentation


Segmentation is an alternative approach to memory management, where instead of
pages of a fixed size, processes are allocated segments of differing length to exactly
meet their requirements. That means that unlike in a paged system, no memory is
wasted in a segment.

Segmentation also allows applications to be split up into logically independent address spaces, which can make them easier to share, and more secure.
But a problem with segmentation is that because each segment is a different length,
it can lead to memory “fragmentation.” This means that as segments are allocated
and de-allocated, small chunks of memory can be left scattered around which are too
small to be useful.

As these small chunks build up, fewer and fewer segments of useful size can be
allocated. And if the OS does start using these small segments then there are a huge
number to keep track of, and each process will need to use many different segments,
which is inefficient and can reduce performance.

Advantages and Disadvantages of Virtual Memory

Even though RAM is now relatively inexpensive compared to its cost when virtual
memory was first developed, it is still extremely useful and it is still employed in
many, perhaps most, computer systems. The key problem with virtual memory
relates to performance.

Advantages of Virtual Memory

 Allows more applications to be run at the same time.

 Allows larger applications to run in systems that do not have enough physical RAM alone to run them.

 Provides a way to increase memory which is less costly than buying more RAM.

 Provides a way to increase memory in a system which has the maximum amount of RAM that its hardware and operating system can support.

Disadvantages of Virtual Memory

 Does not offer the same performance as RAM.

 Can negatively affect the overall performance of a system.

 Takes up storage space which could otherwise be used for long-term data storage.
Protected Memory

The MPU registers might look complicated, but as long as you have a clear idea of the memory regions that are required for your application, it should not be difficult. Typically, you need to have the following memory regions:

Program code for privileged programs (for example, the OS kernel and exception
handlers)

Program code for user programs

Data memory for privileged and user programs in various memory regions (e.g.,
data and stack of the application situated in the SRAM (Static Random Access
Memory) memory region, 0x20000000 to 0x3FFFFFFF)

Other peripherals
It is not necessary to set up a region for the memory in the private peripheral bus range.
The MPU automatically recognizes the private peripheral bus memory addresses and
allows privileged software to perform data accesses in this region.
For Cortex-M3 products, most memory regions can be set up with TEX = b000, C = 1, B =
1. System devices such as the Nested Vectored Interrupt Controller (NVIC) should be
strongly ordered, and peripheral regions can be programmed as shared devices (TEX =
b000, C = 0, B = 1). However, if you want to make sure that any bus faults occurring in the
region are precise bus faults, you should use a strongly ordered memory attribute (TEX =
b000, C = 0, B = 0) so that write buffering is disabled; note that doing so can reduce system
performance.
For users of a Cortex Microcontroller Software Interface Standard (CMSIS) compliant
device driver, the MPU registers can be accessed using the following register names as
shown in Table 13.10. A simple flow for an MPU setup routine is shown in Figure 13.3 on
page 220.

Table 13.10. MPU Register Names in CMSIS

Register Name   MPU Register                                    Address
MPU->TYPE       MPU Type register                               0xE000ED90
MPU->CTRL       MPU Control register                            0xE000ED94
MPU->RNR        MPU Region Number register                      0xE000ED98
MPU->RBAR       MPU Region Base Address register                0xE000ED9C
MPU->RASR       MPU Region Attribute and Size register          0xE000EDA0
MPU->RBAR_A1    MPU Alias 1 Region Base Address register        0xE000EDA4
MPU->RBAR_A2    MPU Alias 2 Region Base Address register        0xE000EDAC
MPU->RBAR_A3    MPU Alias 3 Region Base Address register        0xE000EDB4
MPU->RASR_A1    MPU Alias 1 Region Attribute and Size register  0xE000EDA8
MPU->RASR_A2    MPU Alias 2 Region Attribute and Size register  0xE000EDB0
MPU->RASR_A3    MPU Alias 3 Region Attribute and Size register  0xE000EDB8



FIGURE 13.3. Example Steps to Set Up the MPU.

Before the MPU is enabled and if the vector table is relocated to RAM, remember to set up
the fault handler for the memory management fault in the vector table, and enable the
memory management fault in the System Handler Control and State register. They are
needed to allow the memory management fault handler to be executed if an MPU violation
takes place.
For a simple case of only four required regions, the MPU setup code (without the region
checking and enabling) looks like this:
MPU->RNR  = 0;          // MPU Region Number register: select region 0
MPU->RBAR = 0x00000000; // MPU Region Base Address register: base = 0x00000000
MPU->RASR = 0x0307002F; // MPU Region Attribute and Size register:
                        //   R/W, TEX=0, S=1, C=1, B=1, 16MB, Enable=1
MPU->RNR  = 1;          // Select region 1
MPU->RBAR = 0x20000000; // Base address = 0x20000000
MPU->RASR = 0x03070033; // R/W, TEX=0, S=1, C=1, B=1, 64MB, Enable=1
MPU->RNR  = 2;          // Select region 2
MPU->RBAR = 0x40000000; // Base address = 0x40000000
MPU->RASR = 0x03050033; // R/W, TEX=0, S=1, C=0, B=1, 64MB, Enable=1
MPU->RNR  = 3;          // Select region 3
MPU->RBAR = 0xA0000000; // Base address = 0xA0000000
MPU->RASR = 0x01040027; // Privileged R/W, TEX=0, S=1, C=0, B=0, 1MB, Enable=1
MPU->CTRL = 1;          // MPU Control register: enable the MPU

This can also be coded in assembly language:

LDR R0, =0xE000ED98 ; Address of the MPU Region Number register
MOV R1, #0          ; Select region 0
STR R1, [R0]

Virtual Memory and Protection


When process level memory protection is employed, the memory addresses allocated to the
process are virtual addresses in the process space. The memory management unit (MMU)
and associated page tables manage the translation between code, data, and heap process
memory to the underlying physical memory. Each process appears to live in the
same virtual address space, but actually resides in different physical areas of
memory. Figure 7.9 shows how memory regions in a process are mapped to physical
memory.

FIGURE 7.9. Address Space Mapping to Physical Memory.

The operating system manages the page tables based on process creation and underlying
kernel service calls. Each process has an active page table in context when the process is
executing. The CPU contains a register (CR3 on Intel architecture) that is used
to select the appropriate tree within the page table hierarchy. One of the tasks the
operating system performs is to update the CR3 value in the CPU during context switches.
The mapping between process address space and physical pages is often highly
fragmented; there is no mathematical correlation between the virtual and physical address
(it must be identified via page table lookup). On the other hand, in many systems, the page
tables configured to map kernel space usually map a large, virtually contiguous area within
the kernel to a large, physically contiguous area in physical memory. This simplifies the
calculation/translation between kernel virtual and physical address for the kernel
mappings. The addition/subtraction of an address offset can be used to convert between
kernel virtual and physical addresses. However, this should not be relied upon. You must
always use OS-provided functions to translate addresses. This attribute is a system
optimization to make the translation as efficient as possible.
In full-featured operating systems, the physical memory pages may be copied to a storage
device. This process is known as swapping. The virtual to physical mapping for the process
physical page will be removed. When an application attempts to access the virtual address
that previously had a physical page mapped, the MMU will generate a page fault. The page
fault handler copies the page from the drive into memory (not likely to be the same
physical memory used previously), sets up the correct page mapping, and returns from the
page fault exception, at which point the instruction that generated the fault is re-executed,
this time without faulting. In many embedded systems, there is no swap storage and
swapping is disabled.

Some embedded systems have no memory management unit. In this case the virtual and
linear address spaces are mapped 1:1. Each process must be created and linked to a target
physical address range in which it will reside during runtime. There is far less protection
between processes if there is no MMU in the system (code from one process can
access/destroy memory belonging to another process), although some systems do implement
a memory protection unit to prevent processes from accessing other processes’ memory;
these are limited to a small number of protection regions. Most real-time operating systems
support embedded processors with or without an MMU; there is also a variant of Linux
known as uCLinux (http://www.uclinux.org) that does not require an MMU.


Buffers allocated to a process cannot be shared between processes, as the physical memory
backing the malloc() call is only allocated to the process that allocates it. There is also a
series of calls that can be used to allocate memory that is shared between processes. Figure
7.10 shows the series of calls to allocate a 4-K page of memory between two processes. The
calls must be coordinated by both processes.
EXAMPLES

There are four types of primary storage:

 read only memory (ROM)
 random access memory (RAM)
 flash memory
 cache memory

READ ONLY MEMORY (ROM)

ROM stands for read-only memory: the data stored in it can only be read, not
modified. It is a type of non-volatile memory, which means its contents are not lost
when the power is turned off; this is in contrast to RAM, which is volatile and is
erased at power-off. ROM is used in computers and other electronic devices to store
the firmware of a device, the basic software required to run it, and is also used to
store the operating system of a computer or other electronic device. ROM is
typically held on a chip located on the motherboard; ROM chips are usually
soldered onto the motherboard, which makes them difficult to replace. ROM is a
key part of the firmware of a device and is essential for its proper functioning.

RANDOM ACCESS MEMORY (RAM)

RAM, which stands for Random Access Memory, is a hardware device generally located
on the motherboard of a computer that acts as the internal memory of the CPU. It allows
the CPU to store data, programs, and program results while the computer is switched on.
It is the read/write memory of a computer, which means that information can be written
to it as well as read from it.

RAM is a volatile memory, which means it does not store data or instructions
permanently. When you switch on the computer, data and instructions from the hard
disk are loaded into RAM: when the computer boots, and whenever you open a
program, the operating system (OS) and the program are loaded into RAM, generally
from an HDD or SSD. The CPU uses this data to perform the required tasks. As soon as
you shut down the computer, the RAM loses its data; the data remains in RAM only
while the computer is on and is lost when the computer is turned off. The benefit of
loading data into RAM is that reading data from RAM is much faster than reading
from the hard drive.

In simple words, RAM is like a person's short-term memory, and hard drive storage is
like a person's long-term memory. Short-term memory remembers things for a short
duration, whereas long-term memory remembers them for a long duration, and
short-term memory can be refreshed with information stored in the brain's long-term
memory. A computer works similarly: when the RAM fills up, the processor goes to
the hard disk to overlay the old data in RAM with new data. It is like a reusable scratch
paper on which you can write notes, numbers, etc., with a pencil; if you run out of space
on the paper, you erase what you no longer need. RAM behaves the same way: when it
fills up, unnecessary data is discarded and replaced with new data from the hard disk
that is required for the current operations.

RAM comes in the form of a chip that is individually mounted on the motherboard or in
the form of several chips on a small board connected to the motherboard. It is the main
memory of a computer. It is faster to write to and read from as compared to other
memories such as a hard disk drive (HDD), solid-state drive (SSD), optical drive, etc.

A computer's performance mainly depends on the size or storage capacity of the RAM.
If it does not have sufficient RAM (random access memory) to run the OS and software
programs, it will result in slower performance. So, the more RAM a computer has, the
faster it will work. Information stored in RAM is accessed randomly, not in a sequence as
on a CD or hard drive. So, its access time is much faster.

History of RAM:
o The first type of RAM was introduced in 1947 with the Williams tube. It was built from a
CRT (cathode-ray tube), and data was stored as electrically charged spots on its face.
o The second type of RAM was magnetic-core memory, invented in 1947. It was made of
tiny metal rings with wires running through each ring; each ring stored one bit of data
and could be accessed at any time.
o The RAM we know today, as solid-state memory, was invented by Robert Dennard
in 1968 at the IBM Thomas J. Watson Research Center. Specifically known as dynamic
random access memory (DRAM), it stores each bit of data using a transistor and
capacitor pair, and a constant supply of power (plus periodic refresh) is required to
maintain the contents of each cell.
o In October 1970, Intel introduced the Intel 1103, the first commercially available
DRAM chip.
o In 1993, Samsung introduced the KM48SL2000 synchronous DRAM (SDRAM).
o In 2000, DDR SDRAM became commercially available.
o In 1999, RDRAM was available for computers.
o In 2003, DDR2 SDRAM began being sold.
o In June 2007, DDR3 SDRAM started being sold.
o In September 2014, DDR4 became available in the market.

Types of RAM:

Integrated RAM chips can be of two types:

1. Static RAM (SRAM)
2. Dynamic RAM (DRAM)

Both types of RAM are volatile, as both lose their content when the power is turned off.

1) Static RAM:

Static RAM (SRAM) is a type of random access memory that retains its state for data bits
or holds data as long as it receives the power. It is made up of memory cells and is
called a static RAM as it does not need to be refreshed on a regular basis because it
does not need the power to prevent leakage, unlike dynamic RAM. So, it is faster than
DRAM.

It has a special arrangement of transistors that makes a flip-flop, a type of memory cell.
One memory cell stores one bit of data. Most modern SRAM memory cells are
made of six CMOS transistors and have no capacitors. The access time in SRAM chips
can be as low as 10 nanoseconds, whereas the access time in DRAM usually remains
above 50 nanoseconds.

Furthermore, its cycle time is much shorter than that of DRAM as it does not pause
between accesses. Due to these advantages, SRAM is primarily used for system cache
memory, high-speed registers, and small memory banks such as a frame buffer on
graphics cards.

The Static RAM is fast because the six-transistor configuration of its circuit maintains the
flow of current in one direction or the other (0 or 1). The 0 or 1 state can be written and
read instantly without waiting for the capacitor to fill up or drain. The early
asynchronous static RAM chips performed read and write operations sequentially, but
the modern synchronous static RAM chips overlap read and write operations.
The drawback with static RAM is that its memory cells occupy more space on a chip
than DRAM memory cells for the same amount of storage, since each cell has more
parts than a DRAM cell. So, it offers less memory per chip.

2) Dynamic RAM:

Dynamic RAM (DRAM) is also made up of memory cells. It is an integrated circuit (IC)
made of millions of transistors and capacitors, which are extremely small in size; each
transistor is paired with a capacitor to create a very compact memory cell, so that
millions of them can fit on a single memory chip. Thus, a DRAM memory cell has one
transistor and one capacitor, and each cell stores a single bit of data in its capacitor
within the integrated circuit.

The capacitor holds this bit of information or data, either as 0 or as 1. The transistor,
which is also present in the cell, acts as a switch that allows the electric circuit on the
memory chip to read the capacitor and change its state.

The capacitor needs to be refreshed after regular intervals to maintain the charge in the
capacitor. This is the reason it is called dynamic RAM as it needs to be refreshed
continuously to maintain its data or it would forget what it is holding. This is achieved
by placing the memory on a refresh circuit that rewrites the data several hundred times
per second. The access time in DRAM is around 60 nanoseconds.
We can say that a capacitor is like a box that stores electrons. To store a '1' in the
memory cell, the box is filled with electrons; to store a '0', it is emptied. The
drawback is that the box leaks: in just a few milliseconds the full box becomes
empty. So, to make dynamic memory work, the CPU or memory controller has to
recharge all the capacitors before they discharge. To achieve this, the memory controller
reads the memory and then writes it right back. This is called refreshing the memory,
and the process repeats automatically thousands of times per second. So, this type of
RAM needs to be dynamically refreshed all the time.

Types of DRAM:
i) Asynchronous DRAM:

This type of DRAM is not synchronized with the CPU clock, so the CPU cannot know
the exact timing at which data will be available from the RAM on the input-output bus.
This limitation was overcome by the next generation of RAM, known as synchronous
DRAM.

ii) Synchronous DRAM:


SDRAM (synchronous DRAM) began to appear in late 1996. In SDRAM, the RAM is
synchronized with the CPU clock. This allows the CPU, or to be precise the memory
controller, to know the exact clock cycle, i.e., the number of cycles after which
the data will be available on the bus, so the CPU does not need to wait between memory
accesses and the memory read and write speed can be increased. SDRAM is
also known as single data rate SDRAM (SDR SDRAM), as data is transferred only at
each rising edge of the clock cycle.

iii) DDR SDRAM:

The next generation of synchronous DRAM is known as DDR SDRAM. It was
developed to overcome the limitations of SDRAM and came into use in PC memory at
the beginning of the year 2000. In DDR SDRAM (DDR RAM), data is transferred twice
during each clock cycle: on the positive (rising) edge and the negative (falling) edge
of the cycle. So, it is known as double data rate SDRAM.

There are different generations of DDR SDRAM, which include DDR1, DDR2, DDR3, and
DDR4. Today, the memory used inside desktops, laptops, mobiles, etc., is mostly
either DDR3 or DDR4 RAM.

Types of DDR SDRAM:

a) DDR1 SDRAM:

DDR1 SDRAM is the first advanced version of SDRAM. In this RAM, the supply voltage
was reduced from 3.3 V to 2.5 V. Data is transferred on both the rising and the falling
edge of the clock cycle, so in each clock cycle 2 bits are prefetched instead of 1, which
is commonly known as the 2-bit prefetch. It mostly operates in the range of 133 MHz
to 200 MHz.

Furthermore, the data rate at the input-output bus is double the clock frequency
because data is transferred on both the rising and the falling edge. So, if a DDR1 RAM
operates at 133 MHz, the data rate is double that: 266 megatransfers per second.

b) DDR2 SDRAM:

It is an advanced version of DDR1. It operates at 1.8 V instead of 2.5 V. Its data rate is
double that of the previous generation due to the increase in the number of bits
prefetched during each cycle: 4 bits instead of 2. The internal bus width of this RAM
has been doubled; for example, if the input-output bus is 64 bits wide, the internal
bus width will be 128 bits. So, a single cycle can handle double the amount of data.

c) DDR3 SDRAM:

In this version, the voltage is further reduced from 1.8 V to 1.5 V. The data rate is
doubled compared to the previous generation, as the number of prefetched bits has
been increased from 4 to 8; equivalently, the internal data bus width of the RAM has
again been doubled relative to the last generation.

d) DDR4 SDRAM:


In this version, the operating voltage is further reduced from 1.5 V to 1.2 V, but the
number of bits prefetched per cycle is the same as in the previous generation: 8 bits.
The internal clock frequency of the RAM is double that of the previous version. If it
operates at 400 MHz, the clock frequency of the input-output bus is four times that,
1600 MHz, and the transfer rate is 3200 megatransfers per second.

SRAM vs. DRAM:

o SRAM is a static memory: it does not need to be refreshed repeatedly. DRAM is a
dynamic memory: it must be refreshed continuously or it will lose its data.

o An SRAM memory cell is made of 6 transistors, so its cells occupy more space on a
chip and offer less storage capacity than a DRAM of the same physical size. A DRAM
cell is made of one transistor and one capacitor, so its cells occupy less space and
provide more memory than an SRAM of the same physical size.

o SRAM is more expensive than DRAM and is located on processors or between a
processor and main memory. DRAM is less expensive and is mostly located on the
motherboard.

o SRAM has a lower access time (e.g., 10 nanoseconds), so it is faster than DRAM.
DRAM has a higher access time (e.g., more than 50 nanoseconds), so it is slower
than SRAM.

o SRAM stores information in bistable latching circuitry and requires a regular power
supply, so it consumes more power. In DRAM, each bit of data is stored in a separate
capacitor within an integrated circuit, so it consumes less power.

o SRAM is faster, as its memory cells do not need to be refreshed and are always
available; it is mostly used in CPU registers and the cache memory of various devices.
DRAM is not as fast, as its memory cells are refreshed continuously, but it is still used
for main memory because it is cheaper to manufacture and requires less space.

o SRAM's cycle time is shorter, as it does not need to pause between accesses and
refreshes. DRAM's cycle time is longer than SRAM's.

o Examples of SRAM: L2 and L3 cache in a CPU. Examples of DRAM: DDR3 and DDR4
in mobile phones, computers, etc.

o SRAM sizes range from 1 MB to 16 MB. DRAM sizes range from 1 GB to 3 GB in
smartphones and 4 GB to 16 GB in laptops.

FLASH MEMORY

Flash memory is a type of non-volatile memory that can be electrically erased and
reprogrammed. It is often used in electronic devices such as digital cameras, USB
flash drives, and solid-state drives. Flash memory is based on stored electric charge:
when a charge is applied to certain areas of the chip, it creates an electric field that
can change the state of the transistors used to store data. Flash memory does not
require a power source to maintain its data, but it is slower than RAM and it supports
only a limited number of erase/rewrite cycles.

CACHE MEMORY

Cache memory, also called cache, is a supplementary memory system that temporarily
stores frequently used instructions and data for quicker processing by the central
processing unit (CPU) of a computer. The cache augments, and is an extension of, a
computer’s main memory. Both main memory and cache are internal random-access
memories (RAMs) that use semiconductor-based transistor circuits. The cache holds a
copy of only the most frequently used information or program code stored in the
main memory. The smaller capacity of the cache reduces the time required to locate
data within it and provide it to the CPU for processing.

When a computer’s CPU accesses its internal memory, it first checks to see if the
information it needs is stored in the cache. If it is, the cache returns the data to the
CPU. If the information is not in the cache, the CPU retrieves it from the main
memory. Disk cache memory operates similarly, but there the cache is used to hold
data that has recently been written to, or retrieved from, a magnetic disk or other
external storage device.
APPLICATION OF PRIMARY MEMORY

Primary storage/memory, also known as main memory, is the part of the
computer that stores current data, programs, and instructions. The
motherboard houses the primary storage, and as a result, data can be read
from and written to primary storage very quickly.

ADVANTAGES OF PRIMARY MEMORY

There are several advantages to using primary memory:

1. Primary memory is much faster than secondary memory, which means that it can
provide quick access to data and programs.

2. Primary memory has no moving parts, which makes it less prone to mechanical failure
than storage devices such as hard disk drives.

3. Primary memory is more reliable than secondary memory, which means that data is less
likely to be lost or corrupted.

4. Primary memory is more durable than secondary memory, which means that it can
withstand harsh conditions and last longer.

DISADVANTAGES OF PRIMARY MEMORY

There are several disadvantages associated with primary memory, including:

1. Limited Capacity: Primary memory is often limited in terms of capacity, meaning that it
can only store a certain amount of data.

2. Volatile: Primary memory is volatile, meaning that its contents are lost in the event of
a power outage or other type of interruption.

3. Expensive: Primary memory is often more expensive per byte than secondary memory,
such as hard drives or SSDs.

4. Slower than the CPU: Primary memory is slower than the processor's registers and
cache, so the CPU can still be kept waiting for data held in primary memory.

CONCLUSION

Primary memory plays a critical role in the execution of a program. Any improvement
in memory operations will lead to faster execution of the program and, in turn,
enhance business operation efficiency.

REFERENCE

You can refer to Wikipedia for related articles; here is the link:

https://en.wikipedia.org/wiki/Computer_memory

Here are some books for learning about primary memory clearly and in detail:

01. https://books.google.com/books?id=rDe7ygAACAAJ&dq=primary+memory&hl=en&newbks=1&newbks_redir=1&sa=X&ved=2ahUKEwiA8sXz1If7AhWxx3MBHWdMBIAQ6AF6BAgCEAE

02. https://books.google.co.in/books?id=zm2b44hVKjoC&pg=PA393&dq=primary+memory&hl=en&newbks=1&newbks_redir=1&sa=X&ved=2ahUKEwiA8sXz1If7AhWxx3MBHWdMBIAQ6AF6BAgKEAI
