Unit 5 Coa
BASIC CONCEPTS:-
A memory unit is the collection of storage units or devices together. The memory unit stores
the binary information in the form of bits. Generally, memory/storage is classified into 2
categories:
Volatile Memory: This loses its data, when power is switched off.
Non-Volatile Memory: This is a permanent storage and does not lose any data when
power is switched off.
CONCEPT OF HIERARCHICAL MEMORY ORGANIZATION
The total memory capacity of a computer can be visualized by hierarchy of components. The
memory hierarchy system consists of all storage devices contained in a computer system from
the slow Auxiliary Memory to fast Main Memory and to smaller Cache memory.
Auxiliary memory access time is generally about 1000 times that of the main memory, hence it is
at the bottom of the hierarchy.
The main memory occupies the central position because it is equipped to communicate
directly with the CPU and with auxiliary memory devices through Input/output processor (I/O).
When a program not residing in main memory is needed by the CPU, it is brought in
from auxiliary memory. Programs not currently needed in main memory are transferred to
auxiliary memory to provide space in main memory for other programs that are currently in
use.
The cache memory is used to store program data which is currently being executed in the CPU.
The approximate access-time ratio between cache memory and main memory is about 1 to 7-10.
Each memory type, is a collection of numerous memory locations. To access data from any
memory, first it must be located and then the data is read from the memory location. Following
are the methods to access information from memory locations:
1. Random Access: Main memories are random-access memories, in which each memory
location has a unique address. Using this unique address, any memory location can be
reached in the same amount of time, in any order.
2. Sequential Access: In this mode, memory is accessed in a fixed linear order, so the
access time depends on the position of the data; magnetic tape is accessed this way.
3. Direct Access: In this mode, information is stored in tracks, with each track having a
separate read/write head.
In the Computer System Design, Memory Hierarchy is an enhancement to organize the
memory such that it can minimize the access time. The Memory Hierarchy was developed
based on a program behavior known as locality of references. The figure below clearly
demonstrates the different levels of the memory hierarchy.
1. Registers
Registers are small, high-speed memory units located in the CPU. They are used to store the
most frequently used data and instructions. Registers have the fastest access time and the
smallest storage capacity, typically ranging from 16 to 64 bits.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently
used data and instructions that have been recently accessed from the main memory. Cache
memory is designed to minimize the time it takes to access data by providing the CPU with
quick access to frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of a
computer system. It has a larger storage capacity than cache memory, but it is slower. Main
memory is used to store data and instructions that are currently in use by the CPU.
Types of Main Memory
Static RAM: Static RAM stores binary information in flip-flops, and the
information remains valid as long as power is supplied. It has a faster access time and
is used in implementing cache memory.
Dynamic RAM: It stores binary information as a charge on a capacitor. It
requires refreshing circuitry to restore the charge on the capacitors every few
milliseconds. It contains more memory cells per unit area as compared to SRAM.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-
volatile memory unit that has a larger storage capacity than main memory. It is used to store
data and instructions that are not currently in use by the CPU. Secondary storage has the
slowest access time and is typically the least expensive type of memory in the memory
hierarchy.
5. Magnetic Disk
Magnetic disks are simply circular plates fabricated from metal or plastic and coated
with a magnetizable material. Magnetic disks rotate at high speed inside the computer and
are frequently used.
6. Magnetic Tape
Magnetic Tape is simply a magnetic recording device that is covered with a plastic film. It is
generally used for the backup of data. In the case of magnetic tape, the access time is
slower, since the tape must be wound to the required position before the data on the
strip can be accessed.
Advantages of the memory hierarchy:
It helps in avoiding destruction of data and managing the memory in a better way.
It helps in distributing the data appropriately across the computer system.
It saves the user's cost and time.
MAIN MEMORY
RAM
It is one part of the main memory, also known as Read/Write Memory.
Random Access Memory is present on the motherboard, and the computer's data is
temporarily stored in RAM. As the name suggests, RAM supports both read and write. RAM
is a volatile memory, which means it holds data only as long as the computer is in the ON state;
as soon as the computer turns OFF, the memory is erased.
In order to better understand RAM, imagine the blackboard of the classroom, the students
can both read and write and also erase the data written after the class is over, some new data
can be entered now.
HISTORY OF RAM
In 1947, the Williams tube marked the debut of the first RAM type. It was based on a
cathode ray tube, and the data was saved as electrically charged dots on its face. Magnetic-core
memory was the second type of RAM, also created in 1947. This RAM was made of small
magnetizable rings, each threaded with wires. A ring stored one bit of data, which could be
accessed at any time.
RAM as solid-state memory was invented by Robert Dennard in 1968 at the IBM Thomas
J. Watson Research Center. It is generally known as dynamic random access memory
(DRAM) and uses transistors to hold or store bits of data. A constant power supply is
necessary to maintain the stored data.
In October 1970, Intel launched its first DRAM, the Intel 1103. In 1993, Samsung launched
the KM48SL2000 synchronous DRAM (SDRAM). Similarly, in 1996, DDR SDRAM became
commercially available. In 1999, RDRAM became available for computers. In 2003, DDR2
SDRAM went on sale. In June 2007, DDR3 SDRAM went on sale. In
September 2014, DDR4 reached the market.
FEATURES OF RAM
RAM is volatile in nature, which means, the data is lost when the device is
switched off.
RAM is known as the Primary memory of the computer.
RAM is known to be expensive since the memory can be accessed directly.
RAM is the fastest memory, therefore, it is an internal memory for the computer.
The speed of a computer depends on RAM: if the computer has less RAM, programs
take more time to load and the computer slows down.
TYPES OF RAM
RAM is further divided into two types, SRAM – Static Random Access
Memory and DRAM- Dynamic Random Access Memory. Let’s learn about both of these
types in more detail.
SRAM is used for cache memory; it can hold data as long as power is available.
It is made with CMOS technology, contains 4 to 6 transistors per cell, and also uses
clocks. It does not require a periodic refresh cycle because the transistor
flip-flop holds the data. Although SRAM is faster, it
requires more power and is more expensive. Since SRAM requires more power,
more heat is dissipated as well. Another drawback of SRAM is that it cannot store as many
bits per chip; for the same amount of memory stored in DRAM,
SRAM would require more chips.
Function of SRAM
The function of SRAM is that it provides a direct interface with the Central Processing
Unit at higher speeds.
Characteristics of SRAM
1. SRAM is used as the Cache memory inside the computer.
2. SRAM is known to be the fastest among all memories.
3. SRAM is costlier.
4. SRAM has a lower density (number of memory cells per unit area).
5. The power consumption of SRAM is less, but when it is operated at higher
frequencies, its power consumption is comparable to that of DRAM.
DRAM is used for the main memory. It has a different construction from SRAM: it uses
one transistor and one capacitor per memory bit, and the capacitor must be
recharged (refreshed) every few milliseconds. Dynamic RAM was the
first commercially sold memory integrated circuit. DRAM is the second most compact technology in
production (first is Flash memory). Although DRAM is slower, it can store more bits per chip; for
the same amount of memory stored in SRAM, DRAM requires fewer chips. DRAM
requires less power and hence produces less heat.
FUNCTION OF DRAM
The function of DRAM is that it is used for programming code by a computer processor
in order to function. It is used in our PCs (Personal Computers).
CHARACTERISTICS OF DRAM
1. DRAM is used as the Main Memory inside the computer.
2. DRAM is known to be a fast memory but not as fast as SRAM.
3. DRAM is cheaper as compared to SRAM.
4. DRAM has a higher density (number of memory cells per unit area).
5. The power consumption of DRAM is higher.
TYPES OF DRAM
1. SDRAM: Synchronous DRAM, increases performance through its pins, which
sync up with the data connection between the main memory and the
microprocessor.
2. DDR SDRAM: (Double Data Rate) It has features of SDRAM also but with
double speed.
3. ECC DRAM: (Error Correcting Code) This RAM can find corrupted data easily
and sometimes can fix it.
4. RDRAM: It stands for Rambus DRAM. It was popular in the late 1990s
and early 2000s. It was developed by the company Rambus Inc. and
competed with SDRAM at that time. Its latency was higher at the beginning, but
it was more stable than SDRAM; consoles like the Nintendo 64 and Sony PlayStation
2 used it.
5. DDR2, DDR3, AND DDR4: These are successor versions of DDR
SDRAM with upgrades in performance
ROM
ROM stands for Read-Only Memory. It is a non-volatile memory that is used to store
important information which is used to operate the system. As its name refers to read-only
memory, we can only read the programs and data stored on it. It is also a primary memory unit
of the computer system. It contains electronic fuses that can be programmed for a piece
of specific information. The information is stored in the ROM in binary format. It is also known
as permanent memory.
Block Diagram of ROM
As shown in the diagram below, a ROM has k input lines and n output lines. The input address
from which we wish to retrieve the ROM content is applied on the k input lines. Since each
of the k input lines can have a value of 0 or 1, there are a total of 2^k addresses that can be
referred to by these input lines, and each of these addresses contains n bits of information
that is output from the ROM.
A ROM of this type is designated as a 2^k x n ROM.
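As a quick sketch of this size relation (the function name below is ours, purely illustrative):

```python
def rom_size_bits(k: int, n: int) -> int:
    """Total capacity in bits of a ROM with k address lines and n data lines."""
    # k address lines select one of 2**k words, each word is n bits wide.
    return (2 ** k) * n

# Example: a ROM with 10 address lines and 8 data lines has
# 2**10 = 1024 addresses, i.e. a 1024 x 8 ROM of 8192 bits (1 KB).
print(rom_size_bits(10, 8))  # 8192
```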
ADVANTAGES OF ROM
It is cheaper than RAM and it is non-volatile memory.
It is more reliable as compared to RAM.
Its circuit is simple as compared to RAM.
It doesn’t need refreshing time because it is static.
It is easy to test.
DISADVANTAGES OF ROM
It is a read-only memory, so it cannot be modified.
It is slower as compared to RAM.
PROM vs EPROM:
The data stored in PROM is permanently stored and cannot be changed or erased. The
EPROM can be erased, reprogrammed, and reused multiple times.
PROM is therefore less flexible than EPROM.
Memory is used to store programs and data. These programs and data need to be transferred
between the CPU and memory. The faster the transfer rate, the better the performance of the
computer system.
That is why choosing a suitable memory becomes necessary for improved performance. Several
parameters go into deciding the suitable memory for a computer system.
CAPACITY
The size of computer depends on its memory capacity.
Memory can be seen as a storage unit containing x locations, each of which stores
y bits.
The total capacity of the memory is then x*y bits; it is described as an x-word memory
with y bits per word.
BANDWIDTH
Bandwidth of the memory indicates the maximum amount of information that can be
transferred to or from the memory per unit time.
It is expressed as number of bytes or words per second.
SPEED
The speed of operation of the memory is a very important parameter.
The speed simply indicates the time between the start of an operation and the end of that operation.
Speed of memory is measured by two parameters:
1. Access time (ta): the time between the request for data and the moment it becomes available.
2. Cycle time (tc): the minimum time between two successive memory operations.
All three parameters (capacity, bandwidth, and speed) need to be considered when choosing
a memory while designing a computer architecture.
A computer system uses many storage devices.
All the storage device used by the computer can be categorized as:
Primary memory or Main memory
Secondary memory or Auxiliary memory
Internal memory
Cache memory
All these memories can be viewed as hierarchy of components as shown in memory hierarchy
diagram below:
From the above diagram, we can easily figure out various factors as explained below.
In terms of Speed
Internal Memory > Cache Memory > Main Memory > Secondary Memory
In terms of Cost
Internal Memory > Cache Memory > Main Memory > Secondary Memory
In terms of Capacity
Internal Memory < Cache Memory < Main Memory < Secondary Memory
Note:-
Internal memory has the maximum speed and is also the costliest.
Secondary memory has the highest storage capacity but slow speed and low cost.
CACHE MEMORY AND ITS MAPPING
Cache Memory is a special very high-speed memory. The cache is a smaller and faster
memory that stores copies of the data from frequently used main memory locations. There
are various different independent caches in a CPU, which store instructions and data. The
most important use of cache memory is that it is used to reduce the average time to access
data from the main memory.
CHARACTERISTICS OF CACHE MEMORY
Cache memory is an extremely fast memory type that acts as a buffer between
RAM and the CPU.
Cache Memory holds frequently requested data and instructions so that they are
immediately available to the CPU when needed.
Cache memory is costlier than main memory or disk memory but more
economical than CPU registers.
Cache Memory is used to speed up and synchronize with a high-speed CPU.
LEVELS OF MEMORY
Level 1 or Registers: Registers hold the data and instructions that the CPU is
immediately working on. The most commonly used registers are the
Accumulator, Program Counter, Address Register, etc.
Level 2 or Cache memory: It is the fastest memory that has faster access time
where data is temporarily stored for faster access.
Level 3 or Main Memory: It is the memory on which the computer works
currently. It is small in size and once power is off data no longer stays in this
memory.
Level 4 or Secondary Memory: It is external memory that is not as fast as the
main memory but data stays permanently in this memory.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for
a corresponding entry in the cache.
If the processor finds that the memory location is in the cache, a Cache Hit has
occurred and data is read from the cache.
If the processor does not find the memory location in the cache, a cache miss has
occurred. For a cache miss, the cache allocates a new entry and copies in data
from the main memory, then the request is fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit
ratio.
Hit Ratio (H) = hits / (hits + misses) = number of hits / total accesses
Miss Ratio = misses / (hits + misses) = number of misses / total accesses = 1 - Hit Ratio (H)
We can improve cache performance by using a larger cache block size and higher associativity,
and by reducing the miss rate, the miss penalty, and the time to hit in the cache.
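These ratios, together with a simple average-access-time model built on them, can be sketched as follows (the one-level timing model and all names are illustrative assumptions, not from the text):

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Hit Ratio (H) = hits / (hits + misses)."""
    return hits / (hits + misses)

def avg_access_time(h: float, t_cache: float, t_main: float) -> float:
    """Simple model: a hit costs t_cache; a miss costs t_cache + t_main."""
    return h * t_cache + (1 - h) * (t_cache + t_main)

h = hit_ratio(hits=90, misses=10)  # 0.9
# With a 1-cycle cache and 10-cycle main memory, the average
# access time works out to approximately 2.0 cycles.
print(avg_access_time(h, t_cache=1, t_main=10))
```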
CACHE MAPPING
There are three different types of mapping used for the purpose of cache memory which is as
follows:
Direct Mapping
Associative Mapping
Set-Associative Mapping
1. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into
only one possible cache line. In direct mapping, each memory block is assigned to a specific
line in the cache. If a line is already occupied by a memory block when a new block needs
to be loaded, the old block is discarded. The address space is split into two parts: an index field and
a tag field. The cache stores the tag field along with the data. Direct mapping's
performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
For purposes of cache access, each main memory address can be viewed as consisting of
three fields. The least significant w bits identify a unique word or byte within a block of main
memory. In most contemporary machines, the address is at the byte level. The remaining s
bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a
tag of s - r bits (the most significant portion) and a line field of r bits. This latter field identifies
one of the m = 2^r lines of the cache. The line field supplies the index bits in direct mapping.
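The three-field split described above can be sketched as follows (the field widths in the example are hypothetical choices for illustration):

```python
def split_direct_mapped(addr: int, w: int, r: int):
    """Split an (s + w)-bit address into (tag, line, word) for a direct-mapped cache.

    w = word/byte bits within a block; r = line (index) bits, so the
    cache has m = 2**r lines; the tag is the remaining s - r bits.
    """
    word = addr & ((1 << w) - 1)          # least significant w bits
    line = (addr >> w) & ((1 << r) - 1)   # next r bits: line = block mod 2**r
    tag = addr >> (w + r)                 # most significant s - r bits
    return tag, line, word

# Hypothetical sizes: 4 offset bits, 8 line bits.
# 0x12345 splits into tag 0x12 (18), line 0x34 (52), word 0x5 (5).
print(split_direct_mapped(0x12345, w=4, r=8))  # (18, 52, 5)
```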
2. Associative Mapping
In this type of mapping, associative memory is used to store both the content and the address of the
memory word. Any block can go into any line of the cache. The word-ID bits
are used to identify which word in the block is needed, and the tag consists of all the
remaining bits. This enables the placement of any word at any place in the cache memory. It
is considered the fastest and most flexible form of mapping. In associative mapping, there
are no index bits (the index bits are zero).
3. Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping where the drawbacks of
direct mapping are removed. Set associative addresses the problem of possible thrashing
in the direct mapping method. It does this by saying that instead of having exactly one
line that a block can map to in the cache, we will group a few lines together creating a set.
Then a block in memory can map to any one of the lines of a specific set. Set-associative
mapping allows two or more blocks of main memory that share the same index address to be
present in the cache at the same time. Set-associative cache mapping combines the
best of the direct and associative cache mapping techniques. In set-associative mapping, the
index bits are given by the set-offset bits. In this case, the cache consists of a number of
sets, each of which consists of a number of lines.
The relationships are:
i = j modulo v
m = v * k
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
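Assuming the standard relations i = j modulo v and m = v * k, the placement rule can be sketched as follows (the cache dimensions are hypothetical):

```python
def set_number(block: int, v: int) -> int:
    """Set-associative placement: block j maps to set i = j modulo v."""
    return block % v

m = 8        # total cache lines (hypothetical)
k = 2        # lines per set: a 2-way set-associative cache
v = m // k   # number of sets, since m = v * k, so v = 4

# Block 13 maps to set 13 mod 4 = 1 and may occupy either line of that set.
print(set_number(13, v))  # 1
```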
APPLICATION OF CACHE MEMORY
Here are some of the applications of Cache Memory.
1. Primary Cache: A primary cache is always located on the processor chip. This
cache is small and its access time is comparable to that of processor registers.
2. Secondary Cache: Secondary cache is placed between the primary cache and
the rest of the memory. It is referred to as the level 2 (L2) cache. Often, the Level
2 cache is also housed on the processor chip.
3. Spatial Locality of Reference: Spatial locality of reference says that there is a
good chance that an element in close proximity to a recently referenced point
will be accessed next, so data near the point of reference is worth keeping in
the cache.
4. Temporal Locality of Reference: Temporal locality of reference says that a
recently used word is likely to be used again soon, which is why the least
recently used (LRU) algorithm is employed. Moreover, when a page fault occurs
for a word, not just that word but the complete page is loaded into main memory,
because spatial locality suggests that the words near the referenced word will
be referenced next.
The hardware mapping system and the memory management software together form the
structure of virtual memory.
When program execution starts, one or more pages are transferred into main
memory and the page table is set to denote their location. The program executes from
the main memory until it references a page that is not in memory. This event
is defined as a page fault.
When a page fault occurs, the program currently in execution is suspended until the
required page is transferred into the main memory. Because the act of loading a page from
auxiliary memory to main memory is an I/O operation, the operating system assigns this
task to the I/O processor.
In this interval, control is transferred to the next program in main memory that is waiting to be
processed by the CPU. Once the memory block has been assigned and the transfer completed,
the suspended program can resume execution.
If the main memory is full, a new page cannot be brought in. Therefore, it is necessary to remove
a page from a memory block to make room for the new page. The decision of which page to remove
from memory is determined by the replacement algorithm.
Two common replacement algorithms are first-in, first-out (FIFO) and least
recently used (LRU).
The FIFO algorithm chooses to replace the page that has been in memory for the longest time.
Every time a page is loaded into memory, its identification number is pushed into a FIFO
queue.
Replacement begins whenever memory has no more empty blocks. When a new page must
be loaded, the page that was brought in least recently is removed. The page to be removed is easily
determined because its identification number is at the front of the FIFO queue.
The FIFO replacement policy has the benefit of being simple to implement. It has the drawback
that under certain circumstances pages are removed and loaded from memory too frequently.
The LRU policy is more complex to implement but rests on the better presumption
that the least recently used page is a better candidate for removal than the least recently
loaded page of FIFO. The LRU algorithm can be implemented by associating a counter with each
page that is in the main memory.
When a page is referenced, its associated counter is set to zero. At fixed intervals of time,
the counters of all pages currently in memory are incremented by 1. The page with the
highest counter value is the least recently used and is selected for replacement.
In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page needs to be replaced when a new page comes
in. Page replacement becomes necessary when a page fault occurs and there are no free page
frames in memory. However, another page fault would arise if the replaced page is referenced
again, so it is important to replace a page that is not likely to be referenced in the
immediate future. If no page frame is free, the virtual memory manager performs a page
replacement operation to replace one of the pages existing in memory with the page whose
reference caused the page fault. It is performed as follows: the virtual memory manager uses
a page replacement algorithm to select one of the pages currently in memory for replacement,
accesses the page table entry of the selected page to mark it as “not present” in memory, and
initiates a page-out operation for it if the modified bit of its page table entry indicates that it
is a dirty page.
PAGE FAULT:
A page fault happens when a running program accesses a memory page that is mapped into
the virtual address space but not loaded in physical memory. Since actual physical memory
is much smaller than virtual memory, page faults happen. In case of a page fault, Operating
System might have to replace one of the existing pages with the newly needed page. Different
page replacement algorithms suggest different ways to decide which page to replace. The
target for all algorithms is to reduce the number of page faults.
PAGE REPLACEMENT ALGORITHMS:
1. FIRST IN FIRST OUT (FIFO): This is the simplest page replacement algorithm.
In this algorithm, the operating system keeps track of all pages in memory in a
queue, with the oldest page at the front. When a page needs to be replaced, the
page at the front of the queue is selected for removal.
Example 1: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find
the number of page faults.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the
empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults.
When 5 comes, it is not available, so it replaces the oldest page, 1 —> 1 Page Fault.
When 6 comes, it replaces 3 —> 1 Page Fault. Finally 3 comes, replacing 0 —> 1 Page Fault.
Total = 6 Page Faults.
Belady’s anomaly shows that it is possible to have more page faults when
increasing the number of page frames while using the First In First Out (FIFO)
page replacement algorithm. For example, for the reference string
3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames, we get 9 total page faults, but if we
increase the frames to 4, we get 10 page faults.
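A small simulator (a sketch with names of our choosing, not a definitive implementation) reproduces both the example above and Belady's anomaly:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with the given number of frames."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page in memory:
            continue                         # page hit: nothing to do
        faults += 1
        if len(memory) == frames:
            memory.discard(queue.popleft())  # evict the oldest page
        memory.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6

# Belady's anomaly: more frames can mean more faults.
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3))  # 9
print(fifo_faults(belady, 4))  # 10
```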
2. OPTIMAL PAGE REPLACEMENT:
In this algorithm, pages are replaced which would not be used for the longest duration of
time in the future.
Example-2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.
Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4
Page faults
Optimal page replacement is perfect, but not possible in practice as the operating
system cannot know future requests. The use of Optimal Page replacement is to set
up a benchmark so that other replacement algorithms can be analysed against it.
3. LEAST RECENTLY USED (LRU):
In this algorithm, the page that has not been used for the longest period of time is replaced.
Example-3: Consider the same page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4
page frames. Find the number of page faults.
Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4
Page faults
0 is already there so —> 0 page fault
when 3 comes it will take the place of 7 because it is least recently used —> 1 Page
fault
0 is already in memory so —> 0 Page fault
when 4 comes it will take the place of 1 because it is least recently used —> 1 Page
fault
The remaining references 2, 3, 0, 3, 2, 3 are all already in memory —> 0 Page faults
Total = 6 Page faults
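The LRU behaviour can be checked with a short simulator (an illustrative sketch; the names are ours):

```python
def lru_faults(refs, frames):
    """Count page faults under LRU; the list order tracks recency (front = LRU)."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # refresh: will re-append at most-recent end
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))  # 6
```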
WRITE POLICY
Normally we can perform both read operation and write operation on the cache memory.
When we perform a read operation on cache memory, it does not change the content of
the memory. But any write operation performed on the cache changes the
content of the memory. Thus it is important to perform any write on the cache very carefully.
To make sure write operation is performed carefully, we can adopt two cache write methods:
1. Write-through policy
2. Write-back policy
WRITE-THROUGH POLICY
The write-through policy is the most commonly used method of writing into the cache memory.
In the write-through method, when the cache memory is updated, the main memory
is updated simultaneously. Thus at any given time, the main memory contains the same data that
is available in the cache memory.
WRITE-BACK POLICY
Write-back policy can also be used for cache writing.
While following the write-back method, only the cache location is updated during a write
operation. The updated location is then marked by a flag known as the
modified or dirty bit.
When the word is replaced from the cache, it is written into main memory if its flag bit is set.
The logic behind this technique is based on the fact that during a cache write operation, the
word present in the cache may be accessed several times (temporal locality of reference). This
method helps reduce the number of references to main memory.
The only limitation of the write-back technique is that inconsistency may occur, since
two different copies of the same data exist: one in the cache and the other in main
memory.
VIRTUAL MEMORY
Virtual memory in computer organization and architecture is a technique, not an actual memory
physically present in the computer system. This is the reason it is known as virtual memory.
Virtual memory is simply a technique used to give the programmer the illusion of a large
main memory, when in fact it is not physically present.
The size of virtual memory is equivalent to the size of secondary memory. Each virtual address
or logical address referenced by the CPU is mapped to a physical address in main memory.
A hardware device called the Memory Management Unit (MMU) performs this mapping at
run time. To perform this activity, the MMU takes the help of a memory map table, which is
maintained by the operating system.
Virtual memory is a memory management technique used by operating systems (OS). It allows
a computer to temporarily increase the capacity of its main memory — RAM — by using
secondary memory such as a hard drive or solid-state drive (SSD). Virtual memory utilizes
hardware and software to manage how information is stored and retrieved from the hard drive.
Virtual memory helps people do more on their computers without paying for more RAM
capacity. While memory is much cheaper today than it used to be, RAM is still one of the most
expensive types of memory — only cache memory is more expensive.
The set of all logical address generated by CPU or program is known as logical address space.
It is also called as virtual address space.
The set of all physical addresses corresponding to the above logical addresses is known
as physical address space.
The complete procedure is as follows: when a program is to be executed, the CPU
generates addresses called logical addresses, and the executing
program occupies corresponding addresses in physical memory, called physical addresses.
From the above diagram, it is visible that the mapping method uses a special register known as
the relocation register or base register. The content of the base register is added to every logical
address generated by the user program at the beginning of execution.
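The base-register mapping can be expressed as a one-liner (the example base value below is hypothetical):

```python
def physical_address(logical: int, base: int) -> int:
    """Relocation: physical address = base register content + logical address."""
    return base + logical

# Hypothetical example: a program loaded starting at base address 14000
# references logical address 346, which maps to physical address 14346.
print(physical_address(346, base=14000))  # 14346
```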
Whenever your computer stores something in RAM, that information is assigned a memory
address — a reference to where the information is stored on the RAM chip. When your
computer starts running low on RAM, the OS will start transferring data from the RAM to your
hard drive. The OS sets up a paging file: A dedicated space on your hard drive for virtual
memory. Data stored on a hard drive is always assigned a physical address — a reference to
where the information is stored in the drive. The OS also maps physical addresses to virtual
addresses as part of the virtual memory process. Virtual addresses look like RAM addresses.
That way, when a program is running on your computer, it can seamlessly use RAM or virtual
memory. When a program uses data stored in virtual memory, the OS knows how to find the
data using the physical address.
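As a rough sketch of this address mapping, assume a hypothetical page table and a 4 KB page size (all names and values here are illustrative, not from the text):

```python
PAGE_SIZE = 4096  # assumed page size (4 KB)

# hypothetical page table: virtual page number -> physical frame number;
# None means the page currently resides in the paging file on disk
page_table = {0: 5, 1: None, 2: 9}

def virtual_to_physical(virtual_address):
    """Split a virtual address into (page, offset) and look up the frame."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]
    if frame is None:
        # page fault: the OS must first bring the page back from disk
        raise LookupError("page fault")
    return frame * PAGE_SIZE + offset
```

A reference to page 1 here would raise a page fault, after which the OS would load the page from the paging file and update the table.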
When RAM and the paging file are both in use, the OS must decide which data to move out of
RAM. Several page replacement algorithms exist for this purpose.
FIFO
First-in, first-out (FIFO) is the most straightforward virtual memory management algorithm.
The idea is to take the data that’s been in RAM the longest and move it to virtual memory.
When that space is used up, the data that’s been in RAM the second-longest is moved to
virtual memory, and so on.
It’s easy for an OS to use FIFO since it doesn’t rely on complex labeling and prediction
algorithms. On the other hand, it’s rarely the most practical algorithm to use: just because
your computer loaded some data into RAM a while ago doesn’t mean that data is no longer in use.
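The FIFO policy above can be sketched as a small simulation (the frame count and reference string below are illustrative):

```python
from collections import deque

def fifo_replace(references, frame_count):
    """Count page faults under FIFO replacement."""
    frames = set()
    order = deque()  # arrival order of the pages currently in RAM
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.remove(order.popleft())  # evict the oldest resident page
            frames.add(page)
            order.append(page)
    return faults

# with 3 frames, this reference string causes 9 page faults
print(fifo_replace([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```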
LRU
A more sophisticated virtual memory algorithm is called the least recently used (LRU)
algorithm. Here, your computer keeps track of when data is used in RAM. When it’s time to
make more room in RAM, the computer replaces the data that hasn’t been used for the longest
time.
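LRU can be sketched along the same lines, keeping the resident pages ordered from least to most recently used (reference strings are illustrative):

```python
def lru_replace(references, frame_count):
    """Count page faults under LRU replacement."""
    frames = []  # least recently used page first, most recent last
    faults = 0
    for page in references:
        if page in frames:
            frames.remove(page)      # a hit refreshes the page's recency
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)
    return faults
```

On the reference string [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2] with 3 frames, this counts 9 page faults.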
OPT
The most advanced virtual memory algorithm is known as the optimal algorithm, or OPT. In
addition to considering past data usage like LRU, OPT also predicts which data will be needed
soon. That way, the most relevant data is always kept in RAM.
While the OPT algorithm offers the best virtual memory performance, it’s also the hardest to
implement.
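OPT can only be simulated in hindsight, because it needs the complete future reference string; that is exactly why it is the hardest to realize in a real OS. A sketch:

```python
def opt_replace(references, frame_count):
    """Count page faults under the optimal (OPT/Belady) policy."""
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)
            continue
        future = references[i + 1:]
        # evict the resident page whose next use is farthest away,
        # preferring pages that are never referenced again
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults
```

On the reference string [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] with 3 frames, OPT needs only 7 faults, the minimum possible.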
Virtual memory is an amazing tool that helps people get more out of limited memory resources
on their computers, so why not use it all the time — or even dedicate an entire hard drive for
virtual memory? There are two big reasons that virtual memory shouldn’t be the default
substitute for RAM.
First, it takes more time to read and write data on a hard drive than it does on RAM. The more
your computer relies on virtual memory, the slower your programs will run and the less you’ll
be able to effectively run multiple programs at once.
Second, swapping data between virtual memory and RAM takes time. If the OS spends too
much time swapping data between RAM and virtual memory, it can lead to thrashing — a
severe drop in performance because too many computer resources are being dedicated to
managing virtual memory and updating the paging file.
Without virtual memory, your computer would stop loading more applications once the RAM
was full. It would be up to you to close your browser tabs, shut down applications, and manage
RAM usage on your own. You might not even be able to run some applications at all without
upgrading your RAM.
MEMORY MANAGEMENT REQUIREMENTS
Memory management keeps track of the status of each memory location, whether it is
allocated or free. It allocates the memory dynamically to the programs at their request and
frees it for reuse when it is no longer needed. Memory management is meant to satisfy some
requirements that we should keep in mind.
These requirements of memory management are:
1. Relocation – The available memory is generally shared among a number of
processes in a multiprogramming system, so it is not possible to know in advance
which other programs will be resident in main memory at the time of execution of
this program. Swapping the active processes in and out of the main memory
enables the operating system to have a larger pool of ready-to-execute processes.
When a program gets swapped out to disk, it is not guaranteed that, when it is
swapped back into main memory, it will occupy its previous location, since that
location may now be occupied by another process. We may need to relocate the
process to a different area of memory. Thus there is a possibility that a program
may be moved in main memory due to swapping.
The figure depicts a process image. The process image is occupying a continuous
region of main memory. The operating system will need to know many things
including the location of process control information, the execution stack, and
the code entry. Within a program, there are memory references in various
instructions and these are called logical addresses.
After loading of the program into main memory, the processor and the operating
system must be able to translate logical addresses into physical addresses.
Branch instructions contain the address of the next instruction to be executed.
Data reference instructions contain the address of the byte or word of data
referenced.
2. Protection – There is always a danger when we have multiple programs running at
the same time, as one program may write to the address space of another program. So
every process must be protected against unwanted interference, whether accidental or
intentional, when another process tries to write into its address space. A trade-off
occurs between the relocation and protection requirements, as satisfying the
relocation requirement increases the difficulty of satisfying the protection
requirement. Since the location of a program in main memory cannot be predicted, it
is impossible to check absolute addresses at compile time to assure protection.
Most programming languages allow the dynamic calculation of addresses at run time.
The memory protection requirement must be satisfied by the processor rather than the
operating system, because the operating system can hardly control a process while
that process occupies the processor. Thus the processor must be able to check the
validity of memory references.
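Combining the relocation and protection requirements, hardware typically pairs the base register with a limit register and traps any out-of-bounds reference. A minimal sketch, with assumed base and limit values:

```python
BASE = 14000   # assumed start of the process's partition in main memory
LIMIT = 3000   # assumed size of the partition

def checked_translate(logical_address):
    """Relocate a logical address, trapping references outside the process."""
    if not (0 <= logical_address < LIMIT):
        # in hardware this raises a protection trap handled by the OS
        raise MemoryError("protection violation")
    return BASE + logical_address
```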
3. Logical Organization – Main memory is organized linearly, while programs are
usually written as a collection of modules. To support this:
Modules are written and compiled independently, and all references from
one module to another are resolved by the system at run time.
Different modules are provided with different degrees of protection.
There are mechanisms by which modules can be shared among processes.
Sharing can be provided at the module level, letting the user specify the
sharing that is desired.
There are two main types of associative memory: implicit and explicit. Implicit associative
memory is an unconscious process relying on priming, whereas explicit associative memory
involves conscious recollection.
Physiological processes that are affected by implicit memory include the following:
performance, arousal level, reaction time, habituation, and thalamic (in the brain)
processing speed.
One of the most widely used tests for implicit associative memory is priming, which was
developed by Kutas & Hillyard in 1980. Priming is used to test whether a word or image
influences how the subject responds to another stimulus, indicating that they have
previously encountered the word or image.
An example of priming is when a person is shown a picture of a car, and then asked to
identify a second picture that is related in some way (e.g., another car). If they are able to
identify the correct match faster than if they had never seen the first picture, then it is
considered evidence that the first picture primed the person to recognize the second.
To improve associative memory, you can practice retrieval of associations, which helps
strengthen synaptic connections in the brain and enhances their ability to be activated more
quickly.
1. Create a network of associations. This means associating yourself with people who are
able to recall many things (or who say they are good at recalling things). By watching them
and modelling their actions, you can improve your own ability to recall items by overlearning.
2. Associate one person or thing to another in some way, such as using a rhyme, sentence
or phrase. The association can be general (e.g., "grass is green") or specific (e.g., "the doctor
is in the house").
3. Create a story with many associations to make it more memorable and to help you
recall details. If you have trouble recalling information, then practice recalling it again and
again, and note where you are having problems.
5. Use the method of loci to remember lists or other materials by associating them with
locations that you are familiar with (e.g., rooms in your home). This is related to space-
coding techniques used by pilots to remember flight paths and procedures, and it works best if
you create a visual image of each location.
7. Create associations that show how things are alike or different from one another. For
example, if you want to remember the steps in a process, then associate them somehow so that
they make sense to you (e.g., "take out" is similar to "out of").
8. Use memory-triggering devices (e.g., cues), which are items or actions that prompt the
recall of information that is easy to forget. You can use a memory-triggering device by tying
it to anything you want to remember, such as setting an alarm or writing down the information.
9. Associate people with words (or situations) in some way, and then try to recall the
person's name by recalling the word (e.g., the word "green" might trigger the name of
your friend, "Jenny").
10. Use a method that suits you best. Everyone is different, and some people find it easier to
create music or phrases to help them remember things.
SECONDARY STORAGE DEVICES
In a computer, memory refers to the physical devices that are used to store programs or data
on a temporary or permanent basis. Memory is of two types: (i) primary memory and
(ii) secondary memory. Primary memory is made up of semiconductors and is divided into
two types, Read-Only Memory (ROM) and Random Access Memory (RAM). Secondary
memory is a physical device for the permanent storage of programs and data (hard disk,
compact disc, flash drive, etc.).
1. Secondary memory is a type of computer memory that is used to store data and
programs that can be accessed or retrieved even after the computer is turned off.
Unlike primary memory, which is volatile and temporary, secondary memory is
non-volatile and can store data and programs for extended periods of time. Some
examples of secondary memory include hard disk drives (HDDs), solid-state
drives (SSDs), optical discs (such as CDs and DVDs), and flash memory (such as
USB drives and memory cards). These storage devices provide a much larger
capacity than primary memory and are typically used to store large amounts of
data, such as operating systems, application programs, media files, and other types
of digital content.
2. Secondary memory can be broadly classified into magnetic storage, optical storage,
and solid-state storage. Magnetic storage devices, such as hard disk drives and
magnetic tapes, use magnetic fields to store and retrieve data. Optical discs, such
as CDs and DVDs, are read and written with a laser. Solid-state storage devices,
such as solid-state drives and flash memory, use semiconductor-based memory
chips to store data.
3. One of the main advantages of secondary memory is its non-volatile nature, which
means that data and programs stored on secondary memory can be accessed even
after the computer is turned off. Additionally, secondary memory devices provide
a large storage capacity, making it possible to store large amounts of data and
programs.
However, there are also some disadvantages to secondary memory, such as slower access
times and lower read/write speeds compared to primary memory. Additionally, secondary
memory devices are often more prone to mechanical failures and data corruption, which can
result in data loss.
Overall, secondary memory plays an important role in modern computing systems and is
essential for storing large amounts of data and programs.
We have read so far that primary memory is volatile and has limited capacity. So, it is
important to have another form of memory that has a larger storage capacity and from which
data and programs are not lost when the computer is turned off. Such a type of memory is
called secondary memory. In secondary memory, programs and data are stored. It is also
called auxiliary memory. It is different from primary memory as it is not directly accessible
through the CPU and is non-volatile. Secondary or external storage devices have a much
larger storage capacity and the cost of secondary memory is less as compared to primary
memory.
Secondary memory is used for different purposes but the main purposes of using secondary
memory are:
Permanent storage: Primary memory stores data only while the power supply is
on and loses it when the power is off. So we need secondary memory to store
data permanently even when the power supply is off.
Large storage: Secondary memory provides a large storage space so that we can
permanently store large data like videos, images, audio, files, etc.
Portable: Some secondary storage devices are removable, so we can easily store
or transfer data from one computer or device to another.
3. Digital Versatile Disc: A Digital Versatile Disc, also known as a DVD, looks just like a CD,
but its storage capacity is greater; it stores up to 4.7 GB of data. A DVD-ROM
drive is needed to use a DVD on a computer. Video files, like movies or video recordings,
are generally stored on DVD, and you can play a DVD using a DVD player. DVD is of three
types:
DVD-ROM (Digital Versatile Disc Read-Only): In a DVD-ROM, the
manufacturer writes the data and the user can only read that data, not write
new data to it. For example, a movie DVD is already written by the
manufacturer; we can only watch the movie but cannot write new data onto it.
DVD-R (Digital Versatile Disc Recordable): In DVD-R you can write data,
but only one time. Once the data has been written, it cannot be erased; it can
only be read.
DVD-RW(Digital Versatile Disc Rewritable and Erasable): It is a special type
of DVD in which data can be erased and rewritten as many times as we want. It is
also called an erasable DVD.
4. Blu-ray Disc: A Blu-ray disc looks just like a CD or a DVD, but it can store up
to 25 GB of data. If you want to use a Blu-ray disc, you need a Blu-ray reader.
The name Blu-ray is derived from the technology used to read the disc: ‘Blu’
from the blue-violet laser and ‘ray’ from the optical ray.
5. Hard Disk: A hard disk is part of a unit called a hard disk drive. It is used to store
a large amount of data. Hard disks or hard disk drives come in different storage
capacities (like 256 GB, 500 GB, 1 TB, 2 TB, etc.). A hard disk is made up of a
collection of discs known as platters, placed one below the other and coated with
magnetic material. Each platter consists of a number of invisible concentric circles,
all having the same centre, called tracks. Hard disk is of two types: (i) internal hard
disk and (ii) external hard disk.
6. Flash Drive: A flash drive or pen drive comes in various storage capacities, such
as 1 GB, 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, up to 1 TB. A flash drive is
used to transfer and store data. To use a flash drive, we need to plug it into a USB
port on a computer. As a flash drive is easy to use and compact in size, it is very
popular nowadays.