
UNIT 4

Memory Organization
A memory unit is a collection of storage units or storage devices. The memory unit
stores binary information. Generally, memory/storage is classified into 2 categories:
• Volatile Memory: This loses its data, when power is switched off.
• Non-Volatile Memory: This is a permanent storage and does not lose any data when
power is switched off.
Memory Hierarchy: The total memory capacity of a computer can be visualized as a hierarchy of
components. The memory hierarchy system consists of all storage devices contained in a
computer system, from the slow auxiliary memory to the faster main memory and the still smaller,
faster cache memory.

Auxiliary memory access time is generally 1000 times that of the main memory, hence it is at
the bottom of the hierarchy. The main memory occupies the central position because it is
equipped to communicate directly with the CPU and with auxiliary memory devices through
Input/output processor (I/O). When programs not residing in main memory are needed by
the CPU, they are brought in from auxiliary memory. Programs not currently needed in main
memory are transferred to auxiliary memory to provide space in main memory for other
programs that are currently in use.
The cache memory is used to store the program data that is currently being executed by the
CPU. The approximate access-time ratio between cache memory and main memory is about 1 to
7~10.
Memory Access Methods: Each memory type is a collection of numerous memory locations.
To access data from any memory, first it must be located and then the data is read from the
memory location. Following are the methods to access information from memory locations:
(a) Random Access: Main memories are random access memories, in which each memory
location has a unique address. Using this unique address any memory location can be
reached in the same amount of time in any order.
(b) Sequential Access: This method allows memory access in a sequence or in order.
(c) Direct Access: In this mode, information is stored in tracks, with each track having a
separate read/write head.
Some Important Data:
(a) 1 Nibble = 4 bits
(b) 1 Byte = 8 bits
(c) 1 Word = 16 bits = 2 Bytes
(d) 1 Double Word = 2 Words = 32 bits = 4 Bytes
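
As a quick illustration of these unit relationships, the short Python sketch below (purely illustrative, and assuming the 16-bit word convention listed above) converts a bit count into the other units:

# Illustrative sketch: convert a bit count into the units listed above,
# assuming the 16-bit word convention used in these notes.
BITS_PER_NIBBLE = 4
BITS_PER_BYTE = 8
BITS_PER_WORD = 16         # 1 word = 2 bytes in this convention
BITS_PER_DOUBLE_WORD = 32  # 1 double word = 2 words

def describe(bits):
    """Express a bit count as nibbles, bytes, words, and double words."""
    return {
        "nibbles": bits // BITS_PER_NIBBLE,
        "bytes": bits // BITS_PER_BYTE,
        "words": bits // BITS_PER_WORD,
        "double_words": bits // BITS_PER_DOUBLE_WORD,
    }

print(describe(64))  # {'nibbles': 16, 'bytes': 8, 'words': 4, 'double_words': 2}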

Semiconductor Memory: Semiconductor memory is used in all forms of computer
applications: there are many types, technologies and terminologies - DRAM, SRAM, Flash,
DDR3, DDR4, DDR5, and more.
Semiconductor Memory Types: Semiconductor memory is used in any electronics assembly
that uses computer processing technology. Semiconductor memory is the essential electronic
component needed for any computer-based PCB assembly.
In addition to this, memory cards have become commonplace items for temporarily storing
data - everything from the portable flash memory cards used for transferring files, to
semiconductor memory cards used in cameras, mobile phones and the like. The use of
semiconductor memory has grown, and the size of these memory cards has increased as the
need for larger and larger amounts of storage is needed. To meet the growing needs for
semiconductor memory, there are many types and technologies that are used. As the demand
grows new memory technologies are being introduced and the existing types and technologies
are being further developed.
A variety of different memory technologies are available - each one suited to different
applications. Names such as ROM, RAM, EPROM, EEPROM, Flash memory, DRAM, SRAM,
SDRAM, as well as F-RAM and MRAM are available, and new types are being developed to
enable improved performance. Terms like DDR3, DDR4, DDR5 and many more are seen and
these refer to different types of SDRAM semiconductor memory.
In addition to this the semiconductor devices are available in many forms - ICs for printed
board assembly, USB memory cards, Compact Flash cards, SD memory cards and even solid
state hard drives. Semiconductor memory is even incorporated into many microprocessor
chips as on-board memory.
Main Memory
Semiconductor RAM (Random Access Memory): As the name suggests, RAM or random
access memory is a form of semiconductor memory technology that is used for reading and
writing data in any order - in other words, as it is required by the processor. It is used for such
applications as computer or processor memory, where variables and other data are stored and are
required on a random basis. Data is stored or read many times to/from this type of memory.
The primary components of a static RAM are flip-flops that store the binary information. The
nature of the stored information is volatile, i.e. it remains valid only as long as power is applied to
the system. Static RAM is easy to use and takes less time to perform read and write
operations compared to dynamic RAM. Dynamic RAM stores the binary information in
the form of electric charges applied to capacitors. Dynamic RAM consumes less
power and provides a large storage capacity in a single memory chip.
Random access memory is used in huge quantities in computer applications, as present-day
computing and processing technology requires large amounts of memory to
handle the memory-hungry applications used today. Many types of RAM, including SDRAM
with its DDR3, DDR4, and DDR5 variants, are used in huge quantities. RAM chips are
available in a variety of sizes and are used as per the system requirement. The following block
diagram demonstrates the chip interconnection in a 128 * 8 RAM chip.
[Block diagram: 128 x 8 memory (128 Bytes)]
A 128 * 8 RAM chip has a memory capacity of 128 words of eight bits (one byte) per word. This
requires a 7-bit address and an 8-bit bidirectional data bus. The 8-bit bidirectional data bus
allows the transfer of data either from memory to CPU during a read operation or from CPU to
memory during a write operation. The read and write inputs specify the memory operation,
and the two chip select (CS) control inputs are for enabling the chip only when the
microprocessor selects it.
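
The behaviour described above can be sketched in software. The small Python model below is not part of the original material: the class name and interface are invented for illustration, it simply decodes a 7-bit address, honours the two chip-select inputs, and reads or writes one byte. The polarity of the two chip-select inputs is not specified above, so the sketch just requires both to be asserted.

# Minimal, hypothetical model of a 128 x 8 RAM chip (for illustration only).
# The chip responds only when both chip-select inputs enable it; otherwise the
# data bus is treated as high-impedance (returned here as None).
class RAM128x8:
    def __init__(self):
        self.cells = [0] * 128               # 128 words of 8 bits each

    def access(self, cs1, cs2, rd, wr, address, data_in=0):
        if not (cs1 and cs2):                # chip not selected
            return None
        assert 0 <= address < 128            # 7-bit address selects one of 128 words
        if wr:
            self.cells[address] = data_in & 0xFF   # write one byte
        if rd:
            return self.cells[address]             # read one byte
        return None

chip = RAM128x8()
chip.access(cs1=1, cs2=1, rd=0, wr=1, address=0x2A, data_in=0x5F)
print(chip.access(cs1=1, cs2=1, rd=1, wr=0, address=0x2A))   # 95 (0x5F)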

Question: Calculate number of bits and number of bytes in a semiconductor RAM, which is
defined as 64x16. Also give information about word length and number of memory
locations.
Answer: Total number of bits in RAM = 64 x 16 = 2^6 x 2^4 = 2^10 = 1024 bits
Total number of bytes in RAM = 64 x 2 = 128 bytes
Number of bits in one register = 16 bits = 2 bytes
Hence, Word Length = 16 bit
Number of memory locations = 64 (Number of registers)
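
The same arithmetic can be checked in a few lines of Python; the address-line count at the end is an extra observation, not part of the original answer.

# Reproducing the 64 x 16 calculation above.
words, word_length = 64, 16                  # 64 memory locations, 16 bits each

total_bits = words * word_length             # 64 x 16 = 2^6 x 2^4 = 2^10 = 1024 bits
total_bytes = total_bits // 8                # 1024 / 8 = 128 bytes
address_bits = (words - 1).bit_length()      # 6 address lines are enough for 64 locations

print(total_bits, total_bytes, address_bits) # 1024 128 6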

SRAM (Static RAM): Static Random Access Memory (Static RAM or SRAM) is a type of RAM
that holds data in a static form, that is, as long as the memory has power. Unlike dynamic
RAM, it does not need to be refreshed.
SRAM stores a bit of data on four transistors using two cross-coupled inverters. The two stable
states represent 0 and 1. During read and write operations, another two access transistors
are used to manage access to the memory cell. Storing one memory bit therefore requires six
metal-oxide-semiconductor field-effect transistors (MOSFETs). MOSFET is one of the two types
of SRAM chips; the other is the bipolar junction transistor. The bipolar junction transistor is
very fast but consumes a lot of energy. MOSFET is the more popular SRAM type.

Static RAM
DRAM (Dynamic RAM): Dynamic random access memory (DRAM) is a type of random-access
memory used in computing devices (primarily PCs). DRAM stores each bit of data in a separate
passive electronic component (a capacitor) inside an integrated circuit. Each such
component holds one bit with two states of value, called 0 and 1. This capacitor needs to be
refreshed often, otherwise the information fades. DRAM has one capacitor and one transistor per
bit, as opposed to static random access memory (SRAM), which requires 6 transistors. The
capacitors and transistors that are used are exceptionally small; millions of
capacitors and transistors fit on one single memory chip.
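
Why the refresh is needed can be shown with a toy sketch (not a circuit model; the leak rate and threshold below are arbitrary illustration values): the stored charge leaks away, and the bit survives only if it is read and rewritten in time.

# Toy sketch of a single DRAM cell: the stored charge leaks over time, so the
# bit must be refreshed (read and rewritten) before it decays below threshold.
class DRAMCell:
    THRESHOLD = 0.5                          # arbitrary sensing threshold

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self, steps):
        self.charge *= 0.9 ** steps          # charge decays each time step

    def read(self):
        bit = 1 if self.charge > self.THRESHOLD else 0
        self.write(bit)                      # reading rewrites (refreshes) the cell
        return bit

cell = DRAMCell()
cell.write(1)
cell.leak(3);  print(cell.read())            # 1 -- refreshed in time
cell.leak(20); print(cell.read())            # 0 -- refreshed too late, the bit is lost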

Dynamic RAM
Types of DRAM
• Rambus DRAM (RDRAM) takes its name after the company that made it, Rambus. It was
popular in the early 2000s and was mainly used for video game devices and graphics
cards, with transfer speeds up to 1 GHz.
• Synchronous DRAM (SDRAM) “synchronizes” the memory speed with CPU clock speed
so that the memory controller knows the exact clock cycle when the requested data will
be ready. This allows the CPU to perform more instructions at a given time. Typical
SDRAM transfers data at speeds up to 133 MHz.
• Double Data Rate SDRAM (DDR SDRAM) is a type of synchronous memory that nearly
doubles the bandwidth of a single data rate (SDR) SDRAM running at the same clock
frequency by employing a method called "double pumping," which allows transfer of
data on both the rising and falling edges of the clock signal without any increase in clock
frequency.
DDR1 SDRAM has been succeeded by DDR2, DDR3, and most recently, DDR4 SDRAM.
Although operating on the same principles, the modules are not backward-compatible. Each
generation delivers higher transfer rates and faster performance. The latest DDR4 modules,
for example, feature fast transfer rates at 2133, 2400, 2666, and even 3200 MT/s.
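
The effect of double pumping can be made concrete with a rough bandwidth calculation. The sketch below assumes a 64-bit-wide module and peak theoretical rates only; neither assumption comes from the notes above.

# Rough peak-bandwidth arithmetic for SDR vs DDR (64-bit module assumed).
def peak_bandwidth_mb_s(clock_mhz, transfers_per_clock, bus_bits=64):
    """Peak bandwidth in MB/s: clock x transfers per clock x bytes per transfer."""
    return clock_mhz * transfers_per_clock * (bus_bits // 8)

print(peak_bandwidth_mb_s(133, 1))   # SDR SDRAM at 133 MHz -> 1064 MB/s
print(peak_bandwidth_mb_s(133, 2))   # DDR at the same clock -> 2128 MB/s (double pumped)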

Semiconductor ROM (Semiconductor Read Only Memory): The primary component of the
main memory is RAM integrated circuit chips, but a portion of memory may be constructed
with ROM chips. A ROM memory is used for keeping programs and data that are permanently
resident in the computer.

Apart from the permanent storage of data, the ROM portion of main memory is needed for
storing an initial program called a bootstrap loader. The primary function of the bootstrap
loader program is to start the computer software operating when power is turned on.
ROM chips are also available in a variety of sizes and are also used as per the system
requirement. The following block diagram demonstrates the chip interconnection in a 512 * 8
ROM chip.
A ROM chip has a similar organization as a RAM chip. However, a ROM can only perform read
operation; the data bus can only operate in an output mode. The 9-bit address lines in the
ROM chip specify any one of the 512 bytes stored in it. The value for chip select 1 and chip
select 2 must be 1 and 0 for the unit to operate. Otherwise, the data bus is said to be in a high-
impedance state.
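
The chip-select behaviour described above can be sketched in the same hypothetical style as the RAM model earlier (the class name and interface are invented for illustration): the ROM drives the data bus only when CS1 = 1 and CS2 = 0, and otherwise leaves it in a high-impedance state.

# Minimal, hypothetical model of the 512 x 8 ROM described above.
class ROM512x8:
    def __init__(self, contents):
        assert len(contents) == 512          # 512 bytes, fixed at programming time
        self.contents = list(contents)

    def read(self, cs1, cs2, address):
        if not (cs1 == 1 and cs2 == 0):      # unit operates only for CS1 = 1, CS2 = 0
            return None                      # otherwise: high-impedance data bus
        assert 0 <= address < 512            # 9-bit address selects one of 512 bytes
        return self.contents[address]

rom = ROM512x8([i & 0xFF for i in range(512)])
print(rom.read(cs1=1, cs2=0, address=300))   # 44
print(rom.read(cs1=1, cs2=1, address=300))   # None -- data bus in high-impedance state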
Read Only Memory (ROM): As the name implies, a read-only memory (ROM) is a memory
unit that performs the read operation only; it does not have a write capability. This implies
that the binary information stored in a ROM is made permanent during the hardware
production of the unit and cannot be altered by writing different words into it.
A ROM is restricted to reading words that are permanently stored within the unit. The binary
information to be stored, specified by the designer, is embedded in the unit to form the
required interconnection pattern. ROMs come with special internal electronic fuses that can
be programmed for a specific configuration. Once the pattern is established, it stays within the
unit even when power is turned off and on again.
Different Types of ROM: The required paths in a ROM may be programmed in three different
ways.
1. The first, mask programming, is done by the semiconductor company during the last
fabrication process of the unit. This procedure is costly because the vendor charges the
customer a special fee for custom masking the particular ROM. For this reason, mask
programming is economical only if a large quantity of the same ROM configuration is to
be ordered.
2. For small quantities it is more economical to use a second type of ROM called
a Programmable Read Only Memory (PROM). The hardware procedure for
programming ROMs or PROMs is irreversible, and once programmed, the fixed pattern
is permanent and cannot be altered. Once a bit pattern has been established, the unit
must be discarded if the bit pattern is to be changed.
3. A third type of ROM available is called Erasable PROM or EPROM. The EPROM can be
restored to its initial state even though its fuses have been blown previously.
Certain PROMs can be erased with electrical signals instead of ultraviolet light. These
PROMs are called Electrically Erasable PROM or EEPROM. Flash memory is a form of
EEPROM in which a block of bytes can be erased in a very short duration.
Example applications of EEPROM devices are:
• Storing current time and date in a machine.
• Storing port status.
Example applications of Flash memory devices are:
• Storing messages in a mobile phone.
• Storing photographs in a digital camera.

2D Memory organization: Basically, in 2D organization the memory is divided in the form of rows
and columns. Each row contains a word. In this memory organization there is a decoder. A
decoder is a combinational circuit which has n input lines and 2^n output lines. One of the
output lines selects the row whose address is contained in the MAR, and the word
represented by the selected row is either read or written through the data lines.
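
A small sketch of the decoder idea follows (sizes and values are illustrative only): n address bits from the MAR drive a decoder with 2^n output lines, and the single active line selects one whole row (word).

# Sketch of 2D addressing: one decoder with n inputs and 2^n outputs,
# each output line selecting one whole row (word) of the memory array.
n = 3
memory = [[0] * 8 for _ in range(2 ** n)]    # 8 rows, each row one 8-bit word

def decode(address, n_bits):
    """Return the one-hot decoder output: exactly one of the 2^n lines is 1."""
    lines = [0] * (2 ** n_bits)
    lines[address] = 1
    return lines

mar = 0b101                                   # address held in the MAR
row = decode(mar, n).index(1)                 # the selected output line picks the row
memory[row] = [1, 0, 1, 1, 0, 0, 1, 0]        # write the word through the data lines
print(memory[row])                            # read the same word back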
2.5D Memory organization: In 2.5D organization the scenario is the same, but we have two
different decoders: one is a column decoder and the other is a row decoder. The column decoder is used
to select the column and the row decoder is used to select the row. The address from the MAR goes
to the decoders' inputs, and the decoders select the respective cell. Through the bit-out line, the data
at that location is read, and through the bit-in line, data is written at that memory
location.
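
The split between the two decoders can be sketched as below (the 4 x 4 array size is illustrative only): the upper address bits go to the row decoder, the lower bits to the column decoder, and their intersection selects a single cell.

# Sketch of 2.5D addressing: the MAR address is split between a row decoder
# and a column decoder, and their intersection selects one cell.
ROW_BITS, COL_BITS = 2, 2
cells = [[0] * (2 ** COL_BITS) for _ in range(2 ** ROW_BITS)]   # 4 x 4 bit array

def split_address(mar):
    row = mar >> COL_BITS                    # upper bits -> row decoder
    col = mar & ((1 << COL_BITS) - 1)        # lower bits -> column decoder
    return row, col

row, col = split_address(0b1101)
cells[row][col] = 1                          # write through the bit-in line
print(cells[row][col])                       # read through the bit-out line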

Read and Write Operations –


1. If the select line is in read mode, then the word/bit addressed by the MAR
is placed on the data lines and read.
2. If the select line is in write mode, then the data from the memory data register (MDR) goes
to the respective cell addressed by the memory address register (MAR).
3. With the help of the select line, the location at which the read or write
operation takes place is selected.

Comparison between 2D & 2.5D Organizations –


1. In 2D organization the hardware is fixed, but in 2.5D the hardware changes.
2. 2D organization requires a larger number of gates, while 2.5D requires fewer gates.
3. 2D is more complex in comparison to the 2.5D organization.
4. Error correction is not possible in the 2D organization, but in 2.5D error correction is
easy.
5. 2D is more difficult to fabricate in comparison to the 2.5D organization.

Auxiliary Memory
An Auxiliary memory is known as the lowest-cost, highest-capacity and slowest-access storage
in a computer system. It is where programs and data are kept for long-term storage or when
not in immediate use. The most common examples of auxiliary memories are magnetic tapes
and magnetic disks.
Magnetic Disks: A magnetic disk is a type of memory constructed using a circular plate of
metal or plastic coated with magnetized materials. Usually, both sides of the disks are used to
carry out read/write operations. However, several disks may be stacked on one spindle with
read/write head available on each surface.
The structure of a magnetic disk can be described as follows:
o The memory bits are stored in the magnetized surface in spots along the concentric
circles called tracks.

o The concentric circles (tracks) are commonly divided into sections called sectors.
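
One way to make the track/sector structure concrete is a capacity calculation; all figures below are assumed example values, not taken from these notes.

# Illustrative disk capacity arithmetic (all figures are assumed examples):
# capacity = surfaces x tracks per surface x sectors per track x bytes per sector.
surfaces = 8
tracks_per_surface = 1024
sectors_per_track = 64
bytes_per_sector = 512

capacity_bytes = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(capacity_bytes // (1024 ** 2), "MiB")   # 256 MiB for these example figures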
Magnetic Tape: Magnetic tape is a storage medium that allows data archiving, collection, and
backup for different kinds of data. The magnetic tape is constructed using a plastic strip coated
with a magnetic recording medium. The bits are recorded as magnetic spots on the tape along
several tracks. Usually, seven or nine bits are recorded simultaneously to form a character
together with a parity bit.
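
The parity bit mentioned above can be illustrated with a short calculation; even parity is assumed here, since the notes do not say whether odd or even parity is used.

# Recording a 7-bit character together with a parity bit (even parity assumed).
def even_parity_bit(bits):
    """Return the bit that makes the total number of 1s even."""
    return sum(bits) % 2

char_bits = [1, 0, 1, 1, 0, 0, 1]             # a 7-bit character
frame = char_bits + [even_parity_bit(char_bits)]
print(frame)                                   # [1, 0, 1, 1, 0, 0, 1, 0] -- four 1s, even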

Magnetic tape units can be halted, started to move forward or in reverse, or can be rewound.
However, they cannot be started or stopped fast enough between individual characters. For
this reason, information is recorded in blocks referred to as records.

Optical Disc: An optical disk is any computer disk that uses optical storage techniques and
technology to read and write data. It is a computer storage disk that stores data digitally and
uses laser beams (transmitted from a laser head mounted on an optical disk drive) to read and
write data.
An optical disk is primarily used as a portable and secondary storage device. It can store more
data than the previous generation of magnetic storage media, and has a relatively longer
lifespan. Compact disks (CD), digital versatile/video disks (DVD) and Blu-ray disks are currently
the most commonly used forms of optical disks. These disks are generally used to:
• Distribute software to customers
• Store large amounts of data such as music, images and videos
• Transfer data to different computers or devices
• Back up data from a local machine
Cache Memory in Computer Organization: Cache memory is a special, very high-speed
memory. It is used to speed up and synchronize with a high-speed CPU. Cache memory is
costlier than main memory or disk memory but more economical than CPU registers. Cache memory
is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds
frequently requested data and instructions so that they are immediately available to the CPU
when needed.
Cache memory is used to reduce the average time to access data from the Main memory. The
cache is a smaller and faster memory which stores copies of the data from frequently used
main memory locations. There are various different independent caches in a CPU, which store
instructions and data.

Cache Performance: When the processor needs to read or write a location in main memory, it
first checks for a corresponding entry in the cache.
If the processor finds that the memory location is in the cache, a cache hit has occurred and
data is read from cache. If the processor does not find the memory location in the cache,
a cache miss has occurred. For a cache miss, the cache allocates a new entry and copies in
data from main memory, and then the request is fulfilled from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit
ratio.
Hit ratio = hit / (hit + miss) = no. of hits/total accesses
We can improve cache performance by using a larger cache block size and higher associativity, and by
reducing the miss rate, the miss penalty, and the time to hit in the cache.
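
The hit-ratio formula above can be paired with an average access-time calculation; the hit/miss counts and the 2 ns / 20 ns timings below are assumed example values.

# Hit ratio and average memory access time (example values are assumed).
hits, misses = 950, 50
cache_time_ns, main_memory_time_ns = 2, 20

hit_ratio = hits / (hits + misses)            # 0.95
# Average access time = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)
average_access_ns = (hits * cache_time_ns
                     + misses * (cache_time_ns + main_memory_time_ns)) / (hits + misses)
print(hit_ratio, average_access_ns)           # 0.95 3.0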
Cache Mapping: There are three different types of mapping used for the purpose of cache
memory which are as follows: Direct mapping, Associative mapping, and Set-Associative
mapping. These are explained in the handwritten notes.
Virtual Memory
In computing, virtual memory (also virtual storage) is a memory management technique that
provides an idealized abstraction of the storage resources that are actually available on a given
machine, creating the illusion for users of a very large (main) memory.
The computer's operating system, using a combination of hardware and software,
maps memory addresses used by a program, called virtual addresses, into physical
addresses in computer memory. Main storage, as seen by a process or task, appears as a
contiguous address space or collection of contiguous segments. The operating system
manages virtual address spaces and the assignment of real memory to virtual memory.
Address translation hardware in the CPU, often referred to as a memory management
unit or MMU, automatically translates virtual addresses to physical addresses. Software within
the operating system may extend these capabilities to provide a virtual address space that can
exceed the capacity of real memory and thus reference more memory than is physically
present in the computer.
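
What the MMU does for paged virtual memory can be condensed into a short sketch: split the virtual address into a page number and an offset, look the page up in a page table, and form the physical address. The page size and page-table contents below are invented for illustration.

# Sketch of virtual-to-physical address translation with a page table.
PAGE_SIZE = 4096                              # 4 KiB pages -> 12-bit offset
OFFSET_BITS = 12

page_table = {0: 5, 1: 9, 2: None}            # virtual page -> physical frame (None = not resident)

def translate(virtual_address):
    page = virtual_address >> OFFSET_BITS
    offset = virtual_address & (PAGE_SIZE - 1)
    frame = page_table.get(page)
    if frame is None:
        raise LookupError("page fault")       # missing or invalid mapping
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x1234)))                 # virtual page 1, offset 0x234 -> 0x9234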
The primary benefits of virtual memory include freeing applications from having to manage a
shared memory space, increased security due to memory isolation, and being able to
conceptually use more memory than might be physically available, using the technique
of paging.

The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical
memory by using disk. Second, it allows us to have memory protection, because each virtual
address is translated to a physical address.
Following are the situations when the entire program is not required to be loaded fully in main
memory.
• User written error handling routines are used only when an error occurred in the data
or computation.
• Certain options and features of a program may be used rarely.
• Many tables are assigned a fixed amount of address space even though only a small
amount of the table is actually used.
• The ability to execute a program that is only partially in memory would confer many
benefits.
• Fewer I/O operations would be needed to load or swap each user program into memory.
• A program would no longer be constrained by the amount of physical memory that is
available.
• Each user program could take less physical memory; more programs could be run at the
same time, with a corresponding increase in CPU utilization and throughput.
Page Fault in Virtual Memory
A page fault occurs due to two reasons:
i) The page is not available (missing page)
ii) The address of the page in the Page Map Table is not correct
So the program will not be loaded, and the invalid-reference symbol 'i' is generated.
A page fault occurs when a program attempts to access data or code that is in its address
space, but is not currently located in the system RAM. So when page fault occurs then
following sequence of events happens:

• The computer hardware traps to the kernel, and the program counter (PC) is saved on the
stack. Current instruction state information is saved in CPU registers.
• An assembly routine is started to save the general registers and other volatile
information, to keep the OS from destroying it.
• The operating system finds that a page fault has occurred and tries to find out which virtual
page is needed. Sometimes a hardware register contains this information. If not,
the operating system must retrieve the PC, fetch the instruction, and find out what it was doing
when the fault occurred.
• Once the virtual address that caused the page fault is known, the system checks that the address is
valid and that there is no protection-access problem.
• If the virtual address is valid, the system checks to see if a page frame is free. If no
frames are free, the page replacement algorithm is run to remove a page.
• If the selected frame is dirty, the page is scheduled for transfer to disk, a context switch takes
place, the faulting process is suspended, and another process is made to run until the disk transfer is
completed.
• As soon as the page frame is clean, the operating system looks up the disk address where the needed
page is and schedules a disk operation to bring it in.
• When a disk interrupt indicates the page has arrived, the page tables are updated to reflect its
position, and the frame is marked as being in the normal state.
• The faulting instruction is backed up to the state it had when it began, and the PC is reset. The
faulting process is scheduled, and the operating system returns to the routine that called it.
• The assembly routine reloads the registers and other state information and returns to user space to
continue execution.
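
The sequence above can be condensed into a very simplified simulation: on a miss the OS picks a victim frame, writes it back if dirty, and loads the needed page. This toy sketch ignores the trap handling, register saving, and context switching listed above, and all names in it are invented for illustration.

# Very simplified page-fault handling sketch (toy model of the sequence above).
frames = {0: ("pageA", False), 1: ("pageB", True)}   # frame -> (page, dirty bit)
resident = {"pageA": 0, "pageB": 1}                  # page -> frame

def access(page):
    if page in resident:
        return "hit: " + page + " in frame " + str(resident[page])
    # Page fault: choose a victim frame (here simply the lowest-numbered frame).
    victim_frame = min(frames)
    victim_page, dirty = frames[victim_frame]
    if dirty:
        print("writing", victim_page, "back to disk")  # schedule disk transfer
    del resident[victim_page]
    frames[victim_frame] = (page, False)               # bring the needed page in
    resident[page] = victim_frame
    return "fault handled: " + page + " loaded into frame " + str(victim_frame)

print(access("pageA"))   # hit
print(access("pageC"))   # page fault: pageA is evicted, pageC is loaded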
