Chapter 4B

CSC 159: COMPUTER ORGANIZATION
Memory Capacity
Memory consists of a number of cells (or locations), each of which can store a piece of information.
Each cell has a number, called its address, by which a program can refer to it.
Example: If a memory has n cells, they will have addresses 0 to n-1.
Thus, if a memory has 32 cells, the addresses will start at ____ and finish at ______.
…Memory Capacity
All cells in a memory contain the same number of bits. If a cell consists of k bits, it can hold any one of 2^k different bit combinations.
Example: If each cell has 8 bits, the possible number of bit combinations is _________
Most computer manufacturers have standardized on an 8-bit cell, which is called a byte. Bytes are grouped into words (e.g., 16 bits).
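As a worked check of the two blanks above, here is a small C sketch; the cell count of 32 and the cell width of 8 bits are simply the values used in the examples.

    #include <stdio.h>

    int main(void) {
        unsigned long n_cells = 32;   /* example: a memory with 32 cells */
        unsigned long k_bits  = 8;    /* example: each cell holds 8 bits */

        /* Addresses run from 0 to n-1. */
        printf("addresses: 0 .. %lu\n", n_cells - 1);        /* 0 .. 31 */

        /* A k-bit cell can hold any one of 2^k bit combinations. */
        printf("combinations per cell: %lu\n", 1UL << k_bits); /* 256 */
        return 0;
    }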
…Memory Capacity - Example
Given a memory of 128MB RAM, find the:
memory capacity in bytes
address size
largest address
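A possible worked solution, sketched in C, assuming 128 MB means 128 x 2^20 bytes and the memory is byte-addressable:

    #include <stdio.h>

    int main(void) {
        /* Assumption: 128 MB = 128 * 2^20 bytes, byte-addressable. */
        unsigned long long capacity = 128ULL * 1024 * 1024;  /* 2^27 = 134,217,728 bytes */

        /* Address size: bits needed to address every byte. */
        int bits = 0;
        while ((1ULL << bits) < capacity) bits++;            /* 27 bits */

        unsigned long long largest = capacity - 1;           /* 2^27 - 1 = 134,217,727 */

        printf("capacity        = %llu bytes\n", capacity);
        printf("address size    = %d bits\n", bits);
        printf("largest address = %llu\n", largest);
        return 0;
    }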
Byte Ordering
The bytes in a word can be numbered from left-to-right or right-to-left.
Big endian
Bytes are numbered from left to right
Usually used by the Motorola family
Little endian
Bytes are numbered from right to left
Usually used by the Intel family
Example
MOV AX, [3002]
Draw using big endian and little endian
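The following C sketch is one way to visualise the difference. The 16-bit value 0x1234 and the starting address 3002 are assumed purely for illustration; the slide's MOV AX, [3002] does not specify the stored value.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint16_t value = 0x1234;   /* assumed example value for the word */
        unsigned base  = 3002;     /* starting address from the example  */

        /* Big endian: most significant byte at the lowest address. */
        printf("big endian:    [%u]=0x%02X  [%u]=0x%02X\n",
               base, (unsigned)(value >> 8), base + 1, (unsigned)(value & 0xFF));

        /* Little endian: least significant byte at the lowest address. */
        printf("little endian: [%u]=0x%02X  [%u]=0x%02X\n",
               base, (unsigned)(value & 0xFF), base + 1, (unsigned)(value >> 8));
        return 0;
    }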
Memory Operation – Read/Write
Processors need to be able to read data from memory and also write data to memory. Most computers have an address bus, a data bus, and a control bus for communication between the CPU and memory.
Two registers act as an interface between the CPU and memory:
Memory Address Register (MAR)
Memory Data Register/Memory Buffer Register (MDR/MBR)
Relationship Between MAR, MDR and Memory
Memory Address Register (MAR)
Holds the address in memory that is to be “opened” for data.
Connected via the address bus to an address decoder that interprets the address and activates a single address line into the memory.
There is a separate address line for each group of cells in the memory.
If there are n bits of addressing, there will be 2^n address lines.
Memory Data Register/Memory Buffer Register (MDR/MBR)
Connected to every cell in the memory unit and to the data bus.
Only a single group of cells is activated at any given time; thus, only one memory location is addressed at a time.
Control Lines
Three lines that control a memory cell are:
an address line - turned on only if the computer is addressing the data within that cell
a read/write line - determines whether the data will be transferred from the cell to the MDR (read) or from the MDR to the cell (write)
an activation line
Read Operation
If the address line and activation line are both ON and the read/write line is set to read, then the READ SWITCH connects the output of the cell to the MDR line.
The interaction between the CPU and the memory registers takes place as follows:
To read data/instructions from a memory cell, the CPU puts the memory address in the MAR, and the address is sent to memory via the address bus. At the same time, the CPU sets the control signals to READ.
The memory cell then puts the requested item on the data bus.
Steps
The CPU copies an address from some register in the CPU to the memory address register (MAR).
At the same time, the CPU sends a message to the memory unit that the memory transfer is a retrieval (READ) from memory.
The CPU then turns on the switch that connects the memory with the MDR, and the transfer takes place between memory and the MDR.
The data is then transferred from the MDR to the appropriate register in the CPU.
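A minimal C sketch of these read steps, using plain variables to stand in for the MAR, the MDR, and a CPU register; the 16-cell memory and its contents are assumed purely for illustration, not real hardware behaviour.

    #include <stdio.h>

    unsigned char memory[16] = { 0 };   /* tiny assumed memory             */
    unsigned      MAR;                  /* memory address register         */
    unsigned char MDR;                  /* memory data/buffer register     */

    void memory_read(void) {            /* READ: memory cell -> MDR        */
        MDR = memory[MAR];
    }

    int main(void) {
        memory[5] = 0x2A;               /* assumed contents of cell 5      */

        unsigned cpu_addr_register = 5; /* 1. CPU copies an address ...    */
        MAR = cpu_addr_register;        /*    ... into the MAR             */
        memory_read();                  /* 2-3. READ signal: cell -> MDR   */
        unsigned char cpu_register = MDR;  /* 4. MDR -> CPU register       */

        printf("read 0x%02X from address %u\n", cpu_register, MAR);
        return 0;
    }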
Write Operation
If the address line and activation line are both ON and the read/write line is set to write, then the WRITE SWITCH connects the MDR line to the input of the memory cell, which transfers the data bit on the MDR line to the memory cell for storage.
The interaction between the CPU and the memory registers takes place as follows:
The CPU puts the data to be written on the data bus and the address to be stored into on the address bus, and then it asserts WRITE.
The data is then sent from the accumulator register to the MDR via the data bus and on to the memory cell.
Steps
The CPU copies an address from some register in the CPU to the memory address register (MAR).
At the same time, the CPU sends a message to the memory unit that the transfer is a store (WRITE) to memory.
The CPU then momentarily turns on the switch that connects the MDR with the register.
Data is transferred from the register (in the CPU) to the MDR.
Finally, the data within the MDR is transferred to memory.
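The matching write steps, again as a self-contained illustrative C sketch rather than a description of real hardware:

    #include <stdio.h>

    unsigned char memory[16] = { 0 };
    unsigned      MAR;
    unsigned char MDR;

    void memory_write(void) {           /* WRITE: MDR -> memory cell       */
        memory[MAR] = MDR;
    }

    int main(void) {
        unsigned char cpu_register = 0x7F;  /* data held in a CPU register */

        MAR = 9;                        /* 1. address copied into the MAR  */
        MDR = cpu_register;             /* 2-3. CPU register -> MDR        */
        memory_write();                 /* 4. MDR -> memory cell           */

        printf("memory[%u] = 0x%02X\n", MAR, memory[MAR]);
        return 0;
    }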
Memory Enhancement
Within the instruction fetch-execute cycle, the slowest steps are those that require memory access.
Thus, any improvement in memory access can have a major impact on program processing speed.
Three different approaches are commonly used to enhance the performance of memory:
1. Wide path memory access
2. Memory interleaving
3. Cache memory
Wide Path Memory Access
To widen the data path so as to read or write several bytes or words between the CPU and memory with each access.
Instead of reading 1 byte at a time, the system can retrieve 2, 4, 8 or 16 bytes simultaneously.
This works because most instructions are several bytes long and most data is at least 2 bytes, and frequently more.
This solution can be implemented easily by widening the bus data path and using a larger MDR.
Most modern CPUs have a 64-bit data path, which is commonly used to read/write 8 bytes of data with a single memory access.
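A rough C illustration of the idea: copying a buffer one byte at a time versus eight bytes at a time over an assumed 64-bit path. The loop below only models the number of accesses, not real bus timing.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned char src[64], dst[64];
        memset(src, 0xAB, sizeof src);

        /* Narrow path: one byte per memory access -> 64 accesses. */
        size_t narrow_accesses = 0;
        for (size_t i = 0; i < sizeof src; i++) { dst[i] = src[i]; narrow_accesses++; }

        /* Wide 64-bit path: eight bytes per access -> 8 accesses. */
        size_t wide_accesses = 0;
        for (size_t i = 0; i < sizeof src; i += 8) {
            uint64_t chunk;
            memcpy(&chunk, src + i, 8);      /* one 8-byte read  */
            memcpy(dst + i, &chunk, 8);      /* one 8-byte write */
            wide_accesses++;
        }

        printf("byte-wide accesses: %zu, 64-bit-wide accesses: %zu\n",
               narrow_accesses, wide_accesses);
        return 0;
    }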
Memory Interleaving
To divide memory into parts so that it is possible to access more than one location at a time.
Each part has its own address register (MAR) and data register (MDR), and each part is independently accessible. Memory can accept one read/write request from each part simultaneously.
It is usually more useful to divide memory so that successive access points (e.g., a group of 8 bytes) are in different blocks.
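A small C sketch of this division, assuming four banks and 8-byte access units (both numbers are illustrative, not from the slide): successive 8-byte groups land in different banks, so they can be accessed in parallel.

    #include <stdio.h>

    #define NUM_BANKS  4   /* assumed number of independent memory parts */
    #define GROUP_SIZE 8   /* assumed size of one access unit in bytes   */

    /* Successive 8-byte groups are spread round-robin across the banks. */
    static unsigned bank_of(unsigned long address) {
        return (unsigned)((address / GROUP_SIZE) % NUM_BANKS);
    }

    int main(void) {
        for (unsigned long addr = 0; addr < 64; addr += GROUP_SIZE)
            printf("address %2lu -> bank %u\n", addr, bank_of(addr));
        return 0;
    }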
Cache
Area of RAM set aside to store the most frequently accessed information.
Improves processing by acting as a temporary high-speed holding area between memory and the CPU.
Cache memory is organized into blocks. Each block provides a small amount of storage, perhaps between 8 and 64 bytes, known as a cache line.
Each block holds a tag. The tag identifies the location in main memory that corresponds to the data being held in that block.
The tags act as a directory that can be used to determine exactly which storage locations from main memory are available in the cache.
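One common way the tag is derived, sketched in C for an assumed direct-mapped cache with 64 lines of 16 bytes each (the sizes are illustrative): the low bits of an address select the byte within the line, the next bits select the cache block, and the remaining high bits form the tag stored with that block.

    #include <stdio.h>

    #define LINE_SIZE  16u   /* assumed bytes per cache line          */
    #define NUM_LINES  64u   /* assumed number of blocks in the cache */

    int main(void) {
        unsigned address = 0x12345;

        unsigned offset = address % LINE_SIZE;                /* byte within the line  */
        unsigned index  = (address / LINE_SIZE) % NUM_LINES;  /* which cache block     */
        unsigned tag    = address / (LINE_SIZE * NUM_LINES);  /* stored with the block */

        printf("address 0x%X -> tag 0x%X, index %u, offset %u\n",
               address, tag, index, offset);
        return 0;
    }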
Cache…How It Works
Every CPU request to main memory, whether for data or an instruction, is first seen by cache memory.
A cache controller – hardware that checks the tags to determine whether the memory location of the request is presently stored within the cache.
If the requested data exists in the cache (known as a hit), cache memory is used as if it were main memory.
Hit ratio: the ratio of hits to the total number of requests.
Cache…How It Works
If the requested data does not exist in cache memory (known as a miss), a cache line that includes the required location is copied from memory to the cache.
Once this is done, the transfer is made between the CPU and the cache.
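A toy direct-mapped lookup in C that ties the hit/miss decision and the hit ratio together; the cache geometry and the access pattern are assumed for illustration, and real controllers do this in hardware.

    #include <stdio.h>

    #define LINE_SIZE 16u
    #define NUM_LINES 8u

    struct line { unsigned tag; int valid; };
    static struct line cache[NUM_LINES];

    static unsigned hits, requests;

    /* Returns 1 on a hit; on a miss the line is (notionally) filled from memory. */
    static int access_cache(unsigned address) {
        unsigned index = (address / LINE_SIZE) % NUM_LINES;
        unsigned tag   = address / (LINE_SIZE * NUM_LINES);
        requests++;
        if (cache[index].valid && cache[index].tag == tag) { hits++; return 1; }
        cache[index].tag   = tag;      /* miss: copy the cache line in */
        cache[index].valid = 1;
        return 0;
    }

    int main(void) {
        unsigned pattern[] = { 0, 4, 8, 128, 0, 4, 132, 8 };  /* assumed accesses */
        for (size_t i = 0; i < sizeof pattern / sizeof pattern[0]; i++)
            printf("address %3u -> %s\n", pattern[i],
                   access_cache(pattern[i]) ? "hit" : "miss");
        printf("hit ratio = %u/%u\n", hits, requests);
        return 0;
    }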
How Cache Works
…How Cache Works
LRU Algorithm
When cache memory is full, some block in cache memory must be selected for replacement.
Commonly, the least-recently used (LRU) algorithm is used.
LRU keeps track of the usage of each block and replaces the block that was last used the longest time ago.
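A minimal sketch of LRU bookkeeping in C, assuming a tiny fully-associative cache of four blocks: each block carries the "time" of its last use, and the victim on a miss is the block with the oldest time.

    #include <stdio.h>

    #define NUM_BLOCKS 4   /* assumed: tiny fully-associative cache */

    struct block { unsigned tag; int valid; unsigned last_used; };
    static struct block cache[NUM_BLOCKS];
    static unsigned clock_tick;

    static void access_block(unsigned tag) {
        clock_tick++;
        int victim = 0;
        for (int i = 0; i < NUM_BLOCKS; i++) {
            if (cache[i].valid && cache[i].tag == tag) {  /* hit: refresh usage   */
                cache[i].last_used = clock_tick;
                return;
            }
            if (!cache[i].valid ||
                cache[i].last_used < cache[victim].last_used)
                victim = i;                               /* track the LRU victim */
        }
        printf("miss on tag %u, replacing block %d\n", tag, victim);
        cache[victim].tag = tag;
        cache[victim].valid = 1;
        cache[victim].last_used = clock_tick;
    }

    int main(void) {
        unsigned refs[] = { 1, 2, 3, 4, 1, 5, 2 };   /* assumed reference string */
        for (size_t i = 0; i < sizeof refs / sizeof refs[0]; i++)
            access_block(refs[i]);
        return 0;
    }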
Updating Data Methods
Cache blocks that have been read, but not altered, can simply be written over during replacement.
Memory write requests impose an additional burden on cache memory operations, since written data must also be written to the main memory to protect the integrity of the program and its data.
Two methods of handling the process of returning changed data from cache to main storage:
1. Write through
2. Write back
Write Through vs Write Back
Write through – writes data back to the main memory
immediately upon change in the cache
Advantage – Two copies, cache and memory, are always kept
identical
Write back/copy back – the changed data is simply held in cache
until the cache line is to be replaced
Advantage – faster since writes to memory are made only when
a cache line is actually replaced, but more care is required to
ensure that there is no data loss
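The contrast between the two policies can be sketched in C as follows; the single cached word and the "dirty" flag used for write back are simplifications assumed purely for illustration.

    #include <stdio.h>

    static unsigned char main_memory[256];

    /* A single cached byte is enough to contrast the two policies. */
    static unsigned      cached_addr;
    static unsigned char cached_value;
    static int           dirty;             /* used only by write back      */

    static void write_through(unsigned addr, unsigned char v) {
        cached_addr = addr; cached_value = v;
        main_memory[addr] = v;               /* memory updated immediately   */
    }

    static void write_back(unsigned addr, unsigned char v) {
        cached_addr = addr; cached_value = v;
        dirty = 1;                           /* memory updated only later... */
    }

    static void evict(void) {                /* ...when the line is replaced */
        if (dirty) { main_memory[cached_addr] = cached_value; dirty = 0; }
    }

    int main(void) {
        write_through(10, 0xAA);
        printf("after write through: memory[10] = 0x%02X\n", main_memory[10]);

        write_back(20, 0xBB);
        printf("before eviction:     memory[20] = 0x%02X\n", main_memory[20]);
        evict();
        printf("after eviction:      memory[20] = 0x%02X\n", main_memory[20]);
        return 0;
    }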
Locality of Reference Principle
Cache works based on the locality of reference principle.
It states that at any given time, most memory references will be confined to one or a few small regions of memory.
This is because:
Instructions are normally executed sequentially; thus, adjoining words are likely to be accessed
Well-written programs spend much of their time in small loops, procedures or functions
Data for the program is likely to be taken from arrays
Variables for a program are all stored together
Two-Level Cache
Disk Caching
Used to reduce the time necessary to access data on a disk.
Part of main memory can be allocated for use as a disk cache.
When a disk read or write request is made, the system checks the disk cache first.
If the requested data is present, no disk access is necessary.
Otherwise, a disk cache line made up of several adjoining disk blocks is moved from the disk into the disk cache area of memory.
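A rough sketch of that check in C, assuming a simple table of cached disk block numbers; the table size and the read_block_from_disk stand-in are made up for illustration.

    #include <stdio.h>

    #define CACHE_SLOTS 4

    static long cached_block[CACHE_SLOTS] = { -1, -1, -1, -1 };
    static int  next_slot;

    /* Stand-in for a real disk read; only prints to show when the disk is touched. */
    static void read_block_from_disk(long block) {
        printf("disk access for block %ld\n", block);
    }

    static void read_block(long block) {
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (cached_block[i] == block) {
                printf("block %ld served from disk cache\n", block);
                return;                      /* no disk access necessary */
            }
        read_block_from_disk(block);         /* miss: go to the disk...  */
        cached_block[next_slot] = block;     /* ...and keep a copy       */
        next_slot = (next_slot + 1) % CACHE_SLOTS;
    }

    int main(void) {
        read_block(7); read_block(7); read_block(12); read_block(7);
        return 0;
    }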
Virtual Memory
Virtual memory: a feature of an operating system that increases the amount of memory available to run programs.
With large programs, parts are stored on a secondary device such as a hard disk. Each part is then read into RAM only when needed.
Need to deal with two types of addresses:
Physical address
Logical address
…Virtual Memory
Advantages:
Programs share memory space
More programs can run at the same time
Programs run even if they cannot fit into memory all at once
Logical Address vs Physical Address
Logical address: the relative location of data, instructions, etc. as viewed by the process; separate from the physical address.
Also referred to as a virtual address.
Physical address: a memory address represented in the form of a binary number on the address bus circuit in order to enable the data bus to access a particular storage cell of main memory (known as a real address). That is, the hardware address of physical memory.
Physical addresses do not need to be consecutive.
Logical addresses are mapped to physical addresses (known as address mapping).
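Address mapping can be sketched in C with a toy page table; the page size and frame numbers below are assumed for illustration only and do not describe any particular system.

    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* assumed page size in bytes */

    /* Toy page table: logical page number -> physical frame number.      */
    /* Note the frames need not be consecutive even though the pages are. */
    static unsigned page_table[4] = { 7, 2, 9, 0 };

    static unsigned long to_physical(unsigned long logical) {
        unsigned long page   = logical / PAGE_SIZE;
        unsigned long offset = logical % PAGE_SIZE;
        return (unsigned long)page_table[page] * PAGE_SIZE + offset;
    }

    int main(void) {
        unsigned long logical = 2 * PAGE_SIZE + 100;   /* byte 100 of page 2 */
        printf("logical %lu -> physical %lu\n", logical, to_physical(logical));
        return 0;
    }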
…Logical Address vs Physical Address