Chapter 3 - CO
Main memory
- The main memory is the central storage unit in a computer system. It is a relatively large and fast memory used to store programs and data during computer operation. The principal technology used for the main memory is based on semiconductor integrated circuits. Integrated circuit RAM chips are available in two possible operating modes, static and dynamic.
Memory Interfacing
The control signals indicate whether it is a read operation, a write operation, or another control action like enabling memory access.
• Memory Hierarchy: Modern computer systems
often use a memory hierarchy to optimise performance.
This hierarchy includes multiple levels of memory, such
as cache, main memory, and secondary storage, with
varying access speeds and capacities.
Static variables are declared outside the main function and
are available throughout the program. These variables are
allocated memory at the time of program compilation.
Global variables are like static variables but are accessible
from all the functions in the program. Arrays are also
allocated memory at the time of program compilation, and
their size is fixed.
Advantages of Dynamic Memory Allocation:
Flexible Memory Usage: Dynamic memory allocation allows
the size of the data structure to be changed dynamically
during program execution. This makes it more flexible than
static memory allocation.
Difference

Static Memory Allocation        Dynamic Memory Allocation
This is used in an array.       This is used in linked lists.
Memory Hierarchy
Level-0 − Registers
The registers are present inside the CPU, so they have the least access time. Registers are the most expensive and smallest in size, generally in kilobytes. They are implemented using flip-flops.
Level-1 − Cache
Cache memory is faster than main memory but slower than registers. It is smaller in size, generally in megabytes, and is implemented using static RAM.
Associative Memory
Associative memory is also known as content addressable
memory (CAM) or associative storage or associative array.
It is a special type of memory that is optimised for
performing searches through data, as opposed to providing
a simple direct access to the data based on the address.
It can be viewed as data correlation: the input data is compared against the data stored in the CAM.
Words whose stored bits match the bits of the argument register set the corresponding bit in the match register. After the matching process, the bits that are set in the match register indicate that their corresponding words matched the argument.
Applications
It can be used in memory allocation.
Advantages
It is used where search time must be very short.
Disadvantages
It is more expensive than RAM.
Cache Memory
Cache memory is a small, high-speed storage area in a
computer. The cache is a smaller and faster memory that
stores copies of the data from frequently used main
memory locations.
Levels of Memory
Level 1 or Register: It is a type of memory in which data is stored and accessed immediately by the CPU. The most commonly used registers are the Accumulator, Program Counter, Address Register, etc.
Level 2 or Cache memory: It is the fastest memory that has
faster access time where data is temporarily stored for
faster access.
Cache Performance
When the processor needs to read or write a location in the
main memory, it first checks for a corresponding entry in
the cache.
Hit Ratio (H) = hits / (hits + misses) = number of hits / total accesses
Miss Ratio = misses / (hits + misses) = number of misses / total accesses = 1 - Hit Ratio (H)
Cache Mapping
There are three different types of mapping used for cache memory, which are as follows:
• Direct Mapping
• Associative Mapping
• Set-Associative Mapping
Direct Mapping
The cache is used to store the tag field, whereas the rest is stored in the main memory. Direct mapping's performance is directly proportional to the hit ratio.
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
Direct Mapping
The main memory address is divided into a tag field (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache. The line offset forms the index bits in direct mapping.
Direct Mapping - Structure
Associative Mapping
In this type of mapping, associative memory is used to store the content and addresses of the memory word. Any block can go into any line of the cache. This means that the word id bits are used to identify which word in the block is needed, but the tag becomes all of the remaining bits. This enables the placement of any word at any place in the cache memory. It is considered to be the fastest and most flexible mapping form. In associative mapping, the index bits are zero.
Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct mapping are removed. Set-associative mapping addresses the problem of possible thrashing in the direct mapping method. Instead of having exactly one line that a block can map to in the cache, a few lines are grouped together, creating a set. A block in memory can then map to any one of the lines of a specific set. Set-associative mapping allows each index address in the cache to correspond to two or more words in the main memory. It combines the best of the direct and associative cache mapping techniques. In set-associative mapping, the index bits are given by the set offset bits. In this case, the cache consists of a number of sets, each of which consists of a number of lines.
Set-Associative Mapping
Relationships in the set-associative mapping can be defined as:
m = v * k
i = j mod v
where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set
Advantages
• Cache Memory is faster in comparison to main
memory and secondary memory.
Disadvantages
• Cache Memory is costlier than primary memory and secondary memory.
Cache memory is important because it helps speed up the
processing time of the CPU by providing quick access to
frequently used data, improving the overall performance of
the computer.
Virtual Memory
Virtual memory is a memory management technique used
by operating systems to give the appearance of a large, contiguous block of memory to applications, even if the
physical memory (RAM) is limited. It allows the system to
compensate for physical memory shortages, enabling larger
applications to run on systems with less RAM.
It is a technique that is implemented using both hardware
and software. It maps memory addresses used by a
program, called virtual addresses, into physical addresses
in computer memory.
There are two main types of virtual memory:
• Paging
• Segmentation
Paging
Paging divides memory into small fixed-size blocks called pages. When the computer runs out of RAM, pages that aren't currently in use are moved to the hard drive, into an area called a swap file. The swap file acts as an extension of RAM. When a page is needed again, it is swapped back into RAM, a process known as page swapping. This ensures that the operating system (OS) and applications have enough memory to run.
Demand Paging
Whenever a page fault occurs, the operating system follows a fixed sequence of steps to bring the required page into memory.
Segmentation
Segmentation divides virtual memory into segments of different sizes. Segments that aren't currently needed can be moved to the hard drive. The system uses a segment table to keep track of each segment's status, including whether it's in memory, if it's been modified, and its physical address. Segments are mapped into a process's address space only when needed.
What is Swapping?
Swapping a process out means removing all of its pages from memory, or marking them so that they will be removed by the normal page replacement process. Suspending a process ensures that it is not runnable while it is swapped out. At some later time, the system swaps the process back from secondary storage to main memory. When a process is busy swapping pages in and out, the situation is called thrashing.
What is Thrashing?
At any given time, only a few pages of any process are in
the main memory, and therefore more processes can be
maintained in memory. Furthermore, time is saved because
unused pages are not swapped in and out of memory.
However, the OS must be clever about how it manages this
scheme. In the steady state, practically all of the main
memory will be occupied with process pages, so that the
processor and OS have direct access to as many
processes as possible. Thus when the OS brings one page
in, it must throw another out. If it throws out a page just
before it is used, then it will just have to get that page again
almost immediately. Too much of this leads to a condition
called Thrashing. The system spends most of its time
swapping pages rather than executing instructions. So a
good page replacement algorithm is required.
Causes of Thrashing
Recovery of Thrashing
• Do not allow the system to go into thrashing by
instructing the long-term scheduler not to bring the
processes into memory after the threshold.
Here p denotes the page fault rate:
• If p = 0, there are no page faults.
• If p = 1, every reference is a fault.
Frame Allocation
The number of frames allocated to each process is determined either statically or dynamically.
Paging Policies
• Fetch Policy: It decides when a page should be loaded
into memory.
• Placement Policy: It decides where in memory a page should be loaded.
• Replacement Policy: It decides which page in memory should be replaced, in a way that assists in efficient use of memory as well as system performance.
• Place the page file on a fast drive.
Advantages of Virtual Memory
• More processes may be maintained in the main
memory: Because we are going to load only some of
the pages of any particular process, there is room for
more processes. This leads to more efficient utilisation
of the processor because it is more likely that at least
one of the more numerous processes will be in the
ready state at any particular time.
• When only a portion of a program is required for execution, execution speed increases.
Data Path Design for read/write access
BUS
In early computers, buses were parallel electrical wires with multiple hardware connections. A bus is a communication system that transfers data between components inside a computer, or between computers. It includes hardware components like wires and optical fibers, and software, including communication protocols. The registers, ALU, and the interconnecting bus are collectively referred to as the data path.
Data bus: carries the data between the processor and other
components. The data bus is bidirectional.
Control bus: carries control signals from the processor to other components. The control bus also carries the clock's pulses. The control bus is unidirectional.
One Bus Organisation
Two Bus Organisation
To overcome the disadvantage of the one bus organisation, another architecture was developed, known as the two bus organisation. In the two bus organisation, there are two buses. The general-purpose registers can read/write from both buses. Two operands can be fetched at the same time because of the two buses: one bus fetches one operand for the ALU and the other bus fetches the second. When both buses are busy fetching operands, the ALU output can be stored in a temporary register, and when a bus becomes free, that output can be placed on it.
Three Bus Organisation
In the three bus organisation we have three buses: OUT bus1, OUT bus2, and an IN bus. From the OUT buses we can get the operands, which come from the general-purpose registers and are evaluated in the ALU, and the output is dropped on the IN bus so it can be sent to the respective registers. This implementation is a bit complex but faster in nature, because two operands can flow into the ALU in parallel while a result flows out. It was developed to overcome the busy-waiting problem of the two bus organisation. In this structure, after execution, the output can be dropped on the bus without waiting because of the presence of the extra bus. The structure is given below in the figure.
Single Cycle
This has a CPI (clock cycles per instruction) of one and does not divide instructions into steps across cycles. In such data paths, only one instruction is executed at a time. Besides, it does not require any intermediate registers. In a single-cycle data path, the clock cycle time is longer than in the other types. Overlapping of clock cycles cannot be done with such data paths.
Multiple Cycle
Such data paths have variable CPIs. Here, instructions can
be segregated into arbitrary steps. Like a single cycle,
multiple cycles can also execute only one instruction at one
time. However, they need registers to carry further.
Moreover, the clock time is shorter than a single cycle.
Clock cycle overlapping isn’t possible in the case of
multiple cycles.
Pipeline
Such data paths do not have fixed CPIs. Here, each instruction takes one step per stage. Unlike single and multiple cycles, pipelines can execute multiple instructions at one time. With that, they require registers between stages. Like multiple-cycle data paths, the pipeline also requires a short time per clock cycle. Plus, clock cycle overlapping occurs in the case of pipeline data paths.
Registers
These are small storage elements in a processor known for
their high speed. While processing data, it stores the data
temporarily. These data can be either intermediate results,
operands, or program counters. Registers are important for
fast data access, decreasing the requirement of retrieving
data from memory, which slows down the operation. Two
types of registers are available to serve different purposes.
General-purpose registers are required to hold data during
computation. On the other hand, special-purpose registers
store program status flags or addresses.
Program Counter
The program counter holds the address of the next instruction to be fetched.
Instruction Register
The instruction register holds the current instruction and provides it to the instruction decoder circuit.
BUS
It is a communication system that transfers data between
various components of a computer or between two
computers. BUS is a collection of hardware components, including optical fibers and wires. It also has software
components, like communication protocols. It allows data
and instructions movement between memory, registers, and
other peripherals. ALU, registers, and various BUS together
are termed as data paths.
Multiplexers
A data path also has multiplexers. These are required to
select data from multiple sources and route these to an
appropriate destination. Multiplexers are essential for the movement of data within a processor; they help in selecting from different inputs and directing them to the destination component. For instance, a multiplexer may choose between two registers to source an operand for an ALU operation.
Control Unit
Any data path interacts with the control unit. This unit
generates control signals that coordinate the data path
component’s activities. It interprets instructions fetched
from memory and generates the necessary signals to
control data movement, ALU operations, and register
manipulations. These units ensure the proper execution of
instructions in the right sequence. Besides, it ensures a
synchronised data path function.
Final Words
Data path design plays a critical role in computer
architecture. This ensures efficient processing, movement,
and manipulation of data within a system. The components
of data path design, such as registers, buses, multiplexers,
the Arithmetic Logic Unit (ALU), and the control unit, work
in harmony to enable seamless data flow and execution of
instructions. With the advancement of technology, data
path design will keep evolving and continue shaping the
future of computer architecture.