Lecture Slides On Instruction Set Architecture and Memory Design

The document discusses Instruction Set Architecture (ISA), comparing Complex Instruction Set Computers (CISC) and Reduced Instruction Set Computers (RISC) in terms of instruction complexity and execution cycles. It also covers memory systems, their hierarchy, types, and design considerations, emphasizing the differences between SRAM and DRAM. Additionally, it explains cache memory, its role in the memory hierarchy, and the concepts of cache hit and miss ratios.


INSTRUCTION SET ARCHITECTURE

• The number of instructions that a particular CPU can have is limited, and the collection of all those instructions is called the Instruction Set.
• A discussion of ISA should cover:
 what must be included in the instruction set (what is a must for the machine), and what can be left as an option,
 what instructions look like, and what the relation is between the hardware and the instruction set,
 whether we should have a rich set of instructions (CISC) or a simple one (RISC).
ISA

• For many years the memory in a computer was very expensive; it was only after the introduction of semiconductor memories that prices began to fall dramatically.
• As long as memory is expensive, low-end computers cannot afford to have much of it; in a system with little memory there is a premium on reducing the size of programs. This was the case in the 1960s and 70s.
ISA
• A lot of effort was invested in making instructions smaller (tighter encoding) and in reducing the number of instructions in a program.
• Consequently, this led to the so-called complex instruction set computers (CISC), which increase the complexity of instructions to allow shorter and more efficient programs, i.e., to move complexity from software to hardware.
RISC Architecture

• RISC programs contain more instructions, but each instruction takes fewer cycles to execute. Generally, a single instruction in a RISC machine takes only one CPU cycle.
• Multiplication in a RISC architecture cannot be done with a single instruction. Instead, we have to first load the data from memory using the LOAD instruction, then multiply the numbers, and then store the result back in memory.
• RISC processors only use simple instructions that can be
executed within one clock cycle. Thus, the "MULT" command
could be divided into three separate commands: "LOAD,"
which moves data from the memory bank to a register,
"PROD," which finds the product of two operands located
within the registers, and "STORE," which moves data from a
register to the memory banks. In order to perform the exact
series of steps described in the CISC approach, a programmer
would need to code four lines of assembly:
• LOAD A, 6
LOAD B, 13
PROD A, B
STORE 6, A
• This might seem like a lot of work but, in reality, since each of these instructions takes only one clock cycle, the whole multiplication operation is completed in relatively few clock cycles.
CISC Architecture

• CISC is a computer in which a single instruction can perform several low-level operations (e.g. a load from memory, an arithmetic operation, and a memory store) that are accomplished by multi-step processes.
• Example: the four-line instruction sequence given under RISC is represented in CISC as:
– MULT A,B
• MULT is a complex instruction which operates directly on
the computer's memory banks and does not require the
programmer to explicitly call any loading or storing
functions.
RISC / CISC, where is the difference?
S/N  RISC                                       CISC
1.   Simple instructions taking one cycle       Complex instructions taking multiple cycles
2.   Instructions are executed by a             Instructions are executed by a
     hardwired control unit                     microprogrammed control unit
3.   More instructions are executed to          A smaller number of instructions is
     perform a single task                      required than in RISC
4.   Fixed-format instructions                  Variable-format instructions
5.   Few addressing modes; most instructions    Many addressing modes
     use register-to-register addressing
6.   Multiple register sets                     Single register set
7.   Highly pipelined                           Not pipelined or less pipelined


Memory System

• The memory stores program and data in a computer. It is an array of words, limited in size only by the number of address bits.
• In the basic von Neumann model the memory is a monolithic structure; it was later recognized that computer memory has to be organized in a hierarchy.
• In such a hierarchy, larger, cheaper and slower memories are used to supplement smaller, more expensive and faster ones.
• The basic type of memory usually employed within a basic von Neumann architecture is random access memory (RAM).
• A RAM has an address port, a data input port and a data output port, where the latter two may be combined into a single port.
Memory Design
• Real-world issues arise:
 cost
 speed
 size/capacity
 latency
 volatility
 bandwidth
 access type, etc.
• What other issues can you think of that will influence memory design?
Memory Technologies (types)
• Three different technologies are used to implement the memory system of modern computers. They are:
 SRAM
 DRAM
 Hard disks
• Hard disks are by far the slowest of these technologies and are reserved for the lowest level of the memory system, i.e. the virtual memory.
• SRAM and DRAM are faster than disk-based memory and are the technologies used to implement the caches and the main memories of almost all computers.
• Discuss the properties of the three
technologies with reference to the real issues
in memory design, stating where each of them
would be the appropriate building blocks for
implementing the memory system.
Memory hierarchy
• A typical memory hierarchy starts with a small,
expensive, and relatively fast unit, called the cache.
• The cache is followed in the hierarchy by a larger,
less expensive, and relatively slow main memory
unit. Cache and main memory are built using solid‐
state semiconductor material (typically CMOS
transistors).
• They are followed in the hierarchy by far larger, less expensive, and much slower magnetic memories that typically consist of the (hard) disk and the tape.
Multilevel memory hierarchy

[Figure: a single-level memory system connects the processor (MAR & MDR) directly to memory; a multilevel system places a cache between the processor and main memory, with virtual memory (hard disk and tape) below.]
• The objective behind designing a memory hierarchy
is to have a memory system that performs as if it
consists entirely of the fastest unit and whose cost
is dominated by the cost of the slowest unit.
• The memory hierarchy is characterized by a number of parameters. Among these parameters are:
 i. access type
 ii. capacity
 iii. cycle time
 iv. latency
 v. bandwidth
 vi. cost
• The term access refers to the action that physically
takes place during a read or write operation.
• The capacity of a memory level is usually
measured in bytes
• The cycle time is defined as the time elapsed
from the start of a read operation to the start of a
subsequent read.
• The latency is defined as the time interval
between the request for information and the
access to the first bit of that information.
• The bandwidth provides a measure of the
number of bits per second that can be accessed.
• The cost of a memory level is usually specified as dollars per megabyte.
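As a worked sketch of the cycle-time and bandwidth definitions above; the bus width and cycle time here are illustrative assumptions, not figures from the slides.

```python
# Illustrative figures (assumptions, not from the slides):
bus_width_bits = 64        # bits delivered per memory cycle
cycle_time_ns = 50         # start of one read to start of the next read

# Bandwidth = number of bits accessed per second = bits per cycle / cycle time
bits_per_second = bus_width_bits * 1_000_000_000 // cycle_time_ns
print(bits_per_second)                    # 1280000000 bits/s
print(bits_per_second // 8 // 1_000_000)  # 160 MB/s
```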
Typical memory hierarchy

[Figure: a pyramid with CPU registers at the top, followed by cache, main memory, secondary storage and tertiary storage; speed (bandwidth) and cost per bit increase toward the top, while capacity (megabytes) and latency increase toward the bottom.]
Level           Access type   Capacity       Latency        Bandwidth           Cost/MB
CPU registers   Random        64–1024 bytes  1–10 ns        System clock rate   High
Cache memory    Random        8–512 KB       15–20 ns       10–20 MB/s          $500
Main memory     Random        16–512 MB      30–50 ns       10–20 MB/s          $20–50
Disk memory     Direct        1–20 GB        10–30 ms       10–30 MB/s          $0.25
Tape memory     Sequential    1–20 TB        30–10,000 ms   1–2 MB/s            $0.025
• In random access, any access to any memory location takes the same fixed amount of time, regardless of the actual memory location and/or the sequence of accesses that has taken place.
• E.g., if a write operation to memory location 100 takes 15 ns and is followed by a read operation from memory location 3000, the latter operation will also take 15 ns.
• In sequential access, however, if access to location 100 takes 500 ns, and a consecutive access to location 101 takes 505 ns, then an access to location 300 may be expected to take 1500 ns.
• This is because the memory has to cycle through locations 100 to 300, with each location requiring 5 ns.
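The sequential-access arithmetic above can be written as a small model; the 500 ns base and the 5 ns per-location step are the figures from the example.

```python
def sequential_access_ns(location, base_location=100, base_ns=500, step_ns=5):
    """Time to reach `location` when the memory must cycle through
    every location after `base_location`, at `step_ns` per location."""
    return base_ns + (location - base_location) * step_ns

print(sequential_access_ns(101))  # 505 ns
print(sequential_access_ns(300))  # 1500 ns
```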
Cache
• Setting aside the CPU registers (i.e. the MDR and MAR) as the first level for storing and retrieving information inside the CPU, a typical memory hierarchy starts with a small, expensive, and relatively fast unit, called the cache.
• Cache (a small, expensive, high-speed memory
that is near the CPU) is used to keep the
information expected to be used more
frequently by the CPU.
• When the processor makes a request for a
memory reference, the request is first sought in
the cache.
• If the request corresponds to an element that is currently residing in the cache, it is called a cache hit; if not, a cache miss.
• A cache hit ratio, hc, is defined as the
probability of finding the requested element in
the cache.
• A cache miss ratio (1- hc) is defined as the
probability of not finding the requested element
in the cache.
• In the case that the requested element is not
found in the cache, then it has to be brought from
a subsequent memory level in the memory
hierarchy.
• Assuming that the element exists in the next
memory level, that is, the main memory, then it
has to be brought and placed in the cache
• Assignment
• Read more on calculations that include: (average) access time, hit rate and miss rate.
• The formulae for calculations involving the hit and miss rates have been given.
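One common simple model for the average access time mentioned in the assignment is t_avg = hc * tc + (1 - hc) * tm, where tc and tm are the cache and main-memory access times; since the slides do not reproduce their formula here, this model (and the sample figures) is an assumption.

```python
def average_access_time(hit_ratio, cache_ns, main_ns):
    """Average access time under an assumed simple two-level model:
    a hit costs cache_ns, a miss costs main_ns."""
    return hit_ratio * cache_ns + (1 - hit_ratio) * main_ns

# e.g. 95% hit ratio, 20 ns cache, 100 ns main memory (illustrative figures)
print(round(average_access_time(0.95, 20, 100), 2))  # 24.0 ns
```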
SRAMs

• The main difference between DRAMs and SRAMs is how their bit cells are constructed. The core of an SRAM bit cell consists of two inverters connected in a back-to-back configuration. Once a value is placed in the bit cell, the ring structure of the two inverters will maintain the value indefinitely, because each inverter's input is the opposite of the other's. This is the reason why SRAMs are called static RAMs. DRAMs, on the other hand, will lose their stored values over time, which is why they are known as dynamic RAMs.
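The back-to-back inverter idea can be sketched as a purely logical model (not a circuit-level one): each trip around the two-inverter loop reproduces the stored bit.

```python
def inverter(bit):
    """Logical model of an inverter: the output is the complement of the input."""
    return 1 - bit

# Two inverters back to back: each one's input is the other's output,
# so going around the ring returns the original value and the bit is
# maintained indefinitely -- the "static" in SRAM.
stored = 1
for _ in range(5):
    stored = inverter(inverter(stored))
print(stored)  # 1
```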
