
Computer Architecture

Source: Code Acad

Uploaded by yixijar794

Computer Architecture: Instruction Set Architecture

Instruction Set Architectures
An Instruction Set Architecture (ISA) defines the communication rules between the hardware and software of a computer. The ISA is a conceptual design principle and is not stored in a computer's memory.

Some things an ISA defines:
- How binary instructions are formatted
- Which instructions are available to be processed on a specific hardware setup
- How computer memory (volatile and non-volatile) is accessed

Complex Instruction Set Computers (CISC)
CISC (Complex Instruction Set Computer) is an ISA design practice that focuses on multi-step instructions and complex, power-consuming hardware. These designs primarily focus on hardware components and binary instruction complexity. Processing components are typically not interchangeable with RISC-designed systems.

CISC instruction attributes:
- Single instructions take more than one CPU cycle to complete
- Instruction length varies based on the instruction type
- Hardware must be designed to accept more complicated instructions

Reduced Instruction Set Computers (RISC)
RISC (Reduced Instruction Set Computer) is an ISA design practice that focuses on simple, quickly executed instructions to improve efficiency and reduce power consumption. These designs primarily focus on simple hardware components and reduced binary instruction complexity. Processing components are typically not interchangeable with CISC-designed systems.

General RISC instruction attributes:
- Single instructions take only one CPU cycle to complete
- Instruction lengths are fixed, regardless of the instruction type
- Reduced hardware complexity leads to lower power consumption, at the expense of overall processing time

Control Unit (CU)
The Control Unit (CU) on a CPU receives information from the software; it then distributes and directs the data to the relevant hardware components.

Some functions of the CU:
- Determine what/where the next instruction must go for processing
- Send clock signals to all hardware to force synchronous operations
- Send memory taskings when appropriate

Arithmetic and Logic Unit (ALU)
An Arithmetic Logic Unit (ALU) is a digital circuit used to perform arithmetic and logic operations. It is a fundamental building block of the CPU.

Some ALU functions:
- Addition and subtraction
- Determining equality
- AND/OR/XOR/NOR/NOT/NAND logic operations, and more

Registers
A register is a volatile memory system that provides the CPU with rapid access to the information it is immediately using.

Functions of a register:
- Store temporary data for immediate processing by the ALU
- Hold "flag" information when an operation results in overflow or triggers other flags
- Hold the location of the next instruction to be processed by the CPU

The Compilation Process
The compilation process is the procedure code goes through to get from a high-level programming language to the machine code the hardware understands. Most languages go through some form of this four-stage process:

Stage 1: Preprocessing
Preprocessing is the first step; it prepares the user's code for compilation by removing comments, expanding included macros, and performing any other code maintenance before handing the file to the compiler.

Stage 2: Compiling
Compiling is the process of taking the expanded file from the preprocessor and translating the program into the Assembly language designated by the ISA. Program optimization is also a key part of this step.

Stage 3: Assembling
Assembling is the process of taking an Assembly language program and using an assembler to generate machine code for use by the computer hardware.
Stage 4: Linking
Linking is the process of resolving function calls and merging additional objects, libraries, and source code from other locations into the main binary so it is ready to be executed by the processor.
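
The four stages above can be sketched as a toy "compiler" for an invented one-instruction-per-line mini-language. The language, opcodes, and library table here are all made up for illustration; real toolchains are far more involved.

```python
# Toy four-stage compilation pipeline for an invented mini-language.
MACROS = {"ANSWER": "42"}                      # expanded during preprocessing
OPCODES = {"PUSH": 0x01, "ADD": 0x02, "CALL": 0x03}
LIBRARY = {"print": 0x80}                      # the "linker" resolves names to addresses

def preprocess(source):
    """Stage 1: drop comments and expand macros."""
    out = []
    for line in source.splitlines():
        line = line.split("#", 1)[0].strip()   # remove comments
        for name, value in MACROS.items():
            line = line.replace(name, value)   # expand macros
        if line:
            out.append(line)
    return out

def compile_to_asm(lines):
    """Stage 2: translate each line into (mnemonic, operands) 'assembly'."""
    return [(line.split()[0].upper(), line.split()[1:]) for line in lines]

def assemble(asm):
    """Stage 3: turn mnemonics into opcode bytes; names stay symbolic."""
    code = []
    for mnemonic, operands in asm:
        code.append(OPCODES[mnemonic])
        for op in operands:
            code.append(int(op) if op.isdigit() else op)
    return code

def link(code):
    """Stage 4: resolve remaining symbolic names against a 'library'."""
    return [LIBRARY.get(b, b) if isinstance(b, str) else b for b in code]

program = """
push ANSWER   # macro expands to 42
call print    # resolved by the linker
"""
binary = link(assemble(compile_to_asm(preprocess(program))))
print(binary)  # [1, 42, 3, 128]
```

Each stage consumes the previous stage's output, mirroring how a preprocessor, compiler, assembler, and linker hand files to one another.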

Assembly Language
Assembly language is a low-level programming language that corresponds directly to machine code. Each instruction begins with an opcode and then references the memory locations or data it operates on.

Computer Architecture: Cache Memory

Memory Hierarchy
A memory hierarchy organizes the different forms of computer memory by performance. Memory performance decreases, and capacity increases, at each level down the hierarchy. Cache memory is placed in the middle of the hierarchy to bridge the processor-memory performance gap.

Cache Memory
Cache is memory placed between the processor and main memory. Cache is responsible for holding copies of main memory data for faster retrieval by the processor.

Cache memory consists of a collection of blocks. Each block can hold an entry from main memory. Each entry has the following information:
- A tag that corresponds to the main memory location
- The data from that main memory location

Cache Hit
A cache hit occurs when the processor finds the data it needs in cache memory. When a program requests data from memory, the processor first looks in the cache. If the memory location matches one of the tags in a cache entry, the result is a cache hit and the data is retrieved from the cache. Cache hits improve performance by retrieving data from a smaller, faster memory source.

Cache Miss
A cache miss occurs when the processor does not find the data it needs in cache memory and must request it from main memory. Main memory places the memory location and data as an entry in the cache, and the data is then retrieved by the processor from the cache.

Replacement Policy
A replacement policy defines how data is replaced within the cache. Examples of replacement policies:
- Random: Data is replaced randomly. This policy is the easiest to implement in the architecture, but the resulting performance increase may be small.
- Least Recently Used (LRU): The data that has not been accessed for the longest period of time is replaced. This can provide a higher performance increase, since data that is used often stays in the cache, but implementing this policy in the architecture is difficult and may not be worth the cost.
- First In First Out (FIFO): Data is replaced in the order it was placed in the cache. This can provide a moderate performance increase and is not as difficult to implement as LRU. The FIFO policy requires a counter to keep track of which entry is the oldest and next to be replaced.
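
The hit/miss and replacement behavior above can be simulated directly. This is a minimal sketch of a 4-entry cache with an LRU replacement policy; the capacity, addresses, and access pattern are invented, and real caches work on fixed-size blocks and tags in hardware.

```python
# Simulate a small cache with a Least Recently Used replacement policy.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # address -> data, least recently used first
        self.hits = self.misses = 0

    def read(self, address, main_memory):
        if address in self.entries:               # tag match: cache hit
            self.hits += 1
            self.entries.move_to_end(address)     # mark as most recently used
        else:                                     # cache miss: go to main memory
            self.misses += 1
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict the LRU entry
            self.entries[address] = main_memory[address]
        return self.entries[address]

memory = {addr: addr * 10 for addr in range(8)}   # stand-in for main memory
cache = LRUCache(capacity=4)
for addr in [0, 1, 2, 3, 0, 1, 4, 0]:             # address 4 evicts entry 2 (the LRU)
    cache.read(addr, memory)
print(cache.hits, cache.misses)  # 3 5
```

Swapping `popitem(last=False)` for a simple insertion-order queue would turn this into the FIFO policy described above.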
Associativity
Associativity, or placement policy, is the scheme for mapping locations in main memory to specified blocks in the cache.
- A fully associative cache maps each memory location to any location in the cache.
- A direct-mapped cache maps each memory location to exactly one location in the cache. This associativity does not require a replacement policy, since there is only one cache entry for each location in memory.
- A set-associative cache maps each memory location to a specified number of locations in the cache. A 2-way set-associative cache has 2 blocks per set, so a cache with 4 blocks that is 2-way set associative has 2 sets. Each main memory location maps to a set based on its address.

Write Policy
A cache write policy defines how data written to the cache is written to main memory.
- The write-through policy writes data to the cache and to main memory at the same time. This policy is easy to implement in the architecture but is less efficient, since every write to the cache is also a write to the slower main memory.
- The write-back policy writes data to the cache, but only writes it to main memory when the data is about to be replaced in the cache. This policy is more difficult to implement but is more efficient, since data is written to main memory only when it absolutely needs to be.

Computer Architecture: Parallel Computing

Hazards of Parallelism
In instruction parallelism, there are three types of hazards: structural, data, and control. There is no way to remove all hazards from a pipeline; manufacturers can only reduce their risk and impact.

Structural Hazards
Structural hazards are a limitation of the hardware itself. They occur when there are not enough hardware resources to execute multiple instructions.

Data Hazards
Data hazards occur when an instruction depends on another instruction that is still in the pipeline.

Control Hazards
Control hazards occur when the system does not know which set of instructions will need to be processed next. This occurs with branches, loops, and conditional statements.

The Instruction Cycle
For a single instruction to be executed by the CPU, it must go through the instruction cycle (also sometimes referred to as the fetch-execute cycle). While this cycle can vary from CPU to CPU, it typically consists of the following stages:
- Fetch
- Decode
- Execute
- Memory Access
- Register Write-Back

Superscalar Architecture
Processors that take advantage of the superscalar methodology are designed for parallelism: instructions are sent to different execution units at the same time, allowing more than one instruction to be processed in a single clock cycle. In a superscalar processor, each execution unit (such as an ALU) is within a single CPU.

Instruction Pipelining
Instruction pipelining is a hardware-based technique in which the processor attempts to improve the throughput of a group of instructions by simultaneously processing as many instructions as effectively possible.

Parallelism Limitations
Instruction-parallel processing has limitations that restrict the number of simultaneous operations that are possible. These include:
- The level of parallelism in the instruction set being executed
- The amount of overhead needed to find dependencies within the instruction set
- The cost of examining different branches in an instruction set

Parallelism Costs
In instruction pipelining, increasing the number of steps in a pipeline causes the following side effects:
- More expensive hardware to manufacture
- More power needed to run
- An increase in the temperature of the hardware
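
The fetch-decode-execute stages of the instruction cycle described above can be sketched as a loop for a toy CPU. The instruction format and opcodes are invented, and the memory-access and write-back stages are folded into execute for brevity.

```python
# Minimal fetch-decode-execute loop for a toy accumulator CPU.
def run(program):
    pc = 0    # program counter: register holding the next instruction's location
    acc = 0   # accumulator register: holds intermediate ALU results
    while pc < len(program):
        instruction = program[pc]        # fetch the instruction at pc
        opcode, operand = instruction    # decode it into opcode and operand
        if opcode == "LOAD":             # execute (write-back goes to acc)
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            break
        pc += 1                          # advance to the next instruction
    return acc

print(run([("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", None)]))  # 10
```

A real pipeline would overlap these stages across several instructions at once, which is exactly where the structural, data, and control hazards above come from.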
