CSE-211 Computer Architecture
COMPUTER
ARCHITECTURE
MODULE 1-5
ARCHITECTURE VERSUS
MICROARCHITECTURE
Architecture, also known as the instruction set architecture (ISA), is a
relatively stable abstraction layer presented to software.
It defines how a conceptual machine executes programs and does
not specify the exact implementation details.
On the other hand, microarchitecture, also known as organization,
deals with the specific implementation of the instruction set
architecture, including the size and speed of different structures.
HOW DO DESIGN DECISIONS AFFECT THE PERFORMANCE AND
FUNCTIONALITY OF A COMPUTER SYSTEM?
Decision-making factors:
Performance: For example, in architecture, decisions related
to the number of registers, cache sizes, and memory
organization can impact the system's ability to process data
quickly.
Functionality: In architecture, decisions regarding the
instruction set and the types of operations supported directly
affect the range of tasks that the system can perform.
Power Efficiency: Power consumption is shaped by many of the
same design choices that determine speed, including instruction
execution time, clock frequency, cache size, pipelining
techniques, and branch prediction mechanisms; each must be
weighed against its energy cost.
Scalability: Scalability considerations include factors such as
interconnect design, memory access patterns, and support
for parallelism.
TRADE-OFFS
Performance vs. Power Efficiency: Increasing the performance of a computer
system often requires more power consumption. Designers need to strike a
balance between achieving high performance and minimizing power consumption
to ensure optimal energy efficiency.
Complexity vs. Simplicity: Adding more features and capabilities to a computer
system can increase its complexity, making it harder to design, implement, and
maintain. On the other hand, simplifying the system may limit its functionality.
Designers need to find the right balance between complexity and simplicity to
ensure a manageable and efficient system.
Cost vs. Performance: Cost is a significant factor in computer architecture design.
Higher-performance components and technologies often come at a higher cost.
Designers need to consider the cost implications of their design decisions and find
a balance between performance and affordability.
TRADE-OFFS
Flexibility vs. Specialization: Computer systems can be designed to be flexible,
accommodating a wide range of applications, or specialized for specific tasks. A
more flexible system may have a broader range of capabilities but may
sacrifice performance or efficiency in specific tasks. Specialized systems, on the
other hand, excel in specific applications but may lack versatility.
Time-to-Market vs. Optimization: Designing and optimizing a computer system
can be time-consuming. There is often a trade-off between the time it takes to
develop a system and the level of optimization achieved. Designers need to
balance the need for timely product releases with the desire for highly
optimized and efficient systems.
MACHINE MODELS
HOW DO OPERANDS AND INSTRUCTIONS WORK?
Choosing the Right Model:
Hardwired
Uses logic circuits and finite state machines (FSMs) to generate control
signals. The components of a hardwired control unit are physically connected,
such as gates, flip-flops, and decoders. Hardwired systems are faster than
microcoded systems, but they are more expensive and harder to
modify. Hardwired systems can have trouble handling complex
instructions.
Microcoded
Uses microinstructions stored in high-speed memory to translate
machine instructions into circuit-level operations. Microcoded systems
are more cost-effective and easier to modify than hardwired systems.
Microcoded systems can more easily handle complex instructions.
MICROCODED CPU
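The microcode idea above can be illustrated with a toy sketch: each machine instruction indexes a sequence of micro-operations held in a "control store". All instruction and micro-operation names below are made up for illustration, not a real ISA.

```python
# Toy model of a microcoded control unit: machine instructions are
# translated into sequences of micro-operations via a lookup table
# (the "control store"). Names here are illustrative only.
CONTROL_STORE = {
    "ADD":  ["fetch_operands", "alu_add", "write_back"],
    "LOAD": ["compute_address", "read_memory", "write_back"],
}

def execute(instruction):
    # Return the microinstruction sequence that would drive the datapath.
    return CONTROL_STORE[instruction]

print(execute("LOAD"))  # ['compute_address', 'read_memory', 'write_back']
```

Note that changing the machine's behavior only requires editing the table, which is exactly why microcoded designs are cheaper to modify than hardwired ones.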
PIPELINING
Pipelining is a way to implement instruction-level parallelism within a single
processor. It involves dividing instruction execution into stages, holding
instructions in buffers between stages, and overlapping the execution of
multiple instructions. This technique can lead to reduced cycle time,
increased throughput, and higher clock frequency.
STAGES IN PIPELINE
The stages of pipelined control in computer architecture
typically include:
Instruction Fetch (IF): This stage fetches the instruction from
memory and prepares it for decoding.
Instruction Decode (ID): In this stage, the fetched instruction is
decoded to determine the operation to be performed and the
operands involved.
Execution (EX): The execution stage performs the actual
computation or operation specified by the instruction.
Memory Access (MEM): This stage involves accessing
memory, such as reading from or writing to data memory.
Write Back (WB): The final stage writes the result of the
computation back to the appropriate register or memory
location.
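A back-of-the-envelope timing model makes the benefit of these five stages concrete. This is an idealized sketch assuming one instruction enters the pipeline per cycle, with no hazards or stalls:

```python
# Ideal 5-stage pipeline timing model (assumes no hazards or stalls).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_cycles(num_instructions, num_stages=len(STAGES)):
    # The first instruction takes num_stages cycles; each later one
    # completes exactly one cycle after its predecessor.
    return num_stages + (num_instructions - 1)

def serial_cycles(num_instructions, num_stages=len(STAGES)):
    # Without pipelining, every instruction runs all stages serially.
    return num_instructions * num_stages

print(pipeline_cycles(10))  # 14 cycles pipelined
print(serial_cycles(10))    # 50 cycles unpipelined
```

For long instruction streams the throughput approaches one instruction per cycle, a near-5x speedup over the serial case, which is why the hazards discussed next (each of which inserts stall cycles) matter so much.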
CHALLENGES IN PIPELINE
Data Hazards: Pipelining can lead to data hazards,
which occur when an instruction depends on the result
of a previous instruction that has not yet completed.
These dependencies can cause stalls in the pipeline,
reducing its efficiency.
Control Hazards: Control hazards occur when the
pipeline encounters branch instructions or other
control flow changes. The pipeline needs to predict the
outcome of these instructions in advance, and if the
prediction is incorrect, it can lead to wasted cycles and
reduced performance.
CHALLENGES IN PIPELINE
Structural Hazards: Structural hazards arise when multiple
instructions require the same hardware resource at the same
time. For example, if two instructions need to access the same
memory location simultaneously, a structural hazard occurs, and
the pipeline may need to stall or insert additional cycles to
resolve the conflict.
Pipeline Bubbles: Pipeline bubbles, also known as pipeline stalls or
pipeline flushes, occur when the pipeline needs to be temporarily
halted due to hazards or other issues. These bubbles can reduce
the overall performance gain achieved by pipelining.
Branch Misprediction Penalty: When a branch instruction is
mispredicted, the pipeline needs to be flushed, and the incorrectly
fetched instructions need to be discarded. This can result in a
significant performance penalty.
STRUCTURAL HAZARD
Structural hazards occur when two instructions need to use the
same hardware resource at the same time. This can happen due to
limitations in the hardware design or conflicts in resource allocation.
Structural hazards can arise in computer architecture when there is
a lack of available resources to handle multiple instructions
simultaneously.
For example, if two instructions require access to the same memory
location or register, a structural hazard may occur. Resolving
structural hazards requires careful resource management and
design considerations to ensure that instructions can be executed
efficiently without conflicts.
HOW TO SOLVE A STRUCTURAL HAZARD?
Avoidance: In this approach, the goal is to prevent different
instructions from using the same hardware resource at the same
time. The programmer or compiler can play a role in ensuring that
instructions are scheduled in a way that minimizes structural
hazards.
Hardware-based solutions: Another approach is to address
structural hazards through hardware modifications or
enhancements. For example, one solution is to stall the processor or
a portion of it when a structural hazard occurs. This means that the
dependent instructions are delayed until the contended resource
becomes available, reducing the likelihood of structural hazards.
CACHE MAPPING- ADDRESS COMPONENTS
Components:
Tag: High-order bits of the
memory address, used to uniquely
identify the memory block.
Index: Determines the specific
cache line where the memory
block will be stored.
Offset: Specifies the exact location
within the cache line
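The bit-level split of an address into these three fields can be sketched as follows. The line size (64 bytes) and line count (128) are illustrative assumptions, not values from the slides:

```python
# Splitting a memory address into tag / index / offset fields,
# assuming 64-byte cache lines (6 offset bits) and 128 lines (7 index bits).
OFFSET_BITS = 6   # log2(64)  -> byte position within the cache line
INDEX_BITS = 7    # log2(128) -> which cache line the block maps to

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)            # lowest bits
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # middle bits
    tag = addr >> (OFFSET_BITS + INDEX_BITS)            # remaining high bits
    return tag, index, offset

print(split_address(0x12345678))
```

The three fields always reassemble into the original address, so no information is lost by the split.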
CACHE MAPPING- FULLY ASSOCIATIVE CACHE
A fully associative cache is a type of cache memory where any block of data from
the main memory can be stored in any cache line. This means there are no
restrictions on where a particular memory block can be placed within the cache.
The cache consists of multiple cache lines.
Each cache line has a tag, a valid bit, and data.
The cache controller compares the tag field of the memory address with the tags
of all cache lines to check for a hit.
Memory Access: When a memory access
occurs, the cache controller extracts the tag
field from the memory address.
Tag Comparison: The tag field is compared
with the tags of all cache lines.
Hit or Miss: If a match is found, a cache hit
occurs and the data is retrieved from the
cache line. If no match is found, a cache miss
occurs.
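The steps above can be sketched in a small model. This is a simplified illustration assuming a 4-line cache with FIFO replacement (real fully associative caches typically use LRU or an approximation of it):

```python
from collections import OrderedDict

# Simplified fully associative cache: any block may occupy any line,
# so a lookup compares the incoming tag against ALL stored tags.
class FullyAssociativeCache:
    def __init__(self, num_lines=4):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # tag -> data, insertion-ordered

    def access(self, tag):
        if tag in self.lines:                 # tag matched some line: hit
            return "hit"
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)    # cache full: evict oldest (FIFO)
        self.lines[tag] = object()            # fill a line with the new block
        return "miss"

c = FullyAssociativeCache()
print([c.access(t) for t in [1, 2, 1, 3, 4, 5, 1]])
# ['miss', 'miss', 'hit', 'miss', 'miss', 'miss', 'miss']
```

Note the final access to tag 1 misses: tag 5 forced the eviction of tag 1, the oldest resident block.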
CACHE MAPPING- 2-WAY SET ASSOCIATIVE CACHE
A 2-way set associative cache is a type of cache memory where each set consists
of two cache lines. This means that a block can only be stored in one of the two
cache lines within a set. The set is determined by the lower bits of the memory
address, while the tag field of the address determines which cache line within the
set to use.
The cache is divided into sets.
Each set contains two cache lines.
Each cache line has a tag, a valid bit, and
data.
The lower bits of the memory address
determine the set number.
The tag field of the memory address is
compared with the tags of the two cache
lines within the set.
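A minimal sketch of this behavior, assuming 4 sets and LRU replacement within each set (the sizes are illustrative):

```python
# Simplified 2-way set associative cache: the block number's low bits
# pick a set, and only the 2 tags in that set are compared.
class TwoWaySetAssociativeCache:
    def __init__(self, num_sets=4, ways=2):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [[] for _ in range(num_sets)]  # each set: up to 2 tags

    def access(self, block_number):
        index = block_number % self.num_sets   # lower bits select the set
        tag = block_number // self.num_sets    # remaining bits form the tag
        lines = self.sets[index]
        if tag in lines:                       # only 2 comparisons per lookup
            lines.remove(tag)
            lines.append(tag)                  # mark as most recently used
            return "hit"
        if len(lines) >= self.ways:
            lines.pop(0)                       # evict the LRU line in this set
        lines.append(tag)
        return "miss"

c = TwoWaySetAssociativeCache()
print([c.access(b) for b in [0, 4, 0, 8, 4]])
# ['miss', 'miss', 'hit', 'miss', 'miss']
```

Blocks 0, 4, and 8 all map to set 0, so the third distinct block (8) evicts the least recently used of the two residents, and the final access to block 4 misses.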
CACHE PERFORMANCE
Hit Ratio and Miss Ratio are key metrics used to evaluate the
performance of a cache memory system. They measure how
effectively the cache stores and retrieves data.
Hit ratio: The percentage of memory accesses that result in a cache
hit (i.e., the requested data is found in the cache).
Hit Ratio = (Number of Cache Hits) / (Total Number of Memory
Accesses)
Miss Ratio: The percentage of memory accesses that result in a
cache miss (i.e., the requested data is not found in the cache).
Miss Ratio = (Number of Cache Misses) / (Total Number of Memory
Accesses)
Average Memory Access Time (AMAT) = Hit Ratio × Cache Access Time
+ (1 − Hit Ratio) × Main Memory Access Time
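The formula above as a small helper, using the simultaneous-access model in which a miss costs only the main-memory time:

```python
# AMAT under the simultaneous-access model: hits pay the cache time,
# misses pay the main-memory time.
def amat(hit_ratio, cache_ns, memory_ns):
    return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

print(round(amat(0.9, 20, 100), 1))  # 0.9*20 + 0.1*100 = 28.0 ns
```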
LET’S TRY
A cache has a 60% hit ratio for read operations. The cache access
time is 30 ns and the main memory access time is 100 ns; 50% of all
operations are reads.
What is the average access time for a read operation?
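A worked solution using the AMAT formula from the previous slide (under the same simultaneous-access model; note the 50% read share does not change the per-read average, since we average over reads only):

```python
# Per-read average access time: 60% of reads hit (30 ns), 40% miss (100 ns).
hit_ratio, cache_ns, memory_ns = 0.60, 30, 100
avg_read = hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns
print(round(avg_read, 1))  # 0.6*30 + 0.4*100 = 58.0 ns
```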
THANK YOU!