
Pipeline Conflicts

● Timing Variations: pipeline stages rarely take exactly the same amount of time, so faster stages sit idle waiting for the slowest one.
● Data Hazards: when several instructions are in partial execution and reference the same data, one instruction may read a value before an earlier instruction has finished writing it.
● Branching: until a branch instruction is resolved, the pipeline does not know which instruction to fetch next, so instructions fetched after it may have to be discarded.
● Interrupts: an interrupt diverts control to a handler, forcing the partially executed instructions in the pipeline to be flushed.
● Data Dependency: an instruction that needs the result of a previous, still-incomplete instruction must stall until that result is available.
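The data-hazard case above can be sketched with a few lines of Python. This is an illustrative model, not from the notes: the instruction format (destination register, source registers) and the two-slot hazard window are assumptions standing in for a 5-stage pipeline.

```python
# Minimal sketch: detecting read-after-write (RAW) data hazards between
# nearby instructions. The hazard window of 2 slots models the gap between
# a write-back and a later read in a simple 5-stage pipeline.

def raw_hazards(instructions, window=2):
    """Return (i, j) pairs where instruction j reads a register that an
    earlier instruction i (within `window` slots) writes."""
    hazards = []
    for j, (dst_j, srcs_j) in enumerate(instructions):
        for i in range(max(0, j - window), j):
            dst_i, _ = instructions[i]
            if dst_i in srcs_j:   # j reads what i has not yet written back
                hazards.append((i, j))
    return hazards

# Each instruction: (destination register, source registers)
program = [
    ("r1", ("r2", "r3")),   # r1 = r2 + r3
    ("r4", ("r1", "r5")),   # r4 = r1 - r5  -> RAW hazard on r1
    ("r6", ("r7", "r8")),   # independent instruction, no hazard
]
print(raw_hazards(program))  # [(0, 1)]
```

A real pipeline resolves such a hazard by stalling (inserting a bubble) or by forwarding the result directly between stages.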

Similarly: Virtual Memory, Paging, Page Replacement.
Cache memory

Cache memory, also simply called cache, is a special type of high-speed memory located
close to the central processing unit (CPU) in a computer system. It acts as a buffer or
temporary storage area that holds frequently used data and instructions, allowing the CPU to
access them much faster than if it had to retrieve them from the main memory (RAM) every
time.

Here's a breakdown of why cache memory is important:

● Speed Up Data Access: frequently used data and instructions are served from the fast cache instead of slower main memory.
● Reduce Memory Traffic: cache hits mean fewer requests travel over the memory bus to RAM.
● Improve System Efficiency: the CPU spends less time stalled waiting on memory, so overall throughput improves.

Hit Ratio: The hit ratio is a metric that indicates how often the CPU finds the data it needs in the cache. It is calculated as the number of successful cache lookups (cache hits) divided by the total number of lookups (hits + misses). A higher hit ratio means better performance.
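The hit-ratio formula above can be checked with a short Python sketch. The cycle counts (1 cycle for cache, 100 for main memory) and the miss-penalty model (a miss pays the cache lookup plus the main-memory access) are assumed example values, not from the notes; some textbooks instead charge only the main-memory time on a miss.

```python
# Illustrative sketch: hit ratio and average (effective) memory access time.

def hit_ratio(hits, misses):
    return hits / (hits + misses)

def avg_access_time(h, t_cache, t_main):
    # On a hit the CPU pays t_cache; on a miss it pays t_cache + t_main.
    return h * t_cache + (1 - h) * (t_cache + t_main)

h = hit_ratio(hits=950, misses=50)                 # 0.95
print(h)
print(avg_access_time(h, t_cache=1, t_main=100))   # ~ 6.0 cycles
```

Note how a 95% hit ratio turns a 100-cycle memory into an effective ~6-cycle one, which is the whole point of the cache.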

Techniques of Cache Mapping: direct mapping, associative (fully associative) mapping, and set-associative mapping.


Computer organisation vs computer architecture:
MRI (memory-reference instructions) vs Non-MRI (register-reference and input/output instructions)

Fetch Cycle:

The fetch cycle, also known as the instruction fetch cycle, is a fundamental operation
in the computer architecture and organization (CAO) domain. It's the first stage of the
instruction cycle, which is the complete process by which the CPU retrieves,
decodes, and executes instructions.

Here's a breakdown of the fetch cycle in CAO:

Steps:

1. Instruction Address: The program counter (PC) is a register that holds the
memory address of the next instruction to be fetched. At the start of the
fetch cycle, this address is used for the memory access (in many
architectures it is first copied into a memory address register, MAR).
2. Instruction Fetch: The CPU sends a read request to the memory unit using
that address. The memory unit retrieves the instruction from the specified
location and sends it back to the CPU, typically into an instruction register.
3. Program Counter (PC) Increment: The PC is then incremented by the size of
the instruction just fetched (usually 4 bytes for 32-bit architectures), so
that it points to the next instruction in sequence. A branch or jump executed
later may overwrite the PC, so the next fetch comes from the branch target
instead.
4. Instruction Buffer Load (Optional): In some architectures, the fetched
instruction is loaded into a temporary buffer within the CPU before
proceeding to the decode stage. This buffer helps smooth the pipeline when
the fetch and decode stages run at different speeds.
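The steps above can be sketched as a tiny fetch unit in Python. The 4-byte instruction size and the instruction strings are illustrative assumptions, not taken from any real ISA.

```python
# Sketch of the fetch cycle on a toy byte-addressed memory:
# read the instruction at the PC, then advance the PC by one word.

class FetchUnit:
    WORD = 4                      # assumed instruction size (32-bit ISA)

    def __init__(self, memory, start=0):
        self.memory = memory      # address -> instruction
        self.pc = start           # program counter

    def fetch(self):
        address = self.pc                    # step 1: PC supplies the address
        instruction = self.memory[address]   # step 2: memory returns the word
        self.pc += self.WORD                 # step 3: PC -> next instruction
        return instruction                   # hand off to the decode stage

memory = {0: "LOAD r1, 100", 4: "ADD r1, r2", 8: "STORE r1, 104"}
cpu = FetchUnit(memory)
print(cpu.fetch())   # LOAD r1, 100
print(cpu.fetch())   # ADD r1, r2
print(cpu.pc)        # 8
```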

Address Sequencing:

Address sequencing in Computer Architecture and Organization (CAO) refers to the
mechanism that controls the order in which the CPU fetches instructions from
memory during program execution. It's a crucial aspect of the instruction cycle,
ensuring the CPU fetches the correct sequence of instructions for proper program
execution.

Here's a breakdown of address sequencing in CAO:

Components:

1. Control Unit: This unit within the CPU is responsible for fetching and
decoding instructions. It also manages the address sequencing process.
2. Program Counter (PC): This register holds the memory address of the next
instruction to be fetched. Address sequencing involves updating the PC based
on the current instruction and program flow.
3. Control Memory (Optional): In some architectures, a dedicated control
memory might store microinstructions that define the sequencing logic for
fetching a sequence of instructions related to a specific operation.

Techniques:

● Sequential Fetching: The most basic approach, where the PC is simply
incremented by the size of the fetched instruction to fetch the next instruction
in sequence.
● Branching: Based on the outcome of conditional instructions within the
program, the control unit can modify the PC to jump to a different location in
memory. This allows for conditional execution of program blocks.
● Subroutine Calls and Returns: When a program calls a subroutine
(function), the PC's value is typically saved on a stack. After the subroutine
execution, the saved PC value is restored, allowing the program to resume
execution at the correct instruction after the subroutine call.
● Looping: Instructions can modify the PC value to create loops, where a
sequence of instructions is executed repeatedly until a specific condition is
met.
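All four techniques above can be shown in one control loop. This is a sketch for an assumed toy instruction set (not from the notes): ADD falls through sequentially, JMPNZ branches and forms the loop, and CALL/RET save and restore the PC on a stack.

```python
# Toy interpreter demonstrating address sequencing: sequential fetch,
# branching, subroutine call/return via a stack, and looping.

def run(program):
    pc, stack, acc, trace = 0, [], 0, []
    while pc < len(program):
        op, arg = program[pc]
        trace.append(pc)                    # record the fetch order
        if op == "ADD":                     # sequential: fall through
            acc += arg; pc += 1
        elif op == "JMPNZ":                 # branch/loop: jump while acc != 0
            pc = arg if acc != 0 else pc + 1
        elif op == "CALL":                  # save return address, jump
            stack.append(pc + 1); pc = arg
        elif op == "RET":                   # restore the saved PC
            pc = stack.pop()
        elif op == "HALT":
            break
    return acc, trace

program = [
    ("ADD", 2),      # 0: acc = 2
    ("CALL", 4),     # 1: call subroutine at address 4
    ("HALT", None),  # 2: resume here after the subroutine returns
    ("ADD", 99),     # 3: padding, never fetched
    ("ADD", -1),     # 4: subroutine body: decrement acc
    ("JMPNZ", 4),    # 5: loop back until acc == 0
    ("RET", None),   # 6: return to saved address (2)
]
print(run(program))  # (0, [0, 1, 4, 5, 4, 5, 6, 2])
```

The trace makes the fetch order visible: sequential flow (0, 1), a call (to 4), a loop (4, 5 repeated), and a return (back to 2).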
Benefits:

● Ensures the CPU fetches instructions in the correct order for proper program
execution.
● Enables conditional execution and control flow within programs through
branching.
● Allows for subroutine calls and returns, facilitating modular programming.
● Supports looping constructs for iterative tasks.

Implementation:

The specific implementation of address sequencing can vary depending on the CPU
architecture. Some common mechanisms include:

● Hardwired Logic: The control unit might have dedicated logic circuits to
handle various sequencing scenarios like branching based on condition
codes.
● Microprogrammed Control: A control memory might store microinstructions
that define the sequencing logic for different instruction types and control flow
situations.

Impact on Performance:

Efficient address sequencing is crucial for optimal CPU performance. Techniques like
branch prediction can help the CPU anticipate branching behaviour and pre-fetch
instructions from the likely target address, minimising delays.
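The branch prediction idea above can be sketched with the classic 2-bit saturating counter: states 0–1 predict not-taken, states 2–3 predict taken, and each actual outcome nudges the counter by one. The table size and PC-indexing scheme here are simplifying assumptions.

```python
# Sketch of a 2-bit saturating-counter branch predictor.
# One counter per table entry, indexed by (pc mod table size).

class TwoBitPredictor:
    def __init__(self, size=16):
        self.table = [1] * size      # start in "weakly not taken"
        self.size = size

    def predict(self, pc):
        return self.table[pc % self.size] >= 2   # True = predict taken

    def update(self, pc, taken):
        i = pc % self.size
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

p = TwoBitPredictor()
# A typical loop branch: taken 8 times, not taken once (loop exit), taken 8 more.
outcomes = [True] * 8 + [False] + [True] * 8
correct = 0
for taken in outcomes:
    correct += (p.predict(pc=0x40) == taken)
    p.update(pc=0x40, taken=taken)
print(correct, "of", len(outcomes), "predicted correctly")  # 15 of 17
```

The two-bit hysteresis is why this beats a one-bit scheme: a single loop exit only weakens the counter to "weakly taken", so the predictor is not fooled into two mispredictions per loop.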
