CA1, Sem3 (Comp Org)
Class Roll No : 03
Subject : Computer Organisation
Subject code: PCC-CS302
St. Thomas’ College of Engineering & Technology
CONTENTS
❑INTRODUCTION TO CPU ARCHITECTURE.
❑BASIC COMPONENTS OF CPU.
❑EXPLANATION OF INSTRUCTION CYCLE IN CPU.
❑MEMORY HIERARCHY AND CACHE DESIGN.
❑PIPELINE ARCHITECTURE.
❑MULTI-CORE AND PARALLEL PROCESSING.
❑CONCLUSION.
INTRODUCTION TO CPU ARCHITECTURE
❑Definition: CPU stands for Central Processing Unit. Often called the brain of the computer, it is the most
important component of a computer system: the hardware that performs data input/output, processing, and
storage functions. A CPU is installed into a CPU socket.
❑Basic role: The CPU interprets, processes, and executes instructions from the hardware and software programs
running on the device. It performs arithmetic, logic, and other operations to transform the data input into the
information output required by the user.
➢Control Unit: The Control Unit is responsible for coordinating and directing the execution of instructions. It
directs the memory, the arithmetic/logic unit, and the input and output devices of the computer.
➢Registers: Registers are small, fast memory units inside the CPU, used during CPU operations. Some of the
important registers are the Program Counter (PC), Instruction Register (IR), Accumulator (ACC), General-Purpose
Registers (R0, R1, R2...), Address Register (AR), Stack Pointer (SP), etc. Register width depends on the
architecture; on modern computers it is typically 32 or 64 bits.
➢Cache memory: A small, fast supplementary memory that temporarily stores frequently used instructions and
data for quicker processing by the central processing unit (CPU). Cache memory sits between the CPU and main
memory, acting as a high-speed buffer for it.
➢Clock: The CPU relies on a clock signal to synchronize its internal operations. The clock generates a steady pulse
at a specific frequency, and these clock cycles coordinate the CPU's operations. The clock speed is measured in
hertz (Hz) and, together with the number of cycles each instruction needs, determines how many instructions the
CPU can execute per second. Modern CPUs have variable clock speeds, which adjust based on workload to
balance performance and power consumption.
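The relationship between clock speed and instruction throughput can be sketched with a quick calculation. The 3.5 GHz frequency and the 1.4 cycles-per-instruction (CPI) figure below are illustrative assumptions, not values from the text:

```python
# Hypothetical figures for illustration: a 3.5 GHz CPU with an average of
# 1.4 clock cycles per instruction (CPI). Both numbers are assumptions.
clock_hz = 3.5e9                   # clock frequency in hertz (cycles/second)
cycle_time_ns = 1e9 / clock_hz     # duration of one clock cycle in nanoseconds
cpi = 1.4                          # average clock cycles needed per instruction

instructions_per_second = clock_hz / cpi

print(f"Cycle time: {cycle_time_ns:.3f} ns")
print(f"Throughput: {instructions_per_second:.3e} instructions/s")
```

Halving the CPI (better architecture) or doubling the clock (faster circuits) each doubles throughput, which is why both levers matter in CPU design.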
EXPLANATION OF INSTRUCTION CYCLE IN CPU
The instruction cycle includes three steps: fetch, decode, and execute, and is hence also called the
fetch-decode-execute cycle.
1. Fetch: The CPU retrieves an instruction from memory, which typically consists of an opcode (operation code)
and operand(s). The opcode specifies the operation to be performed, while the operand(s) indicate the data or
the address of the data.
2. Decode: The control unit decodes the fetched instruction by interpreting the opcode to determine the specific
operation, such as addition, subtraction, or a memory access. It also identifies the operand(s), which may include
registers, memory addresses, or immediate values.
3. Execute: The CPU performs the operation specified by the opcode using the operand(s). For example, if the
opcode is an addition, the ALU adds the operands and stores the result in a specified location. The outcome of
this step may also affect flags in the status register.
The cycle then repeats, with the program counter moving to the next instruction.
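The loop above can be sketched as a toy simulator. The instruction format, opcodes, and register names here are made-up assumptions for illustration and do not correspond to any real instruction set:

```python
# A minimal sketch of the fetch-decode-execute cycle on an invented machine.
memory = [
    ("LOAD", "ACC", 5),    # ACC <- 5
    ("ADD",  "ACC", 7),    # ACC <- ACC + 7
    ("HALT", None, None),  # stop execution
]

registers = {"PC": 0, "ACC": 0}

while True:
    # Fetch: read the instruction the program counter points to, then advance it.
    opcode, reg, operand = memory[registers["PC"]]
    registers["PC"] += 1

    # Decode + Execute: branch on the opcode and apply it to the operand(s).
    if opcode == "LOAD":
        registers[reg] = operand
    elif opcode == "ADD":
        registers[reg] += operand     # the "ALU" adds and stores the result
    elif opcode == "HALT":
        break

print(registers["ACC"])  # 12
```

Note how the program counter advancing each iteration is exactly what makes "the cycle then repeats" work.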
MEMORY HIERARCHY AND CACHE DESIGN
Memory Hierarchy :
➢ Structure: Organized in layers (registers, cache, RAM, and storage) based on speed and proximity to the CPU.
➢ Purpose: Optimizes data access speed, with the fastest, smallest levels placed closest to the CPU.
[Figure: Computer Memory Hierarchy]
Cache Design:
➢ Levels: Typically includes L1 (fastest, smallest), L2,
and L3 (larger, slower) caches.
➢ Function: Stores frequently used data to reduce
access time to main memory.
➢ Associativity : Determines how data is mapped in
cache (e.g., direct-mapped, set-associative).
➢ Trade-offs: Larger caches improve performance but
increase cost, size, and power consumption.
Balancing these factors is key to efficient CPU design.
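A direct-mapped cache (the simplest associativity mentioned above) can be sketched by showing how an address splits into tag, index, and offset fields. The line size and line count below are assumed for illustration:

```python
# Sketch of a direct-mapped cache lookup. Sizes are illustrative assumptions:
# 64-byte lines, 256 lines (a 16 KiB cache).
LINE_SIZE = 64
NUM_LINES = 256

OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 6 bits select a byte in a line
INDEX_BITS = NUM_LINES.bit_length() - 1    # 8 bits select one of 256 lines

cache = {}  # index -> tag of the line currently stored at that slot

def access(address):
    """Return True on a cache hit, False on a miss (filling the line on a miss)."""
    index = (address >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    if cache.get(index) == tag:
        return True
    cache[index] = tag   # miss: evict whatever shared this index
    return False

print(access(0x1A2B40))  # first touch of this line: miss
print(access(0x1A2B44))  # same 64-byte line: hit
```

Because each address maps to exactly one slot, two hot addresses that share an index keep evicting each other; set-associative designs give each index several slots to reduce such conflicts.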
PIPELINE ARCHITECTURE
Pipeline organization is, in general, applicable to two areas of computer design:
1. Instruction Pipeline – An instruction pipeline receives sequential instructions from memory while previous instructions
are being executed in other segments. Pipeline processing can occur in both the data stream and the instruction stream.
2. Arithmetic Pipeline -An arithmetic pipeline separates a given arithmetic problem into subproblems that can be
executed in different pipeline segments. It’s used for multiplication, floating-point operations, and a variety of other
calculations.
Pipeline in a CPU
Pipelining is a technique for breaking down a sequential process into various sub-operations and executing each sub-
operation in its own dedicated segment that runs in parallel with all other segments. This parallelism reduces idle time
and increases throughput, leading to faster execution of programs. By breaking down instruction processing into discrete
stages, pipelining can significantly boost the CPU’s efficiency and speed.
The most significant feature of the pipeline technique is that it allows several computations to proceed in different
segments at the same time.
Pipelining Hazards
Whenever a pipeline needs to stall for any reason, this is known as a pipeline hazard. Common pipeline hazards
are data dependency, memory delay, branch delay, and resource limitation.
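The throughput gain from pipelining can be estimated with a simple cycle count. This assumes an ideal 5-stage pipeline with no hazards, so it is an upper bound rather than a realistic figure:

```python
# Back-of-the-envelope comparison of sequential vs pipelined execution.
# Assumes an ideal 5-stage pipeline with no hazard stalls (a simplification).
STAGES = 5              # e.g. fetch, decode, execute, memory, write-back
N_INSTRUCTIONS = 100

sequential_cycles = STAGES * N_INSTRUCTIONS        # one instruction at a time
pipelined_cycles = STAGES + (N_INSTRUCTIONS - 1)   # fill once, then 1 per cycle

speedup = sequential_cycles / pipelined_cycles
print(f"{sequential_cycles} vs {pipelined_cycles} cycles, speedup {speedup:.2f}x")
```

As the instruction count grows, the speedup approaches the number of stages; hazards (stalls) are exactly what pull real CPUs below this ideal.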
MULTI-CORE AND PARALLEL PROCESSING
Initially, CPUs had a single core, handling one instruction at a time. Performance
improvements relied mainly on increasing clock speed and optimizing architecture.
However, physical limits on clock speed and heat dissipation constrained further
advancements.
Multiple cores in a CPU work together to handle concurrent tasks by leveraging
parallel processing. Here’s how it improves efficiency and performance:
➢ Task Distribution: Operating systems and applications divide tasks into smaller
threads or processes that can run simultaneously.
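Task distribution can be sketched with a worker pool: the work is split into independent chunks that the operating system can schedule across cores. A thread pool is used here to keep the sketch portable; for CPU-bound Python code a process pool would typically be used instead:

```python
# Sketch of task distribution: split work into independent chunks and hand
# them to a pool of workers that the OS can schedule across cores.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent slice of the data.
    return sum(chunk)

data = list(range(1_000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]  # 4 chunks

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(partial_sum, chunks))  # one task per chunk

total = sum(results)  # combine the partial results
print(total)
```

The pattern (split, process in parallel, combine) is the same whether the workers are threads, processes, or separate machines.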