Csao Reviewer

The document discusses key concepts in computer architecture, including Moore's Law, Dennard Scaling, and Koomey's Law, which describe trends in transistor density, power consumption, and computational efficiency. It outlines the components of CPUs, such as the ALU and CU, and explains the instruction cycle, memory types, and processor architectures like RISC and CISC. Additionally, it covers performance metrics, memory organization, and parallel processing classifications according to Flynn's Taxonomy.

Moore's Law

• Moore's Law refers to Moore's observation that the number of transistors on a microchip doubles about every two years, while the cost of computers is halved.

Dennard Scaling

• Dennard Scaling postulated that as transistors get smaller, their power density stays constant, so power use stays in proportion with area. This allowed CPU manufacturers to raise clock frequencies from one generation to the next without significantly increasing overall circuit power consumption.

Koomey's Law

• Koomey's Law describes a trend in the history of computing hardware: for about a half-century, the number of computations per joule of energy dissipated doubled about every 1.57 years. One implication of Koomey's Law is that the amount of battery needed for a fixed computing load will fall by a factor of 100 every decade.

Amdahl's Law

• Amdahl's Law (or Amdahl's argument) is a formula that gives the theoretical speedup in latency of the execution of a task at a fixed workload that can be expected of a system whose resources are improved.

Performance Measurement

• Performance Measurement is the process of collecting, analyzing, and reporting information regarding the performance of an individual, group, organization, system, or component.

• Execution Time (Response Time): Time between start and completion of a task.

• Throughput: Total work done per unit time.

• CPU Time Formula: CPU Time = CPU Clock Cycles × Clock Cycle Time, or CPU Clock Cycles / Clock Rate.

Clock Rate and Execution Time

• Operation of digital hardware is governed by a constant-rate clock. The clock period is the duration of a clock cycle, and the clock rate is its inverse (Clock Rate = 1 / Clock Cycle Time). CPU Time is calculated as Clock Cycles × Clock Cycle Time.

Instruction Count and CPI

• Instruction Count is the total number of instructions executed in a program. Cycles Per Instruction (CPI) is determined by the CPU hardware. Execution Time is given by (Instruction Count × CPI) / Clock Rate.

Processor Comparisons

• When comparing processors, execution time and performance are calculated from clock rate, CPI, and instruction count, to answer questions such as "Which processor has the highest performance?"

Introduction to CPUs

• The Processing Unit, primarily the CPU (Central Processing Unit), is fundamental in computing systems. It serves as the brain of the computer, executing instructions from programs to perform various tasks.

Key CPU Components

1. Arithmetic Logic Unit (ALU):

o The ALU is responsible for performing arithmetic operations (addition, subtraction, multiplication, division) and logical operations (AND, OR, NOT, comparisons).

2. Control Unit (CU):

o The Control Unit orchestrates the operation of the CPU by managing the execution of instructions, including fetching, decoding, and generating control signals.

3. Register Set:

o Registers are small, high-speed storage locations within the CPU used for temporarily holding data, instructions, and addresses.
CDBM
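Amdahl's Law, described in words above, is usually written as Speedup = 1 / ((1 − p) + p / s), where p is the fraction of the workload that benefits from an improvement and s is the speedup of that fraction. A minimal sketch (the example numbers are hypothetical, chosen only for illustration):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by a factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Hypothetical example: 80% of a task is improvable and runs 4x faster.
print(round(amdahl_speedup(0.8, 4), 2))  # → 2.5, well below 4x
```

The serial fraction (1 − p) caps the achievable speedup no matter how large s grows, which is the limiting effect the law is known for.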
• Register types include General-Purpose and Special-Purpose registers (e.g., Program Counter, Stack Pointer).

4. Datapath:

o The datapath is a collection of functional units that processes data, including the ALU, registers, and buses.

o Connects the ALU, registers, and memory via buses and multiplexers.

o Ensures efficient data flow for processing and storage.

Instruction Cycle

• The instruction cycle describes the process the CPU follows to execute an instruction, consisting of the Fetch, Decode, Execute, and Write Back phases.

1. Fetch: Retrieve the instruction from memory.
2. Decode: Interpret the instruction.
3. Execute: Perform the specified operations.
4. Write Back: Store the result in memory or a register.

Types of CPUs

• General-Purpose CPUs perform a wide range of tasks, while specialized processors like GPUs and microcontrollers are optimized for specific operations.

Memory Overview

• Memory is an essential element of a computer. The performance of the computer system depends upon the size of the memory.

Memory Organization

• Main Memory is organized as a matrix of bits, with rows representing memory locations. Memory can be read or written one row at a time.

Addressing and Memory Size

• Address Width determines the maximum address space. Memory Size is calculated as Address Space × Data Lines.

Memory Types

1. RAM:

o SRAM stores bits as voltage and is fast but expensive. DRAM stores bits as charge and is slower but cheaper.

2. ROM:

o PROM can be programmed only once, EPROM can be erased with UV light, and EEPROM allows electrical erasing.

Endianness

• Endianness refers to the sequential order in which bytes are arranged in memory. Big-Endian systems store the most significant byte at the lowest address, while Little-Endian systems store the least significant byte at the lowest address.

System Components

1. Input Unit: Provides data to the computer system.
2. Output Unit: Delivers processed data to users.
3. Storage Unit: Includes primary storage (volatile, fast) and secondary storage (non-volatile, large).
4. ALU and CU: Perform processing and control operations.

Flynn's Taxonomy

• Parallel processors are classified into SISD, SIMD, MISD, and MIMD based on their instruction and data streams.

Instruction Types

• Data manipulation instructions perform arithmetic, logic, and shift operations.

Architectures

• Von Neumann Architecture uses a single memory for instructions and data. Harvard Architecture separates instruction and data memories for simultaneous access.
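The byte ordering described under Endianness can be observed directly with Python's standard struct module; a minimal sketch packing the same 32-bit value both ways:

```python
import struct

value = 0x12345678

# ">I" packs an unsigned 32-bit int big-endian: MSB at the lowest address.
big = struct.pack(">I", value)
# "<I" packs it little-endian: LSB at the lowest address.
little = struct.pack("<I", value)

print(big.hex())     # 12345678
print(little.hex())  # 78563412
```

The same value occupies memory in mirrored byte order on the two kinds of systems, which is why data exchanged between them must agree on a byte order.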
Raspberry Pi
• Platform Overview:

o The Raspberry Pi is a low-cost, compact computer primarily used for educational purposes, prototyping, and IoT projects.

o It supports various operating systems, including Raspberry Pi OS (based on Debian Linux).

• Hardware Features:

o Processor: Broadcom System-on-Chip (SoC) with ARM-based CPU cores.

o GPIO (General-Purpose Input/Output) Pins: Used for interfacing with external hardware components like sensors and motors.

o Memory: Varies across models; includes onboard RAM for processing tasks.

o Connectivity: Includes Ethernet, Wi-Fi, and Bluetooth (depending on the model).

• Applications:

o Home automation, media servers, retro gaming, and robotics.

o Educational tools for teaching programming and hardware design.

• Programming:

o Supports various programming languages such as Python, C++, and Java.

o Popular for interfacing with sensors and other peripherals.

RISC vs. CISC

• RISC (Reduced Instruction Set Computer) emphasizes efficiency and speed with simple instructions, while CISC (Complex Instruction Set Computer) uses complex instructions to minimize memory access.

• RISC (Reduced Instruction Set Computer):

o Simplified instruction set, optimized for speed and efficiency.

o Focuses on executing instructions within a single clock cycle.

o Higher performance per watt, simpler hardware.

• CISC (Complex Instruction Set Computer):

o Rich instruction set for complex tasks.

o Reduces memory usage by executing complex instructions.

o Emphasis on reducing memory access with complex operations.

• Comparison:

Feature            | RISC              | CISC
Instructions       | Few, simple       | Many, complex
Clock Cycles       | 1 per instruction | Multiple
Power Consumption  | Lower             | Higher

ADDITIONAL INFO

• Central Processing Unit (CPU): Functions as the "brain" of a computer. It executes instructions from software to perform tasks.

o Components:

▪ Arithmetic Logic Unit (ALU): Performs arithmetic (e.g., addition, subtraction) and logical operations (e.g., AND, OR).

▪ Control Unit (CU): Manages instruction execution, including fetching, decoding, and coordinating data flow.

▪ Register Set: Temporary storage for data and instructions during processing.

• Instruction Cycle Phases:

1. Fetch: Retrieving instructions from memory.
2. Decode: Interpreting instructions.
3. Execute: Performing the operation.
4. Write Back: Storing the result.

Processor Architectures:

• Von Neumann Architecture: Unified memory for instructions and data; sequential operations.

• Harvard Architecture: Separate memory for instructions and data; enables simultaneous access.
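The RISC vs. CISC trade-off above can be made concrete with the execution-time formula from the performance sections, Execution Time = (Instruction Count × CPI) / Clock Rate. A sketch with hypothetical figures (a RISC compilation of a program typically needs more, simpler instructions; a CISC compilation fewer, multi-cycle ones):

```python
def execution_time(instruction_count, cpi, clock_rate_hz):
    """Execution Time = (Instruction Count x CPI) / Clock Rate."""
    return instruction_count * cpi / clock_rate_hz

# Hypothetical example at the same 100 MHz clock:
risc = execution_time(1_200_000, cpi=1.0, clock_rate_hz=100e6)  # 0.012 s
cisc = execution_time(600_000, cpi=4.0, clock_rate_hz=100e6)    # 0.024 s
print(risc < cisc)  # the design with the lower execution time is faster
```

With these numbers, halving the instruction count does not compensate for a 4× CPI, which is why the comparison must always combine all three factors rather than any one of them.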
Performance Metrics:

• Clock Speed: Inverse of clock cycle time; faster clock rates generally improve performance.

• Instruction Count & CPI:

o Execution time depends on instruction count and cycles per instruction (CPI).

o Formula: Clock Cycles = Instruction Count × CPI

Memory Basics:

• Stores data, instructions, and addresses.

• Organized hierarchically to balance speed and cost:

o Registers (fastest, lowest capacity)

o Cache Memory

o Main Memory (RAM): Volatile storage for temporary data.

o Secondary Storage: Non-volatile, large-capacity storage like HDDs or SSDs.

Types of Memory:

1. RAM (Random Access Memory):

o SRAM: Fast, expensive, used in caches.

o DRAM: Slower, cheaper, requires refreshing.

2. ROM (Read-Only Memory):

o PROM: Programmed once.

o EPROM: Erasable with UV light.

o EEPROM: Electrically erasable, rewritable.

Parallel Processing & Flynn's Taxonomy:

1. SISD (Single Instruction, Single Data): Sequential execution.
2. SIMD (Single Instruction, Multiple Data): Parallel processing on multiple data points.
3. MISD (Multiple Instruction, Single Data): Theoretical, rarely implemented.
4. MIMD (Multiple Instruction, Multiple Data): Used in modern multiprocessor systems.

Koomey's and Amdahl's Laws:

• Koomey's Law: Efficiency of computations per joule doubles roughly every 1.57 years.

• Amdahl's Law: Limits the speedup of parallel processing based on the serial portion of a task.

Key Formulas and Examples

1. Execution Time:

o CPU Time = Clock Cycles × Clock Cycle Time

o Example: If the clock cycle time is 5 ns, the clock rate = 1 / 5 ns = 200 MHz.

2. CPI Calculation:

o Example: A program executes 3.6 billion cycles over 2 billion instructions.

o CPI = Cycles / Instruction Count = 3.6B / 2B = 1.8

3. Memory Size:

o Memory Size = Address Space × Data Lines.

o Example: For a 64K × 8 memory, size = 64K × 8 bits = 512 Kbits = 64 KB.
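The three worked examples above can be checked with a few lines of Python (a sketch; the quantities and units follow the examples directly):

```python
# 1. Clock rate from clock cycle time: 5 ns -> 200 MHz.
clock_cycle_time_s = 5e-9
clock_rate_hz = 1 / clock_cycle_time_s      # 200,000,000 Hz = 200 MHz

# 2. CPI: 3.6 billion cycles over 2 billion instructions.
cpi = 3.6e9 / 2e9                           # 1.8 cycles per instruction

# 3. Memory size for a 64K x 8 organization.
size_bits = 64 * 1024 * 8                   # 512 Kbits
size_bytes = size_bits // 8                 # 65,536 bytes = 64 KB

print(clock_rate_hz, cpi, size_bytes)
```

Note that a 64K × 8 memory holds 512 Kbits, which is 64 KB; keeping bits and bytes distinct avoids the common factor-of-eight slip in memory-size problems.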