Computer Architecture and Organization Summary


Chapter 1: Introduction to Computer Organization and Architecture

• Computer Organization vs. Computer Architecture:


o Computer Architecture: Defines the capabilities and programming model of a
computer but not the details of how it is implemented. It includes:
 Instruction Set: The commands a CPU can understand.
 Data Types: Types of data the CPU can work with (integers, floating-
point, etc.).
 Addressing Modes: How memory addresses are interpreted (direct,
indirect, etc.).
o Computer Organization: Details of the hardware components and their
interconnections to implement the architecture. It includes:
 Data Path and Control Signals: How data and commands are routed.
 Processor Technology: Details of CPU design like pipelining, superscalar
operations, and parallel processing.
• Hierarchy of Functions:
o A computer’s operations can be broken down into four main functions:
1. Data Processing: Operations like arithmetic, logic, and data movement.
2. Data Storage: Temporary and permanent storage for processing.
3. Data Movement: Transfer between memory, CPU, and peripherals.
4. Control: Coordination of components for orderly and correct operations.

Chapter 2: Evolution of Computers and Performance Metrics

• Generations of Computers:
o 1st Generation (1940s-50s): Vacuum Tubes - Used in ENIAC; massive
machines with limited speed and high power consumption.
o 2nd Generation (1950s-60s): Transistors - Replaced vacuum tubes, improving
size, reliability, and power efficiency.
o 3rd Generation (1960s-70s): Integrated Circuits (ICs) - Allowed
miniaturization and reduced cost.
o 4th Generation (1970s-present): Microprocessors - Enabled the creation of
personal computers.
• Performance Metrics:
o Clock Speed (Hz): Number of cycles the CPU completes per second. Higher
speeds generally mean faster operations.
o MIPS (Million Instructions Per Second): Measures instruction execution speed.
Limited as it doesn’t account for instruction complexity.
o FLOPS (Floating Point Operations Per Second): Measures performance for
numerical calculations.
o Amdahl’s Law: Describes limits of performance gain from parallel processing.
Formula: Speedup = 1 / ((1 − P) + P/S), where P is the fraction of the
program that can be parallelized and S is the speedup of the parallel portion.
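
A quick worked example in C, using assumed values (P = 0.9, S = 10; neither comes from these notes):

#include <stdio.h>

/* Amdahl's Law: Speedup = 1 / ((1 - P) + P / S). */
double amdahl_speedup(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void) {
    /* Assumed example: 90% of the work parallelizes and that portion runs 10x faster. */
    printf("Speedup = %.2f\n", amdahl_speedup(0.9, 10.0));  /* prints about 5.26 */
    return 0;
}

Even with the parallel portion 10x faster, the serial 10% caps the overall speedup near 5.3x, which is the point of the law.
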
Chapter 3: Interconnection Structures and Bus Architecture

• Von Neumann Architecture: Core concept where both data and instructions are stored
in memory, accessed by location, and generally processed sequentially.
• System Buses:
o Data Bus: Transfers data; wider buses (more bits) can move more data
simultaneously.
o Address Bus: Carries memory addresses specifying where data should go or
come from.
o Control Bus: Carries control signals such as read/write commands, interrupt
requests, and timing (clock) signals.
• Bus Design:
o Synchronous Buses: Operate on a clock signal, enforcing timing for all devices
on the bus; faster but less flexible.
o Asynchronous Buses: Operate without a clock signal, using protocols for data
synchronization.
• Bus Arbitration:
o Centralized Arbitration: A single device (arbiter) decides which component
accesses the bus.
o Distributed Arbitration: Each device can signal when it needs bus access,
common in systems with multiple CPUs.

Chapter 4: Cache Memory

• Memory Hierarchy and Cache:


o Registers: Fastest, smallest, used in CPU for immediate data needs.
o Cache Memory: Small, high-speed storage close to the CPU to temporarily hold
frequently accessed data.
o Main Memory (RAM): Larger but slower than cache; holds active program data.
o Secondary Storage (e.g., SSD, HDD): Permanent storage for large datasets.
• Cache Mapping Techniques:
o Direct Mapping: Each block in memory maps to exactly one cache line. Simple,
but blocks that share a line evict each other, causing conflict misses.
 Formula: Cache Line = Block Number mod Number of Cache Lines (see
the sketch below).
o Fully Associative Mapping: Blocks can go into any cache line, which maximizes
flexibility but requires searching the cache.
o Set-Associative Mapping: Cache is divided into sets; a block can be loaded into
any line in a set. Balances simplicity and flexibility.
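
A minimal sketch of the direct-mapping arithmetic above, assuming a hypothetical cache of 64 lines of 16 bytes each (sizes chosen only for illustration):

#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE 16   /* bytes per line -> low 4 address bits are the offset */
#define NUM_LINES 64   /* lines in the cache -> next 6 bits are the line index */

int main(void) {
    uint32_t addr   = 0x1A2B3C;            /* arbitrary byte address */
    uint32_t block  = addr / LINE_SIZE;    /* block number in main memory */
    uint32_t line   = block % NUM_LINES;   /* Cache Line = Block Number mod Number of Cache Lines */
    uint32_t tag    = block / NUM_LINES;   /* tag stored to identify which block is resident */
    uint32_t offset = addr % LINE_SIZE;    /* byte within the line */
    printf("addr 0x%X -> tag %u, line %u, offset %u\n", addr, tag, line, offset);
    return 0;
}
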
• Replacement Policies:
o Least Recently Used (LRU): Replaces the cache line that hasn’t been used for
the longest time (a minimal sketch follows this list).
o FIFO (First-In-First-Out): Replaces the oldest data in cache.
o Random Replacement: Replaces a randomly chosen line.
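
A minimal LRU sketch, assuming a tiny 4-line fully associative cache and an invented block reference string; timestamps stand in for the recency-tracking hardware a real cache would use:

#include <stdio.h>

#define LINES 4

int main(void) {
    int tag[LINES], last_used[LINES], valid[LINES] = {0, 0, 0, 0};
    int refs[] = {1, 2, 3, 4, 1, 5};       /* assumed block reference string */
    int n = sizeof refs / sizeof refs[0];

    for (int now = 0; now < n; now++) {
        int ref = refs[now], hit = -1;
        for (int i = 0; i < LINES; i++)
            if (valid[i] && tag[i] == ref) { hit = i; break; }
        if (hit < 0) {                      /* miss: evict the least recently used line */
            int victim = 0;
            for (int i = 1; i < LINES; i++) {
                if (!valid[victim]) break;  /* an empty line is always preferred */
                if (!valid[i] || last_used[i] < last_used[victim]) victim = i;
            }
            valid[victim] = 1;
            tag[victim] = ref;
            hit = victim;
            printf("block %d: miss, loaded into line %d\n", ref, victim);
        } else {
            printf("block %d: hit in line %d\n", ref, hit);
        }
        last_used[hit] = now;               /* mark the line most recently used */
    }
    return 0;
}

On the final access, block 5 evicts block 2, because block 1 was touched again and blocks 3 and 4 were used more recently.
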
Chapter 5: Internal Memory and Technologies

• RAM Types:
o SRAM (Static RAM): Fast, used for cache; doesn’t require refreshing, but
expensive.
o DRAM (Dynamic RAM): Used for main memory; slower and requires
refreshing.
• ROM Types:
o ROM (Read-Only Memory): Non-volatile, stores firmware.
o PROM (Programmable ROM): Can be written once by the user. EPROM
(Erasable PROM): Erased with ultraviolet light and reprogrammed.
o EEPROM (Electrically Erasable PROM): Can be erased and rewritten
electrically, without removing the chip.
• Error Detection and Correction:
o Parity Bit: Adds an extra bit so that single-bit errors can be detected (see the
sketch below).
o ECC (Error-Correcting Code): Detects and corrects single-bit errors, essential
for reliability in critical systems.
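
A short even-parity sketch in C (the byte value is an arbitrary example):

#include <stdio.h>
#include <stdint.h>

/* Even parity: choose the parity bit so the total number of 1 bits,
   data plus parity, is even. One flipped bit makes the count odd,
   which the receiver can detect but not locate or correct. */
uint8_t even_parity(uint8_t data) {
    uint8_t p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (data >> i) & 1;   /* XOR of all bits: 1 iff an odd number of 1s */
    return p;
}

int main(void) {
    uint8_t byte = 0xB5;        /* 1011 0101 has five 1 bits */
    printf("parity bit = %u\n", even_parity(byte));   /* 1, making the total even */
    return 0;
}
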

Chapter 6: External Memory

• Storage Types:
o Magnetic Disk (HDD): Uses spinning platters to store data magnetically.
Important metrics:
 Seek Time: Time for the read/write head to move to the correct track.
 Rotational Latency: Time for the platter to rotate the target sector under
the head (both appear in the worked example after this list).
o Tape Storage: Sequential access, ideal for archival but not random access.
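
A worked access-time example with assumed, typical-looking numbers (9 ms average seek, 7200 RPM, 0.1 ms transfer; none come from these notes):

#include <stdio.h>

int main(void) {
    double seek_ms = 9.0;                           /* assumed average seek time */
    double rpm = 7200.0;
    /* On average the platter must turn half a revolution to reach the sector: */
    double rotational_ms = 0.5 * (60000.0 / rpm);   /* 60000 ms per minute */
    double transfer_ms = 0.1;                       /* assumed time to read the sector */
    printf("average access time = %.2f ms\n",
           seek_ms + rotational_ms + transfer_ms);  /* about 13.27 ms */
    return 0;
}
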
• RAID (Redundant Array of Independent Disks):
o RAID 0: Data is striped across disks for speed, no redundancy.
o RAID 1: Mirrors data for redundancy.
o RAID 5/6: Data is striped with parity distributed across disks, providing a balance
between speed and fault tolerance.
• Optical Storage: CDs, DVDs, Blu-rays use lasers to store data; low cost, high density.

Chapter 7: Input/Output (I/O) Systems

• I/O Techniques:
o Programmed I/O: The CPU is actively involved in every transfer, polling the
device until it is ready; inefficient for large data (see the polling sketch after
this list).
o Interrupt-Driven I/O: CPU is alerted when data is ready, freeing CPU resources.
o DMA (Direct Memory Access): Allows direct transfer of data between memory
and I/O, reducing CPU involvement.
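
A sketch of the programmed I/O polling loop; the status and data registers here are simulated plain variables standing in for the memory-mapped device registers real hardware would provide:

#include <stdio.h>
#include <stdint.h>

#define READY 0x01

static volatile uint8_t status_reg;    /* simulated device status register */
static volatile uint8_t data_reg;      /* simulated device data register */

int main(void) {
    status_reg = READY;                /* pretend the device just became ready */
    data_reg = 'A';

    while (!(status_reg & READY))      /* busy-wait: the CPU does no useful work here */
        ;
    uint8_t byte = data_reg;           /* the CPU itself copies the data */
    printf("read byte: %c\n", byte);
    return 0;
}

With interrupt-driven I/O the busy-wait loop disappears (the device signals the CPU), and with DMA even the copy is offloaded.
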
• I/O Interfaces:
o USB: Universal, hot-swappable, high-speed, widely used for peripherals.
o SATA: Standard for internal storage connections.
o SCSI: High-performance interface for storage devices, used in servers.

Chapter 8: Operating System Support


• Scheduling:
o Round-Robin Scheduling: Each process gets a fixed time slice in turn; fair, but
frequent context switches add overhead (see the sketch after this list).
o Priority Scheduling: Prioritizes critical tasks, managing CPU allocation by
importance.
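
A minimal round-robin sketch with assumed burst times and a 2-unit quantum:

#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 8};       /* assumed CPU time left per process */
    int n = 3, quantum = 2, done = 0, t = 0;

    while (done < n) {                 /* cycle through processes until all finish */
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d\n", t, i, run);
            t += run;
            remaining[i] -= run;
            if (remaining[i] == 0) done++;
        }
    }
    return 0;
}
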
• Memory Management:
o Paging: Divides memory into fixed-size pages, reducing external fragmentation
by allocating in uniform blocks (address translation is sketched at the end of
this chapter).
o Segmentation: Divides memory into logical segments (e.g., for code, data, stack).
• Device Management: Uses device drivers to interface between the OS and hardware.
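
A paging translation sketch, assuming 4 KiB pages and an invented four-entry page table:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096                          /* 4 KiB pages -> 12 offset bits */

int main(void) {
    uint32_t page_table[] = {7, 3, 12, 5};      /* frame numbers for pages 0-3 (assumed) */
    uint32_t vaddr  = 0x2ABC;                   /* arbitrary virtual address */
    uint32_t page   = vaddr / PAGE_SIZE;        /* page number = 2 */
    uint32_t offset = vaddr % PAGE_SIZE;        /* offset is unchanged = 0xABC */
    uint32_t paddr  = page_table[page] * PAGE_SIZE + offset;
    printf("vaddr 0x%X -> page %u, offset 0x%X -> paddr 0x%X\n",
           vaddr, page, offset, paddr);         /* paddr 0xCABC */
    return 0;
}
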

Chapter 9: Computer Arithmetic

• Binary Representation:
o Signed Numbers: Use two’s complement for negative values.
o Floating-Point Representation: IEEE 754 standard with fields for sign,
exponent, and mantissa; supports a wide range of real numbers (the fields are
decomposed in the sketch below).
• Arithmetic Operations:
o Integer Operations: Includes addition, subtraction (using two’s complement),
multiplication (typically shift-and-add in hardware), and division.
o Floating-Point Operations: Adds complexity due to normalization, rounding,
and handling of exponents.
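
A short sketch of both representations: two’s complement negation of a byte, and splitting a float into its IEEE 754 single-precision fields (the example values are arbitrary):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    int8_t x = 5;
    int8_t neg = (int8_t)(~x + 1);       /* two's complement: invert all bits, add 1 */
    printf("-5 as a two's complement byte: 0x%02X (%d)\n", (uint8_t)neg, neg);

    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      /* reinterpret the float's raw bits */
    uint32_t sign     = bits >> 31;                /* 1 bit */
    uint32_t exponent = (bits >> 23) & 0xFF;       /* 8 bits, biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFF;           /* 23 bits */
    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    return 0;
}
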

Chapter 10: Instruction Sets and Processor Function

• Instruction Set Architecture (ISA):


o Specifies the types of instructions, addressing modes, and register organization.
• Types of Instructions:
o Data Movement: Includes load/store and register transfer.
o Arithmetic and Logic: Basic operations like add, subtract, multiply, logical
AND, OR.
o Control Flow: Branching, jumps, and loops for program control.
• Data Types: Common types include integers, floating-point, characters, and Boolean.

Chapter 11: Addressing Modes

• Addressing Modes:
o Immediate Addressing: Operand is directly included in the instruction.
o Direct Addressing: Specifies memory address of operand.
o Indirect Addressing: Points to an address that holds the operand.
o Indexed Addressing: Uses an index register plus a base address for flexible
memory access (all four modes are illustrated in the sketch after this list).
• Instruction Formats:
o Fixed-Length: Each instruction is the same length.
o Variable-Length: Allows denser code and more complex operations, but
complicates instruction decoding.
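
A loose C analogy for the four addressing modes, using an array as pretend memory; this illustrates how the operand is located, not real machine code:

#include <stdio.h>

int main(void) {
    int mem[8] = {0, 42, 3, 0, 99, 0, 0, 0};   /* pretend main memory (assumed contents) */
    int base = 2, index = 2;                   /* pretend base/index registers */

    int immediate = 7;                  /* immediate: operand is inside the instruction */
    int direct    = mem[1];             /* direct: instruction names address 1 */
    int indirect  = mem[mem[2]];        /* indirect: address 2 holds the operand's address */
    int indexed   = mem[base + index];  /* indexed: effective address = base + index */

    printf("%d %d %d %d\n", immediate, direct, indirect, indexed);  /* 7 42 0 99 */
    return 0;
}
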
Chapter 12: Processor Structure and Function

• CPU Components:
o Control Unit: Directs operations, interprets instructions, and manages execution.
o ALU (Arithmetic Logic Unit): Performs arithmetic and logical operations.
o Registers: Temporary data storage within the CPU, like the Program Counter
(PC) and Accumulator.
• Instruction Cycle:
o Fetch: Retrieves the next instruction from memory.
o Decode: Interprets instruction.
o Execute: Carries out operation.
o Write-back: Saves results to memory or registers.
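
A toy fetch-decode-execute loop for an invented one-address machine; the LOAD/ADD/STORE/HALT opcodes and the accumulator model are assumptions for illustration, not a real ISA:

#include <stdio.h>

enum { LOAD, ADD, STORE, HALT };       /* invented opcodes */

int main(void) {
    int mem[16] = {0};
    int prog[][2] = { {LOAD, 10}, {ADD, 11}, {STORE, 12}, {HALT, 0} };
    mem[10] = 6; mem[11] = 7;

    int pc = 0, acc = 0, running = 1;
    while (running) {
        int op   = prog[pc][0];        /* fetch the instruction the PC points at */
        int addr = prog[pc][1];
        pc++;                          /* PC now points to the next instruction */
        switch (op) {                  /* decode, then execute */
        case LOAD:  acc = mem[addr];      break;
        case ADD:   acc += mem[addr];     break;
        case STORE: mem[addr] = acc;      break;   /* write-back to memory */
        case HALT:  running = 0;          break;
        }
    }
    printf("mem[12] = %d\n", mem[12]); /* 13 */
    return 0;
}
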

Chapter 13: Reduced Instruction Set Computer (RISC) Architecture

• RISC Characteristics:
o Simplified Instructions: Fewer and simpler instructions for efficiency.
o Large Register Set: Minimizes memory access, speeding execution.
• Comparison with CISC: RISC focuses on efficiency, while CISC provides complex
operations for higher-level functions.

Chapter 14: Superscalar and Parallel Processing

• Instruction-Level Parallelism (ILP): Allows for simultaneous instruction execution.


• Superscalar Architecture: Multiple execution units that can execute instructions in
parallel.
• Pipeline Hazards:
o Data Hazard: An instruction depends on the result of an earlier instruction
that is still in the pipeline.
o Control Hazard: The next instruction is unknown until a branch resolves.
o Structural Hazard: Two instructions need the same hardware resource at the
same time.

Chapter 15 & 16: Control Unit Design

• Control Types:
o Hardwired Control: Uses fixed circuits, fast but inflexible.
o Microprogrammed Control: Uses microinstructions, flexible for complex
control flows.
• Micro-Operations: Small steps the CPU takes during instruction execution.

Chapter 17 & 18: Parallel and Multicore Processing

• Symmetric Multiprocessing (SMP): Processors share a common main memory
for enhanced performance.
• Cache Coherence Protocols:
o MESI Protocol (Modified, Exclusive, Shared, Invalid): Ensures data
consistency across multiple CPU caches (sketched below).
• Multicore Processors: Multiple processing units on a single chip for parallel processing.
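
A simplified sketch of the MESI states and two bus-event transitions; a real protocol also handles write-backs, invalidation acknowledgements, and local read/write events:

#include <stdio.h>

typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_t;

static const char *mesi_name[] = {"Modified", "Exclusive", "Shared", "Invalid"};

/* Another core reads a block we hold: Modified/Exclusive copies fall back to
   Shared (a Modified line must write its dirty data back first). */
mesi_t on_remote_read(mesi_t s) {
    return (s == MODIFIED || s == EXCLUSIVE) ? SHARED : s;
}

/* Another core writes the block: any copy we hold becomes stale. */
mesi_t on_remote_write(mesi_t s) {
    (void)s;
    return INVALID;
}

int main(void) {
    mesi_t line = EXCLUSIVE;    /* we loaded the block and no other cache holds it */
    line = on_remote_read(line);
    printf("after remote read:  %s\n", mesi_name[line]);   /* Shared */
    line = on_remote_write(line);
    printf("after remote write: %s\n", mesi_name[line]);   /* Invalid */
    return 0;
}
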
