Computer_Architecture_Notes
Computer architecture refers to the structure and organization of a computer system's hardware.
Here are key concepts and components that form the foundation of computer architecture:
1. **Basic Components**:
- **Central Processing Unit (CPU)**: The "brain" of the computer that executes instructions. It
consists of the Arithmetic Logic Unit (ALU), Control Unit (CU), and registers.
- **Memory**: A storage area where data and instructions are stored. Memory is divided into
several levels, from fast registers and caches to slower main memory and secondary storage.
- **Input/Output Devices (I/O)**: Hardware interfaces through which the computer interacts with
the outside world, such as keyboards, displays, and storage devices.
2. **CPU Components**:
- **Arithmetic Logic Unit (ALU)**: Performs arithmetic and logical operations (addition, subtraction,
AND, OR, comparisons).
- **Control Unit (CU)**: Directs the flow of data and operations within the CPU.
- **Registers**: Small, fast storage locations in the CPU that hold intermediate data during
processing.
3. **Memory Hierarchy**:
- **Registers**: Smallest and fastest, located in the CPU. They hold data and instructions for
immediate use.
- **Cache Memory**: Fast memory located close to the CPU to store frequently accessed data.
- **Main Memory (RAM)**: Stores data that is actively used by the CPU.
- **Secondary Storage**: Larger, slower storage (hard drives, SSDs) used to store data long-term.
- **Tertiary Storage**: Used for backup and archival data (e.g., optical drives, tape storage).
4. **Instruction Execution**:
- **Program Storage**: Both data and instructions are stored in memory, and the CPU fetches
them from memory in sequence (the stored-program concept).
- **Fetch-Decode-Execute Cycle**: The basic cycle through which the CPU processes
instructions: fetch an instruction from memory, decode it, and execute the operation.
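The fetch-decode-execute cycle can be sketched as a tiny interpreter. This is a minimal illustration, assuming a hypothetical accumulator machine with made-up opcodes (`LOAD`, `ADD`, `STORE`, `HALT`), not any real instruction set:

```python
def run(program, memory):
    """Run a toy program: each instruction is a (opcode, operand) pair."""
    pc = 0   # program counter: address of the next instruction
    acc = 0  # accumulator register
    while True:
        opcode, operand = program[pc]  # fetch the instruction
        pc += 1
        if opcode == "LOAD":           # decode, then execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

# Compute memory[2] = memory[0] + memory[1]
mem = {0: 7, 1: 5, 2: 0}
run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], mem)
print(mem[2])  # → 12
```

Real CPUs do the same loop in hardware, with the control unit steering each phase.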
5. **Instruction Set Architecture (ISA)**:
- Defines the set of instructions the CPU can execute and how data is represented in memory.
- Includes instructions for arithmetic, logic, data movement, and control flow.
6. **Pipelining**:
- A technique where multiple instruction stages are overlapped, allowing for faster processing by
executing different stages of several instructions at the same time.
- Stages of pipelining: Instruction Fetch (IF), Instruction Decode (ID), Execute (EX), Memory
Access (MEM), and Write Back (WB).
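The overlap can be sketched by scheduling each instruction's stages one cycle apart. This is an idealized model (no hazards or stalls), where instruction `i` enters stage `s` at cycle `i + s`:

```python
# Classic 5-stage pipeline: Fetch, Decode, Execute, Memory, Write Back
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_schedule(n_instructions):
    """Return {cycle: [(instruction, stage), ...]} for an ideal pipeline."""
    schedule = {}
    for i in range(n_instructions):
        for s, stage in enumerate(STAGES):
            schedule.setdefault(i + s, []).append((i, stage))
    return schedule

sched = pipeline_schedule(3)
# In cycle 2, three instructions are in flight at once:
print(sched[2])        # → [(0, 'EX'), (1, 'ID'), (2, 'IF')]
# Total cycles: n + stages - 1, instead of n * stages without pipelining
print(max(sched) + 1)  # → 7
```

Three instructions finish in 7 cycles rather than 15, which is where the speedup comes from.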
7. **Cache Memory**:
- A small, fast memory that stores frequently accessed data to reduce access time from slower
main memory.
- **Levels of Cache**:
- **L1 Cache**: Smallest and fastest, located closest to the CPU core.
- **L2 Cache**: Larger but slower, located between the CPU and main memory.
- **L3 Cache**: Even larger and slower, shared between cores in multi-core processors.
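The hit-or-miss behavior of a cache can be sketched with a simple direct-mapped model: an address maps to line `address mod num_lines`, and a stored tag distinguishes which address currently occupies the line. This is an illustrative toy, not how any specific processor's cache is organized:

```python
class DirectMappedCache:
    def __init__(self, num_lines, memory):
        self.lines = [None] * num_lines  # each line holds (tag, value)
        self.memory = memory             # backing "main memory"
        self.hits = self.misses = 0

    def read(self, address):
        index = address % len(self.lines)
        tag = address // len(self.lines)
        line = self.lines[index]
        if line is not None and line[0] == tag:
            self.hits += 1                    # hit: served from the cache
            return line[1]
        self.misses += 1                      # miss: go to main memory
        value = self.memory[address]
        self.lines[index] = (tag, value)      # fill the line (evicts old tag)
        return value

ram = {addr: addr * 10 for addr in range(32)}
cache = DirectMappedCache(4, ram)
cache.read(5)  # miss: fills line 1
cache.read(5)  # hit: same tag, same line
cache.read(9)  # miss: also maps to line 1, evicting address 5
print(cache.hits, cache.misses)  # → 1 2
```

The eviction on the third read shows why two frequently used addresses that map to the same line can hurt performance (a conflict miss).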
8. **Parallel Processing**:
- **Multicore Processors**: Processors with multiple cores that can handle several tasks in
parallel.
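Dividing a task across workers, as a multicore processor does, can be sketched with the standard-library `concurrent.futures` module. One caveat worth hedging: CPython threads share a global interpreter lock, so true CPU parallelism for compute-bound work needs `ProcessPoolExecutor`; the structure of the code is the same either way:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

data = list(range(1000))
# Split the work into four independent chunks, one per worker
chunks = [data[i:i + 250] for i in range(0, 1000, 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Each chunk is summed by a separate worker; results are then combined
    total = sum(pool.map(partial_sum, chunks))

print(total)  # → 499500
```

The split-work-then-combine pattern is the essence of data parallelism on multicore hardware.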
9. **Virtual Memory**:
- A memory management technique that allows the operating system to use disk storage as an
extension of RAM.
- **Paging**: Divides memory into fixed-size pages and swaps them between RAM and disk as
needed.
- **Segmentation**: Divides memory into segments (e.g., code, data, stack) and manages them
independently.
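Paged address translation can be sketched in a few lines: a virtual address splits into a page number (looked up in the page table) and an offset (unchanged within the page). The 4 KiB page size and the page-table contents below are illustrative assumptions:

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common choice

# Maps virtual page number -> physical frame number; None = not in RAM
page_table = {0: 7, 1: 3, 2: None, 3: 12}

def translate(virtual_address):
    """Translate a virtual address to a physical address via the page table."""
    vpn = virtual_address // PAGE_SIZE    # virtual page number
    offset = virtual_address % PAGE_SIZE  # offset stays the same
    frame = page_table.get(vpn)
    if frame is None:
        # In a real OS, this page fault triggers a swap-in from disk
        raise LookupError(f"page fault: page {vpn} is on disk")
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 → frame 3 → 12292

try:
    translate(2 * PAGE_SIZE)  # page 2 is not resident
except LookupError as e:
    print(e)
```

The page fault is the mechanism that lets disk storage act as an extension of RAM: the OS loads the missing page, updates the table, and restarts the access.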
10. **Input/Output (I/O) Systems**:
- Handle communication between the CPU and external devices like keyboards, displays, and
storage.
- **Direct Memory Access (DMA)**: A system where peripherals can access memory directly,
without involving the CPU in every data transfer.
- **Interrupts**: A mechanism where the CPU is alerted to handle I/O events or exceptional
conditions.
11. **Buses**:
- A collection of pathways used to transfer data between the CPU, memory, and I/O devices.
12. **RISC vs. CISC**:
- **RISC (Reduced Instruction Set Computing)**: Focuses on a small set of simple instructions
that execute quickly, typically one per clock cycle.
- **CISC (Complex Instruction Set Computing)**: Uses a large set of instructions, some of which
perform complex, multi-step operations in a single instruction.
13. **Performance Metrics**:
- **Clock Speed**: The rate at which the CPU executes instructions (measured in GHz).
- **MIPS (Million Instructions Per Second)**: A measure of how many millions of instructions the
CPU can execute per second.
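These metrics relate through the basic CPU performance equation: execution time = instruction count × CPI / clock rate. A worked example, using hypothetical numbers (2 million instructions, an average CPI of 2, a 1 GHz clock):

```python
instructions = 2_000_000
cpi = 2.0                  # average clock cycles per instruction
clock_hz = 1_000_000_000   # 1 GHz clock rate

# CPU time = instruction count * CPI / clock rate
exec_time = instructions * cpi / clock_hz

# MIPS = instruction count / (execution time * 10^6)
mips = instructions / (exec_time * 1_000_000)

print(exec_time)  # → 0.004 (seconds)
print(mips)       # → 500.0
```

Note that MIPS alone can mislead: a CPU running simple instructions at a high MIPS rating may do less useful work per second than one running fewer, more powerful instructions.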
14. **Power Efficiency**:
- As processors become more powerful, energy consumption becomes a crucial factor, especially
in mobile and embedded devices.
- **Power Consumption**: Affects the performance-to-power ratio, with techniques like dynamic
voltage and frequency scaling (DVFS) used to reduce energy use.
Understanding these concepts is essential to grasping how computers work at a fundamental
level, and forms the basis for more advanced study of topics like computer systems design, operating
systems, and compilers.