Imp Question Notes

Cache mapping determines how data from main memory is placed in cache memory, with three types: direct mapping (fast but prone to collisions), fully associative mapping (flexible but slower), and set-associative mapping (a balance of speed and flexibility). Memory types include registers (fastest), cache (very fast), RAM (volatile), virtual memory (slower), ROM (non-volatile), and secondary storage (permanent). Understanding computer organization and assembly language is crucial for optimizing performance and writing efficient code.

Uploaded by nashidumar

What is Cache Mapping?

Cache mapping is the method used to decide where data from main memory will be placed in the
cache memory.

Types of Cache Mapping

1. Direct Mapping

• Simple and fast

• Each block of memory maps to exactly one cache line

Formula:

Cache Line = (Main Memory Block Address) MOD (Number of Cache Lines)

Example (assuming a cache with 4 lines):

Main Memory → Cache Line

Block 5 → Line 1 (5 mod 4 = 1)

Block 9 → Line 1 (9 mod 4 = 1)

Advantage: fast, simple lookup.
Disadvantage: frequent collisions when multiple blocks map to the same line.
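The direct-mapping formula above can be sketched in a few lines of Python (the helper name is invented for illustration):

```python
def direct_mapped_line(block_address, num_cache_lines):
    """Cache Line = (Main Memory Block Address) MOD (Number of Cache Lines)."""
    return block_address % num_cache_lines

# With 4 cache lines, blocks 5 and 9 collide on the same line:
print(direct_mapped_line(5, 4))  # 1
print(direct_mapped_line(9, 4))  # 1
```

Because the line is fixed by the formula, the second block evicts the first even if other lines are empty.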

2. Fully Associative Mapping

• Any block can go anywhere in cache.

• No fixed mapping; requires searching entire cache.

Advantage: flexible, fewer collisions.
Disadvantage: slower lookup (requires searching the entire cache associatively).

3. Set-Associative Mapping (Most Common)

• Compromise between the above two.

• Cache is divided into sets, and each set has multiple lines (ways).

• A memory block maps to a specific set, but any line within that set.

E.g., in a 4-way set-associative cache, each set has 4 lines.

Formula:

Set Number = (Block Address) MOD (Number of Sets)

Advantage: balances speed and flexibility.
Disadvantage: slightly more complex lookup logic.
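A minimal sketch of a set-associative lookup, assuming a toy cache where each set is simply a list of cached block numbers (the ways):

```python
def cache_hit(cache, block_address):
    """Set Number = (Block Address) MOD (Number of Sets);
    the block may sit in any way of that set, so only that set is searched."""
    set_number = block_address % len(cache)
    return block_address in cache[set_number]

# 2 sets × 4 ways = 8 lines total
cache = [[8, 2, 4, 6],   # set 0: blocks with block mod 2 = 0
         [5, 9, 1, 3]]   # set 1: blocks with block mod 2 = 1

print(cache_hit(cache, 9))  # True: 9 mod 2 = 1, and block 9 is cached in set 1
print(cache_hit(cache, 7))  # False: set 1 does not currently hold block 7
```

The search is confined to one small set rather than the whole cache, which is why hardware cost sits between direct and fully associative mapping.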
Summary Table

Mapping Type Speed Flexibility Hardware Cost

Direct Mapping Fast Low Low

Fully Associative Slower High High

Set-Associative Medium Moderate Medium

Types of Memory (by Function and Speed)

1. Registers

• Smallest, fastest memory

• Located inside the CPU

• Store operands and intermediate results

• Example: EAX, EBX, ECX, etc. in x86

2. Cache Memory

• Very fast memory between CPU and RAM

• Stores frequently accessed data

• Types (based on proximity to CPU):

o L1 Cache – Smallest & fastest, inside the CPU core

o L2 Cache – Larger, slightly slower

o L3 Cache – Shared by all cores, even larger and slower

3. Main Memory (RAM)

• Volatile memory (loses data on shutdown)

• Stores programs and data currently in use

• Types:

o DRAM (Dynamic RAM) – Common type of main memory

o SRAM (Static RAM) – Faster, used in cache


4. Virtual Memory

• Part of hard drive/SSD used as extra RAM

• Managed by the OS

• Slower than actual RAM but allows running bigger programs

• Uses paging and segmentation

5. ROM (Read-Only Memory)

• Non-volatile memory (retains data after power off)

• Stores firmware (e.g., BIOS/UEFI)

• Usually read-only, though modern versions can be updated (like EEPROM, Flash)

6. Secondary Storage

• Permanent, large-capacity storage

• Examples: HDD, SSD, USB drives

• Much slower than RAM but non-volatile

Summary Table

Memory Type Speed Volatile Example Use

Registers Fastest Yes CPU calculations

Cache (L1/L2/L3) Very Fast Yes Frequently used data

RAM (DRAM) Fast Yes Running programs/data

Virtual Memory Slow Yes Overflow from RAM

ROM Medium No Firmware (BIOS)

HDD/SSD Slowest No File storage

Von Neumann Architecture

Features:

• Single memory for both data and instructions

• One bus for data and instructions

• Instructions and data are fetched one at a time


Advantages:

• Simple and cheaper to design

• Flexible (easier to implement general-purpose systems)

Disadvantages:

• Von Neumann Bottleneck: CPU must wait when fetching instructions and data

• Slower performance due to shared bus

Used In:

• Most general-purpose computers (PCs, laptops, Intel x86 processors)

Harvard Architecture

Features:

• Separate memory for instructions and data

• Separate buses for fetching data and instructions

• Can fetch both at the same time

Advantages:

• Faster execution (no bottleneck)

• Higher performance in embedded or specialized systems

Disadvantages:

• More complex and expensive hardware

• Less flexible for general-purpose computing

Used In:

• Microcontrollers, DSPs, some embedded systems (e.g., ARM Cortex-M)

Summary Table

Feature Von Neumann Harvard

Memory for Instructions & Data Shared Separate

Buses One shared bus Separate instruction & data buses

Speed Slower (bottleneck) Faster (parallel access)

Complexity Simpler More complex


Usage PCs, laptops (Intel, AMD) Embedded systems, microcontrollers

A CPU cycle refers to a single tick of the clock signal that drives a computer's central processing unit
(CPU). It's the basic unit of time in which the CPU can perform a task, like moving data, performing
arithmetic, or accessing memory.

Here’s a simple breakdown:

What is a CPU Cycle?

• The CPU clock generates regular pulses (like a metronome).

• Each pulse = one CPU cycle.

• During each cycle, the CPU can execute a small part of an instruction (or a full instruction,
depending on complexity).

Example:

Let’s say the CPU is running at 3 GHz (3 gigahertz) — that means it can perform 3 billion cycles per
second.

• Simple instructions (like ADD A, B) might take 1 cycle.

• Complex instructions (like memory access or division) might take multiple cycles.
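Those numbers can be sanity-checked with quick arithmetic (the cycle counts below are illustrative, not from any specific CPU):

```python
clock_hz = 3_000_000_000        # 3 GHz: 3 billion cycles per second
cycle_time_ns = 1e9 / clock_hz  # duration of one cycle in nanoseconds

# Illustrative cycle counts; real values vary by microarchitecture:
add_ns = 1 * cycle_time_ns      # simple ADD: ~1 cycle
div_ns = 20 * cycle_time_ns     # division: many cycles

print(f"cycle: {cycle_time_ns:.3f} ns, ADD: {add_ns:.3f} ns, DIV: {div_ns:.2f} ns")
```

At 3 GHz a single cycle lasts about a third of a nanosecond, so a 20-cycle instruction still finishes in under 7 ns.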

Why CPU Cycles Matter:

• Performance: More cycles per second (higher clock speed) usually means faster processing.

• Efficiency: CPUs are designed to do more work per cycle (this is why a 2 GHz CPU can
outperform a 3 GHz CPU if it does more per cycle).

• Optimization: Software developers try to write code that uses fewer CPU cycles to complete
a task — this means faster and more efficient programs.
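The point about work per cycle can be made concrete (the IPC values are assumed for illustration):

```python
# Instructions per second = clock rate × instructions per cycle (IPC)
ips_3ghz = 3_000_000_000 * 1.0   # 3 GHz CPU retiring 1 instruction per cycle
ips_2ghz = 2_000_000_000 * 2.0   # 2 GHz CPU retiring 2 instructions per cycle

print(ips_2ghz > ips_3ghz)  # True: the 2 GHz CPU completes more work per second
```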

Analogy:

Think of the CPU like a factory machine:

• Each cycle is a moment when the machine moves or processes.

• Faster cycles = faster machine.

• But smarter designs (more efficient CPUs) can do more with each move, not just move
faster.
What are Registers in a CPU?

Registers are small, ultra-fast memory units located inside the CPU. They are used to store and
manipulate data that the CPU is currently working on.

Key Characteristics of Registers:

Feature Description

Location Inside the CPU

Speed Faster than RAM and cache

Size Very small (usually a few bytes)

Purpose Temporary storage during execution

Why Registers Are Important

Registers hold:

• Operands (data to be processed)

• Instruction addresses

• Intermediate results

They enable the CPU to access and manipulate data without delay, unlike RAM or cache, which take more time.

Common Types of Registers:

Register Type Purpose

Accumulator (ACC) Stores results of arithmetic/logic operations

Program Counter (PC) Holds the address of the next instruction to execute

Instruction Register (IR) Holds the current instruction being executed

General-Purpose Registers (e.g., AX, BX, R1, R2) Store temporary data and results

Stack Pointer (SP) Points to the top of the stack in memory

Status Register (Flags) Shows the result status of operations (e.g., zero, carry, overflow)

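
The roles of the PC, IR, and accumulator can be illustrated with a toy fetch-decode-execute loop (the instruction format here is invented for illustration, not a real ISA):

```python
# Toy program: each "instruction" is an (opcode, operand) pair.
program = [("LOAD", 5), ("ADD", 10), ("HALT", 0)]

pc = 0    # Program Counter: address of the next instruction
acc = 0   # Accumulator: holds arithmetic results

while True:
    ir = program[pc]        # fetch: Instruction Register holds current instruction
    pc += 1                 # PC now points at the next instruction
    opcode, operand = ir    # decode
    if opcode == "LOAD":    # execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)  # 15
```

Real CPUs do the same fetch-decode-execute steps in hardware, with the PC and IR updated automatically each cycle.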
Computer Organization and Assembly Language – Explained Simply

These are two fundamental concepts in computer science that help you understand how computers
work at a low level — from hardware to the code that directly interacts with it.

1. Computer Organization

Computer Organization focuses on how a computer system is structured internally and how its
different parts work together.

Main Components:

Component Function

CPU (Processor) Executes instructions

Memory (RAM) Stores data and instructions temporarily

I/O Devices Input (keyboard, mouse), Output (monitor, printer)

Control Unit Directs the operation of the processor

ALU (Arithmetic Logic Unit) Performs arithmetic and logic operations

Registers Small, fast storage within the CPU

Buses Carry data, address, and control signals between components

Topics in Computer Organization:

• Number systems (binary, hexadecimal)

• Data representation (signed/unsigned integers, floating points)

• Instruction execution cycle (fetch → decode → execute)

• Memory hierarchy (registers → cache → RAM → hard drive)

• Pipelining and parallelism

2. Assembly Language

Assembly Language is a low-level programming language that has a 1-to-1 mapping with machine
instructions. It is specific to the CPU architecture (e.g., x86, ARM).

Key Features:

• Closest human-readable form of machine code

• Each instruction typically translates to one machine instruction

• Uses mnemonics (short codes) for operations (e.g., MOV, ADD, SUB)
• Requires an assembler to convert to machine code

Example (x86 assembly):

MOV AX, 5 ; Move 5 into register AX

MOV BX, 10 ; Move 10 into register BX

ADD AX, BX ; Add BX to AX (AX = AX + BX)

This simple instruction sequence shows data movement and arithmetic.

How They're Related:

• Computer Organization teaches you how the hardware executes instructions.

• Assembly Language shows you how to write those instructions in a form the CPU
understands.

Understanding both gives you deep insight into:

• How programs run behind the scenes

• How memory and registers are used

• How performance can be optimized at the hardware level

Why Learn This?

• Great for system programming, embedded systems, reverse engineering, and game dev.

• Helps you write efficient, optimized code.

• Makes you a better programmer even at higher levels like C++ or Python.