Questions 1 To 12
1. Computer Architecture: the attributes of the system visible to the programmer, such as the instruction set, data types, and addressing modes.
2. Computer Organization: the operational units and their interconnections that realize the architecture, such as control signals, interfaces, and memory technology.
Simple Summary of the Memory Hierarchy:
1. Registers: Fastest, very small size (inside CPU), holds data being processed.
2. Cache Memory: Faster than RAM, small in size, stores frequently accessed data. (L1,
L2, L3 caches)
3. Main Memory (RAM): Larger than cache, slower, stores active programs and data.
4. Secondary Storage (Hard Disk/SSD): Larger, slower, stores long-term data.
5. Tertiary Storage: Very slow, used for backups or archives (e.g., cloud storage,
tapes).
Why Important: The memory hierarchy ensures fast access to data while balancing speed and
cost, by using a different type of memory at each level.
A Multiplexer (MUX) is a digital device that selects one of many input signals and forwards
it to a single output line. It acts as a data selector that allows multiple data sources to share a
single resource (such as a communication line or data bus).
Key Points:
A multiplexer uses control signals to select which input line (Input 0, Input 1, etc.) is
connected to the output.
The number of control lines depends on the number of input lines. For example, for 2^n
inputs, you need n control lines.
Example: A 4-to-1 MUX has 4 input lines, 1 output line, and 2 control lines (since 2^2 = 4).
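As a minimal software sketch (not a hardware description), a 4-to-1 MUX can be modeled as a function that uses a 2-bit select value to pick one of four inputs; the function name mux4 and the integer inputs are assumptions made only for this example.

#include <stdio.h>

/* 4-to-1 multiplexer: the 2-bit 'select' value picks which input is forwarded. */
int mux4(int in0, int in1, int in2, int in3, unsigned select)
{
    switch (select & 0x3) {      /* only 2 control bits are used */
    case 0:  return in0;
    case 1:  return in1;
    case 2:  return in2;
    default: return in3;
    }
}

int main(void)
{
    /* select = 2 connects input line 2 to the single output */
    printf("output = %d\n", mux4(10, 20, 30, 40, 2));   /* prints 30 */
    return 0;
}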
De-Multiplexer (DEMUX)
A De-Multiplexer (DEMUX) is a digital device that takes a single input signal and routes it to one of many output lines, based on control signals. It performs the reverse operation of a multiplexer.
Key Points:
A demultiplexer uses control signals to decide which output line will receive the input signal.
The number of control lines depends on the number of output lines. For 2^n outputs, you
need n control lines.
Example: A 1-to-4 DEMUX takes 1 input and has 4 output lines, with 2 control lines (since
2^2 = 4).
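Mirroring the MUX sketch above, a 1-to-4 DEMUX can be modeled in software as a function that routes the single input to exactly one of four outputs chosen by 2 control bits; the name demux4 and the output array are assumptions for illustration only.

#include <stdio.h>

/* 1-to-4 demultiplexer: route 'in' to out[select]; unselected outputs stay 0. */
void demux4(int in, unsigned select, int out[4])
{
    for (unsigned i = 0; i < 4; i++)
        out[i] = 0;
    out[select & 0x3] = in;      /* 2 control bits choose the output line */
}

int main(void)
{
    int out[4];
    demux4(99, 1, out);          /* the input appears only on output line 1 */
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);   /* 0 99 0 0 */
    return 0;
}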
Summary:
Multiplexer (MUX): Selects one input from many and sends it to the output.
De-Multiplexer (DEMUX): Takes one input and sends it to one of many outputs, based on
control signals.
1. Input Devices
o Definition: Used to input data into the computer.
o Example: Keyboard (for typing)
2. Output Devices
o Definition: Used to display or output data from the computer.
o Example: Monitor (for displaying images)
3. Storage Devices
o Definition: Used to store data permanently or temporarily.
o Example: Hard Disk Drive (HDD) (for long-term storage)
4. Communication Devices
o Definition: Used for data transfer between computers or networks.
o Example: Modem (for internet connection)
Summary:
Input: Keyboard
Output: Monitor
Storage: HDD
Communication: Modem
Virtual Memory allows programs to use more memory than the physical RAM
by using secondary storage (like hard drives). It makes programs believe they
have access to a larger memory space than what is physically available.
Given:
Page size = 32 bytes.
A sequence of 8 memory address references, all of which fall within the first 32 bytes of the address space.
Steps:
1. Page Mapping:
o Each address maps to page 0 (since all addresses fall within 32-byte page
size).
2. Page Replacement:
o Main memory is large enough to hold all accessed pages, so no replacement is needed.
3. Hit Ratio:
o All 8 accesses are to page 0, so all are hits.
o Hit Ratio = 8 hits / 8 accesses = 100%.
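The hit-ratio reasoning above can be reproduced with a short sketch. The eight addresses below are hypothetical (chosen so that all of them fall below 32 and therefore in page 0); only the 32-byte page size is taken from the example itself.

#include <stdio.h>

#define PAGE_SIZE 32   /* bytes per page, as in the steps above */

int main(void)
{
    /* Hypothetical reference string: eight addresses, all below 32. */
    int addresses[] = {0, 4, 8, 12, 16, 20, 24, 28};
    int n = sizeof(addresses) / sizeof(addresses[0]);

    int loaded_page = 0;   /* page 0 assumed already resident, matching the 100%
                              figure above; counting the first access as a
                              compulsory miss would give 7/8 instead */
    int hits = 0;

    for (int i = 0; i < n; i++) {
        int page = addresses[i] / PAGE_SIZE;
        if (page == loaded_page)
            hits++;
        else
            loaded_page = page;   /* page fault: bring the new page in */
    }

    printf("hit ratio = %d/%d\n", hits, n);   /* prints 8/8 */
    return 0;
}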
In Summary:
The I/O interface is crucial for enabling interaction between the computer and
external devices, ensuring smooth data transfer, compatibility, and efficient
system performance.
QUESTION8. List and briefly describe two types of Read-Only Memory (ROM)
and their uses.
Summary:
PROM: Programmable once, used for permanent storage (e.g., embedded systems).
EEPROM: Can be erased and rewritten, used for settings and firmware updates (e.g.,
BIOS in computers).
QUESTION9. Explain Cache Mapping techniques. A computer has a
256 KB, 2-way set associative, write back data cache with block
size of 16 bytes. The processor sends 32 bit addresses to the
cache controller. Each cache tag directory entry contains, in addition to the address tag, 2 valid bits, 1 modified bit, and 1 replacement bit.
1. Find the number of bits in the tag field.
2. Find the size of the cache tag directory.
Cache mapping refers to the method used by the cache to determine how data
from main memory is stored and retrieved. There are three main types of cache
mapping techniques:
1. Direct-Mapped Cache:
In this method, each block in the main memory maps to exactly one cache
line. The address is divided into three parts: Tag, Index, and Block
offset.
2. Fully Associative Cache:
In this method, any block from the main memory can be stored in any
cache line. The cache controller needs to search all cache lines to check if
a block is present.
3. Set-Associative Cache:
This is a combination of direct-mapped and fully associative caches. The
cache is divided into sets, and each set contains multiple lines (blocks). A
block in main memory can be placed in any line of a set, and the cache
controller searches within the set to find the block. The 2-way set
associative cache means each set has 2 lines.
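To make the Tag / Index / Block offset split concrete, here is a minimal sketch that extracts the three fields from a 32-bit address; the field widths and the sample address are hypothetical values chosen only for illustration.

#include <stdio.h>
#include <stdint.h>

/* Split a 32-bit address into tag, index and block offset,
   given the number of index bits and offset bits. */
void split_address(uint32_t addr, unsigned index_bits, unsigned offset_bits)
{
    uint32_t offset = addr & ((1u << offset_bits) - 1);
    uint32_t index  = (addr >> offset_bits) & ((1u << index_bits) - 1);
    uint32_t tag    = addr >> (offset_bits + index_bits);

    printf("addr=0x%08X  tag=0x%X  index=%u  offset=%u\n",
           (unsigned)addr, (unsigned)tag, (unsigned)index, (unsigned)offset);
}

int main(void)
{
    /* Hypothetical example: 4 offset bits (16-byte block) and 13 index bits. */
    split_address(0x12345678u, 13, 4);
    return 0;
}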
Problem Breakdown
Given:
Cache size = 256 KB
Associativity = 2-way set associative (write back)
Block size = 16 bytes
Physical address = 32 bits
Each tag directory entry also stores 2 valid bits, 1 modified bit, and 1 replacement bit.
To calculate the number of bits in the tag field, we need to break down the 32-
bit address into three parts: Block Offset, Index, and Tag.
1. Block Offset:
Since the block size is 16 bytes, we need log2(16) = 4 bits to represent the byte position within the block.
2. Index:
The cache is 2-way set associative. To determine the number of sets:
Number of cache lines = 256 KB / 16 B = 2^18 / 2^4 = 2^14 = 16,384 lines.
Number of sets = 16,384 / 2 = 8,192 = 2^13 sets, so the index field needs 13 bits.
3. Tag:
The total address length is 32 bits. So, the number of bits in the tag field is:
Tag = 32 - 13 (index) - 4 (block offset) = 15 bits.
The cache tag directory stores one entry per cache line, containing the address tag along with the status bits (2 valid, 1 modified, 1 replacement). Each entry therefore needs 15 + 2 + 1 + 1 = 19 bits. With 16,384 cache lines, the size of the cache tag directory is 16,384 × 19 = 311,296 bits = 304 Kbits (38 KB).
Summary:
Number of bits in the tag field = 15 bits.
Size of the cache tag directory = 16,384 × 19 = 311,296 bits (304 Kbits).
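The arithmetic above can be double-checked with a short sketch; the cache parameters are the ones given in the question, and the helper name log2i is invented just for this example.

#include <stdio.h>

/* Integer log2 for exact powers of two. */
static unsigned log2i(unsigned long x)
{
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void)
{
    unsigned long cache_size  = 256UL * 1024;  /* 256 KB                         */
    unsigned long block_size  = 16;            /* bytes per block                */
    unsigned      ways        = 2;             /* 2-way set associative          */
    unsigned      addr_bits   = 32;
    unsigned      status_bits = 2 + 1 + 1;     /* valid + modified + replacement */

    unsigned long lines = cache_size / block_size;                 /* 16,384 */
    unsigned long sets  = lines / ways;                            /* 8,192  */
    unsigned offset_bits = log2i(block_size);                      /* 4      */
    unsigned index_bits  = log2i(sets);                            /* 13     */
    unsigned tag_bits    = addr_bits - index_bits - offset_bits;   /* 15     */

    unsigned long dir_bits = lines * (tag_bits + status_bits);     /* 16,384 * 19 */

    printf("tag bits       = %u\n", tag_bits);
    printf("directory size = %lu bits (%lu Kbits)\n", dir_bits, dir_bits / 1024);
    return 0;
}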
1. Page Size:
o The page size is 4 KB, which is 2^12 bytes.
o Therefore, each page covers 4 KB of memory.
To evaluate the expression X - (A * B) - (C * (D + E)) using a general register computer with three-address instructions, let's break it down into smaller steps for clarity. In a three-address instruction, each instruction names three operands (two source operands and one destination operand).
1. Calculate D + E.
2. Calculate A * B.
3. Calculate C * (D + E).
4. Calculate X - (A * B).
5. Finally, calculate X - (A * B) - (C * (D + E)).
Three-Address Instructions:
ADD R1, D, E   ; R1 = D + E
MUL R2, A, B   ; R2 = A * B
MUL R3, C, R1  ; R3 = C * (D + E), using R1 for (D + E)
SUB R4, X, R2  ; R4 = X - (A * B), using R2 for A * B
SUB X, R4, R3  ; X = (X - (A * B)) - (C * (D + E))
Explanation:
1. Step 1: ADD R1, D, E adds D and E and stores the result (D + E) in register R1.
2. Step 2: MUL R2, A, B multiplies A by B and stores the result (A * B) in register R2.
3. Step 3: MUL R3, C, R1 multiplies C by the value in R1 (D + E) and stores C * (D + E) in register R3.
4. Step 4: SUB R4, X, R2 subtracts the value in R2 (A * B) from X and keeps the intermediate result in register R4.
5. Step 5: SUB X, R4, R3 subtracts the value in R3 (C * (D + E)) from R4 and stores the final result back into X.
Summary:
This approach respects the order of operations in the expression and uses the registers efficiently for intermediate results; because each three-address instruction names both source operands and the destination, no separate LOAD or STORE instructions are needed.
The instruction IR <- M[100] means "Load the contents of memory location
100 into the Instruction Register (IR)." This involves a sequence of operations
in the basic instruction cycle. The basic instruction cycle typically involves the
Fetch, Decode, and Execute phases.
1. Fetch the Instruction:
o MAR <- PC (load the address of the next instruction into MAR)
o IR <- M[MAR] (fetch instruction from memory and load it into
IR)
o PC <- PC + 1 (increment the program counter to point to the next
instruction)
2. Decode the Instruction:
o The instruction in the IR is decoded. In this case, the instruction is
IR <- M[100], which means it is a Memory Read operation.
o The Control Unit decodes this instruction and generates the
control signals to initiate the memory read operation.
3. Execute the Instruction:
o MAR <- 100 (load the operand address 100 into MAR)
o IR <- M[MAR] (read memory location 100 and load its contents into the IR)
After executing the instruction, the value from memory location 100 is now in the IR.
Putting it all together, the complete register-transfer sequence is:
1. Fetch Phase:
o MAR <- PC
o IR <- M[MAR] (fetch instruction from memory into IR)
o PC <- PC + 1
2. Decode Phase:
o Control Unit decodes the instruction (IR <- M[100]), generating
control signals for a memory read operation.
3. Execute Phase:
o MAR <- 100 (load the address 100 into the MAR)
o IR <- M[MAR] (read the data from memory at location 100 and load it into the IR)
At the end of the cycle, the IR holds the data from memory location 100, and
the PC is incremented to the next instruction location.
In Summary:
The fetch phase brings the instruction into the IR, the decode phase generates the control signals for a memory read, and the execute phase places the contents of memory location 100 into the IR, with the PC already pointing to the next instruction.
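The register transfers can also be traced with a tiny simulation. This is only a sketch under simplifying assumptions (word-addressable memory, and the fetched instruction word is treated as nothing more than the operand address 100); the variable names are chosen for this illustration.

#include <stdio.h>

#define MEM_SIZE 256

int M[MEM_SIZE];        /* word-addressable memory            */
int PC, MAR, IR;        /* program counter and CPU registers  */

int main(void)
{
    M[0]   = 100;       /* pretend the word at address 0 encodes "IR <- M[100]" */
    M[100] = 42;        /* the data that the instruction will load              */
    PC = 0;

    /* Fetch phase */
    MAR = PC;           /* MAR <- PC     */
    IR  = M[MAR];       /* IR  <- M[MAR] */
    PC  = PC + 1;       /* PC  <- PC + 1 */

    /* Decode phase: the fetched word supplies the operand address (100) */
    int operand_address = IR;

    /* Execute phase: memory read */
    MAR = operand_address;   /* MAR <- 100    */
    IR  = M[MAR];            /* IR  <- M[100] */

    printf("IR = %d, PC = %d\n", IR, PC);   /* IR = 42, PC = 1 */
    return 0;
}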