Computer-Architecture-Answers
Computer Architecture vs. Computer Organization
Key Differences:
Abstraction Level:
o Architecture: High-level, concerned with the logical aspects and functionality.
o Organization: Low-level, concerned with the physical implementation.
Focus:
o Architecture: What the system does.
o Organization: How the system does it.
Visibility to Programmer:
o Architecture: Attributes visible to the programmer (instruction sets, addressing
modes).
o Organization: Transparent to the programmer (hardware details).
Basic Functions:
1. Input:
o Data and instructions are entered into the computer via input devices.
2. Processing:
o The CPU interprets and executes instructions.
o Data is manipulated according to the program's requirements.
3. Storage:
o Memory units store data and instructions temporarily (RAM) or permanently
(storage devices).
4. Output:
o Processed data is presented to the user through output devices.
A bus interconnection scheme connects all major components of a computer system, allowing
data transfer and communication among them.
1. Data Bus:
o Transfers actual data between components.
o Bidirectional, allowing for reading and writing.
2. Address Bus:
o Carries memory addresses from the processor to other components.
o Unidirectional, from CPU to memory and I/O devices.
3. Control Bus:
o Carries control signals and coordination commands.
o Bidirectional, facilitating communication between the CPU and other
components.
Diagram:
+--------+       +--------+       +-------------+
|  CPU   |       | Memory |       | I/O Devices |
+---+----+       +---+----+       +------+------+
    |                |                   |
====+================+===================+====  System Bus
      (Data Bus, Address Bus, Control Bus)
Explanation:
The System Bus comprises the Data Bus, Address Bus, and Control Bus.
All main components (CPU, Memory, I/O Devices) are interconnected via the System
Bus.
The buses facilitate communication and data transfer among the components.
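The roles of the three buses can be sketched as a toy read transaction in Python; the class, addresses, and signal names are illustrative assumptions, not a real bus protocol:

```python
# A toy model of a single bus read transaction.
class Memory:
    def __init__(self, contents):
        self.contents = contents

    def respond(self, address_bus, control_bus):
        # Memory drives the data bus only when the control bus signals READ.
        if control_bus == "READ":
            return self.contents[address_bus]  # value placed on the data bus
        return None

memory = Memory({0x10: 42})
address_bus = 0x10        # unidirectional: CPU -> memory
control_bus = "READ"      # coordination signal from the CPU
data_bus = memory.respond(address_bus, control_bus)  # bidirectional data path
```

The CPU alone drives the address bus, while the data bus can be driven by either side depending on the control signal.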
5. Differences Among Sequential, Direct, and Random Access to Data in Memory
Sequential Access:
Data is accessed in a specific linear sequence.
Access time depends on the data's position in the sequence.
Example: Magnetic tape storage, where you must pass through data sequentially to reach
a specific point.
Direct Access:
Data is accessed by moving directly to the general vicinity of the target, followed by a
short sequential search.
Access time varies with the data's location and the previous position of the access
mechanism.
Example: Hard disk drives, where the head moves directly to a track and then scans for
the desired sector.
Random Access:
Any data location can be accessed directly and in approximately the same amount of
time.
Access time is constant and independent of data location.
Example: RAM (Random Access Memory), where any memory cell can be accessed
directly.
Summary Table:
Access Type  Access Time                      Example
Sequential   Depends on position in sequence  Magnetic tape
Direct       Variable (vicinity + scan)       Hard disk
Random       Constant, location-independent   RAM
Cache Memory: Small, fast memory located close to the CPU to reduce the time to
access data from the main memory.
Main Memory: Larger, slower memory (RAM) that stores data and instructions
currently in use.
Structure Diagram:
+-----------+ +-----------+ +------------+
| | | | | |
| CPU |<--------->| Cache |<--------->| Main Memory|
| | | | | |
+-----------+ +-----------+ +------------+
Explanation:
The CPU first checks the Cache for data (fast access).
If data is not in the cache (cache miss), it retrieves data from Main Memory.
Cache acts as a buffer between the CPU and Main Memory, storing frequently accessed
data to improve performance.
Logical (Virtual) Cache:
Uses virtual memory addresses generated by the CPU for cache indexing.
Accesses data before virtual-to-physical address translation.
Advantages:
o Faster access since it avoids address translation delay.
Disadvantages:
o Potential for synonym problems (different virtual addresses mapping to the same
physical address).
o Increased complexity in maintaining cache coherence.
Physical Cache:
Uses physical memory addresses, after the MMU has performed virtual-to-physical
address translation.
Advantages:
o No synonym problem; cache coherence is simpler to maintain.
Disadvantages:
o Address translation must complete before the cache lookup, adding delay.
Summary:
A logical cache is faster because it skips translation but is harder to manage; a
physical cache is simpler and coherent but pays the translation latency on every access.
Direct-Mapped Cache:
Diagram:
Cache Lines:
+---------+---------+---------+---------+
| Line 0 | Line 1 | Line 2 | Line 3 | ...
+---------+---------+---------+---------+
Mapping Function:
Cache Line = (Main Memory Block Number) MOD (Number of Cache Lines)
Explanation:
Tag Field: High-order bits of the memory address used to determine if the block in the
cache corresponds to the requested memory block.
Index Field: Determines which cache line a memory block maps to.
Offset Field: Specifies the exact byte within the cache block.
Operation:
When accessing memory, the CPU uses the index to find the cache line.
The tag is compared to verify a cache hit.
On a miss, the block is fetched from main memory and placed in the corresponding cache
line.
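The mapping function and the tag/index/offset fields above can be sketched in Python; the cache size and block size are illustrative assumptions, not values from the notes:

```python
# Hypothetical parameters: 4 cache lines, 16-byte blocks.
NUM_LINES = 4
BLOCK_SIZE = 16

def split_address(addr):
    """Split a byte address into (tag, index, offset) for a direct-mapped cache."""
    offset = addr % BLOCK_SIZE    # byte within the cache block
    block = addr // BLOCK_SIZE    # main-memory block number
    index = block % NUM_LINES     # cache line = block MOD number of lines
    tag = block // NUM_LINES      # remaining high-order bits
    return tag, index, offset

# A tiny hit/miss simulation: each line remembers the tag it currently holds.
lines = [None] * NUM_LINES

def access(addr):
    tag, index, _ = split_address(addr)
    if lines[index] == tag:
        return "hit"
    lines[index] = tag            # on a miss, fetch the block into the line
    return "miss"
```

Note that addresses 0 and 64 map to the same line (blocks 0 and 4, both MOD 4 = 0), so they evict each other even when the rest of the cache is empty.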
SRAM (Static RAM):
Stores data using flip-flops that retain data as long as power is supplied.
Characteristics:
o Density: Lower, less memory capacity per chip.
o Cost: Higher per bit.
o Speed: Faster access times.
o Power Consumption: Consumes more power continuously.
Usage: Cache memory within the CPU.
Summary Table:
Feature   DRAM                        SRAM
Storage   Capacitors (need refresh)   Flip-flops (no refresh)
Speed     Slower                      Faster
Cost      Less expensive per bit      More expensive per bit
Density   Higher                      Lower
Usage     Main memory                 Cache memory
Memory Organization:
Diagram:
          Column Address
         C0  C1  C2  C3
        +---+---+---+---+
Row R0  |   |   |   |   |
        +---+---+---+---+
Row R1  |   |   |   |   |
        +---+---+---+---+
Row R2  |   |   |   |   |
        +---+---+---+---+
Row R3  |   |   |   |   |
        +---+---+---+---+
Explanation:
Row Address Strobe (RAS): Selects a specific row in the memory array.
Column Address Strobe (CAS): Selects a specific column within the selected row.
Address Lines: Carry the address bits, split into row and column addresses.
Sense Amplifiers: Read the data from the selected memory cell.
Operation:
1. The row address is placed on the address lines and latched by asserting RAS,
selecting one row of the array.
2. The column address is then placed on the same lines and latched by asserting CAS.
3. The sense amplifiers read (or write) the cell at the selected row and column.
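The row/column multiplexing can be sketched in Python; the 2-bit row and column widths are chosen to match the 4x4 array in the diagram:

```python
ROW_BITS = 2  # 4 rows, matching the 4x4 array above
COL_BITS = 2  # 4 columns

def split_dram_address(addr):
    """Split a cell address into (row, column), as multiplexed via RAS/CAS."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)  # high bits, latched with RAS
    col = addr & ((1 << COL_BITS) - 1)                # low bits, latched with CAS
    return row, col
```

Sharing one set of address pins for both halves is what lets a DRAM chip use half as many address lines as the number of address bits would otherwise require.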
A machine instruction is composed of several elements that specify the operation and the
operands.
Elements:
o Operation code (opcode): Specifies the operation to perform (e.g., ADD, LOAD).
o Source operand reference(s): Where the input data comes from.
o Result operand reference: Where the result is to be stored.
o Next instruction reference: Where to fetch the next instruction (usually implicit,
via the program counter).
Addressing Modes:
1. Immediate Addressing:
o Operand is part of the instruction.
o Instruction: Opcode + Operand.
o Example: ADD #5 (Add 5 to accumulator).
2. Direct Addressing:
o Instruction contains the memory address of the operand.
o Instruction: Opcode + Address.
o Example: LOAD 1000 (Load data from memory address 1000).
3. Indirect Addressing:
o Instruction points to a memory location that contains the address of the operand.
o Instruction: Opcode + Address.
o Example: LOAD (1000) (Load data from the address found at memory location
1000).
4. Register Addressing:
o Operand is in a CPU register.
o Instruction: Opcode + Register.
o Example: ADD R1 (Add contents of R1 to accumulator).
5. Register Indirect Addressing:
o Register contains the address of the operand.
o Instruction: Opcode + Register.
o Example: LOAD (R1) (Load data from the address in R1).
6. Indexed Addressing:
o Effective address is the sum of a base address and an index register.
o Instruction: Opcode + Base Address + Index Register.
o Example: LOAD BASE(R1) (Load data from BASE + contents of R1).
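The effective-address rules above can be sketched in Python; the memory contents, register values, and BASE are illustrative assumptions:

```python
# A toy machine state (illustrative values, not from the notes).
memory = {1000: 2000, 2000: 77, 3000: 42}
registers = {"R1": 2000}
BASE = 1000

def operand(mode, field):
    """Return the operand value under each addressing mode described above."""
    if mode == "immediate":          # operand is the field itself
        return field
    if mode == "direct":             # field is the operand's address
        return memory[field]
    if mode == "indirect":           # field points to the operand's address
        return memory[memory[field]]
    if mode == "register":           # field names a register holding the operand
        return registers[field]
    if mode == "register_indirect":  # register holds the operand's address
        return memory[registers[field]]
    if mode == "indexed":            # EA = base address + index register
        return memory[BASE + registers[field]]
    raise ValueError(mode)
```

For example, `operand("indirect", 1000)` performs two memory reads (1000 holds the address 2000, which holds the data), while `operand("direct", 1000)` performs only one.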
Memory:
Address   Content
+-------+-------+
|   A   |  EA   | --> location A holds the Effective Address
+-------+-------+
|  EA   | Data  | --> location EA holds the operand
+-------+-------+
Explanation:
The instruction's address field A points to a memory word that holds the effective
address EA; a second memory access at EA then retrieves the operand itself.
The CPU consists of several key components that collaborate to execute instructions.
Components:
o Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.
o Control Unit (CU): Decodes instructions and sequences their execution.
o Registers: Small, fast storage (PC, IR, MAR, MBR, accumulator, general-purpose
registers).
o Internal CPU bus: Moves data among the registers and the ALU.
Operation Flow:
Instructions are fetched into the IR, decoded by the control unit, and executed by the
ALU using operands from registers or memory; results are written back to a register or
to memory.
In the indirect addressing mode, an additional memory access is required to obtain the effective
address.
Diagram:
[Instruction Fetch]
CPU Memory
IR <- [PC] PC -> Address Bus
Memory[PC] -> IR
[Operand Fetch]
EA -> Address Bus
Memory[EA] -> Operand
Explanation:
After the instruction is fetched into the IR, the effective address EA is obtained (via
the extra memory access required by indirect addressing) and placed on the address bus;
the operand is then read from Memory[EA].
[Fetch Cycle]
CPU Memory
PC -> Address Bus
Memory[PC] -> Data Bus -> IR
PC = PC + 1
Explanation:
The PC supplies the instruction's address on the address bus; memory returns the
instruction over the data bus into the IR, and the PC is incremented to point to the
next instruction.
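The fetch cycle above can be sketched in Python; the program contents are an illustrative assumption:

```python
def fetch_cycle(memory, pc):
    """One fetch cycle: PC -> address bus, Memory[PC] -> IR, then PC + 1."""
    ir = memory[pc]  # instruction word travels over the data bus into the IR
    pc = pc + 1      # PC now points to the next sequential instruction
    return ir, pc

program = ["LOAD 1000", "ADD R1", "STORE 1000"]
ir, pc = fetch_cycle(program, 0)
```

Repeating this loop, with a decode and execute step after each fetch, yields the basic instruction cycle.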
Interrupts are signals that alter the sequence in which the processor executes instructions.
Types of Interrupts:
1. Hardware Interrupts:
o Generated by hardware devices to signal that they need attention.
o Maskable Interrupts: Can be ignored or delayed (e.g., keyboard input).
o Non-Maskable Interrupts (NMI): High-priority interrupts that cannot be
ignored (e.g., hardware failure).
2. Software Interrupts:
o Initiated by software instructions.
o Exceptions: Result from errors during instruction execution (e.g., divide by zero).
o System Calls (Traps): Used by programs to request services from the OS.
3. External Interrupts:
o Originating outside the CPU (e.g., I/O devices, timers).
o Examples:
I/O Interrupt: Signals completion of data transfer.
Timer Interrupt: Generated by system timers for time-sharing.
4. Internal Interrupts (Exceptions):
o Caused by illegal operations within the CPU.
o Examples:
Arithmetic Overflow: Result exceeds the size limit.
Invalid Opcode: Unrecognized instruction.
Two-Address Instructions:
One operand field serves as both a source and the destination.
Format: Opcode Dest, Src (e.g., ADD A, B means A = A + B).
Three-Address Instructions:
Separate fields specify the result and both source operands.
Format: Opcode Dest, Src1, Src2 (e.g., ADD A, B, C means A = B + C).
Comparison:
Instruction Length:
o Three-address instructions are longer due to more operands.
Flexibility:
o Three-address provides more flexibility and reduces the number of instructions
needed.
Code Density:
o One-address may require more instructions to perform complex operations.
Performance:
o Fewer instructions with multiple addresses can improve performance despite
longer instruction length.
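The trade-off above can be illustrated with a toy simulator that evaluates X = (A + B) * (C + D) in both a three-address and a one-address (accumulator) format; the instruction lists and variable names are illustrative assumptions, not a real ISA:

```python
mem = {"A": 2, "B": 3, "C": 4, "D": 5}

# Three-address format: one instruction per operation.
three_addr = [("ADD", "T1", "A", "B"),   # T1 = A + B
              ("ADD", "T2", "C", "D"),   # T2 = C + D
              ("MUL", "X", "T1", "T2")]  # X  = T1 * T2
for op, dst, s1, s2 in three_addr:
    mem[dst] = mem[s1] + mem[s2] if op == "ADD" else mem[s1] * mem[s2]

# One-address format: more, shorter instructions via an implicit accumulator.
acc = 0
one_addr = [("LOAD", "A"), ("ADD", "B"), ("STORE", "T1"),
            ("LOAD", "C"), ("ADD", "D"), ("MUL", "T1"), ("STORE", "X1")]
for op, a in one_addr:
    if op == "LOAD":
        acc = mem[a]
    elif op == "ADD":
        acc = acc + mem[a]
    elif op == "MUL":
        acc = acc * mem[a]
    elif op == "STORE":
        mem[a] = acc
```

Both sequences compute the same result, but the one-address version needs 7 instructions where the three-address version needs 3, showing the code-density trade-off.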
Process States:
o New: The process is being created.
o Ready: Waiting to be assigned to the CPU.
o Running: Instructions are being executed.
o Waiting (Blocked): Waiting for an event (e.g., I/O completion).
o Terminated: The process has finished execution.
Transitions:
o New -> Ready: Admitted by the scheduler.
o Ready -> Running: Dispatched to the CPU.
o Running -> Ready: Preempted by an interrupt or time-slice expiry.
o Running -> Waiting: Requests I/O or waits for an event.
o Waiting -> Ready: The awaited event completes.
o Running -> Terminated: The process exits.
Unsigned Binary Multiplication (Shift-and-Add):
1. Initialize:
o Set Multiplier Register (MQ) with multiplier.
o Set Multiplicand Register (MD) with multiplicand.
o Set Accumulator (AC) to zero.
2. Repeat for each bit of the multiplier:
o If LSB of MQ is 1, then:
AC = AC + MD.
o Shift AC and MQ right by one bit (together).
3. Result:
o After n shifts (n is the number of bits), the combined content of AC and MQ is
the product.
Flowchart:
[Start]
|
[Initialize AC, MQ, MD]
|
[Check LSB of MQ]
|
--Yes--> [AC = AC + MD]
| |
[Shift AC and MQ Right]
|
[Repeat until n bits shifted]
|
[Product in AC and MQ]
|
[End]
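The steps in the flowchart above can be sketched in Python; this is a sketch assuming unsigned n-bit operands, with AC and MQ kept as plain integers:

```python
def shift_add_multiply(multiplicand, multiplier, n):
    """Unsigned shift-and-add multiplication of two n-bit numbers."""
    ac, mq, md = 0, multiplier, multiplicand  # AC = 0, MQ = multiplier, MD = multiplicand
    for _ in range(n):
        if mq & 1:                  # LSB of MQ is 1
            ac = ac + md            # AC = AC + MD
        combined = (ac << n) | mq   # treat AC:MQ as one combined register
        combined >>= 1              # shift AC and MQ right together by one bit
        ac, mq = combined >> n, combined & ((1 << n) - 1)
    return (ac << n) | mq           # after n shifts, the product sits in AC:MQ
```

Shifting AC and MQ as one combined register is what lets the low product bits migrate into MQ as the multiplier bits are consumed.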
Booth's Multiplication Algorithm:
1. Initialization:
o Set Accumulator (A) and Q-1 to zero.
o Load Multiplicand (M) and Multiplier (Q).
2. Repeat for n bits:
o If Q0 = 0 and Q-1 = 1, then A = A + M.
o If Q0 = 1 and Q-1 = 0, then A = A - M.
o Arithmetic Right Shift A, Q, Q-1.
o Update Q-1.
3. Result:
o Product is in A and Q.
Example: M = 0011 (3), Q = 1100 (-4), n = 4.
Steps:
Initialization:
o A = 0000, Q = 1100, Q-1 = 0, M = 0011, -M = 1101.
Cycle 1:
o Q0 Q-1 = 0 0: Do nothing.
o Shift: A = 0000, Q = 0110, Q-1 = 0.
Cycle 2:
o Q0 Q-1 = 0 0: Do nothing.
o Shift: A = 0000, Q = 0011, Q-1 = 0.
Cycle 3:
o Q0 Q-1 = 1 0: A = A - M = 1101.
o Shift: A = 1110, Q = 1001, Q-1 = 1.
Cycle 4:
o Q0 Q-1 = 1 1: Do nothing.
o Shift: A = 1111, Q = 0100, Q-1 = 1.
Final Result:
A Q = 1111 0100 = -12, which equals 3 x (-4). Correct.
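The A, Q, Q-1 procedure above can be sketched in Python; this is a sketch that keeps A and Q as n-bit masked integers and interprets the final 2n-bit result as a signed value:

```python
def booth_multiply(m, q, n):
    """Booth's algorithm for n-bit two's-complement multiplication."""
    mask = (1 << n) - 1
    a, q, q_1 = 0, q & mask, 0
    m &= mask
    for _ in range(n):
        pair = (q & 1, q_1)         # examine Q0 and Q-1
        if pair == (1, 0):
            a = (a - m) & mask      # A = A - M
        elif pair == (0, 1):
            a = (a + m) & mask      # A = A + M
        # Arithmetic right shift of A, Q, Q-1 together:
        q_1 = q & 1                                      # Q0 moves into Q-1
        q = ((q >> 1) | ((a & 1) << (n - 1))) & mask     # A0 moves into Q's MSB
        a = ((a >> 1) | (a & (1 << (n - 1)))) & mask     # sign bit of A is preserved
    product = (a << n) | q
    if product & (1 << (2 * n - 1)):   # interpret the 2n-bit result as signed
        product -= 1 << (2 * n)
    return product
```

Running `booth_multiply(3, -4, 4)` reproduces the worked example, and the same code handles both operands negative without any special cases.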
Context:
An operating system manages hardware resources and provides services to programs;
process scheduling decides which process runs next.
Classification of OS:
o Batch OS: Jobs are collected and executed in groups without user interaction.
o Time-Sharing OS: CPU time is shared among multiple users and processes.
o Distributed OS: Manages a group of networked computers as a single system.
o Real-Time OS: Guarantees responses within strict time limits.
o Multiprogramming/Multitasking OS: Keeps several programs in memory to maximize
CPU utilization.
Scheduling Queues:
o Job Queue: All processes in the system.
o Ready Queue: Processes in main memory, ready to execute.
o Device Queues: Processes waiting for a particular I/O device.
Diagram:
[Job Queue] --> [Ready Queue] --> [CPU] --> [Exit]
                     ^              |
                     |              v (I/O request)
                     +----- [Device Queues]
Explanation:
New processes enter the job queue, move to the ready queue when admitted, run on the
CPU, and move to a device queue when they request I/O, returning to the ready queue
when the I/O completes.
23. Paging, Paging Hardware with a Diagram, and Memory Mapping Example
Paging:
A memory-management scheme that divides logical memory into fixed-size pages and
physical memory into frames of the same size; a page table maps each page to a frame,
so a process's memory need not be contiguous.
Diagram:
[CPU]
|
[Logical Address (Page # + Offset)]
|
[MMU]
|--[Page #]--> [Page Table] --> [Frame #]
|
[Physical Address (Frame # + Offset)]
|
[Physical Memory]
Memory Mapping Example:
With a 1 KB page size, logical address 2060 splits into page 2 (2060 / 1024) and
offset 12 (2060 mod 1024). If the page table maps page 2 to frame 5, the physical
address is 5 x 1024 + 12 = 5132.
Explanation:
The MMU uses the page number to index the page table, obtains the frame number, and
combines it with the unchanged offset to form the physical address.
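The translation path through the MMU can be sketched in Python; the 1 KB page size and page-table contents are illustrative assumptions:

```python
PAGE_SIZE = 1024                 # 1 KB pages (assumption for this example)
page_table = {0: 3, 1: 7, 2: 5}  # page number -> frame number

def translate(logical_addr):
    """Translate a logical address to a physical address via the page table."""
    page = logical_addr // PAGE_SIZE    # page number (high-order part)
    offset = logical_addr % PAGE_SIZE   # offset, copied through unchanged
    frame = page_table[page]            # page-table lookup (the MMU's job)
    return frame * PAGE_SIZE + offset
```

With these mappings, `translate(2060)` gives 5132, matching the mapping of page 2 to frame 5.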
Deadlock Definition:
A situation in which a set of processes is blocked because each process holds a
resource while waiting for a resource held by another process in the set.
Graph:
P1 --> R2 (request)
R1 --> P1 (assignment)
P2 --> R1 (request)
R2 --> P2 (assignment)
Explanation:
The graph contains a cycle: P1 waits for R2 (assigned to P2), and P2 waits for R1
(assigned to P1).
This circular wait means both processes block indefinitely, resulting in a deadlock.
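Detecting the circular wait amounts to finding a cycle in the resource-allocation graph, which can be sketched in Python as a depth-first search; the dictionary encoding of the graph is an assumption for this sketch:

```python
def has_cycle(graph):
    """Detect a cycle in a resource-allocation graph (DFS with a recursion stack)."""
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting:               # back edge: circular wait found
                return True
            if nxt not in done and dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph if n not in done)

# The graph from the notes: P1 -> R2 -> P2 -> R1 -> P1.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
```

Applying `has_cycle` to the graph above reports the deadlock; removing any one edge breaks the cycle and the check returns False.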
Components:
1. Process Management:
o Creation, scheduling, and termination of processes.
2. Memory Management:
o Allocation and deallocation of memory space.
o Virtual memory implementation.
3. File System Management:
o Controls file operations (creation, deletion, access).
4. Device Management:
o Manages I/O devices and drivers.
o Provides a uniform interface for hardware.
5. Secondary Storage Management:
o Manages storage devices and data retrieval.
6. Security and Protection:
o Controls access to resources.
o Protects data and system integrity.
7. Networking:
o Facilitates communication between processes over a network.
8. Command Interpreter (Shell):
o Interface between the user and the OS.
Schedulers:
o Long-Term Scheduler (Job Scheduler): Selects processes from the job queue to load
into memory; controls the degree of multiprogramming.
o Medium-Term Scheduler: Swaps processes out of and back into memory to manage the
system load.
o Short-Term Scheduler (CPU Scheduler): Selects which ready process runs next;
invoked very frequently.
Diagram:
[Job Queue]
|
[Long-Term Scheduler]
|
[Ready Queue] <---> [Medium-Term Scheduler]
|
[Short-Term Scheduler]
|
[CPU]
Explanation:
The long-term scheduler admits jobs into the ready queue; the medium-term scheduler
temporarily swaps processes out to relieve memory pressure; the short-term scheduler
dispatches a ready process to the CPU.