Computer-Architecture-Answers

The document outlines the distinction between Computer Architecture and Computer Organization, highlighting their focus on high-level design versus hardware implementation. It details the main structural components of a computer, the basic functions of a computer, and various memory access methods, including cache organization and addressing modes. Additionally, it discusses the internal structure of the CPU, data flow in indirect addressing, and compares different types of RAM.

Uploaded by

rakib islam

1. Distinction between Computer Organization and Computer Architecture

Computer Architecture and Computer Organization are fundamental concepts in computer
science that, while closely related, focus on different aspects of computer systems.

 Computer Architecture refers to the conceptual design and fundamental operational
structure of a computer system. It encompasses the design of the instruction set, data
types, addressing modes, memory architecture, and the overall system design as seen by a
programmer. Essentially, it is the abstract model and high-level functionality of the
system.
 Computer Organization, on the other hand, deals with the operational units and their
interconnections that realize the architectural specifications. It focuses on the hardware
implementation aspects, including the control signals, interfaces, memory technology,
and the physical components that constitute the computer system.

Key Differences:

 Abstraction Level:
o Architecture: High-level, concerned with the logical aspects and functionality.
o Organization: Low-level, concerned with the physical implementation.
 Focus:
o Architecture: What the system does.
o Organization: How the system does it.
 Visibility to Programmer:
o Architecture: Attributes visible to the programmer (instruction sets, addressing
modes).
o Organization: Transparent to the programmer (hardware details).

2. Main Structural Components of a Computer and Processor

Computer Structural Components:

1. Central Processing Unit (CPU):
o Executes instructions and processes data.
o Consists of the Control Unit, Arithmetic Logic Unit, and Registers.
2. Memory:
o Primary Memory (RAM): Volatile memory for storing data and instructions
currently in use.
o Secondary Memory (Storage): Non-volatile memory for long-term data storage
(e.g., HDD, SSD).
3. Input Devices:
o Allow users to input data into the computer (e.g., keyboard, mouse).
4. Output Devices:
o Display or output data from the computer (e.g., monitor, printer).
5. System Bus:
o A communication system that transfers data between components.

Processor (CPU) Structural Components:

1. Arithmetic Logic Unit (ALU):
o Performs arithmetic operations (addition, subtraction) and logical operations
(AND, OR, NOT).
2. Control Unit (CU):
o Directs the operation of the processor.
o Fetches instructions from memory, decodes them, and executes them.
3. Registers:
o Small, fast storage locations within the CPU.
o Types include General-Purpose Registers, Program Counter, Instruction Register,
and Accumulator.
4. Cache Memory:
o A small, fast memory located close to the CPU to speed up data access.
5. Internal Buses:
o Connect internal components of the CPU, such as data bus, address bus, and
control bus.

3. Basic Function of a Computer Using a Top-Level View of Components

At a fundamental level, a computer processes data by executing instructions through the
coordinated efforts of its main components.

Basic Functions:

1. Input:
o Data and instructions are entered into the computer via input devices.
2. Processing:
o The CPU interprets and executes instructions.
o Data is manipulated according to the program's requirements.
3. Storage:
o Memory units store data and instructions temporarily (RAM) or permanently
(storage devices).
4. Output:
o Processed data is presented to the user through output devices.

Top-Level Components Involved:

 CPU (Central Processing Unit): Executes instructions and performs calculations.
 Memory: Stores data and instructions.
 Input Devices: Feed data into the system.
 Output Devices: Display results to the user.
 System Bus: Facilitates communication between components.

4. Bus Interconnection Scheme with a Diagram

A bus interconnection scheme connects all major components of a computer system, allowing
data transfer and communication among them.

Components of a Bus System:

1. Data Bus:
o Transfers actual data between components.
o Bidirectional, allowing for reading and writing.
2. Address Bus:
o Carries memory addresses from the processor to other components.
o Unidirectional, from CPU to memory and I/O devices.
3. Control Bus:
o Carries control signals and coordination commands.
o Bidirectional, facilitating communication between the CPU and other
components.

Diagram:

+----------------+ +----------------+ +----------------+
| | | | | |
| CPU |<------>| Memory |<------>| I/O |
| | | | | |
+----------------+ +----------------+ +----------------+
^ ^ ^
| | |
+---------------------------------------------------------+
| System Bus |
+---------------------------------------------------------+

Explanation:

 The System Bus comprises the Data Bus, Address Bus, and Control Bus.
 All main components (CPU, Memory, I/O Devices) are interconnected via the System
Bus.
 The buses facilitate communication and data transfer among the components.

5. Differences Among Sequential, Direct, and Random Access Data from Memory

Sequential Access:
 Data is accessed in a specific linear sequence.
 Access time depends on the data's position in the sequence.
 Example: Magnetic tape storage, where you must pass through data sequentially to reach
a specific point.

Direct Access:

 Access data directly using a physical address.
 Access time varies depending on the data's location and physical movement (e.g., disk
rotation).
 Example: Hard disk drives, where the read/write head moves to a specific track and
sector.

Random Access:

 Any data location can be accessed directly and in approximately the same amount of
time.
 Access time is constant and independent of data location.
 Example: RAM (Random Access Memory), where any memory cell can be accessed
directly.

Summary Table:

Access Type   Access Time Dependency        Example Device
Sequential    Depends on data position      Magnetic Tape
Direct        Varies; some physical delay   Hard Disk Drive
Random        Constant, direct access       RAM

6. Cache/Main Memory Structure with a Diagram

Cache/Main Memory Hierarchy:

 Cache Memory: Small, fast memory located close to the CPU to reduce the time to
access data from the main memory.
 Main Memory: Larger, slower memory (RAM) that stores data and instructions
currently in use.

Structure Diagram:
+-----------+ +-----------+ +------------+
| | | | | |
| CPU |<--------->| Cache |<--------->| Main Memory|
| | | | | |
+-----------+ +-----------+ +------------+

Explanation:

 The CPU first checks the Cache for data (fast access).
 If data is not in the cache (cache miss), it retrieves data from Main Memory.
 Cache acts as a buffer between the CPU and Main Memory, storing frequently accessed
data to improve performance.

Levels of Cache:

1. L1 Cache: Smallest and fastest, built into the CPU chip.
2. L2 Cache: Larger than L1, may be on the CPU chip or on a separate chip.
3. L3 Cache: Even larger, shared among cores in multi-core processors.

7. Differences Between Logical and Physical Cache

Logical Cache (Virtual Cache):

 Uses virtual memory addresses generated by the CPU for cache indexing.
 Accesses data before virtual-to-physical address translation.
 Advantages:
o Faster access since it avoids address translation delay.
 Disadvantages:
o Potential for synonym problems (different virtual addresses mapping to the same
physical address).
o Increased complexity in maintaining cache coherence.

Physical Cache:

 Uses physical memory addresses after address translation.
 Data is cached using the actual physical addresses.
 Advantages:
o Avoids synonym problems.
o Simpler cache coherence.
 Disadvantages:
o Slightly slower access due to address translation overhead.

Summary:

 Logical Cache: Faster but complex, operates on virtual addresses.
 Physical Cache: Simpler but may have access delays, operates on physical addresses.

8. Direct-Mapping Cache Organization with a Diagram

Direct-Mapped Cache:

 Each block of main memory maps to exactly one cache line.
 Simple and fast cache organization.

Diagram:

Main Memory Blocks:
+---------+---------+---------+---------+
| Block 0 | Block 1 | Block 2 | Block 3 | ...
+---------+---------+---------+---------+

Cache Lines:
+---------+---------+---------+---------+
| Line 0 | Line 1 | Line 2 | Line 3 | ...
+---------+---------+---------+---------+

Mapping Function:
Cache Line = (Main Memory Block Number) MOD (Number of Cache Lines)

Explanation:

 Tag Field: High-order bits of the memory address used to determine if the block in the
cache corresponds to the requested memory block.
 Index Field: Determines which cache line a memory block maps to.
 Offset Field: Specifies the exact byte within the cache block.

Operation:

 When accessing memory, the CPU uses the index to find the cache line.
 The tag is compared to verify a cache hit.
 On a miss, the block is fetched from main memory and placed in the corresponding cache
line.
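The field decomposition above can be sketched in Python. This is a toy example assuming a 16-line cache with 16-byte blocks; those sizes and the function name are illustrative, not from the text:

```python
BLOCK_SIZE = 16   # bytes per block -> 4 offset bits (assumed geometry)
NUM_LINES = 16    # cache lines     -> 4 index bits  (assumed geometry)

def split_address(addr):
    """Decompose a byte address into (tag, index, offset) fields."""
    offset = addr % BLOCK_SIZE            # byte within the cache block
    block_number = addr // BLOCK_SIZE     # main-memory block number
    index = block_number % NUM_LINES      # Cache Line = Block MOD Num Lines
    tag = block_number // NUM_LINES       # high-order bits compared on access
    return tag, index, offset
```

For example, address 0x1234 falls in block 291, which maps to line 291 MOD 16 = 3, with tag 18 and offset 4.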

9. Differences Between DRAM and SRAM

Dynamic RAM (DRAM):

 Stores data using capacitors that need periodic refreshing.
 Characteristics:
o Density: Higher, allowing for more memory capacity.
o Cost: Lower per bit.
o Speed: Slower access times.
o Power Consumption: Lower when idle, but higher due to refresh cycles.
 Usage: Main system memory (RAM).

Static RAM (SRAM):

 Stores data using flip-flops that retain data as long as power is supplied.
 Characteristics:
o Density: Lower, less memory capacity per chip.
o Cost: Higher per bit.
o Speed: Faster access times.
o Power Consumption: Consumes more power continuously.
 Usage: Cache memory within the CPU.

Summary Table:

Feature   DRAM                        SRAM
Storage   Capacitors (need refresh)   Flip-flops (no refresh)
Speed     Slower                      Faster
Cost      Less expensive              More expensive
Density   Higher                      Lower
Usage     Main Memory                 Cache Memory

10. Internal Organization of Memory with a Diagram

Memory Organization:

 Memory is organized as an array of cells, each storing a bit of data.
 Cells are arranged in rows and columns.

Diagram:

         Column Address
        +---+---+---+---+
        | C0| C1| C2| C3|
        +---+---+---+---+
Row R0  |   |   |   |   |
        +---+---+---+---+
Row R1  |   |   |   |   |
        +---+---+---+---+
Row R2  |   |   |   |   |
        +---+---+---+---+
Row R3  |   |   |   |   |
        +---+---+---+---+

Explanation:

 Row Address Strobe (RAS): Selects a specific row in the memory array.
 Column Address Strobe (CAS): Selects a specific column within the selected row.
 Address Lines: Carry the address bits, split into row and column addresses.
 Sense Amplifiers: Read the data from the selected memory cell.

Operation:

 The CPU sends an address to memory.
 The address is divided into row and column addresses.
 RAS and CAS signals select the specific memory cell.
 Data is read from or written to the cell.

11. Elements of a Machine Instruction

A machine instruction is composed of several elements that specify the operation and the
operands.

Elements:

1. Operation Code (Opcode):
o Specifies the operation to be performed (e.g., ADD, SUB, LOAD).
2. Source Operand Reference(s):
o Specifies the source data for the operation.
o Can be a register, memory location, or immediate value.
3. Destination Operand Reference:
o Specifies where the result of the operation should be stored.
4. Addressing Mode Specifier:
o Indicates how to interpret the operands (e.g., direct, indirect, immediate).
5. Instruction Format:
o Length: Total number of bits in the instruction.
o Fields: Divided into opcode, operand addresses, and mode bits.

Example Instruction Format:

| Opcode | Addressing Mode | Operand 1 | Operand 2 |

 Opcode: Specifies the operation.
 Addressing Mode: Indicates how to access the operands.
 Operands: Contain the actual data or references to data.

12. Addressing Modes with a Diagram

Addressing Modes:

1. Immediate Addressing:
o Operand is part of the instruction.
o Instruction: Opcode + Operand.
o Example: ADD #5 (Add 5 to accumulator).
2. Direct Addressing:
o Instruction contains the memory address of the operand.
o Instruction: Opcode + Address.
o Example: LOAD 1000 (Load data from memory address 1000).
3. Indirect Addressing:
o Instruction points to a memory location that contains the address of the operand.
o Instruction: Opcode + Address.
o Example: LOAD (1000) (Load data from the address found at memory location
1000).
4. Register Addressing:
o Operand is in a CPU register.
o Instruction: Opcode + Register.
o Example: ADD R1 (Add contents of R1 to accumulator).
5. Register Indirect Addressing:
o Register contains the address of the operand.
o Instruction: Opcode + Register.
o Example: LOAD (R1) (Load data from the address in R1).
6. Indexed Addressing:
o Effective address is the sum of a base address and an index register.
o Instruction: Opcode + Base Address + Index Register.
o Example: LOAD BASE(R1) (Load data from BASE + contents of R1).

Diagram for Indirect Addressing:

Instruction Register (IR):
+-----------------------+
| Opcode | Address (A) |
+-----------------------+

Memory:
+-------+-------+
| A | EA | --> EA (Effective Address)
+-------+-------+
| EA | Data | --> Data Operand
+-------+-------+

Explanation:

 The instruction contains address A.
 Memory location A contains EA, the effective address.
 The operand is located at memory address EA.
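The two memory accesses can be illustrated with a short Python sketch that models memory as a list. The addresses and values follow the LOAD (1000) example above; the function names are made up for illustration:

```python
memory = [0] * 2048
memory[1000] = 1500    # location A = 1000 holds the effective address EA
memory[1500] = 42      # the operand itself lives at EA = 1500

def load_direct(addr):
    """Direct addressing: one access, the operand is at addr."""
    return memory[addr]

def load_indirect(addr):
    """Indirect addressing: two accesses, addr holds the operand's address."""
    ea = memory[addr]      # first access fetches the effective address
    return memory[ea]      # second access fetches the operand
```

Here load_indirect(1000) performs both accesses and returns the operand 42.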

13. Internal Structure of CPU

The CPU consists of several key components that collaborate to execute instructions.

Components:

1. Control Unit (CU):
o Directs the operation of the processor.
o Fetches, decodes, and executes instructions.
2. Arithmetic Logic Unit (ALU):
o Performs arithmetic operations (addition, subtraction) and logical operations
(AND, OR).
3. Registers:
o Small, fast storage locations within the CPU.
o General-Purpose Registers: Store data and addresses.
o Special-Purpose Registers:
 Program Counter (PC): Holds the address of the next instruction.
 Instruction Register (IR): Holds the current instruction.
 Accumulator (AC): Used in arithmetic operations.
 Status Register: Holds flags (zero, carry, overflow).
4. Internal Buses:
o Data Bus: Transfers data within the CPU.
o Address Bus: Carries addresses within the CPU.
o Control Bus: Carries control signals within the CPU.
5. Cache Memory:
o Provides fast access to frequently used data and instructions.

Operation Flow:

 Instructions are fetched from memory into the IR.
 The CU decodes them and signals the ALU and registers.
 The ALU performs computations.
 Results are stored back in registers or memory.

14. Data Flow of Indirect Cycle with a Diagram

In the indirect addressing mode, an additional memory access is required to obtain the effective
address.

Data Flow Steps:


1. Fetch Instruction:
o Instruction is fetched from memory into the IR.
2. Decode Instruction:
o The CU decodes the opcode and identifies indirect addressing.
3. Fetch Effective Address:
o The address field of the instruction is used to access memory.
o The content at this address is the effective address (EA).
4. Fetch Operand:
o The EA is used to access the operand in memory.

Diagram:

[Instruction Fetch]
CPU Memory
IR <- [PC] PC -> Address Bus
Memory[PC] -> IR

[Effective Address Fetch]
IR.Address -> Address Bus
Memory[IR.Address] -> EA

[Operand Fetch]
EA -> Address Bus
Memory[EA] -> Operand

Explanation:

 The instruction points to a memory location containing EA.
 EA is fetched from memory.
 Operand is accessed using EA.

15. Data Flow of Fetch Cycle with a Diagram

The fetch cycle retrieves an instruction from memory for execution.

Data Flow Steps:

1. Fetch Instruction Address:
o The PC contains the address of the next instruction.
2. Memory Read:
o Address in PC is sent to memory via the address bus.
o Memory returns the instruction to the CPU via the data bus.
3. Update Registers:
o The instruction is loaded into the IR.
o PC is incremented to point to the next instruction.

Diagram:
[Fetch Cycle]
CPU Memory
PC -> Address Bus
Memory[PC] -> Data Bus -> IR
PC = PC + 1

Explanation:

 Step 1: PC provides the address.
 Step 2: Instruction is fetched into IR.
 Step 3: PC is updated for the next cycle.

16. Types of Interrupts with Examples

Interrupts are signals that alter the sequence in which the processor executes instructions.

Types of Interrupts:

1. Hardware Interrupts:
o Generated by hardware devices to signal that they need attention.
o Maskable Interrupts: Can be ignored or delayed (e.g., keyboard input).
o Non-Maskable Interrupts (NMI): High-priority interrupts that cannot be
ignored (e.g., hardware failure).
2. Software Interrupts:
o Initiated by software instructions.
o Exceptions: Result from errors during instruction execution (e.g., divide by zero).
o System Calls (Traps): Used by programs to request services from the OS.
3. External Interrupts:
o Originating outside the CPU (e.g., I/O devices, timers).
o Examples:
 I/O Interrupt: Signals completion of data transfer.
 Timer Interrupt: Generated by system timers for time-sharing.
4. Internal Interrupts (Exceptions):
o Caused by illegal operations within the CPU.
o Examples:
 Arithmetic Overflow: Result exceeds the size limit.
 Invalid Opcode: Unrecognized instruction.

17. Comparison of One-, Two-, and Three-Address Instructions


One-Address Instructions:

 Format: Opcode + Operand.
 Uses an implied accumulator (AC) register for operations.
 Example: ADD X (AC = AC + X).

Two-Address Instructions:

 Format: Opcode + Operand1 + Operand2.
 One operand acts as both source and destination.
 Example: ADD A, B (A = A + B).

Three-Address Instructions:

 Format: Opcode + Operand1 + Operand2 + Operand3.
 Separate operands for the two sources and the destination.
 Example: ADD A, B, C (C = A + B).

Comparison:

 Instruction Length:
o Three-address instructions are longer due to more operands.
 Flexibility:
o Three-address provides more flexibility and reduces the number of instructions
needed.
 Code Density:
o One-address may require more instructions to perform complex operations.
 Performance:
o Fewer instructions with multiple addresses can improve performance despite
longer instruction length.

18. Process States with a State Diagram

Process States:

1. New: Process is being created.
2. Ready: Process is waiting to be assigned to the CPU.
3. Running: Process instructions are being executed.
4. Waiting (Blocked): Process cannot execute until an event occurs (e.g., I/O completion).
5. Terminated: Process has finished execution.

State Diagram:

New --> Ready --> Running --> Terminated
          ^          |
          |          v
          +------ Waiting

Transitions:

 New to Ready: Process admitted into the ready queue.
 Ready to Running: Scheduler dispatches process.
 Running to Waiting: Process requests I/O or event.
 Waiting to Ready: I/O or event completes.
 Running to Ready: Process is preempted by scheduler.
 Running to Terminated: Process completes execution.

19. Hardware Implementation of Unsigned Binary Multiplication with a Flowchart

Algorithm (Shift-and-Add Method):

1. Initialize:
o Set Multiplier Register (MQ) with multiplier.
o Set Multiplicand Register (MD) with multiplicand.
o Set Accumulator (AC) to zero.
2. Repeat for each bit of the multiplier:
o If LSB of MQ is 1, then:
 AC = AC + MD.
o Shift AC and MQ right by one bit (together).
3. Result:
o After n shifts (n is the number of bits), the combined content of AC and MQ is
the product.

Flowchart:

[Start]
   |
[Initialize AC = 0, MQ = multiplier, MD = multiplicand]
   |
[Check LSB of MQ] --Yes--> [AC = AC + MD]
   |                             |
   No                            |
   |<----------------------------+
[Shift AC and MQ right one bit]
   |
[n bits processed?] --No--> (repeat from Check LSB of MQ)
   |
  Yes
   |
[Product in AC and MQ]
   |
[End]
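A minimal Python sketch of the shift-and-add loop, assuming n-bit unsigned operands. Register names mirror the text; packing AC and MQ into one integer stands in for the combined right shift of the hardware registers:

```python
def shift_add_multiply(multiplicand, multiplier, n):
    """Return the product of two n-bit unsigned integers via shift-and-add."""
    ac = 0                                   # accumulator (high half of product)
    mq = multiplier                          # multiplier register (low half)
    for _ in range(n):
        if mq & 1:                           # LSB of MQ is 1
            ac = ac + multiplicand           # AC = AC + MD
        combined = ((ac << n) | mq) >> 1     # shift AC and MQ right together
        ac = combined >> n
        mq = combined & ((1 << n) - 1)
    return (ac << n) | mq                    # product spans AC and MQ
```

For example, shift_add_multiply(13, 11, 4) walks through four shift steps and yields 143.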

20. Booth’s Algorithm with an Example

Booth's Algorithm Steps:

1. Initialization:
o Set Accumulator (A) and Q-1 to zero.
o Load Multiplicand (M) and Multiplier (Q).
2. Repeat for n bits:
o If Q0 = 0 and Q-1 = 1, then A = A + M.
o If Q0 = 1 and Q-1 = 0, then A = A - M.
o Arithmetic Right Shift A, Q, Q-1.
o Update Q-1.
3. Result:
o Product is in A and Q.

Example: Multiply 3 (0011) by -4 (1100).

Steps:

 Initialization:
o A = 0000, Q = 1100, Q-1 = 0, M = 0011, -M = 1101.
 Cycle 1:
o Q0 Q-1 = 0 0: Do nothing.
o Shift right: A = 0000, Q = 0110, Q-1 = 0.
 Cycle 2:
o Q0 Q-1 = 0 0: Do nothing.
o Shift right: A = 0000, Q = 0011, Q-1 = 0.
 Cycle 3:
o Q0 Q-1 = 1 0: A = A - M = 1101.
o Shift right: A = 1110, Q = 1001, Q-1 = 1.
 Cycle 4:
o Q0 Q-1 = 1 1: Do nothing.
o Shift right: A = 1111, Q = 0100, Q-1 = 1.

Final Result:

 A and Q together hold 1111 0100, which is -12 in two's complement.
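The algorithm can be sketched in Python for n-bit two's-complement operands. This is an illustrative implementation: the masking and sign replication emulate fixed-width registers and the arithmetic right shift:

```python
def booth_multiply(m, q, n):
    """Multiply two n-bit two's-complement integers with Booth's algorithm."""
    mask = (1 << n) - 1
    A, Q, q_1 = 0, q & mask, 0        # accumulator, multiplier register, Q-1
    M = m & mask
    for _ in range(n):
        q0 = Q & 1
        if q0 == 1 and q_1 == 0:      # Q0 Q-1 = 1 0: A = A - M
            A = (A - M) & mask
        elif q0 == 0 and q_1 == 1:    # Q0 Q-1 = 0 1: A = A + M
            A = (A + M) & mask
        # arithmetic right shift of the combined A, Q, Q-1 register
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        sign = A >> (n - 1)           # replicate A's sign bit
        A = ((A >> 1) | (sign << (n - 1))) & mask
    product = (A << n) | Q            # 2n-bit result spans A and Q
    if product >> (2 * n - 1):        # reinterpret as a signed value
        product -= 1 << (2 * n)
    return product
```

Running booth_multiply(3, -4, 4) reproduces the worked example above and returns -12.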


21. OS in the Context of Hardware and Software and Its Classification

Operating System (OS):

 Acts as an intermediary between users/applications and hardware.
 Manages hardware resources and provides services to software.

Context:

 Hardware: Physical components (CPU, memory, I/O devices).
 Software: Applications and programs that perform tasks.
 OS Role:
o Manages hardware resources.
o Provides a stable, consistent environment for applications.

Classification of OS:

1. Single-User vs. Multi-User:
o Single-User: Supports one user at a time (e.g., DOS).
o Multi-User: Supports multiple users simultaneously (e.g., Unix).
2. Single-Tasking vs. Multi-Tasking:
o Single-Tasking: Runs one program at a time.
o Multi-Tasking: Runs multiple programs concurrently.
3. Distributed OS:
o Manages a collection of independent computers and makes them appear as a
single coherent system.
4. Embedded OS:
o Designed for embedded systems (e.g., appliances, cars).
5. Real-Time OS:
o Provides immediate processing and response for time-critical tasks.

22. Queuing Diagram of Scheduling Queues, MMU, and Memory Protection

Scheduling Queues:

 Job Queue: All processes in the system.
 Ready Queue: Processes ready for execution.
 Device Queues: Processes waiting for I/O devices.

Memory Management Unit (MMU):

 Handles virtual-to-physical address translation.
 Supports memory protection by ensuring processes access only their allocated memory.

Memory Protection:

 Prevents processes from accessing memory outside their allocation.
 Implemented via hardware (MMU) and OS support.

Diagram:

[Job Queue] --> [Long-Term Scheduler] --> [Ready Queue]
                                               |
                                       [CPU Scheduler]
                                               |
                                             [CPU]
                                               |
                                             [MMU]
                                               |
                                      [Physical Memory]

Explanation:

 Processes move from the job queue to the ready queue.
 The CPU scheduler selects processes for execution.
 MMU translates addresses and enforces memory protection.

23. Paging, Paging Hardware with a Diagram, and Memory Mapping Example

Paging:

 Divides memory into fixed-size pages (logical) and frames (physical).
 Eliminates external fragmentation.

Paging Hardware Components:

 Page Table: Maps pages to frames.
 Page Table Base Register (PTBR): Points to the page table.
 Translation Lookaside Buffer (TLB): Cache for page table entries.

Diagram:

[CPU]
|
[Logical Address (Page # + Offset)]
|
[MMU]
|--[Page #]--> [Page Table] --> [Frame #]
|
[Physical Address (Frame # + Offset)]
|
[Physical Memory]

Memory Mapping Example:

 Logical Address: Page 5, Offset 100.
 Page Table Entry: Page 5 maps to Frame 3.
 Physical Address: Frame 3, Offset 100.

Explanation:

 CPU generates a logical address.
 MMU uses the page number to look up the frame number in the page table.
 Physical address is formed by combining the frame number with the offset.
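The translation can be sketched in Python using the example mapping above. The 1024-byte page size is an assumption; the text does not fix one:

```python
PAGE_SIZE = 1024                 # assumed page size; any power of two works

page_table = {5: 3}              # page 5 -> frame 3 (example from the text)

def translate(page, offset):
    """Map a logical (page, offset) pair to a physical byte address."""
    frame = page_table[page]             # page-table lookup (or TLB hit)
    return frame * PAGE_SIZE + offset    # frame base + unchanged offset
```

With these numbers, logical address (page 5, offset 100) translates to physical address 3 * 1024 + 100 = 3172.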

24. Definition of Deadlock, Conditions of a Deadlock, and Example with a Resource Allocation Graph

Deadlock Definition:

 A situation where a set of processes are blocked because each process holds a resource
and waits for another resource held by another process.

Conditions for Deadlock (Coffman Conditions):

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode.
2. Hold and Wait: Processes hold resources while waiting for others.
3. No Preemption: Resources cannot be forcibly removed from processes.
4. Circular Wait: A circular chain of processes exists, each waiting for a resource held by
the next.

Resource Allocation Graph Example:

 Processes: P1, P2.
 Resources: R1, R2.
 Edges:
o P1 holds R1, requests R2.
o P2 holds R2, requests R1.

Graph:

P1 --> R2 (request)
R1 --> P1 (assignment)
P2 --> R1 (request)
R2 --> P2 (assignment)

Explanation:

 The graph shows a circular wait.
 Both processes are waiting indefinitely, resulting in a deadlock.
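Detecting this deadlock reduces to cycle detection on the graph. A small Python sketch using depth-first search over the edges listed above; for single-instance resources, a cycle implies deadlock:

```python
# The resource-allocation graph above as adjacency lists.
graph = {
    "P1": ["R2"],   # P1 requests R2
    "R1": ["P1"],   # R1 assigned to P1
    "P2": ["R1"],   # P2 requests R1
    "R2": ["P2"],   # R2 assigned to P2
}

def has_cycle(g):
    """Return True if the directed graph g contains a cycle."""
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)           # nodes on the current DFS path
        for nxt in g.get(node, []):
            if nxt in on_stack:      # back edge: cycle found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in g if n not in visited)
```

Here has_cycle(graph) returns True, confirming the circular wait P1 -> R2 -> P2 -> R1 -> P1.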

25. System Components of an Operating System

Components:

1. Process Management:
o Creation, scheduling, and termination of processes.
2. Memory Management:
o Allocation and deallocation of memory space.
o Virtual memory implementation.
3. File System Management:
o Controls file operations (creation, deletion, access).
4. Device Management:
o Manages I/O devices and drivers.
o Provides a uniform interface for hardware.
5. Secondary Storage Management:
o Manages storage devices and data retrieval.
6. Security and Protection:
o Controls access to resources.
o Protects data and system integrity.
7. Networking:
o Facilitates communication between processes over a network.
8. Command Interpreter (Shell):
o Interface between the user and the OS.

26. Different Types of Schedulers with a Diagram

Schedulers:

1. Long-Term Scheduler (Job Scheduler):
o Determines which processes are admitted to the system for processing.
o Controls degree of multiprogramming.
2. Short-Term Scheduler (CPU Scheduler):
o Selects processes from the ready queue for execution.
o Determines which process runs next.
3. Medium-Term Scheduler:
o Swaps processes in and out of memory.
o Balances the load and manages memory.

Diagram:
[Job Queue]
|
[Long-Term Scheduler]
|
[Ready Queue] <---> [Medium-Term Scheduler]
|
[Short-Term Scheduler]
|
[CPU]

Explanation:

 The Long-Term Scheduler controls process admission.
 The Short-Term Scheduler decides which process runs next.
 The Medium-Term Scheduler manages memory by swapping processes.
