Architecture_Notes
1. Difference between Von Neumann and Harvard architecture.
Ans:
Von Neumann Architecture:
The same physical memory addresses are used for instructions and data.
There is a common bus for data and instruction transfer.
The CPU cannot fetch an instruction and read/write data at the same time.
It is used in personal computers and small computers.
Harvard Architecture:
Separate physical memory addresses are used for instructions and data.
Separate buses are used for transferring data and instructions.
The CPU can fetch an instruction and read/write data at the same time.
It is used in microcontrollers and signal processing.
2. Explain the role of system bus in a computer system?
Ans: See the bus notes given earlier.
3. Explain the role of program counter?
Ans: The program counter (PC) plays a crucial role in coordinating the smooth progression of tasks within a computing system's architecture.
Every instruction and data item in memory has its own address. As each instruction is processed, the program counter is updated to the address of the next instruction to be fetched: when a byte of machine code is fetched, the PC is incremented by one so that it points at the next byte. If the computer is reset or restarted, the program counter returns to zero. The roles of the program counter are described below.
Task Sequencing: By holding the address of the current task, and incrementing to
point to the next task once the current one is completed, it ensures tasks are
executed in the correct sequence.
Control Flow Maintenance: The Program Counter helps control the flow of execution
by holding the address of the next instruction. This is particularly useful for
implementing jumps and branches in the control flow.
Synchronization: In multi-threaded CPUs, a separate program counter per thread
ensures smooth synchronization of the threads' operations.
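The sequencing and control-flow roles above can be sketched with a toy interpreter; the instruction set here (LOAD/ADD/JMP/HALT) is purely illustrative, not any real ISA:

```python
# Minimal sketch of how a program counter drives sequencing, jumps and reset.
# The instruction set is an assumed toy one, chosen only for illustration.
program = [
    ("LOAD", 5),    # address 0
    ("ADD", 3),     # address 1
    ("JMP", 4),     # address 2: jump to address 4
    ("SUB", 1),     # address 3: skipped by the jump
    ("HALT", None), # address 4
]

pc = 0            # reset: the PC starts at zero
trace = []
while True:
    opcode, operand = program[pc]   # fetch the instruction the PC points at
    trace.append((pc, opcode))
    if opcode == "HALT":
        break
    if opcode == "JMP":
        pc = operand                # branch: load the PC with the target address
    else:
        pc += 1                     # default: increment to the next instruction

print(trace)  # [(0, 'LOAD'), (1, 'ADD'), (2, 'JMP'), (4, 'HALT')]
```

Note how the SUB at address 3 never appears in the trace: the jump overwrote the PC, which is exactly the "control flow maintenance" role described above.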
4. Explain arithmetic and instruction pipelines.
Ans:
Arithmetic Pipeline:
An arithmetic pipeline divides an arithmetic problem into sub-problems for execution in different pipeline segments. It is used for floating-point operations, multiplication and various other computations.
Instruction Pipeline:
In this technique, a stream of instructions is executed by overlapping the fetch, decode and execute phases of the instruction cycle. It is used to increase the throughput of the computer system. An instruction pipeline reads the next instruction from memory while previous instructions are being executed in other segments of the pipeline, so multiple instructions are in progress simultaneously. The pipeline is most efficient when the instruction cycle is divided into segments of equal duration.
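The throughput gain can be quantified with the standard idealized timing model (equal-duration stages, no stalls): a k-stage pipeline finishes n instructions in k + (n - 1) cycles instead of n * k:

```python
# Standard idealized pipeline timing model: equal-duration stages, no stalls.
def pipeline_cycles(n_instructions, n_stages):
    """Cycles for n instructions through a k-stage pipeline: k + (n - 1)."""
    return n_stages + (n_instructions - 1)

def sequential_cycles(n_instructions, n_stages):
    """Cycles without overlap: each instruction occupies all stages alone."""
    return n_instructions * n_stages

n, k = 100, 5
print(pipeline_cycles(n, k))                             # 104
print(sequential_cycles(n, k))                           # 500
print(sequential_cycles(n, k) / pipeline_cycles(n, k))   # ~4.8x speedup
```

As n grows, the speedup approaches k, which is why equal-duration segments matter: the slowest stage sets the cycle time for all of them.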
5. Explain priority interrupt and its working.
Ans: In computer architecture, priority interrupts are a mechanism for handling requests
from devices and internal components in a controlled and efficient manner. Here's a
breakdown of how they work:
Modern computer systems have many devices and internal components (e.g.,
keyboard, network card, disk drive, timers) that can interrupt the CPU to signal
events requiring attention.
Not all interrupts are equally important. For example, a network failure might
require more immediate attention than a key being pressed on the keyboard.
How Priority Interrupts Work:
When an interrupt request arrives with a priority higher than the work the CPU is currently doing, the CPU suspends the current program and transfers control to the corresponding Interrupt Service Routine (ISR).
The ISR is a small program snippet loaded into memory and designed to handle a specific type of interrupt.
It performs the necessary actions for the interrupt source (e.g., reading data from a network card or updating disk status).
Priority Schemes:
Fixed Priority: Each interrupt source has a pre-assigned, static priority level. Simple
but less flexible.
Dynamic Priority: Priority levels can be adjusted based on the situation. More
complex but allows for more fine-grained control. Often used in real-time systems.
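A fixed-priority scheme can be sketched as a simple arbiter; the priority table below is an assumed example (lower number = higher priority, a common convention in real interrupt controllers):

```python
# Sketch of fixed-priority interrupt arbitration. The priority table is an
# illustrative assumption: lower number means higher priority.
PRIORITY = {"timer": 0, "disk": 1, "network": 2, "keyboard": 3}

def select_interrupt(pending):
    """Return the highest-priority pending interrupt source, or None."""
    if not pending:
        return None
    return min(pending, key=lambda src: PRIORITY[src])

pending = {"keyboard", "disk"}
print(select_interrupt(pending))  # disk (priority 1 beats keyboard's 3)
```

A dynamic-priority scheme would differ only in that the PRIORITY table is updated at run time instead of being fixed at design time.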
Benefits of Priority Interrupts:
Critical events are serviced first, urgent devices get predictable response times, and lower-priority requests are deferred rather than lost.
After all the program parts have been processed, the result is a fully processed program segment. This is true whether the numbers of processors and tasks were equal and they all finished simultaneously, or they finished one after the other.
There are two types of parallel processing: fine-grained and coarse-grained. In fine-grained parallelism, tasks communicate with one another many times per second to deliver results in real time or very close to it. Coarse-grained parallel processes communicate far less frequently, so their results arrive more slowly.
Advantages:
Speed-up.
Better cost/performance ratio in the long run.
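Coarse-grained parallelism can be sketched with Python's standard library: each worker processes its own chunk independently and communication happens only once, when the partial results are combined (the chunking and the `process_chunk` function are illustrative choices, not a prescribed pattern):

```python
# Coarse-grained parallelism sketch: workers touch their own chunk only and
# communicate just once, at the final combine step.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Each task works alone on its chunk; no mid-task communication.
    return sum(x * x for x in chunk)

data = list(range(100))
chunks = [data[i:i + 25] for i in range(0, 100, 25)]  # 4 independent chunks

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(process_chunk, chunks))

print(sum(partial_sums))  # same answer as the sequential sum of squares
```

A fine-grained version of the same job would have the workers exchanging intermediate values constantly, which is where the communication overhead discussed above comes from.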
1. Fetch:
In this stage, the CPU retrieves an instruction from memory. It involves the following steps:
The Program Counter (PC) register stores the memory address of the next instruction to be
fetched.
The CPU sends the address from the PC to the Memory Address Register (MAR).
The Memory Data Register (MDR) fetches the instruction from the memory location
specified by the MAR.
The fetched instruction is loaded into the Instruction Register (IR).
2. Decode:
Once the instruction is in the IR, the CPU decodes it to understand what needs to be done.
Decoding involves:
Identifying the operation (opcode) specified in the instruction.
Determining the operands (data) required for the operation. These operands can be located
in registers, memory addresses, or immediate values included in the instruction itself.
3. Execute:
In this stage, the CPU executes the decoded instruction based on the opcode and operands.
This might involve:
Performing arithmetic operations (addition, subtraction, multiplication, division) using the
Arithmetic Logic Unit (ALU).
Accessing data from memory using the Memory Unit (MU).
Performing logical operations (AND, OR, NOT) using the ALU.
Branching to a different location in the program based on conditional statements.
4. Store (Optional):
Some instructions may involve storing the result of an operation back to memory. This
typically happens after the execution stage.
10. Explain Von Neumann architecture.
Ans: Given before.
11. Compare Static and Dynamic RAM.
Ans:
SRAM:
Capacitors are not used, hence no refreshing is required.
Bits are stored in voltage form (in flip-flops).
SRAM has lower latency than DRAM.
SRAM is more resistant to radiation than DRAM.
DRAM:
To store information for a longer time, the contents of the capacitors need to be refreshed periodically.
Bits are stored in the form of electric charge on capacitors.
DRAM has higher latency than SRAM.
DRAM is less resistant to radiation than SRAM.
In serial processing, all the instructions are executed in a sequence, one at a time.
In parallel processing, instructions are executed in parallel.
The ISA provides the only way through which a user is able to interact with the hardware. It
can be viewed as a programmer’s manual because it’s the portion of the machine that’s
visible to the assembly language programmer, the compiler writer, and the application
programmer.
The ISA defines the supported data types, the registers, how the hardware manages main
memory, key features (such as virtual memory), which instructions a microprocessor can
execute, and the input/output model of multiple ISA implementations. The ISA can be
extended by adding instructions or other capabilities, or by adding support for larger
addresses and data values.
Section B
1. Describe fetch decode and execute cycle in details.
Ans: The fetch-decode-execute cycle, also known as the instruction cycle, is the fundamental
process by which a central processing unit (CPU) executes instructions in a computer system.
It's a continuous loop that the CPU repeats for every instruction in a program. Here's a
detailed breakdown of each stage:
1. Fetch:
Retrieving the Instruction: In this stage, the CPU retrieves an instruction from
memory. It involves the following steps:
o The Program Counter (PC) register stores the memory address of the next
instruction to be fetched.
o The CPU sends the address from the PC to the Memory Address Register
(MAR).
o The Memory Data Register (MDR) fetches the instruction from the memory
location specified by the MAR. This memory location contains the binary
code representing the instruction.
o The fetched instruction is loaded into the Instruction Register (IR).
2. Decode:
Understanding the Instruction: Once the instruction is in the IR, the CPU decodes it
to determine what needs to be done. Decoding involves:
o Identifying the operation (opcode) specified in the instruction. The opcode
tells the CPU what kind of operation to perform (e.g., addition, subtraction,
load from memory, store to memory, branch).
o Determining the operands (data) required for the operation. These operands
can be located in various places based on the instruction format:
Registers: The CPU has a set of internal registers that can hold
temporary data. The instruction might specify operands by referring to
their register numbers.
Memory addresses: The instruction might contain a memory address
where the operand is located.
Immediate values: Some instructions include the operand value
directly within the instruction itself. This is known as immediate
addressing mode.
3. Execute:
Taking Action: In this stage, the CPU executes the decoded instruction based on the
opcode and operands. This might involve:
o Performing arithmetic operations (addition, subtraction, multiplication,
division) using the Arithmetic Logic Unit (ALU).
o Accessing data from memory using the Memory Unit (MU) based on memory
addresses specified in the instruction.
o Performing logical operations (AND, OR, NOT) using the ALU.
o Branching to a different location in the program based on conditional
statements. For example, a jump instruction might change the value of the PC
to point to a different instruction.
4. Store (Optional):
Storing Results (if applicable): Some instructions may involve storing the result of
an operation back to memory. This typically happens after the execution stage, where
the result might be stored in a specific memory location specified by the instruction.
Once the execution stage is complete, the PC register is typically updated based on
the instruction. This could involve:
o Incrementing the PC by 1 to fetch the next instruction in sequence.
o Jumping to a different instruction address based on a branching instruction.
This update to the PC initiates the fetch stage again, starting the cycle over for the
next instruction. This continuous loop allows the CPU to execute the entire program
one instruction at a time.
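The whole cycle can be condensed into a toy simulator; the accumulator machine and its instruction names below are assumptions for illustration only, not a real CPU:

```python
# Toy accumulator machine tracing fetch, decode, execute and optional store.
# The instruction set (LOADI/ADDI/STORE/HALT) is an illustrative assumption.
memory = [
    ("LOADI", 7),   # acc = 7
    ("ADDI", 5),    # acc += 5
    ("STORE", 10),  # memory[10] = acc  (the optional store stage)
    ("HALT", 0),
] + [0] * 12        # data area

pc, acc, running = 0, 0, True
while running:
    # Fetch: read the instruction at the PC, then advance the PC.
    opcode, operand = memory[pc]
    pc += 1
    # Decode + Execute: act according to the opcode and operand.
    if opcode == "LOADI":
        acc = operand               # immediate addressing mode
    elif opcode == "ADDI":
        acc += operand              # ALU-style arithmetic
    elif opcode == "STORE":
        memory[operand] = acc       # optional store stage writes back
    elif opcode == "HALT":
        running = False

print(acc, memory[10])  # 12 12
```

Incrementing the PC during fetch, before execution, is deliberate: it mirrors the note above that a branch only has to overwrite the already-advanced PC.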
Additional Notes:
Pipelining: Modern processors often employ instruction pipelining, which overlaps
the fetch, decode, and execute stages of multiple instructions for improved
performance. By prefetching and partially decoding instructions while others are
being executed, pipelining can significantly reduce idle time and speed up program
execution.
Caches: Caches are used to store frequently accessed instructions and data closer to
the CPU, significantly reducing the average memory access time and boosting
performance. By keeping frequently used data readily available, caches can reduce the
number of times the CPU needs to access slower main memory, improving the overall
efficiency of the fetch-decode-execute cycle.
2. Explain the memory hierarchy in a computer system.
Ans: The memory hierarchy typically consists of the following levels, arranged from fastest to slowest and from smallest to largest capacity:
1. CPU Registers: These are the fastest and smallest memory locations within the CPU
itself. They are used to store frequently accessed data and temporary results during
program execution. Accessing registers is incredibly fast (measured in nanoseconds).
2. Cache Memory: This is a small, high-speed memory that sits between the CPU and
main memory. It stores frequently accessed data and instructions from main memory,
reducing the need to access the slower main memory as often. Cache sizes vary
depending on the system, but access times are typically in tens of nanoseconds.
3. Main Memory (RAM): This is the primary memory where programs and data are
loaded from storage devices for active use. It's faster than secondary storage but
slower than cache memory. Access times are in the range of tens to hundreds of
nanoseconds.
4. Secondary Storage (Hard Disk Drives, SSDs): This is non-volatile storage that
retains data even when the computer is powered off. It's much slower than main
memory but has a much larger capacity. Access times for HDDs are in milliseconds,
while SSDs offer faster access times closer to RAM.
5. Tertiary Storage (Optical Discs, Tape Drives): This is the slowest and has the
largest capacity among memory hierarchy levels. It's typically used for archival
purposes or storing data that is rarely accessed. Access times can range from seconds
to minutes.
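The payoff of the hierarchy can be seen with the standard average memory access time (AMAT) formula; the latency and miss-rate numbers below are illustrative, not measurements:

```python
# Average memory access time (AMAT) across two hierarchy levels.
# The nanosecond figures and miss rate are illustrative assumptions.
def amat(hit_time, miss_rate, miss_penalty):
    """AMAT = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Cache hit in 2 ns; 5% of accesses miss and pay a 100 ns trip to main memory.
print(amat(2, 0.05, 100))  # 7.0 ns on average
```

Even with a 50x gap between cache and main memory latency, a 95% hit rate keeps the average access close to cache speed, which is the whole point of the hierarchy.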
4. Difference between RISC and CISC.
Ans: Given before.
5. Parallel processing and its architecture. Given before.
6. Explain the concept of pipelining in CPU design.
Concept of Pipelining:
Pipelining divides instruction processing into stages so that several instructions can be in different stages at once, much like an assembly line.
Benefits of Pipelining:
Higher instruction throughput and better utilisation of the CPU hardware.
Challenges of Pipelining:
Hazards (data, control and structural) can stall the pipeline and reduce the ideal speedup.
Stages in a Pipeline:
The specific stages in a pipeline vary with the processor architecture, but a common breakdown includes fetch, decode, execute, memory access and write-back.
Consider a simple pipeline with fetch, decode, and execute stages. While one instruction is
being executed, the next instruction can be decoded, and the following instruction can be
fetched. This overlapping of stages can significantly improve performance compared to a
sequential execution model.
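The overlap in that three-stage example can be printed cycle by cycle; the stage labels F/D/E are shorthand for fetch, decode and execute:

```python
# Cycle-by-cycle view of the three-stage example (F = fetch, D = decode,
# E = execute). Instruction i enters stage s in cycle i + s (0-indexed).
STAGES = ["F", "D", "E"]

def schedule(n_instructions):
    """Map each cycle index to the (instruction, stage) work active in it."""
    table = {}
    for i in range(n_instructions):
        for s, stage in enumerate(STAGES):
            table.setdefault(i + s, []).append(f"I{i+1}:{stage}")
    return table

for cycle, work in sorted(schedule(3).items()):
    print(cycle + 1, " ".join(work))
# 1 I1:F
# 2 I1:D I2:F
# 3 I1:E I2:D I3:F
# 4 I2:E I3:D
# 5 I3:E
```

Three instructions finish in 5 cycles instead of the 9 a strictly sequential model would need, and in cycle 3 all three stages are busy at once.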
Types of Pipelines: