Architecture_Notes

The document compares Von Neumann and Harvard architectures, highlighting their differences in memory usage, instruction execution, and cost. It explains the roles of the system bus, program counter, instruction pipeline, and priority interrupts in computer systems. Additionally, it covers concepts of parallel processing, memory hierarchy, instruction set architecture, and the fetch-decode-execute cycle.


1. Compare Harvard Architecture and Von Neumann Architecture.

Ans: Difference between Von Neumann and Harvard architecture:


Von Neumann Architecture | Harvard Architecture
It is an older computer architecture based on the stored-program concept. | It is a modern computer architecture based on the Harvard Mark I relay-based model.
The same physical memory is used for instructions and data. | Separate physical memories are used for instructions and data.
There is a common bus for data and instruction transfer. | Separate buses are used for transferring data and instructions.
Two clock cycles are required to execute a single instruction. | An instruction is executed in a single cycle.
It is cheaper in cost. | It is costlier than the Von Neumann architecture.
The CPU cannot access instructions and read/write data at the same time. | The CPU can access instructions and read/write data at the same time.
It is used in personal computers and small computers. | It is used in microcontrollers and signal processing.
2. Explain the role of the system bus in a computer system.
Ans: See the bus notes given earlier.
3. Explain the role of the program counter.
Ans: The PC plays a crucial role in coordinating the smooth progression of tasks within a
computing system's architecture.
All the instructions and data present in memory have a unique address. As each instruction is
processed, the program counter is updated to the address of the next instruction to be
fetched: when a byte of machine code is fetched, the PC is incremented by one so that the
next instruction can be fetched. If the computer is reset or restarted, the program counter
resets to zero. The following describes the roles of the program counter.
 Task Sequencing: By holding the address of the current task, and incrementing to
point to the next task once the current one is completed, it ensures tasks are
executed in the correct sequence.
 Control Flow Maintenance: The Program Counter helps control the flow of execution
by holding the address of the next instruction. This is particularly useful for
implementing jumps and branches in the control flow.
 Synchronization: In multithreaded CPUs, a separate program counter per thread
ensures smooth synchronization of the various operations.
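To make the sequencing and control-flow roles concrete, here is a minimal sketch of a program counter driving a fetch loop, with a jump overwriting it. The toy program and the step bound are illustrative assumptions, not a real machine.

```python
program = ["INC", "INC", "JMP 0", "HALT"]  # HALT is never reached here
pc, steps = 0, 0

while steps < 5:                    # bound the demo to a few steps
    instr = program[pc]             # fetch the instruction the PC points at
    pc += 1                         # PC is incremented after each fetch
    if instr.startswith("JMP"):     # a jump overwrites the PC,
        pc = int(instr.split()[1])  # redirecting the control flow
    steps += 1
    print(f"executed {instr}, next PC = {pc}")
```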
4. Define instruction Pipeline and Arithmetic pipeline.


Ans:
Arithmetic Pipeline:

An arithmetic pipeline divides an arithmetic problem into various sub-problems that are
executed in different pipeline segments. It is used for floating-point operations, multiplication,
and various other computations.

Instruction Pipeline:

In this technique, a stream of instructions is executed by overlapping the fetch, decode, and
execute phases of the instruction cycle. It is used to increase the throughput of the
computer system. An instruction pipeline reads instructions from memory while previous
instructions are being executed in other segments of the pipeline, so multiple instructions
can be executed simultaneously. The pipeline is more efficient if the instruction cycle is
divided into segments of equal duration.
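To illustrate the arithmetic case, the sketch below splits floating-point addition into the four classic segments (compare exponents, align mantissas, add, normalize). The simplified decimal (mantissa, exponent) representation is an assumption made for readability, not how real hardware represents floats.

```python
def fp_add_pipeline(x, y):
    """Floating-point addition split into four pipeline segments."""
    (mx, ex), (my, ey) = x, y            # operands as (mantissa, exponent)
    shift = abs(ex - ey)                 # segment 1: compare exponents
    if ex >= ey:                         # segment 2: align the smaller mantissa
        my, ey = my / (10 ** shift), ex
    else:
        mx, ex = mx / (10 ** shift), ey
    m, e = mx + my, ex                   # segment 3: add the mantissas
    while abs(m) >= 1:                   # segment 4: normalize to [0.1, 1)
        m, e = m / 10, e + 1
    return m, e

# 0.9504 x 10^3 + 0.8200 x 10^2 = 0.10324 x 10^4
print(fp_add_pipeline((0.9504, 3), (0.8200, 2)))
```

In a hardware pipeline, each segment is a separate circuit with its own latch, so four different additions can occupy the four segments at the same time.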
5. Explain priority interrupt and its working.
Ans: In computer architecture, priority interrupts are a mechanism for handling requests
from devices and internal components in a controlled and efficient manner. Here's a
breakdown of how they work:

 The Need for Priority Interrupts:

 Modern computer systems have many devices and internal components (e.g.,
keyboard, network card, disk drive, timers) that can interrupt the CPU to signal
events requiring attention.

 Not all interrupts are equally important. For example, a network failure might
require more immediate attention than a key being pressed on the keyboard.
 How Priority Interrupts Work:

 Interrupt Request (IRQ): When a device or component needs attention, it sends an
interrupt request (IRQ) signal to the processor's Interrupt Controller (IC).
 Priority Check: The IC, a dedicated hardware component, compares the incoming
IRQ's priority level with the currently executing program's priority.
 Interrupt Handling:

 Higher Priority: If the IRQ has a higher priority:
 The processor saves the state of the current program (registers, program counter) to
allow resuming later.
 It starts servicing the higher-priority interrupt by executing an Interrupt Service
Routine (ISR) specifically designed to handle this type of interrupt.
 Lower Priority: If the IRQ has a lower priority:
 The processor might temporarily ignore the interrupt and continue executing the
current program. This is called interrupt masking.

 Interrupt Service Routine (ISR):

 The ISR is a small program snippet loaded into memory and designed to handle a
specific type of interrupt.
 It performs the necessary actions for the interrupt source (e.g., read data from a
network card, update disk status).
 After Interrupt Handling:

 Once the ISR finishes, the processor:
 Restores the state of the interrupted program (if possible).
 Resumes execution from where it was interrupted.
 Types of Priority Interrupt Schemes:

 Fixed Priority: Each interrupt source has a pre-assigned, static priority level. Simple
but less flexible.
 Dynamic Priority: Priority levels can be adjusted based on the situation. More
complex but allows for more fine-grained control. Often used in real-time systems.
 Benefits of Priority Interrupts:

 Improved responsiveness: Critical events are handled promptly, enhancing system
reliability and real-time performance.
 Efficient resource utilization: The processor focuses on high-priority tasks first,
optimizing processing time.
 Challenges of Priority Interrupts:

 Priority selection: Assigning appropriate priority levels to different interrupt sources
can be complex.
 Nested interrupts: Handling multiple high-priority interrupts within a lower-priority
one requires careful management to avoid deadlocks.
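The sketch below models a fixed-priority controller with a pending queue. The device names and priority levels are illustrative assumptions, and lower numbers are taken to mean higher priority.

```python
import heapq

class InterruptController:
    """Toy fixed-priority interrupt controller (illustrative only)."""
    def __init__(self):
        self.pending = []           # min-heap of (priority, device)
        self.current_priority = 99  # priority level of the running task

    def raise_irq(self, priority, device):
        heapq.heappush(self.pending, (priority, device))

    def dispatch(self):
        # Service pending IRQs that outrank the running task; lower-priority
        # requests stay masked in the queue until the level drops.
        while self.pending and self.pending[0][0] < self.current_priority:
            prio, device = heapq.heappop(self.pending)
            saved = self.current_priority    # save interrupted task's level
            self.current_priority = prio
            print(f"ISR: servicing {device} (priority {prio})")
            self.current_priority = saved    # restore and resume

ic = InterruptController()
ic.raise_irq(5, "keyboard")
ic.raise_irq(1, "network card")
ic.dispatch()  # services the network card first, then the keyboard
```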
6. Describe the concept of parallel processing and its advantages.
Ans: Parallel processing is a computing technique in which multiple streams of calculations
or data-processing tasks occur simultaneously through numerous central processing units
(CPUs) working concurrently.
Parallel processing uses two or more processors or CPUs simultaneously to handle various
components of a single activity. Systems can slash a program’s execution time by dividing a
task’s many parts among several processors. Multi-core processors, frequently found in
modern computers, and any system with more than one CPU are capable of performing
parallel processing.
When processing is done in parallel, a big job is broken down into several smaller jobs better
suited to the number, size, and type of the available processing units. After the task is divided,
each processor works on its part independently; the processors use software to coordinate
with one another and track how their tasks are going.

After all the parts have been processed, the result is a fully processed program segment.
This is true whether the number of processors and tasks was equal and they all finished
simultaneously, or they finished one after the other.

There are two types of parallel processing: fine-grained and coarse-grained. In fine-grained
parallelism, tasks communicate with one another many times per second to deliver results in
real time or very close to it. Coarse-grained parallel processes communicate infrequently,
which is why they are slower to coordinate.

Advantages

 Speed-up of execution.
 Better cost-performance in the long run.
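As a minimal sketch of coarse-grained parallelism, the example below splits a large summation into independent chunks handled by worker processes via Python's multiprocessing module. The problem size and worker count are illustrative choices, not tuned values.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    # Divide the big job into chunks sized to the available processors
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine partial results
    print(total == sum(i * i for i in range(n)))    # True
```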

7. Explain memory hierarchy.


Ans:
Memory Hierarchy Design
1. Registers
2. Cache memory
3. Main memory
4. Secondary Memory
5. Magnetic disk and tape
8. Compare RISC and CISC architecture.
Ans: Given before
9. Explain the fetch, decode, and execute cycle.
Ans: The fetch-decode-execute cycle, also known as the instruction cycle, is the fundamental
process by which a central processing unit (CPU) executes instructions in a computer system.
It's a continuous loop that the CPU repeats for every instruction in a program. Here's a
detailed explanation of each stage:

1. Fetch:

In this stage, the CPU retrieves an instruction from memory. It involves the following steps:
 The Program Counter (PC) register stores the memory address of the next instruction
to be fetched.
 The CPU sends the address from the PC to the Memory Address Register (MAR).
 The Memory Data Register (MDR) fetches the instruction from the memory location
specified by the MAR.
 The fetched instruction is loaded into the Instruction Register (IR).
2. Decode:

Once the instruction is in the IR, the CPU decodes it to understand what needs to be done.
Decoding involves:
 Identifying the operation (opcode) specified in the instruction.
 Determining the operands (data) required for the operation. These operands can be
located in registers, memory addresses, or immediate values included in the
instruction itself.
3. Execute:

In this stage, the CPU executes the decoded instruction based on the opcode and operands.
This might involve:
 Performing arithmetic operations (addition, subtraction, multiplication, division) using
the Arithmetic Logic Unit (ALU).
 Accessing data from memory using the Memory Unit (MU).
 Performing logical operations (AND, OR, NOT) using the ALU.
 Branching to a different location in the program based on conditional statements.
4. Store (Optional):

Some instructions may involve storing the result of an operation back to memory. This
typically happens after the execution stage.
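The fetch steps above (PC to MAR, MAR to memory, MDR to IR) can be written as register transfers. In this minimal sketch, the memory contents and the register dictionary are illustrative assumptions.

```python
memory = {0: "LOAD R1, 100", 1: "ADD R1, R2", 2: "STORE R1, 101"}
regs = {"PC": 0, "MAR": None, "MDR": None, "IR": None}

def fetch():
    regs["MAR"] = regs["PC"]           # PC -> MAR
    regs["MDR"] = memory[regs["MAR"]]  # memory[MAR] -> MDR
    regs["IR"] = regs["MDR"]           # MDR -> IR
    regs["PC"] += 1                    # point at the next instruction

fetch()
print(regs["IR"])  # LOAD R1, 100
```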
10. Explain Von Neumann Architecture.
Ans: Given before.
11. Compare Static and Dynamic RAM.
Ans:

SRAM | DRAM
It stores information as long as the power is supplied. | It stores information as long as the power is supplied, or for a few milliseconds after the power is switched off.
Transistors are used to store information in SRAM. | Capacitors are used to store data in DRAM.
Capacitors are not used, hence no refreshing is required. | To store information for a longer time, the contents of the capacitors need to be refreshed periodically.
SRAM is faster compared to DRAM. | DRAM provides slower access speeds.
It does not have a refreshing unit. | It has a refreshing unit.
These are expensive. | These are cheaper.
SRAMs are low-density devices. | DRAMs are high-density devices.
In SRAM, bits are stored in voltage form. | In DRAM, bits are stored in the form of electric charge.
These are used in cache memories. | These are used in main memory.
SRAM consumes less power and generates less heat. | DRAM uses more power and generates more heat.
SRAM has lower latency. | DRAM has higher latency than SRAM.
SRAMs are more resistant to radiation than DRAMs. | DRAMs are less resistant to radiation than SRAMs.
SRAM has a higher data transfer rate. | DRAM has a lower data transfer rate.
SRAM is used in high-speed cache memory. | DRAM is used in lower-speed main memory.
SRAM is used in high-performance applications. | DRAM is used in general-purpose applications.
12. Compare Serial and Parallel Processing.
Ans:

Sequential Processing | Parallel Processing
1. All the instructions are executed in a sequence, one at a time. | All the instructions are executed in parallel.
2. It has a single processor. | It has multiple processors.
3. It has low performance, and the workload of the processor is high due to the single processor. | It has high performance, and the workload of each processor is low because multiple processors work simultaneously.
4. A bit-by-bit format is used for data transfer. | Data transfers are in bytes.
5. It requires more time to complete the whole process. | It requires less time to complete the whole process.
6. Cost is low. | Cost is high.


13. Definition of Instruction Set Architecture.
Ans: An Instruction Set Architecture (ISA) is part of the abstract model of a computer that
defines how the CPU is controlled by the software. The ISA acts as an interface between the
hardware and the software, specifying both what the processor is capable of doing as well as
how it gets done.

The ISA provides the only way through which a user is able to interact with the hardware. It
can be viewed as a programmer’s manual because it’s the portion of the machine that’s
visible to the assembly language programmer, the compiler writer, and the application
programmer.

The ISA defines the supported data types, the registers, how the hardware manages main
memory, key features (such as virtual memory), which instructions a microprocessor can
execute, and the input/output model. An ISA can be extended by adding instructions or other
capabilities, or by adding support for larger addresses and data values.
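To show what "defining which instructions a processor can execute" means in practice, here is a minimal sketch of a decoder for a toy 8-bit encoding (a 2-bit opcode and two 3-bit register fields). The format is entirely hypothetical, not any real ISA.

```python
OPCODES = {0b00: "ADD", 0b01: "SUB", 0b10: "LOAD", 0b11: "STORE"}

def decode(byte):
    opcode = (byte >> 6) & 0b11  # top 2 bits select the operation
    rd = (byte >> 3) & 0b111     # next 3 bits: destination register
    rs = byte & 0b111            # low 3 bits: source register
    return f"{OPCODES[opcode]} R{rd}, R{rs}"

print(decode(0b00_001_010))  # ADD R1, R2
```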

Section B
1. Describe the fetch-decode-execute cycle in detail.

Ans: The fetch-decode-execute cycle, also known as the instruction cycle, is the fundamental
process by which a central processing unit (CPU) executes instructions in a computer system.
It's a continuous loop that the CPU repeats for every instruction in a program. Here's a
detailed breakdown of each stage:

1. Fetch:

 Retrieving the Instruction: In this stage, the CPU retrieves an instruction from
memory. It involves the following steps:
o The Program Counter (PC) register stores the memory address of the next
instruction to be fetched.
o The CPU sends the address from the PC to the Memory Address Register
(MAR).
o The Memory Data Register (MDR) fetches the instruction from the memory
location specified by the MAR. This memory location contains the binary
code representing the instruction.
o The fetched instruction is loaded into the Instruction Register (IR).

2. Decode:

 Understanding the Instruction: Once the instruction is in the IR, the CPU decodes it
to determine what needs to be done. Decoding involves:
o Identifying the operation (opcode) specified in the instruction. The opcode
tells the CPU what kind of operation to perform (e.g., addition, subtraction,
load from memory, store to memory, branch).
o Determining the operands (data) required for the operation. These operands
can be located in various places based on the instruction format:
 Registers: The CPU has a set of internal registers that can hold
temporary data. The instruction might specify operands by referring to
their register numbers.
 Memory addresses: The instruction might contain a memory address
where the operand is located.
 Immediate values: Some instructions include the operand value
directly within the instruction itself. This is known as immediate
addressing mode.

3. Execute:

 Taking Action: In this stage, the CPU executes the decoded instruction based on the
opcode and operands. This might involve:
o Performing arithmetic operations (addition, subtraction, multiplication,
division) using the Arithmetic Logic Unit (ALU).
o Accessing data from memory using the Memory Unit (MU) based on memory
addresses specified in the instruction.
o Performing logical operations (AND, OR, NOT) using the ALU.
o Branching to a different location in the program based on conditional
statements. For example, a jump instruction might change the value of the PC
to point to a different instruction.

4. Store (Optional):

 Storing Results (if applicable): Some instructions may involve storing the result of
an operation back to memory. This typically happens after the execution stage, where
the result might be stored in a specific memory location specified by the instruction.

The Cycle Repeats:

 Once the execution stage is complete, the PC register is typically updated based on
the instruction. This could involve:
o Incrementing the PC by 1 to fetch the next instruction in sequence.
o Jumping to a different instruction address based on a branching instruction.
 This update to the PC initiates the fetch stage again, starting the cycle over for the
next instruction. This continuous loop allows the CPU to execute the entire program
one instruction at a time.
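A minimal sketch of the full cycle on a toy machine follows; the instruction format, register file, and four-entry program are illustrative assumptions, not a real instruction set.

```python
memory = {
    0: ("LOADI", "R1", 5),   # R1 <- 5
    1: ("LOADI", "R2", 7),   # R2 <- 7
    2: ("ADD", "R1", "R2"),  # R1 <- R1 + R2
    3: ("HALT",),
}
regs = {"PC": 0, "R1": 0, "R2": 0}

while True:
    instr = memory[regs["PC"]]  # fetch the instruction the PC addresses
    regs["PC"] += 1             # PC now points at the next instruction
    op = instr[0]               # decode: identify the opcode
    if op == "HALT":            # execute: act on opcode and operands
        break
    elif op == "LOADI":
        regs[instr[1]] = instr[2]         # immediate value into a register
    elif op == "ADD":
        regs[instr[1]] += regs[instr[2]]  # register-to-register addition

print(regs["R1"])  # 12
```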

Importance of the Fetch-Decode-Execute Cycle:

 The fetch-decode-execute cycle is the fundamental building block of program
execution. By efficiently processing instructions through these stages, the CPU carries
out the computations and tasks specified in a program.
 Understanding this cycle is crucial for programmers because it provides insights into
how instructions are translated into actions within the CPU. This knowledge can help
programmers write more efficient and optimized code.

Additional Notes:
 Pipelining: Modern processors often employ instruction pipelining, which overlaps
the fetch, decode, and execute stages of multiple instructions for improved
performance. By prefetching and partially decoding instructions while others are
being executed, pipelining can significantly reduce idle time and speed up program
execution.
 Caches: Caches are used to store frequently accessed instructions and data closer to
the CPU, significantly reducing the average memory access time and boosting
performance. By keeping frequently used data readily available, caches can reduce the
number of times the CPU needs to access slower main memory, improving the overall
efficiency of the fetch-decode-execute cycle.

2. Describe the role of the memory hierarchy in detail.

Ans: Levels of Memory Hierarchy:

The hierarchy typically consists of the following levels, arranged from fastest to slowest and
smallest to largest capacity:

1. CPU Registers: These are the fastest and smallest memory locations within the CPU
itself. They are used to store frequently accessed data and temporary results during
program execution. Accessing registers is incredibly fast (measured in nanoseconds).
2. Cache Memory: This is a small, high-speed memory that sits between the CPU and
main memory. It stores frequently accessed data and instructions from main memory,
reducing the need to access the slower main memory as often. Cache sizes vary
depending on the system, but access times are typically in tens of nanoseconds.
3. Main Memory (RAM): This is the primary memory where programs and data are
loaded from storage devices for active use. It's faster than secondary storage but
slower than cache memory. Access times are in the range of tens to hundreds of
nanoseconds.
4. Secondary Storage (Hard Disk Drives, SSDs): This is non-volatile storage that
retains data even when the computer is powered off. It's much slower than main
memory but has a much larger capacity. Access times for HDDs are in milliseconds,
while SSDs offer faster access times closer to RAM.
5. Tertiary Storage (Optical Discs, Tape Drives): This is the slowest and has the
largest capacity among memory hierarchy levels. It's typically used for archival
purposes or storing data that is rarely accessed. Access times can range from seconds
to minutes.
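A quick calculation shows why the hierarchy pays off: with a cache in front of main memory, the average access time stays close to the cache's speed. The latencies and hit rate below are illustrative round numbers, not measurements of any particular system.

```python
cache_ns, ram_ns = 10, 100  # assumed latencies
hit_rate = 0.95             # assumed cache hit rate

# Average memory access time = hit time + miss rate * miss penalty
amat = cache_ns + (1 - hit_rate) * ram_ns
print(f"{amat:.1f} ns")  # 15.0 ns -- far closer to cache than to RAM speed
```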

3. lol
4. Difference between RISC and CISC.
Ans: Given Before
5. Parallel processing and its architecture. Given before.
6. Explain the concept of pipelining in CPU design.

Ans: Pipelining in Computer Architecture: Overlapping Instruction Execution
Pipelining is a technique used in computer architecture to improve instruction execution
performance by overlapping the execution of different stages of the instruction cycle (fetch,
decode, execute, memory access, write back) for multiple instructions. This approach allows
the processor to keep its functional units (e.g., ALU, memory units) busy for longer periods,
reducing idle time and potentially speeding up program execution.

Concept of Pipelining:

Imagine an assembly line in a factory. Each instruction can be thought of as a product
moving through the pipeline. The pipeline is divided into stages, each performing a specific
task on the instruction. Just as workers on an assembly line can be working on different
parts of multiple products simultaneously, pipelining allows the CPU to work on different
stages of multiple instructions concurrently.

Benefits of Pipelining:

 Improved Performance: By overlapping instruction execution, pipelining can
significantly increase throughput, especially for programs with high instruction-level
parallelism (where multiple instructions are independent and can be executed
concurrently).
 More Efficient Utilization of Hardware Resources: Functional units are kept busy
for longer periods, reducing idle time and improving overall resource utilization.

Challenges of Pipelining:

 Data Dependencies: If an instruction depends on the result of a previous instruction
(e.g., an addition followed by a multiplication that uses the result of the addition), the
pipeline may stall until the previous instruction finishes. This can be mitigated using
techniques like forwarding and hazard detection.
 Control Flow Changes: Branches and jumps can disrupt the pipeline flow, requiring
the pipeline to be flushed and refilled with the new instructions from the branch
target. Techniques like branch prediction can help alleviate this issue.

Stages in a Pipeline:

The specific stages in a pipeline can vary depending on the processor architecture, but a
common breakdown might include:

1. Fetch: The processor retrieves an instruction from memory.
2. Decode: The instruction is decoded to determine the operation to be performed and
the operands required.
3. Execute: The operation specified by the instruction is executed on the appropriate
functional unit (e.g., ALU for arithmetic operations, memory unit for load/store
operations).
4. Memory Access (optional): If the instruction requires data from memory (load
operation), this stage retrieves the data. Data may be available from a cache in some
cases.
5. Write Back (optional): If the instruction produces a result (store operation), this
stage writes the result back to memory.
Pipelined Instruction Execution Example:

Consider a simple pipeline with fetch, decode, and execute stages. While one instruction is
being executed, the next instruction can be decoded, and the following instruction can be
fetched. This overlapping of stages can significantly improve performance compared to a
sequential execution model.

Types of Pipelines:

 Instruction Pipeline: This is the most common type of pipeline, focusing on
overlapping the stages of the instruction cycle as described above.
 Arithmetic Pipeline: This type of pipeline specifically focuses on parallelizing the
execution of arithmetic operations (e.g., addition, subtraction, multiplication, division)
by breaking them down into smaller sub-operations and processing them
concurrently.
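The timing sketch below shows the overlap on a three-stage pipeline: in each clock cycle every stage works on a different instruction, so four instructions finish in six cycles instead of the twelve a sequential model would need. The instruction names are placeholders.

```python
instructions = ["I1", "I2", "I3", "I4"]
stages = ["Fetch", "Decode", "Execute"]

# Instruction i occupies stage s during cycle i + s
for cycle in range(len(instructions) + len(stages) - 1):
    active = []
    for s, stage in enumerate(stages):
        i = cycle - s
        if 0 <= i < len(instructions):
            active.append(f"{stage}:{instructions[i]}")
    print(f"cycle {cycle + 1}: " + "  ".join(active))
```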

7. What are the different types of micro-operations available in a CPU?

Ans: The four types of micro-operations are register-transfer micro-operations, arithmetic
micro-operations, logic micro-operations, and shift micro-operations. Register-transfer
micro-operations move binary data between registers. Arithmetic micro-operations operate
on the numerical data stored in registers. Logic micro-operations perform bit-manipulation
operations on non-numeric data stored in registers. Shift micro-operations perform shift
operations on register contents.
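A minimal sketch of the four classes acting on 8-bit values follows; the register names and contents are illustrative.

```python
R1, R2 = 0b0110_0101, 0b0000_1111

R3 = R2                  # register transfer: R3 <- R2
R4 = (R1 + R2) & 0xFF    # arithmetic: R4 <- R1 + R2, wrapped to 8 bits
R5 = R1 & R2             # logic: bitwise AND of register contents
R6 = (R1 << 1) & 0xFF    # shift: logical shift left by one bit

print(f"{R4:08b} {R5:08b} {R6:08b}")
```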
8. Explain the different types of registers in the 8085.
Ans:

You might also like