
100 IMPORTANT QUESTIONS AND ANSWERS OF COMPUTER ORGANIZATION AND ARCHITECTURE (COA)

BCS 302
UNIT 1: IMPORTANT QUESTIONS AND ANSWERS

Question 1: Explain the term computer architecture and computer organization.

Answer:

Computer Architecture

Computer architecture refers to the conceptual design and fundamental operational structure of a
computer system. It focuses on how a computer system is designed to perform tasks efficiently.
It deals with the following aspects:

1) Instruction Set Architecture (ISA): The part of the architecture related to programming,
including the machine language instructions that the processor can execute.
2) System Design: Includes hardware components like memory, input/output devices, and
how they interact.
3) Performance and Optimization: How the system is designed for better performance (e.g.,
pipelining, parallelism).
4) Design Principles: High-level abstractions like data formats, addressing methods, and
instruction formats.

Computer Organization

Computer organization deals with the operational aspects and implementation of a computer
system. It focuses on the physical components and their interconnections to execute the
architecture. It is more hardware-oriented than architecture and answers questions about how
tasks are carried out.

1) Components: Deals with how hardware components like the CPU, memory, and I/O
devices are connected and managed.
2) Implementation Details: Covers data paths, control signals, and timing.
3) Microarchitecture: Details such as the design of the processor pipeline, cache subsystems,
and bus organization.
4) Assembly and Control: Low-level details that allow the hardware to execute instructions.
Question 2: List the differences between computer organization and architecture.

Answer:

Differences between computer organization and computer architecture:

Aspect | Computer Organization | Computer Architecture
Definition | Deals with the physical implementation of a computer system. | Refers to the conceptual design and operational structure of a computer system.
Focus | Focuses on hardware components and their interconnections. | Focuses on the overall design principles and functionality.
Abstraction Level | Low-level; concerned with the implementation of components. | High-level; concerned with how the system performs tasks conceptually.
Examples of Topics | ALU design, memory interface, control signals, buses, and registers. | Instruction set architecture (ISA), addressing modes, and data formats.
Objective | Focused on how the system works and performs at the hardware level. | Focused on ensuring that the system meets functional and performance requirements.
Design Perspective | Describes how the system will execute tasks. | Describes what the system should do.
Relevance | Important for engineers designing hardware and circuits. | Important for system architects and software developers.
Components | Involves detailed circuitry, memory hierarchy, and control unit design. | Involves instruction sets, data formats, and system-level design.
Dependency | Depends on the specifications provided by the architecture. | Independent of the underlying hardware implementation.

Question 3: Explain the functional units of a digital system and their interconnections.
Answer: A digital system is composed of several functional units that work together to perform
computations, process data, and control operations. These units are interconnected to facilitate
the flow of data and control signals within the system.

Functional Units of a Digital System:

1) Input Unit: Responsible for receiving data and instructions from the external environment
(e.g., keyboards, mice, scanners). Converts input data into a digital format understandable
by the system.
2) Output Unit: Provides processed data to the external environment (e.g., monitors,
printers, speakers). Converts digital data into a human-readable or usable form.
3) Memory Unit: Primary Memory: Stores data and instructions temporarily during
processing (e.g., RAM). Secondary Memory: Stores data permanently (e.g., HDD, SSD).
4) Cache Memory: High-speed memory used for frequently accessed data to enhance
performance.
5) Arithmetic and Logic Unit (ALU): Performs all arithmetic computations (e.g., addition,
subtraction) and logical operations (e.g., comparisons, AND, OR). Works as the "brain"
for mathematical and logical decision-making.
6) Control Unit (CU): Directs the flow of data between other functional units by interpreting
and executing instructions from memory. Ensures synchronization and coordination
between units.
7) Registers: Small, high-speed storage locations within the CPU that temporarily hold data,
instructions, or addresses during execution.
8) Interconnecting System (Buses): Data Bus: Transfers actual data between memory, CPU,
and I/O units. Address Bus: Carries memory addresses specifying where data should be
read or written. Control Bus: Carries control signals (e.g., read/write commands, interrupt
signals) to coordinate operations.

Interconnections of Functional Units:

The interconnection of these units allows data, instructions, and control signals to flow
seamlessly through the system. The three primary mechanisms for interconnection are:

1) Bus Systems: A bus is a common communication pathway shared by multiple


components.

Three types of buses: Data Bus: Transfers data between components. Address Bus: Specifies
memory or I/O locations. Control Bus: Transmits commands, timing signals, and control
information.
2) Direct Connections: Specific components, like the CPU and memory, may have direct
interfaces for speed and efficiency.
3) Memory-Mapped I/O: Uses a unified address space to access both memory and I/O
devices, simplifying control.

Functional Flow

1) Input Unit takes raw data and converts it into a digital form.
2) Memory Unit stores this data and the program's instructions.
3) Control Unit retrieves instructions from memory, decodes them, and sends signals to the
ALU or other units.
4) ALU performs computations and logical operations on the data.
5) Processed data is stored back in the Memory Unit or sent to the Output Unit.
6) Registers temporarily store data during processing to ensure efficient execution.

Question 4: What is a bus in a digital system? Also explain its types and architecture.

Answer: A bus in a digital system is a communication pathway or a set of parallel lines used to
transfer data, addresses, and control signals between different components, such as the CPU,
memory, and I/O devices. It facilitates efficient communication and coordination within the
system by acting as a shared medium.

A bus can be classified into three main types based on the type of data it carries:

1) Data Bus: Transfers actual data between system components (e.g., CPU, memory, and
peripherals). It is bi-directional, meaning data can flow both to and from components.
The width (number of lines) determines how much data can be transferred simultaneously
(e.g., 32-bit or 64-bit).
2) Address Bus: Carries the memory or I/O addresses of the data that the CPU wants to
read or write. It is unidirectional because addresses are always sent from the CPU to other
components. The width of the address bus determines the addressable memory space.
3) Control Bus: Carries control signals to coordinate and manage system operations (e.g.,
read/write signals, clock signals, interrupts). It is bi-directional and ensures
synchronization between components.
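
As a quick worked illustration of how these bus widths translate into capacity (a minimal Python sketch; the widths chosen are only examples):

def addressable_locations(address_lines):
    # Each additional address line doubles the number of distinct addresses.
    return 2 ** address_lines

def bytes_per_transfer(data_lines):
    # A 32-bit data bus moves 4 bytes per transfer; a 64-bit bus moves 8.
    return data_lines // 8

for width in (16, 20, 32):
    print(f"{width}-bit address bus -> {addressable_locations(width):,} locations")
# 16 -> 65,536; 20 -> 1,048,576 (1 MiB); 32 -> 4,294,967,296 (4 GiB)
print(bytes_per_transfer(64))  # -> 8 bytes per transfer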

Bus architecture refers to how buses are structured and how they connect components within a
digital system. The most common bus architectures include:

Single-Bus Architecture: All components share a single communication bus for data, addresses,
and control signals.

Advantages: Simple and cost-effective. Requires fewer physical lines.

Disadvantages: Performance bottleneck due to shared communication. Limited scalability.


Multiple-Bus Architecture: Uses multiple buses to connect components, such as separate buses
for memory, I/O, and CPU communication.

Advantages: Improves performance by reducing contention on a single bus. Supports parallel


data transfers.

Disadvantages: More complex and expensive. Requires more hardware and synchronization
mechanisms.

Hierarchical Bus Architecture: A hybrid structure that includes multiple levels of buses, such
as high-speed buses for CPU-memory communication and slower buses for I/O devices.

Advantages: Optimized performance and scalability. Allows different subsystems to operate


independently.

Disadvantages: Higher complexity. Requires bridge devices to interconnect buses.

Buses follow specific communication protocols to manage data transfer and ensure proper
operation. Two common protocols include:

1) Synchronous Bus Protocol: Data transfer occurs in synchronization with a clock signal.
Advantages: High-speed operation due to precise timing. Disadvantages: Limited
flexibility; all components must operate at the same clock rate.
2) Asynchronous Bus Protocol: Data transfer occurs without a clock signal, using handshake
signals for coordination. Advantages: Flexible; components can operate at different
speeds. Disadvantages: Slower than synchronous communication due to handshake
overhead.

Question 5: Discuss bus arbitration.

Answer: Bus arbitration is a mechanism used in computer systems to manage access to a shared
communication bus among multiple devices or processors. Since only one device can
communicate on the bus at a time, an arbitration process is essential to avoid conflicts and ensure
orderly access.

Key Components of Bus Arbitration

1) Bus Arbiter: A dedicated hardware or logic circuit that controls access to the bus.
2) Request Lines: Lines used by devices to request access to the bus.
3) Grant Lines: Lines used by the arbiter to grant access to a specific device.
4) Priority Mechanism: Determines which device gets access when multiple devices request
simultaneously.

Types of Bus Arbitration Schemes


1. Centralized Arbitration: A single bus arbiter controls access to the bus.

Common methods:

Daisy-Chaining: Devices are connected in series. The arbiter grants the bus to the highest-
priority device in the chain.

Pros: Simple and cost-effective.

Cons: High-priority devices can dominate; longer delay for lower-priority devices.

Polling: The arbiter sequentially checks devices to see if they need the bus.

Pros: Fair; avoids bus monopolization.

Cons: Slower due to sequential checks.

Fixed Priority Arbitration: Devices are assigned fixed priorities.

Pros: Quick decision-making.

Cons: Risk of starvation for low-priority devices.

2. Distributed Arbitration: No single arbiter; all devices participate in deciding who gets
access.

Common methods:

Self-Selection: Devices decide their priority based on predefined criteria.

Collision Detection: Devices transmit simultaneously; conflicts are detected, and a resolution
process follows.

Example: Ethernet's CSMA/CD protocol.

3. Dynamic Arbitration: Priorities are not fixed and may change dynamically based on system
conditions, like workload or time elapsed.

Pros: Adaptive and fair.

Cons: Higher complexity in implementation.

Question 6: Explain the daisy-chaining method. Write its advantages and disadvantages.

Answer:

The daisy-chaining method is a centralized arbitration technique used to control access to a


shared bus among multiple devices. In this method, devices are connected in a linear series,
forming a "chain." The bus arbiter grants access by sending a grant signal that propagates
through the chain. Each device checks the signal and decides whether to take control of the bus
or pass the grant to the next device in line.

How Daisy-Chaining Works

Requesting Access: A device that needs access to the bus sends a request signal to the arbiter.

Grant Signal Propagation: The arbiter sends a grant signal to the first device in the chain.

Each device in the chain: Checks if it has requested the bus. If yes, it captures the grant signal
and takes control of the bus. If no, it passes the grant signal to the next device in the chain.

Bus Access: The granted device uses the bus for its operation. Once finished, the device releases
the bus, and the arbiter sends a new grant signal if other requests are pending.
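
The grant propagation described above can be sketched in a few lines of Python (an illustration only; devices are listed in priority order, nearest the arbiter first):

def daisy_chain_grant(requests):
    # requests: list of booleans; index 0 is the device nearest the arbiter.
    # The grant signal travels down the chain and is captured by the first
    # device with a pending request; devices after it never see the grant.
    for position, requesting in enumerate(requests):
        if requesting:
            return position  # this device takes control of the bus
    return None  # no device requested the bus

# Devices 1 and 3 request; device 1 wins because it is closer to the arbiter.
print(daisy_chain_grant([False, True, False, True]))  # -> 1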

Advantages of Daisy-Chaining

Simplicity: Easy to implement with minimal hardware complexity.

Cost-Effective: Requires fewer control lines compared to more complex arbitration schemes.

Deterministic: The sequence of priority is predefined, making the system predictable.

Efficient for Small Systems: Works well in systems with a small number of devices and low
contention.

Disadvantages of Daisy-Chaining

Priority Bias: Devices closer to the arbiter have higher priority, potentially leading to starvation
of devices farther down the chain.

Scalability Issues: As the number of devices increases, the time for the grant signal to propagate
grows, increasing latency.

Single Point of Failure: If a device in the chain fails, it can disrupt the entire arbitration process.

Limited Fairness: The fixed priority based on chain position does not adapt to dynamic
conditions or system workload.

Applications

Daisy-chaining is commonly used in: Small-scale systems with low contention. Peripheral
arbitration for simple microcontroller setups. Older bus protocols or systems where cost and
simplicity are priorities. While effective in straightforward scenarios, daisy-chaining's limitations
make it less suitable for modern systems requiring fairness, high throughput, and scalability.

Question 7: What is memory transfer? What are the different registers associated with memory
transfer?

Answer: Memory transfer refers to the process of transferring data between the memory unit and
other parts of a computer system, such as the CPU or I/O devices. This operation is fundamental
to a computer's functionality, as it enables fetching instructions, reading data from memory, and
writing data back to memory.

Types of Memory Transfer

Read Operation: Data is transferred from memory to a processor or device. Example: Fetching
instructions or data for execution.

Write Operation: Data is transferred from a processor or device to memory. Example: Storing
results of computations.

Direct Memory Access (DMA): High-speed data transfer between memory and peripherals
without CPU involvement.

Registers Associated with Memory Transfer: Several special-purpose registers facilitate memory
transfer. These include:

1. Memory Address Register (MAR)

Function: Holds the address of the memory location to be accessed. Specifies where data should
be read from or written to.

Role in Transfer: During a read/write operation, the CPU places the address of the target memory
location in the MAR.

2. Memory Data Register (MDR)

Function: Temporarily holds data being transferred to or from memory. Also called the Memory
Buffer Register (MBR).

Role in Transfer: During a read operation, the data fetched from memory is stored in the MDR
before being sent to the CPU.

During a write operation, the MDR holds the data to be written into memory.

3. Program Counter (PC)


Function: Holds the address of the next instruction to be executed.

Role in Transfer: Used to fetch instructions from memory during program execution.

4. Instruction Register (IR)

Function: Holds the instruction fetched from memory for decoding and execution.

Role in Transfer: After the instruction is fetched from memory, it is stored in the IR.

5. Stack Pointer (SP)

Function: Points to the top of the stack in memory.

Role in Transfer: Used during memory transfers involving stack operations like push and pop.

6. Index Register (IR or IX)

Function: Used for address modification in indexed addressing modes.

Role in Transfer: Holds an index value added to a base address for memory access.

7. Base Register

Function: Holds the base address for a memory block.

Role in Transfer: Used in relative or segmented addressing to calculate the actual memory
address.

Memory Transfer Process

Memory Read: The CPU places the address in the MAR. A read signal is sent to memory. Data
is fetched from the memory location and placed into the MDR. The CPU processes the data from
the MDR.

Memory Write: The CPU places the address in the MAR and the data in the MDR. A write
signal is sent to memory. The data in the MDR is written to the specified memory location.
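
The read and write sequences above can be modeled with a short Python sketch (illustrative only; the class and method names are invented for this example):

class MemoryInterface:
    def __init__(self, size=16):
        self.memory = [0] * size
        self.mar = 0  # Memory Address Register: where to access
        self.mdr = 0  # Memory Data Register: data in transit

    def read(self, address):
        self.mar = address                 # CPU places the address in MAR
        self.mdr = self.memory[self.mar]   # read signal: memory -> MDR
        return self.mdr                    # CPU consumes the data from MDR

    def write(self, address, data):
        self.mar = address                 # address goes into MAR
        self.mdr = data                    # data goes into MDR
        self.memory[self.mar] = self.mdr   # write signal: MDR -> memory

m = MemoryInterface()
m.write(4, 99)
print(m.read(4))  # -> 99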

Question 8: Explain the operation of three-state bus buffers and show their use in the design of a
common bus.

Answer: A three-state bus buffer is a logic circuit used to control data flow on a shared
communication bus. The term "three-state" refers to the three possible states of the buffer's
output:

Logic HIGH (1): The buffer outputs a high logic level.

Logic LOW (0): The buffer outputs a low logic level.


High Impedance (Z): The buffer disconnects its output, effectively isolating itself from the bus.

The high-impedance state is crucial for allowing multiple devices to share a common bus without
interference, as only one device can drive the bus at a time.

Operation of a Three-State Bus Buffer

A three-state bus buffer typically has:

Input (D): The data to be passed to the output.

Output (Q): The output connected to the bus.

Enable Control (E): A control signal that determines whether the buffer is active or in high-
impedance state.

Enable (E) | Input (D) | Output (Q)
0 | X | Z (high impedance)
1 | 0 | 0
1 | 1 | 1

When E = 0, the buffer output is in high-impedance state (Z), disconnecting from the bus.

When E = 1, the buffer acts as a normal buffer, passing the input (D) to the output (Q).
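
The truth table can be expressed as a behavioral Python model (a sketch; the string "Z" stands in for the high-impedance state):

def tri_state_buffer(enable, data):
    # A disabled buffer floats (high impedance, "Z"); enabled, it passes D.
    return data if enable else "Z"

def bus_line(drivers):
    # drivers: list of (enable, data) pairs sharing one bus line.
    # The arbiter must guarantee that at most one enable is active.
    active = [d for e, d in drivers if e]
    assert len(active) <= 1, "bus contention: two buffers driving at once"
    return active[0] if active else "Z"

# Only the second device is enabled, so it alone drives the bus.
print(bus_line([(False, 1), (True, 0), (False, 1)]))  # -> 0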

Use of Three-State Buffers in a Common Bus Design

A common bus allows multiple devices (e.g., processors, memory units, or I/O devices) to share
a single communication path. Three-state buffers are used to control which device can drive the
bus at any given time.

Basic Design of a Common Bus Using Three-State Buffers

Bus Lines: Data lines (to carry data between devices). Control lines (to manage read/write
operations and enable signals). Address lines (to specify memory or device addresses).

Three-State Buffers: Each device is connected to the bus via a three-state buffer. The enable
signal for each buffer is controlled by a bus arbiter or control logic.

Bus Arbiter: Ensures that only one device drives the bus at any time. Activates the enable signal
for the appropriate buffer.

Advantages of Using Three-State Buffers

Prevents Bus Contention: Only one device drives the bus at a time, avoiding conflicts.
Scalability: Easily adds more devices by connecting them through three-state buffers.

Efficient Bus Sharing: Devices can dynamically connect or disconnect from the bus as needed.

Question 9: Explain general-purpose register-based organization.

Answer: A general-purpose register (GPR) based organization is a type of computer


architecture where the central processing unit (CPU) has a set of registers that can be used to
store data, addresses, or intermediate results during the execution of instructions. These registers
are versatile and serve as the primary storage locations for quick access by the processor.

Key Components and Features

1) General-Purpose Registers (GPRs):Registers are small, fast storage units within the CPU.
In a GPR-based system, registers are not specialized and can hold: Data values (operands
for arithmetic/logic operations). Memory addresses (for load/store instructions).
Temporary results of computations.
2) Instruction Format: Instructions in this organization typically specify registers as
operands. For example: ADD R1, R2, R3
This instruction adds the values in R2 and R3 and stores the result in R1.

3) Load-Store Architecture: Memory access is minimized by using registers for


computations. Data is first loaded into registers, processed, and stored back in memory
only when needed.
4) Efficient Execution: Operations performed on registers are faster because accessing
registers is much quicker than accessing main memory.

Working Example

Load Values into Registers:

LOAD R1, [1000] ; Load the value from memory address 1000 into register R1

LOAD R2, [1004] ; Load the value from memory address 1004 into register R2

Perform Operations:

ADD R3, R1, R2 ; Add the values in R1 and R2, store the result in R3

Store Result Back:

STORE [1008], R3 ; Store the result in R3 to memory address 1008

Advantages

Speed: Registers are faster than memory, making computations quicker.


Efficiency: Reduces memory access, which is often a bottleneck in computation.

Flexibility: Registers can hold any type of data, offering greater flexibility compared to
architectures with specialized registers.

Compact Instructions: Register-based instructions require fewer bits to encode, leading to


smaller instruction sizes.

Disadvantages

Limited Register Count: Hardware constraints limit the number of registers, which can restrict
performance for programs requiring many variables.

Programming Complexity: Efficient use of registers often requires skilled programmers or


advanced compiler optimization.

Question 10: What is a stack? Give the organization of a register stack with all necessary
elements and explain the working of the push and pop operations.

Answer: A stack is a special kind of data structure used in computer systems where data is stored
and accessed in a last-in, first-out (LIFO) order. This means that the most recently added data is
the first one to be removed. In many computer architectures, a stack is used for temporary
storage, especially for managing function calls, local variables, and return addresses. In the
context of register-based organization, a register stack is a stack that is implemented using
registers in the CPU. These registers act as the storage locations for the stack, and special
operations like push and pop allow data to be added to or removed from the stack.

Elements of a Register Stack

Stack Pointer (SP): The stack pointer is a special register that points to the current top of the
stack. It keeps track of the memory location where the last data item was pushed or popped. The
SP is automatically updated with each push or pop operation.

Stack Registers: These are a set of general-purpose or specialized registers used to store data
pushed onto the stack. The number of stack registers varies based on the architecture but is
typically small (e.g., 8, 16 registers).

Base Pointer (BP) (optional): In some architectures, a base pointer is used alongside the stack
pointer to manage the stack frame, particularly for function calls. The base pointer points to the
start of the current function’s stack frame.

Memory Address: The stack is often implemented in memory (in some architectures), with the
stack pointer and registers being used to manage this memory space.

Organization of the Register Stack


Consider a simple register stack with the following components:

SP (Stack Pointer): Points to the top of the stack. Stack Registers: Registers that hold the stack's
values. For example, assume the following register configuration:

Stack Registers: R0, R1, R2, R3, ..., Rn

Stack Pointer: SP

Base Pointer: BP (optional)

Push Operation: Push refers to the operation of adding data to the stack.

Step 1: The stack pointer (SP) is decremented to point to the next empty location on the stack.

Step 2: The data (e.g., a register value or a value to be stored) is copied into the location pointed
to by the stack pointer (SP), which now marks the new top of the stack.

Pop Operation: Pop removes the most recently pushed item from the stack.

Step 1: The data at the location pointed to by SP (the current top of the stack) is copied into the
destination register.

Step 2: The stack pointer is incremented so that it points to the new top of the stack.
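
A minimal Python sketch of such a register stack (the register count and names are illustrative):

class RegisterStack:
    def __init__(self, size=8):
        self.regs = [0] * size  # stack registers R0..R(size-1)
        self.sp = size          # SP starts just past the last register

    def push(self, value):
        if self.sp == 0:
            raise OverflowError("stack overflow")
        self.sp -= 1                 # Step 1: decrement SP to the empty slot
        self.regs[self.sp] = value   # Step 2: copy the data to the new top

    def pop(self):
        if self.sp == len(self.regs):
            raise IndexError("stack underflow")
        value = self.regs[self.sp]   # Step 1: read the current top
        self.sp += 1                 # Step 2: SP now points at the new top
        return value

s = RegisterStack()
s.push(10); s.push(20)
print(s.pop(), s.pop())  # -> 20 10 (last in, first out)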

Question 11: Differentiate between a Register Stack and a Memory Stack.

Answer:

Feature | Register Stack | Memory Stack
Storage Location | Stored in CPU registers | Stored in system memory (RAM)
Access Speed | Faster (registers are part of the CPU) | Slower (memory access involves overhead)
Capacity | Limited (by the number of registers) | Larger (limited by available memory)
Flexibility | Less flexible (fixed by the number of registers) | More flexible (size grows with available memory)
Complexity | Simpler to implement | More complex (involves memory management)
Use Cases | Embedded systems, low-level operations, quick data storage | General-purpose computers, function calls, recursion
Example Architectures | ARM, MIPS | x86, Intel/AMD

Question 12: Explain an accumulator-based central processing unit organization with a block
diagram.

Answer:

Accumulator-Based Central Processing Unit (CPU) Organization

An accumulator-based CPU organization is one where the accumulator (A) is the central register
used for performing arithmetic and logic operations. This means that most operations in the CPU
use the accumulator register as one of the operands and typically store the result in the
accumulator itself. The accumulator simplifies the design and operation of the CPU by reducing
the number of registers needed for calculations.

Key Components of an Accumulator-Based CPU:

Accumulator (A): A special register that is used for arithmetic and logical operations. Most
operations (such as ADD, SUBTRACT, etc.) use the accumulator as one of the operands, and the
result is stored back into the accumulator.

Arithmetic and Logic Unit (ALU): The ALU performs all arithmetic (e.g., addition, subtraction)
and logical (e.g., AND, OR) operations. The ALU typically operates with the accumulator,
where it reads one operand from the accumulator, and the second operand comes from either
another register or memory.

Program Counter (PC): Holds the address of the next instruction to be executed in memory. It is
automatically incremented after fetching each instruction.

Instruction Register (IR): Stores the current instruction that has been fetched from memory and is
being executed. The instruction is decoded, and the necessary operations are performed based on
the instruction type.

Memory: This holds both the program instructions and data. Memory can be accessed by the
CPU for reading and writing operations.

Control Unit (CU): The control unit manages and directs the operations of the CPU by
interpreting the instructions in the instruction register and issuing control signals to other
components like the ALU, registers, and memory.
Registers: Aside from the accumulator, other registers may be present, but in accumulator-based
systems, these are fewer in number, with the accumulator taking the primary role for data
processing.

Bus: A set of lines used to transfer data between various components, such as between the
accumulator, memory, and ALU.

Working of the Accumulator-Based CPU:

Here’s how the components interact during the execution of a typical instruction:

Fetch: The Control Unit (CU) retrieves the next instruction to be executed by reading it from
memory. The Program Counter (PC) holds the memory address of the next instruction.

The instruction is then transferred to the Instruction Register (IR).

Decode: The Control Unit (CU) decodes the instruction in the IR. The type of operation (e.g.,
addition, subtraction) and the source operand locations are identified. If an operand needs to be
fetched from memory, the memory address is sent to the Memory unit. Otherwise, the operand
may be stored in the accumulator or another register.

Execute: The ALU performs the operation using the Accumulator and any other required
operands (which might come from memory or a register). The result of the operation is stored
back into the Accumulator.

Update: After executing the instruction, the Program Counter (PC) is updated to point to the next
instruction in memory.

Repeat: The process continues with the CPU fetching the next instruction, decoding it, executing
the operation, and updating the program counter.
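
A toy accumulator machine in Python makes this fetch-decode-execute loop concrete (the four-instruction program and the memory layout are invented for illustration):

# Each instruction is (opcode, operand); operands are memory addresses.
memory = {0: ("LOAD", 100), 1: ("ADD", 101), 2: ("STORE", 102),
          3: ("HALT", None), 100: 7, 101: 5, 102: 0}

acc, pc = 0, 0  # accumulator and program counter
while True:
    opcode, operand = memory[pc]  # fetch (via PC), then decode (as if in IR)
    pc += 1                       # PC moves to the next instruction
    if opcode == "LOAD":
        acc = memory[operand]     # memory -> accumulator
    elif opcode == "ADD":
        acc += memory[operand]    # ALU result returns to the accumulator
    elif opcode == "STORE":
        memory[operand] = acc     # accumulator -> memory
    else:                         # HALT
        break

print(memory[102])  # -> 12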

UNIT 2: IMPORTANT QUESTIONS AND ANSWERS

Arithmetic and Logic Unit -02


1. Convert A*B+C*D+E*F from infix to postfix

INFIX OPERATION POSTFIX

A ( A

* (* A

B (* AB

+ (+ AB*

C (+ AB*C

* (+* AB*C

D (+* AB*CD

+ (++ AB*CD*

E (++ AB*CD*E

* (++* AB*CD*E

F (++* AB*CD*EF

) () AB*CD*EF*++
2. Convert A*[B+C*(D+E)]/F*(G+H) from infix to postfix

INFIX OPERATION POSTFIX

A ( A

* (* A

[ (*[ A

B (*[ AB

+ (*[+ AB

C (*[+ ABC

* (*[+* ABC

( (*[+*( ABC

D (*[+*( ABCD

+ (*[+*(+ ABCD

E (*[+*(+ ABCDE

) (*[+* ABCDE+

] (* ABCDE+*+

/ (/ ABCDE+*+*

F (/ ABCDE+*+*F

* (* ABCDE+*+*F/

( (*( ABCDE+*+*F/

G (*( ABCDE+*+*F/G

+ (*(+ ABCDE+*+*F/G

H (*(+ ABCDE+*+*F/GH

) (* ABCDE+*+*F/GH+

) () ABCDE+*+*F/GH+*

(Since * and / have equal precedence and are left-associative, the / is popped from the stack
before the final * is pushed, so F is divided before the multiplication by (G+H).)
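
Both traces follow the standard stack-based (shunting-yard) conversion. A compact Python version, treating the square brackets like parentheses (note that + is associative, so the table's answer AB*CD*EF*++ for example 1 groups the additions differently but evaluates to the same value):

def infix_to_postfix(expr):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, stack = [], []
    for t in expr:
        if t in '([':
            stack.append(t)
        elif t in ')]':
            while stack[-1] not in '([':   # pop until the matching opener
                out.append(stack.pop())
            stack.pop()                    # discard the opener itself
        elif t in prec:
            # left-associative: pop operators of equal or higher precedence
            while stack and stack[-1] in prec and prec[stack[-1]] >= prec[t]:
                out.append(stack.pop())
            stack.append(t)
        else:
            out.append(t)                  # operand
    while stack:
        out.append(stack.pop())
    return ''.join(out)

print(infix_to_postfix("A*B+C*D+E*F"))            # -> AB*CD*+EF*+
print(infix_to_postfix("A*[B+C*(D+E)]/F*(G+H)"))  # -> ABCDE+*+*F/GH+*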

Q3. Represent the following decimal numbers in IEEE standard floating-point format using the
single-precision (32-bit) representation.

3a) (65.175)10

BINARY: (1000001.00101100...)2 (0.175 is a repeating fraction in binary; it is truncated here)

NORMALISATION: 1.00000100101100 × 2^6

e=6 bias = 127

E = e + bias

= 6 + 127

= (133)10

= (10000101)2

MANTISSA = 00000100101100

0 10000101 00000100101100000000000
(sign bit) E (8 bit) M (23 bit)

3b) (-307.1875)10

BINARY:(100110011.00110)2

NORMALISATION: 1.0011001100110 X 28

e=8 bias = 127

E = e + bias
= 8 + 127

= (135)10

= (10000111)2

MANTISSA = 0011001100110

1 10000111 00110011001100000000000
(sign bit) E (8 bit) M (23 bit)
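
These bit patterns can be sanity-checked in Python with the standard struct module. Note that 65.175 has no exact binary representation, so the hardware-rounded mantissa can differ in the last bits from the truncated one above; -307.1875 is exact:

import struct

def float32_bits(x):
    # Pack as big-endian IEEE-754 single precision, then show the raw bits.
    (n,) = struct.unpack('>I', struct.pack('>f', x))
    s = f"{n:032b}"
    return f"{s[0]} {s[1:9]} {s[9:]}"  # sign | exponent | mantissa

print(float32_bits(-307.1875))  # 1 10000111 00110011001100000000000 (exact)
print(float32_bits(65.175))     # rounded in the low mantissa bits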

Q4: Show the contents of the registers A, Q, and Q₋₁ during the process of multiplication of
two binary numbers 1111 (multiplicand) and 10101 (multiplier). The signs are not
included.

Solution:

We solve this using Booth’s Algorithm for binary multiplication. The steps are:

1. Initialization:
o Set A = 0 (Accumulator), Q = 10101 (Multiplier), Q₋₁ = 0, and M = 1111
(Multiplicand).
o The counter is initialized to the bit length of the multiplier (5 in this case).
2. Steps:
o Check the condition of Q₀ (Least significant bit of Q) and Q₋₁.
o Perform addition, subtraction, or no operation based on the condition:
 Q₀ = 1 and Q₋₁ = 0: Subtract M from A.
 Q₀ = 0 and Q₋₁ = 1: Add M to A.
 Otherwise, no operation is performed.
o Perform an arithmetic right shift on A, Q, and Q₋₁.
o Decrement the counter by 1.
o Repeat until the counter reaches 0.
3. Result: After all iterations, the product is stored in the combined registers A and Q.

Q4. Draw the flowchart of Booth's algorithm for multiplication of signed numbers
in 2's complement form.

Solution:
The flowchart includes the following steps:

1. Start:
o Initialize the registers: A = 0, Q = multiplier, Q₋₁ = 0, and M = multiplicand.
o Set the counter to the bit size of the numbers.
2. Check Booth's condition:
o If Q₀ = 1 and Q₋₁ = 0: Subtract M from A.
o If Q₀ = 0 and Q₋₁ = 1: Add M to A.
3. Arithmetic Shift:
o Perform a right arithmetic shift on A, Q, and Q₋₁.
o Decrement the counter by 1.
4. Repeat:
o Go back to check Booth's condition until the counter reaches 0.
5. Output:
o The result is stored in A and Q.

Q5. Show step by step the multiplication process of two 2’s complement numbers (-13 and
7) using Booth’s algorithm.

Solution:

1. Represent the numbers in 5-bit 2’s complement:


o -13 → 10011
o 7 → 00111
2. Initial Setup:
o A = 00000, Q = 00111, Q₋₁ = 0, and M = 10011.
3. Steps:
o For each iteration, follow Booth's rules:
 Check Q₀ and Q₋₁.
 Perform addition or subtraction between A and M if necessary.
 Perform an arithmetic shift.
o Document the values of A, Q, and Q₋₁ after each step.
4. Result:
o After 5 iterations, the final product is the combination of A and Q.
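
A register-level Booth implementation in Python can check this example (a sketch; the bit width n is a parameter, and the signed result is read out of the combined A,Q pair):

def booth_multiply(m, q, n):
    """Multiply two n-bit 2's complement integers using Booth's algorithm."""
    mask = (1 << n) - 1
    A, Q, q_1, M = 0, q & mask, 0, m & mask
    for _ in range(n):
        q0 = Q & 1
        if q0 == 1 and q_1 == 0:
            A = (A - M) & mask            # Q0=1, Q-1=0: A = A - M
        elif q0 == 0 and q_1 == 1:
            A = (A + M) & mask            # Q0=0, Q-1=1: A = A + M
        # arithmetic right shift of the combined A, Q, Q-1
        q_1 = q0
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (A & (1 << (n - 1)))  # replicate the sign bit
    product = (A << n) | Q
    if product & (1 << (2 * n - 1)):      # interpret the 2n bits as signed
        product -= 1 << (2 * n)
    return product

print(booth_multiply(-13, 7, 5))   # -> -91
print(booth_multiply(13, -15, 5))  # -> -195 (used again in Q10 below)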

Q6. Draw the data path of sequential 16-bit binary multiplier.

Solution:

A sequential 16-bit binary multiplier data path includes:

1. Registers:
o A (Accumulator) to store intermediate results.
o Q (Multiplier) for the multiplier value.
o M (Multiplicand) for the multiplicand value.
2. Arithmetic Unit:
o Performs addition and subtraction.
3. Shift Register:
o Handles the arithmetic right shift for A and Q.
4. Counter:
o Tracks the number of iterations.

Q7. Draw a flowchart for addition and subtraction of signed binary numbers using 1's
complement and 2's complement representation.

Solution:

1. 1’s Complement Addition:


o Convert the negative numbers to 1’s complement.
o Add the two binary numbers.
o If there is a carry, add it back to the least significant bit (end-around carry).
o Output the result.
2. 2’s Complement Addition:
o Convert the negative numbers to 2’s complement.
o Add the two binary numbers directly.
o Ignore any carry generated during the addition.
o Output the result.

Q8. Perform addition and subtraction of two fixed-point binary numbers where negative
numbers are signed in 1’s complement presentation.

Example:

1. Fixed-point numbers: 1100.110 (−3.25) and 0101.101 (+5.625).


2. Addition:
o Convert −3.25 to 1’s complement: 0011.010.
o Add the two numbers.
o If there is an end-around carry, add it back to the least significant bit.
o Normalize the result.
3. Subtraction:
o Convert subtraction into addition by taking the 1’s complement of the subtrahend.
o Add the numbers and handle any carry.

Q9. Evaluate the block diagram of floating-point addition and subtraction operations.
Explain in detail with control timing and diagrams.

Solution:

1. Components:
o Alignment Unit: Aligns the exponents of the two floating-point numbers by
shifting the smaller number’s mantissa.
o Arithmetic Unit: Performs addition or subtraction on the aligned mantissas.
o Normalization Unit: Ensures the result is in normalized form by adjusting the
mantissa and exponent.
o Control Unit: Manages the timing and sequence of operations.
2. Stages:
o Stage 1: Compare exponents and shift the smaller number’s mantissa.
o Stage 2: Add or subtract the mantissas.
o Stage 3: Normalize the result.
o Stage 4: Output the final floating-point result.
Q10. Show the multiplication process using Booth's algorithm when the binary numbers
(+13) × (−15) are multiplied.

Answer:

Booth's Algorithm Table

With M = 01101 (+13), Q = 10001 (−15 in 5-bit 2's complement), A = 00000, Q₋₁ = 0, and n = 5.
Note that Booth's algorithm uses an arithmetic shift right of A, Q, Q₋₁ after each step.

Step | A (Accumulator) | Q (Multiplier) | Q₋₁ | Action
0 | 00000 | 10001 | 0 | Initial values
1 | 10011 | 10001 | 0 | Q₀ = 1, Q₋₁ = 0: A = A − M
  | 11001 | 11000 | 1 | Arithmetic shift right
2 | 00110 | 11000 | 1 | Q₀ = 0, Q₋₁ = 1: A = A + M
  | 00011 | 01100 | 0 | Arithmetic shift right
3 | 00001 | 10110 | 0 | Q₀ = 0, Q₋₁ = 0: shift only
4 | 00000 | 11011 | 0 | Q₀ = 0, Q₋₁ = 0: shift only
5 | 10011 | 11011 | 0 | Q₀ = 1, Q₋₁ = 0: A = A − M
  | 11001 | 11101 | 1 | Arithmetic shift right

The final answer of Booth's algorithm for multiplying +13 and −15 is the concatenation
AQ = 1100111101.

This is the 10-bit 2's complement representation of +13 × −15 = −195 in decimal.
Q11. Perform the restoring division algorithm with Dividend = 11, Divisor = 3.

Dividend = 11 → 1011 (binary)

Divisor = 3 → 0011 (binary)

n | M | A | Q | Operation
4 | 00011 | 00000 | 1011 | Initialize
  | 00011 | 00001 | 011_ | Shift left AQ
  | 00011 | 11110 | 011_ | A = A − M
  | 00011 | 00001 | 0110 | A < 0: set Q[0] = 0 and restore A
3 | 00011 | 00010 | 110_ | Shift left AQ
  | 00011 | 11111 | 110_ | A = A − M
  | 00011 | 00010 | 1100 | A < 0: set Q[0] = 0 and restore A
2 | 00011 | 00101 | 100_ | Shift left AQ
  | 00011 | 00010 | 100_ | A = A − M
  | 00011 | 00010 | 1001 | A ≥ 0: set Q[0] = 1
1 | 00011 | 00101 | 001_ | Shift left AQ
  | 00011 | 00010 | 001_ | A = A − M
  | 00011 | 00010 | 0011 | A ≥ 0: set Q[0] = 1

Result: Quotient Q = 0011 (3) and Remainder A = 00010 (2), since 11 = 3 × 3 + 2.
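
The same register-level steps in Python (a sketch; n is the number of quotient bits):

def restoring_divide(dividend, divisor, n):
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        A = (A << 1) | ((Q >> (n - 1)) & 1)  # shift left the A,Q pair
        Q = (Q << 1) & ((1 << n) - 1)
        A -= M                               # trial subtraction
        if A < 0:
            A += M                           # negative: restore, Q[0] stays 0
        else:
            Q |= 1                           # non-negative: Q[0] = 1
    return Q, A                              # quotient, remainder

print(restoring_divide(11, 3, 4))  # -> (3, 2)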


Q12. Analyze the number 1460.125 in the IEEE floating-point system and represent it in
single-precision and double-precision formats.

Step 1: Convert the number to binary

We'll first convert 1460.125 (a decimal number) to its binary equivalent.

 Integer Part (1460):


o Divide 1460 by 2 repeatedly and record the remainders:
 1460 ÷ 2 = 730 remainder 0
 730 ÷ 2 = 365 remainder 0
 365 ÷ 2 = 182 remainder 1
 182 ÷ 2 = 91 remainder 0
 91 ÷ 2 = 45 remainder 1
 45 ÷ 2 = 22 remainder 1
 22 ÷ 2 = 11 remainder 0
 11 ÷ 2 = 5 remainder 1
 5 ÷ 2 = 2 remainder 1
 2 ÷ 2 = 1 remainder 0
 1 ÷ 2 = 0 remainder 1
o So, 1460 in binary is: 10110110100
 Fractional Part (0.125):
o Multiply 0.125 by 2:
 0.125 × 2 = 0.25 → integer part = 0
 0.25 × 2 = 0.5 → integer part = 0
 0.5 × 2 = 1.0 → integer part = 1
o So, 0.125 in binary is: 0.001
 Combining both parts, the number 1460.125 in binary is:
10110110100.001
Step 2: Normalize the number

Next, we normalize the binary number so that there is one non-zero digit to the left of the
binary point.

- 1460.125 becomes 1.0110110100001 × 2¹⁰ (the binary point moves 10 places, since
10110110100 has 11 bits)

Step 3: Represent in IEEE 754 formats

Single Precision (32 bits)

- Sign bit (S): Since 1460.125 is positive, the sign bit is 0.
- Exponent (E): The exponent is 10 (from the normalized form). IEEE 754 single precision
uses a bias of 127, so the stored exponent is 10 + 127 = 137, which in binary is 10001001.
- Mantissa (M): The normalized binary fraction is 0110110100001. In IEEE 754, we discard
the leading 1 (it's implicit), so the mantissa is:
01101101000010000000000 (padded to 23 bits).
- Thus, the single precision representation of 1460.125 is:
S | Exponent | Mantissa
0 | 10001001 | 01101101000010000000000

Double Precision (64 bits)

- Sign bit (S): Still 0 (since the number is positive).
- Exponent (E): The exponent is still 10. IEEE 754 double precision uses a bias of 1023, so
the stored exponent is 10 + 1023 = 1033, which in binary is 10000001001.
- Mantissa (M): The normalized binary fraction is the same as before: 0110110100001.
Again, we discard the leading 1, so the mantissa is:
0110110100001000000000000000000000000000000000000000 (padded to 52 bits).
- Thus, the double precision representation of 1460.125 is:
S | Exponent | Mantissa
0 | 10000001001 | 0110110100001000000000000000000000000000000000000000

Final Answer:

- Single Precision (32 bits):
0 10001001 01101101000010000000000
- Double Precision (64 bits):
0 10000001001 0110110100001000000000000000000000000000000000000000
Q14. Draw the data path of a sequential n-bit binary divider. Give the non-restoring division
algorithm for unsigned integers, and illustrate the algorithm for unsigned integers with a
suitable example.

Ans:-Data Path of Sequential N-bit Binary Divider


The data path for a sequential n-bit binary divider consists of several components that work
together to perform the division operation. These components include registers, arithmetic
units, and control logic. Here's the high-level view of how a typical binary divider might work:

Components of the Data Path:

1. Dividend Register (A):


o Holds the dividend.
2. Divisor Register (B):
o Holds the divisor.
3. Quotient Register (Q):
o Stores the quotient during the division process.
4. Remainder Register (R):
o Holds the remainder after each subtraction.
5. Arithmetic Unit (AU):
o Performs subtraction and shift operations. Specifically, the subtraction compares
the current remainder with the divisor and produces the new remainder.
6. Control Logic:
o Directs the sequential steps, ensuring the right operations are performed at each
clock cycle. It controls the shifting and subtraction operations.
7. Shift Register (S):
o The shift register is used to shift the dividend and quotient.
Sequential Division Operation:

In a sequential division, the algorithm proceeds step by step, performing a combination of


shifting and subtraction. It works in iterations until the quotient is obtained. The data path
needs to handle the shifting of the divisor and the quotient, as well as the subtraction of the
remainder.

Non-Restoring Division Algorithm for Unsigned Integers

The non-restoring division algorithm is an efficient method for performing unsigned division. It
involves a combination of subtraction and addition, depending on the comparison between the
partial remainder and the divisor. The steps of the non-restoring division algorithm are as
follows:

Algorithm (Non-Restoring Division):

1. Initialize the registers:
   o A = 0 (partial remainder)
   o Q = dividend
   o M = divisor
2. Set the number of bits:
   o n = number of bits in the dividend.
3. Repeat n times:
   o Shift A and Q left by 1 bit (the MSB of Q moves into the LSB of A).
   o If A ≥ 0, subtract the divisor: A = A − M. If A < 0, add the divisor instead: A = A + M.
   o Set the new quotient bit: Q[0] = 1 if the resulting A ≥ 0, otherwise Q[0] = 0.
4. Final correction: if A < 0 after the last iteration, add M back once (A = A + M) to obtain
   the true remainder.

Unlike restoring division, a negative partial remainder is not restored immediately; the
algorithm compensates by adding (rather than subtracting) the divisor in the next iteration.

Illustration of Non-Restoring Division for an Example

Let's consider an example where we divide 13 (1101 in binary) by 4 (100 in binary).

Initial Setup:

- Dividend: 1101 (13 in decimal)
- Divisor: M = 00100 (4 in decimal)
- Quotient: Q = 1101 (the dividend is loaded into Q)
- Partial remainder: A = 00000
- Number of bits: n = 4

Step-by-Step Division:

1. First iteration:
   o Shift left AQ: A = 00001, Q = 101_
   o A ≥ 0, so subtract: A = 00001 − 00100 = 11101 (negative)
   o A < 0, so Q[0] = 0 → Q = 1010
2. Second iteration:
   o Shift left AQ: A = 11011, Q = 010_
   o A < 0, so add: A = 11011 + 00100 = 11111 (negative)
   o A < 0, so Q[0] = 0 → Q = 0100
3. Third iteration:
   o Shift left AQ: A = 11110, Q = 100_
   o A < 0, so add: A = 11110 + 00100 = 00010 (non-negative)
   o A ≥ 0, so Q[0] = 1 → Q = 1001
4. Fourth iteration:
   o Shift left AQ: A = 00101, Q = 001_
   o A ≥ 0, so subtract: A = 00101 − 00100 = 00001 (non-negative)
   o A ≥ 0, so Q[0] = 1 → Q = 0011

Final Results:

- A = 00001 ≥ 0, so no final correction is needed.
- Quotient: Q = 0011 (3 in decimal)
- Remainder: A = 00001 (1 in decimal)

Thus, 13 ÷ 4 = 3 with a remainder of 1. The non-restoring method alternates between
subtraction and addition based on the sign of the partial remainder, avoiding the separate
restore step of restoring division.
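
A direct Python translation of the non-restoring algorithm (a sketch, mirroring the steps above):

def nonrestoring_divide(dividend, divisor, n):
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        A = (A << 1) | ((Q >> (n - 1)) & 1)  # shift left AQ
        Q = (Q << 1) & ((1 << n) - 1)
        A = A - M if A >= 0 else A + M       # subtract or add by sign of A
        if A >= 0:
            Q |= 1                           # quotient bit 1 (else it stays 0)
    if A < 0:
        A += M                               # final correction of the remainder
    return Q, A

print(nonrestoring_divide(13, 4, 4))  # -> (3, 1)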

Q15. Perform the division process of 00001111 by 0011 (use a dividend of 8 bits).

Dividend: 00001111 (15 in decimal)


Divisor: 0011 (3 in decimal)

Steps

Step 1: Align the divisor with the leftmost bits of the dividend.

Initially, we consider the first 4 bits of the dividend (0000). Since 0000 < 0011, the quotient bit
here is 0.
Step 2: Move to the next bit.

Now take the next bit from the dividend to make the partial dividend 00001. Again, 00001 <
0011, so the quotient bit here is 0.

Step 3: Repeat the process.

Now take the next bit from the dividend to make the partial dividend 000011. Since 000011 (3)
is equal to 0011 (3), the quotient bit is 1. Perform subtraction:


000011

- 0011

-----

000000

The remainder is 0.

Step 4: Continue the division.

Bring down the next bit from the dividend to make the new partial dividend 00001. Repeat the
comparison:

 00001 < 0011, so the next quotient bit is 0.

Bring down the next bit to make 000011. As before, 000011 = 0011, so the quotient bit is 1.
Perform subtraction again:


000011

- 0011

-----

000000

The remainder is 0.
Final Quotient and Remainder

Quotient: 000101
Remainder: 0000

Verification

 Divisor: 0011 (3 in decimal)


 Quotient: 000101 (5 in decimal)
 Product: 3 × 5 = 15
 Remainder: 0
Hence, the division process is correct.
Q16. Draw a flowchart for adding and subtracting two fixed-point binary numbers where
negative numbers are in signed 1's complement representation.

Explanation Before Drawing

1. 1’s Complement Representation:


o A negative number is represented by flipping all bits of its positive counterpart.
o For example:
 +5 (4-bit): 0101
 -5 (1’s complement): 1010
2. Addition of 1’s Complement:
o Add the binary numbers normally.
o If there’s a carry-out from the most significant bit (MSB), add it back to the
result (this is called the "end-around carry").
3. Subtraction of 1’s Complement:
o To subtract A − B:
 Take the 1's complement of B (flipping all bits).
 Add A and the 1's complement of B using the same process as
addition.
Flowchart Steps

Inputs

1. Input two fixed-point binary numbers: A and B.


2. Choose the operation: Addition or Subtraction.

Flowchart Outline

1. Start
o Begin the process.
2. Input Numbers
o Read A (binary number 1).
o Read B (binary number 2).
o Select operation: Addition or Subtraction.
3. Check Operation Type
o If Addition, proceed to step 4.
o If Subtraction, proceed to step 5.
4. Perform Addition
o Add A and B.
o Check for a carry-out from the MSB.
 If carry exists, add it back to the least significant bit (end-around carry).
o Go to step 6.
5. Perform Subtraction
o Take the 1's complement of B.
o Add A to the 1's complement of B.
o Check for a carry-out from the MSB.
 If carry exists, add it back to the least significant bit (end-around carry).
6. Check Result for Negative
o If the result is negative (MSB = 1), take its 1’s complement to represent it
correctly.
o If not, leave it as is.
7. Output Result
o Display the final result.
8. End
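
The end-around-carry rule is easy to model in Python (a sketch; n is the word width and values are raw n-bit patterns):

def ones_complement_add(a, b, n):
    mask = (1 << n) - 1
    s = a + b
    if s > mask:               # carry out of the MSB
        s = (s & mask) + 1     # end-around carry: add it back at the LSB
    return s & mask

def encode(x, n):
    # 1's complement encoding: negatives are the bitwise NOT of |x|.
    mask = (1 << n) - 1
    return x & mask if x >= 0 else (~(-x)) & mask

n = 8
result = ones_complement_add(encode(25, n), encode(-11, n), n)
print(bin(result))  # 25 + (-11) = 14 -> 0b1110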
Q17. Describe the Sequential Arithmetic and Logic Unit (ALU) using a proper diagram.

Sequential Arithmetic and Logic Unit (ALU)

A Sequential Arithmetic and Logic Unit (ALU) is a fundamental component of a CPU that
performs arithmetic, logic, and sometimes bitwise operations on binary data. Unlike
combinational ALUs, sequential ALUs utilize clock cycles and memory elements (like flip-flops)
to perform operations sequentially, step-by-step.

Key Features of Sequential ALU

1. Clock Dependency:
o Operations are carried out in steps synchronized by a clock signal.
o Each clock pulse triggers a specific operation or step in the computation.
2. Control Unit Integration:
o The control unit provides instructions that dictate the ALU's operations (add,
subtract, AND, OR, etc.).
o A control signal specifies the operation to perform.
3. Registers and Feedback:
o The ALU interacts with temporary storage (registers) to hold intermediate
results.
o Feedback loops enable iterative operations (e.g., shifting in division or
multiplication).
4. Support for Complex Operations:
o Can handle operations like multiplication, division, and iterative logic functions,
which require sequential processing.

Functions of Sequential ALU

1. Arithmetic Operations:
o Addition, subtraction (with overflow/underflow handling).
o Multiplication and division (using sequential methods like Booth's Algorithm for
multiplication or restoring/non-restoring division for division).
2. Logic Operations:
o AND, OR, XOR, NOT.
o Shifting (logical or arithmetic).
3. Comparison:
o Equality, greater-than, less-than checks.
4. Bitwise Operations:
o Manipulation of individual bits.

Components of Sequential ALU

1. Arithmetic Unit:
o Executes arithmetic operations using adder-subtractors and sequential
multipliers or dividers.
2. Logic Unit:
o Executes bitwise operations (AND, OR, XOR, NOT).
3. Shift Registers:
o Handles bit shifting and rotation operations, often used in division and
multiplication.
4. Control Unit:
o Generates control signals based on the opcode to guide the ALU operation.
5. Accumulator Register:
o Stores intermediate results during sequential operations.
6. Clock Generator:
o Provides clock pulses to synchronize the sequential operations.

Steps in Sequential ALU Operations

1. Instruction Decoding:
o The control unit decodes the opcode to determine the operation type.
2. Register Loading:
o Load operands into input registers.
3. Operation Execution:
o Perform the operation step-by-step, depending on the clock cycles.
4. Result Storage:
o Store the final result in the destination register.
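
A loose Python sketch of this decode-load-execute-store sequence (the opcode set and the step counter standing in for clock cycles are illustrative, not a specific design):

class SequentialALU:
    def __init__(self):
        self.acc = 0    # accumulator for intermediate results
        self.steps = 0  # clock steps consumed so far

    def execute(self, opcode, a, b):
        # 1. Instruction decoding: select the operation from the opcode.
        ops = {"ADD": lambda x, y: x + y,
               "AND": lambda x, y: x & y,
               "XOR": lambda x, y: x ^ y,
               "SHL": lambda x, y: x << y}
        op = ops[opcode]
        # 2. Register loading, 3. execution, 4. result storage:
        # modeled here as three clock steps.
        self.steps += 3
        self.acc = op(a, b)
        return self.acc

alu = SequentialALU()
print(alu.execute("ADD", 6, 7), alu.steps)  # -> 13 3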

Q18. Using Booth's algorithm, perform the multiplication of the following 6-bit numbers:
101011 × 110101.

Ans:- Let A = 101011 (−21 in 2's complement representation) and B = 110101 (−11 in 2's
complement representation).

Steps in Booth's Algorithm

Initialization

 Multiplicand (M): 101011

 Multiplier (Q): 110101

 Extra bit (Q₋₁): 0

 Product (Accumulator, A): 000000

 Number of bits (n): 6

After n iterations, the final result is the concatenation of A and Q:
A = 000011, Q = 100111.

The product 000011100111 is the 12-bit 2's complement (binary) representation of 231 in
decimal, since (−21) × (−11) = 231.

Q19. Draw the data path of a 2's complement multiplier. Give the Robertson multiplication
algorithm for 2's complement fractions. Also illustrate the algorithm for 2's complement
fractions with a suitable example.

Ans:-Data Path of 2's Complement Multiplier

A 2’s complement multiplier is designed to multiply signed numbers. The data path includes
components such as registers, an adder-subtractor unit, a control unit, and a partial product
generator. Here's a step-by-step description and a diagram of the data path.

Components of the Data Path

1. Multiplicand Register (M):


o Stores the multiplicand.
2. Multiplier Register (Q):
o Stores the multiplier.
o Used for generating partial products.
3. Accumulator Register (A):
o Stores intermediate results (partial products).
4. Control Unit:
o Tracks the number of steps and coordinates operations.
5. Adder-Subtractor Unit:
o Performs addition or subtraction based on the algorithm (e.g., adding or
subtracting the multiplicand).
6. Shift Logic:
o Handles arithmetic shifts for the partial products and multiplier.
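
The answer above stops at the data path, so as a supplement, here is the arithmetic idea behind Robertson's method sketched in Python: the low n−1 multiplier bits contribute by ordinary add-and-shift, while the multiplier's sign bit carries negative weight and is handled by a final subtraction (the correction step). This is a behavioral sketch under that interpretation, not a register-level implementation:

def robertson_multiply(m_bits, q_bits, n):
    """Multiply two n-bit 2's complement patterns; returns a signed int."""
    def signed(x):  # interpret an n-bit pattern as 2's complement
        return x - (1 << n) if x & (1 << (n - 1)) else x

    m = signed(m_bits)
    prod = 0
    for i in range(n - 1):          # add-and-shift over the low bits
        if (q_bits >> i) & 1:
            prod += m << i
    if (q_bits >> (n - 1)) & 1:     # correction step: the sign bit of the
        prod -= m << (n - 1)        # multiplier has weight -2^(n-1)
    return prod

print(robertson_multiply(0b101011, 0b110101, 6))  # (-21) * (-11) = 231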

Q20. Explain the IEEE-754 standard for floating-point representation. Express (314.175)10 in
the IEEE-754 formats.

Ans:-IEEE-754 Standard for Floating-Point Representation

The IEEE-754 standard is widely used for representing real numbers in binary. It provides a
way to represent floating-point numbers (fractional and large integers) efficiently in binary
form. There are three main formats in IEEE-754:

1. Single Precision (32-bit)
2. Double Precision (64-bit)
3. Quadruple Precision (128-bit)

Structure of IEEE-754 Representation

Each IEEE-754 floating-point number consists of three parts:

1. Sign bit (S):


o S = 0 for positive numbers.
o S = 1 for negative numbers.
2. Exponent (E):

Stores the exponent using a biased representation: E = e + bias.

 For single precision, bias = 127.


 For double precision, bias = 1023.
3. Mantissa (M):
o Represents the fractional part of the number in normalized form (1.fraction).
o The leading 1 is implicit in normalized numbers.
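
The answer above describes the structure; the actual bit patterns for 314.175 can be obtained in Python with the struct module (a sketch; 0.175 repeats in binary, so both formats store a rounded value):

import struct

def ieee_bits(x, pack_fmt, total_bits, exp_bits):
    int_fmt = '>Q' if total_bits == 64 else '>I'
    (n,) = struct.unpack(int_fmt, struct.pack(pack_fmt, x))
    s = f"{n:0{total_bits}b}"
    return f"{s[0]} {s[1:1 + exp_bits]} {s[1 + exp_bits:]}"  # S | E | M

print(ieee_bits(314.175, '>f', 32, 8))   # single precision
print(ieee_bits(314.175, '>d', 64, 11))  # double precision
# Since 314 = 100111010 (e = 8), the single-precision exponent field is
# 8 + 127 = 135 = 10000111; the mantissa holds the rounded fraction bits.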
UNIT 3: IMPORTANT QUESTIONS AND ANSWERS

Question 1. Reduced Instruction Set Computer (RISC) Architecture

Answer: Reduced Instruction Set Computer (RISC) is a type of computer architecture that focuses on a
small, highly optimized set of instructions that are executed very quickly. The RISC design philosophy
contrasts with the Complex Instruction Set Computer (CISC) approach, which includes a more extensive
set of complex instructions.

Key Features of RISC

1. Small Instruction Set:

o RISC processors have fewer instructions compared to CISC processors.

o Each instruction is designed to perform a simple operation.

2. Fixed Instruction Length:

o Most instructions are of a fixed size, simplifying decoding and execution.

3. Load-Store Architecture:

o Operations are performed only on CPU registers.

o Data must be loaded from memory into registers before operations and stored back
afterward.

4. Single-Cycle Execution:

o Most instructions complete in a single clock cycle, making the processor faster.

5. Pipelining:

o RISC architectures are optimized for pipelining, where multiple instructions are
overlapped during execution to improve throughput.

6. Few Addressing Modes:

o RISC processors support fewer addressing modes, which simplifies instruction decoding.

7. Emphasis on Software:

o The RISC philosophy shifts complexity to software. Complex operations are achieved by
combining simpler instructions in the compiler.

Advantages of RISC

1. Performance:
o Simplified instructions allow for faster execution and improved performance.

2. Simpler Hardware:

o Smaller instruction sets and fewer addressing modes reduce processor complexity,
making it easier to design and manufacture.

3. Pipelining Efficiency:

o Fixed-length instructions and reduced complexity make pipelining more effective,


increasing instruction throughput.

4. Lower Power Consumption:

o Simplified instructions and operations contribute to reduced power usage.

5. Scalability:

o Easier to scale and improve RISC processors by increasing clock speed or adding cores.

Disadvantages of RISC

1. Increased Code Size:

o More instructions are often needed to perform a task compared to CISC architectures,
which may increase memory usage.

2. Compiler Dependence:

o A good compiler is essential to optimize and translate high-level code efficiently into the
limited instruction set.

3. Less Support for Complex Operations:

o Tasks that require complex instructions can take more time and effort to implement in
software.

Examples of RISC Architectures

1. ARM (used in smartphones and embedded systems)

2. MIPS (used in embedded systems and routers)

3. RISC-V (open-source RISC architecture)

4. SPARC (used in enterprise servers)

Question 2: Comparison of RISC and CISC


Answer:

Feature | RISC | CISC
Instruction Set | Small and simple | Large and complex
Instruction Execution | Single clock cycle (mostly) | Multiple clock cycles
Pipelining | Highly efficient | Less efficient
Memory Access | Separate load/store instructions | Operands accessed directly from memory
Hardware Complexity | Simple | Complex

RISC architectures are widely adopted in modern computing due to their efficiency and simplicity,
especially in applications requiring high performance and low power, such as mobile and embedded
devices.

Question 3: Pipelining in Computer Organization and Architecture

Pipelining is a technique in computer architecture used to improve the instruction throughput (the
number of instructions executed per unit of time) by overlapping the execution of multiple instructions.
It is analogous to an assembly line in a factory, where different stages of production are carried out
simultaneously on different parts.

Key Concepts of Pipelining

1. Stages of Pipelining: A pipeline divides the execution of an instruction into multiple stages. Each
stage performs a specific part of the instruction cycle:

o Fetch (F): Retrieve the instruction from memory.

o Decode (D): Interpret the instruction and prepare for execution.

o Execute (E): Perform the operation specified by the instruction.

o Memory Access (M): Access memory if required (e.g., load/store operations).

o Write Back (WB): Store the result back into the register file.

2. Parallel Execution:

o Multiple instructions are processed simultaneously, each at a different stage in the


pipeline.

3. Instruction Throughput:
o Pipelining does not reduce the time it takes to execute a single instruction but increases
the number of instructions completed in a given period.

4. Pipeline Depth:

o The number of stages in the pipeline determines its depth. A deeper pipeline allows for
more parallelism but can increase complexity.

Example of Pipelining

Consider an instruction pipeline with 5 stages:

Cycle | I1 | I2 | I3 | I4
1 | Fetch | - | - | -
2 | Decode | Fetch | - | -
3 | Execute | Decode | Fetch | -
4 | Memory Access | Execute | Decode | Fetch
5 | Write Back | Memory Access | Execute | Decode
6 | - | Write Back | Memory Access | Execute

 In the 5th cycle, four instructions (I1-I4) occupy different stages simultaneously; with a
fifth instruction entering Fetch, all five stages would be busy at once.
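
The table above can be generated programmatically; a small Python sketch that prints the stage each instruction occupies in every cycle (assuming no stalls):

STAGES = ["Fetch", "Decode", "Execute", "Memory Access", "Write Back"]

def pipeline_schedule(n_instructions):
    # Instruction i (0-based) enters the pipeline at cycle i + 1 and is in
    # stage s during cycle i + 1 + s.
    total = n_instructions + len(STAGES) - 1
    for cycle in range(1, total + 1):
        row = []
        for i in range(n_instructions):
            s = cycle - 1 - i
            row.append(STAGES[s] if 0 <= s < len(STAGES) else "-")
        print(f"Cycle {cycle}: " + " | ".join(row))

pipeline_schedule(4)  # reproduces the table above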

Advantages of Pipelining

1. Increased Throughput:

o Multiple instructions are processed at once, resulting in higher instruction throughput.

2. Efficient Resource Utilization:

o Each stage of the CPU is utilized in parallel, reducing idle times.

3. Faster Program Execution:

o While a single instruction doesn't execute faster, the overall program finishes quicker
due to overlapping instruction execution.
Challenges in Pipelining

1. Pipeline Hazards: These are issues that disrupt the smooth flow of instructions through the
pipeline:

o Structural Hazards: Occur when hardware resources are insufficient to support all
instructions in the pipeline.

o Data Hazards: Happen when instructions depend on the results of previous instructions.

 Example: A subsequent instruction requires a value that has not yet been
written back.

o Control Hazards: Occur due to branch or jump instructions, causing uncertainty about
which instruction to fetch next.

2. Pipeline Stalling:

o The pipeline may need to pause or stall to resolve hazards, reducing performance.

3. Increased Complexity:

o Managing and coordinating the stages of a pipeline adds complexity to the CPU design.

Solutions to Pipeline Challenges

1. Forwarding (Data Hazard Mitigation):

o Passing the result of an instruction directly from the stage that produces it to a later
instruction that needs it, instead of waiting for the write-back stage, avoiding delays.

2. Branch Prediction (Control Hazard Mitigation):

o Using algorithms to predict the outcome of branches (e.g., if-else conditions) to


minimize stalls.

3. Pipeline Flushing:

o Clearing the pipeline when a misprediction or hazard occurs and restarting it with the
correct instructions.

4. Multiple Execution Units:

o Reducing structural hazards by adding more resources, such as multiple arithmetic logic
units (ALUs).

Applications of Pipelining

 Used in RISC architectures for efficient instruction execution.

 Found in modern CPUs, GPUs, and signal processors to enhance performance.


 Critical for achieving high performance in superscalar processors and parallel processing
systems.

Pipelining is fundamental to modern computer architecture, enabling CPUs to execute instructions more
efficiently and achieve greater performance without increasing clock speed significantly.

Question 4: Hardwired and microprogrammed control

Ans. Hardwired Control is a control unit design method in computer architecture where the control
signals required to execute instructions are generated using fixed hardware circuits. This approach uses
combinational logic (e.g., gates, flip-flops, and multiplexers) to directly implement the control logic.

Key Features of Hardwired Control

1. Fixed Design:

o The control logic is embedded into the hardware and cannot be modified without
redesigning the hardware.

2. Fast Execution:

o Since control signals are generated through direct hardware logic, the execution speed
is faster compared to microprogrammed control units.

3. Deterministic Behavior:

o Hardwired control units operate with a fixed delay, leading to consistent performance.

4. Simple for RISC Architectures:

o Works well for processors with a small and simple instruction set, like RISC.

Components of a Hardwired Control Unit

1. Instruction Decoder:

o Decodes the current instruction into its components (operation code, operands, etc.).

2. Control Logic Generator:

o A combinational logic circuit generates the appropriate control signals based on the
current instruction and the state of the system.

3. Timing and Sequencing Circuit:

o Ensures that control signals are issued in the correct sequence and at the right time.

Working of a Hardwired Control Unit

1. Fetch the Instruction:

o The control unit initiates fetching an instruction from memory.


2. Decode the Instruction:

o The instruction decoder interprets the instruction to identify the operation and
operands.

3. Generate Control Signals:

o The control logic generator produces the necessary control signals to drive the datapath
components (ALU, registers, memory, etc.) for the instruction execution.

4. Execute and Update:

o The datapath executes the operation, and the control unit updates the program counter
or other relevant registers.
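
As a rough illustration of how a control logic generator maps an opcode to control signals, consider the following C sketch. The opcodes and signal bits here are hypothetical, chosen only to show the combinational-decode idea:

#include <stdint.h>

/* Hypothetical control-signal bits (one bit per control line) */
#define SIG_ALU_ADD   (1u << 0)
#define SIG_ALU_SUB   (1u << 1)
#define SIG_REG_WRITE (1u << 2)
#define SIG_MEM_READ  (1u << 3)
#define SIG_MEM_WRITE (1u << 4)

/* Combinational decode: opcode in, control signals out */
uint32_t control_signals(uint8_t opcode) {
    switch (opcode) {
    case 0x01: return SIG_ALU_ADD | SIG_REG_WRITE;   /* ADD   */
    case 0x02: return SIG_ALU_SUB | SIG_REG_WRITE;   /* SUB   */
    case 0x03: return SIG_MEM_READ | SIG_REG_WRITE;  /* LOAD  */
    case 0x04: return SIG_MEM_WRITE;                 /* STORE */
    default:   return 0;                             /* NOP   */
    }
}

In hardware this switch statement would be a fixed block of gates, so the signals appear after only a gate delay — the source of hardwired control's speed.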

Advantages of Hardwired Control

1. High Speed:

o The direct hardware implementation leads to faster control signal generation and
instruction execution.

2. Efficient for Simple Processors:

o Ideal for systems with a limited and straightforward instruction set, such as RISC
processors.

3. Compact Design for Specific Applications:

o Highly optimized for specific tasks in embedded systems or dedicated hardware.

Question 5: Comparison of Hardwired vs. Microprogrammed Control

Aspect Hardwired Control Microprogrammed Control


Design Fixed combinational logic Programmable microinstructions
Speed Faster due to hardware execution Slower due to memory access
Flexibility Inflexible Highly flexible
Complexity Difficult for complex instruction sets Easier to design for complex ISAs
Use Case RISC processors, embedded systems CISC processors, general-purpose CPUs

Question 6. Explain Microprogrammed Control

Ans. Microprogrammed Control is a method of designing the control unit in a computer where the
control signals needed to execute an instruction are generated by a program-like sequence of
instructions called microinstructions stored in a control memory (CM). This approach contrasts with the
hardwired control, which relies on fixed hardware logic circuits.
Key Components of a Microprogrammed Control Unit

1. Control Memory (CM):

o A special memory that stores microinstructions. These microinstructions define the


control signals for each operation.

2. Microinstruction:

o A low-level instruction in the control memory specifying which control signals to


activate for each step in the instruction cycle.

3. Control Address Register (CAR):

o Holds the address of the next microinstruction to be executed.

4. Control Data Register (CDR):

o Holds the microinstruction fetched from control memory.

5. Sequencer:

o Determines the sequence of microinstructions to execute based on the current


microinstruction and the external inputs.

6. Control Signals:

o Outputs generated by decoding the microinstructions, which control the various


components of the CPU.

Working of a Microprogrammed Control Unit

1. Instruction Fetch:

o The control unit fetches an instruction from the main memory.

2. Microinstruction Fetch:

o Based on the opcode, the address of the first microinstruction for the instruction is
loaded into the CAR.

3. Execution of Microinstructions:

o Microinstructions are sequentially fetched from control memory and executed.

o Each microinstruction generates specific control signals to control the datapath


components (ALU, registers, memory, etc.).

4. Sequencing:

o The sequencer determines whether to fetch the next microinstruction in sequence,


jump to another microinstruction, or fetch a new instruction.

Example of a Microinstruction Format


Field Description
Control Signals Specifies the control lines to activate for a specific operation.
Next Address Specifies the address of the next microinstruction to execute.
Condition Codes Indicates branching or sequencing conditions.
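
The fetch–execute cycle for microinstructions can be sketched in C as follows. This is a simplified model with an assumed field layout and an assumed end-of-routine marker, meant only to show the CAR/CDR/control-memory interplay:

#include <stdint.h>

#define CM_SIZE 256

typedef struct {
    uint32_t control_signals;  /* control lines to assert during this step */
    uint8_t  next_address;     /* address of the next microinstruction     */
} MicroInstruction;

static MicroInstruction control_memory[CM_SIZE];  /* the Control Memory (CM) */

static void apply_control_signals(uint32_t signals) {
    (void)signals;  /* in hardware, these lines drive the ALU, registers, buses */
}

void run_microprogram(uint8_t start_address) {
    uint8_t car = start_address;                     /* Control Address Register */
    for (;;) {
        MicroInstruction cdr = control_memory[car]; /* Control Data Register    */
        if (cdr.control_signals == 0)                /* all-zero word used as an */
            break;                                   /* end marker (assumption)  */
        apply_control_signals(cdr.control_signals);
        car = cdr.next_address;                      /* sequencer picks the next */
    }
}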

Question 7. Explain Microprogram Sequencing

Microprogram Sequencing refers to the process of determining the order in which microinstructions are
fetched and executed from the Control Memory (CM) to generate the control signals needed for
instruction execution in a microprogrammed control unit.

The microprogram sequence is critical for coordinating the flow of instructions and ensuring the correct
execution of the overall machine-level instructions.

Key Components of Microprogram Sequencing

1. Control Address Register (CAR):

o Holds the address of the current microinstruction being executed.

2. Control Memory (CM):

o Stores the microinstructions for each machine instruction.

3. Control Data Register (CDR):

o Temporarily holds the microinstruction fetched from the control memory.

4. Sequencer:

o Generates the address of the next microinstruction based on the current


microinstruction, external inputs, and conditions.

5. Next Address Field:

o A field in the microinstruction that specifies the address of the next microinstruction.

Types of Microprogram Sequencing

Microprogram sequencing can be classified based on how the address of the next microinstruction is
determined:

1. Sequential Sequencing

 The microinstructions are executed in a sequential order.


 Use Case: Simple instructions that don’t require branching.

 Mechanism:

o The CAR is incremented by 1 after each microinstruction fetch.

o Example: CAR ← CAR + 1.

2. Conditional Branching

 A branch in the microprogram is taken based on a condition (e.g., zero flag, carry flag).

 Use Case: Required for decision-making instructions or complex operations.

 Mechanism:

o The condition field in the microinstruction is evaluated.

o If the condition is true, the CAR is loaded with the branch address.

o Example: If Zero Flag = 1, then CAR ← Branch Address.

3. Unconditional Branching

 The microprogram jumps to a specific microinstruction without any condition.

 Use Case: Transferring control to a subroutine or handling special instructions.

 Mechanism:

o The CAR is directly updated with the branch address specified in the microinstruction.

o Example: CAR ← Branch Address.

4. Subroutine Control

 A group of microinstructions is reused for common operations.

 Use Case: Efficient use of control memory and modular design.

 Mechanism:

o Use a subroutine register (or a small stack) to store the return address.

o After the subroutine is executed, the CAR is restored to the saved return address.

Sequencing Techniques

1. Incremental Sequencing:

o The CAR is incremented to point to the next microinstruction in sequence.


2. Explicit Jump Sequencing:

o A specific address is loaded into the CAR, often specified in the current microinstruction.

3. Mapping Logic:

o Maps the opcode of the machine-level instruction to the starting address of the
corresponding microprogram in the control memory.
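
A sequencer's next-address selection can be sketched as a small C function. The sequencing-type encoding below is hypothetical; real designs encode this in the microinstruction's condition and branch fields:

#include <stdint.h>
#include <stdbool.h>

typedef enum { SEQ_NEXT, SEQ_BRANCH_IF, SEQ_JUMP, SEQ_MAP } SeqType;

/* Compute the next Control Address Register (CAR) value */
uint8_t next_car(uint8_t car, SeqType type, bool condition,
                 uint8_t branch_addr, uint8_t mapped_addr) {
    switch (type) {
    case SEQ_NEXT:      return car + 1;                           /* sequential     */
    case SEQ_BRANCH_IF: return condition ? branch_addr : car + 1; /* conditional    */
    case SEQ_JUMP:      return branch_addr;                       /* unconditional  */
    case SEQ_MAP:       return mapped_addr;                       /* opcode mapping */
    }
    return car + 1;
}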

Question 8. Explain the concept of horizontal microprogramming.

Ans. Horizontal microprogramming is a microprogramming approach in which control signals are


specified explicitly in a microinstruction. Each bit in a horizontal microinstruction corresponds directly to
a specific control signal. This approach provides high control parallelism, allowing multiple operations to
occur simultaneously during a single microinstruction cycle.

Key Features of Horizontal Microprogramming

1. Wide Microinstructions:

o Each microinstruction contains many bits, with each bit representing a specific control
signal.

2. Direct Control:

o Microinstructions directly activate control signals without additional decoding.

3. Parallelism:

o Multiple control signals can be activated in a single microinstruction cycle, enabling


simultaneous operations (e.g., register transfer and arithmetic operations).

4. Control Memory Size:

o Requires a larger control memory to store wide microinstructions.

Structure of a Horizontal Microinstruction

A horizontal microinstruction typically has the following fields:

Field Purpose

Control Signals Each bit corresponds to a control line for datapath elements (e.g., ALU, registers, buses).

Condition Code Specifies conditional branching logic based on flags or status bits.

Next Address Contains the address of the next microinstruction for sequencing.
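
In C terms, a horizontal microinstruction is essentially a wide bit vector in which every bit directly drives one control line. The line assignments below are hypothetical:

#include <stdint.h>

/* One 64-bit horizontal microinstruction: each bit is one control line. */
typedef uint64_t HMicroInstr;

#define H_PC_INC    (1ull << 0)   /* increment program counter    */
#define H_MAR_LOAD  (1ull << 1)   /* load memory address register */
#define H_MEM_READ  (1ull << 2)
#define H_IR_LOAD   (1ull << 3)
#define H_ALU_ADD   (1ull << 4)
#define H_REG_WRITE (1ull << 5)

/* Several signals can be asserted in the same cycle - the parallelism
   that horizontal microprogramming provides: */
const HMicroInstr fetch_step = H_MAR_LOAD | H_MEM_READ | H_IR_LOAD | H_PC_INC;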

Question 9. Explain vertical microprogramming.


Vertical microprogramming is a microprogramming approach where microinstructions are compact and
encoded. Each microinstruction consists of a small number of bits, and these bits are interpreted
(decoded) to generate the actual control signals. This approach reduces the size of control memory but
introduces decoding overhead.

Key Features of Vertical Microprogramming

1. Compact Microinstructions:

o Microinstructions use fewer bits, as control signals are encoded.

o Typically, each microinstruction contains fields that represent operations in an encoded


format.

2. Requires Decoding:

o The encoded fields in a microinstruction need to be decoded to produce the actual


control signals.

3. Sequential Control:

o Often used for sequential execution with minimal parallelism.

4. Reduced Control Memory:

o Due to the compact size of microinstructions, the control memory requirements are
significantly lower than horizontal microprogramming.

Structure of a Vertical Microinstruction

A vertical microinstruction consists of multiple fields, each representing an operation or condition in an


encoded form.

Typical Fields in a Vertical Microinstruction:

Field Purpose

Opcode Field Encodes the operation to perform (e.g., ALU operation, register access).

Source/Destination Fields Specifies the source and destination registers or memory locations.

Condition Code Encodes conditions for branching or decision-making logic.

Next Address Field Specifies the address of the next microinstruction.
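
By contrast with the horizontal form, a vertical microinstruction packs encoded fields that must be decoded into control lines. A minimal C sketch follows; the field widths, opcode meanings, and returned masks are assumptions for illustration:

#include <stdint.h>

/* A 16-bit vertical microinstruction with encoded fields (layout assumed):
   [15:12] opcode  [11:8] source reg  [7:4] destination reg  [3:0] next-address hint */
typedef uint16_t VMicroInstr;

static unsigned field(VMicroInstr mi, int shift, int bits) {
    return (mi >> shift) & ((1u << bits) - 1);
}

/* Decoding stage: expand the compact opcode into actual control-line masks.
   The masks returned here are placeholders, not real signal assignments. */
uint32_t decode_vertical(VMicroInstr mi) {
    switch (field(mi, 12, 4)) {
    case 0x1: return 0x011;  /* e.g., ALU add + register write */
    case 0x2: return 0x104;  /* e.g., memory read + MAR load   */
    default:  return 0;      /* no operation                   */
    }
}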


Advantages of Vertical Microprogramming

1. Compact Microinstructions:

o Reduces the memory size required to store control instructions.

2. Easier to Modify:

o Changes in control logic affect fewer bits, simplifying updates.

3. Lower Cost:

o Smaller memory size reduces hardware costs.

4. Simplified Design:

o Encoded control simplifies hardware implementation for general-purpose processors.

Question 10: Comparison of Vertical Microprogramming with Horizontal Microprogramming

Ans.

Aspect Vertical Microprogramming Horizontal Microprogramming

Microinstruction Width Narrow (fewer bits, encoded signals). Wide (many bits, explicit signals).

Control Signal Generation Encoded, requiring decoding to produce signals. Direct, with each bit corresponding to a signal.

Control Memory Size Smaller, due to compact encoding. Larger, as all signals are stored explicitly.

Speed Slower, due to decoding overhead. Faster, due to direct activation of control signals.

Parallelism Low, limited simultaneous signal activation. High, enabling multiple operations per cycle.
UNIT 4 IMPORTANT QUESTION AND ANSWERS

Computer Organization and Architecture


Question and answers
Q1. Draw the Memory Hierarchy and explain it.
Ans. The memory hierarchy is an enhancement that organizes the memory so that it can
minimize access time. The Memory Hierarchy was developed based on a program behavior
known as locality of references. The figure below clearly demonstrates the different levels of the
memory hierarchy

This Memory Hierarchy Design is divided into 2 main types:


1. External Memory or Secondary Memory –
Comprising magnetic disks, optical disks, and magnetic tape, i.e., peripheral storage devices
that are accessible by the processor via an I/O module.
2. Internal Memory or Primary Memory –
Comprising main memory, cache memory, and CPU registers. This is directly accessible by the
processor.
We can infer the following characteristics of the Memory Hierarchy Design from the above figure:
1. Capacity:
It is the global volume of information the memory can store. As we move from top to bottom in
the Hierarchy, the capacity increases.
2. Access Time:
It is the time interval between the read/write request and the availability of the data. As we move
from top to bottom in the Hierarchy, the access time increases.
3. Performance:
Earlier, when computer systems were designed without a memory hierarchy, the speed gap
between CPU registers and main memory kept widening because of the large difference in access
time, which lowered system performance. The memory hierarchy design was introduced as an
enhancement, and system performance improved as a result. One of the most significant ways to
increase system performance is minimizing how far down the memory hierarchy one has to go to
manipulate data.
4. Cost per bit:
As we move from bottom to top in the hierarchy, the cost per bit increases, i.e., internal memory
is costlier than external memory.
Q2. Differentiate between SRAM and DRAM.
Ans. Static RAM: Static RAM consists essentially of internal flip-flops that store the
binary information. The stored information remains valid as long as power is applied
to the unit.
Dynamic RAM: Dynamic RAM stores the binary information in the form of electric charges
applied to capacitors. The capacitors are provided inside the chip by MOS transistors. The
stored charge on the capacitors tends to discharge with time, so the capacitors must be
periodically recharged by refreshing the dynamic memory.
Q3. What is auxiliary memory?
Ans. Auxiliary memory is the lowest-cost, highest-capacity, and slowest-access
storage in a computer system. It is where programs and data are preserved
for long-term storage or when not in direct use. The most typical auxiliary memory devices
used in computer systems are magnetic disks and tapes.
Q4. What is page fault?
Ans. If the CPU needs a page that is not present in main memory, a page fault occurs.
The page then has to be loaded from auxiliary (secondary) memory into main memory.
Q5. What do you mean by virtual memory?
Ans. Virtual memory is a way of using secondary memory such that it feels as if we
are using main memory. Programs and data are first stored in auxiliary memory. Portions
of a program or data are brought into main memory as they are needed by the CPU. Virtual memory
is a concept used in some large computer systems that permits the user to construct programs
as though a large memory space were available, equal to the total auxiliary memory. Virtual memory is
a storage allocation scheme in which secondary memory can be addressed as though it were a part
of main memory.
Q6. Draw and explain the pins of the RAM chip.
Ans. The block diagram of a RAM chip is shown in the figure given below.

The capacity of the memory is 128 words of eight bits (one byte) per word. This requires a 7-bit
address and an 8-bit bidirectional data bus.
The read and write inputs specify the memory operation, and the two chip select (CS) control
inputs enable the chip only when it is selected by the microprocessor.
The availability of more than one control input to select the chip facilitates the decoding of
the address lines when multiple chips are used in the microcomputer.
The read and write inputs are sometimes combined into one line labeled R/W. When the chip is
selected, the two binary states on this line specify the two operations of read or write.
The operation of the RAM chip is as follows:
The unit is in operation only when CS1 = 1 and CS2 = 0. The bar on top of the second select
variable indicates that this input is enabled when it is equal to 0.
If the chip select inputs are not enabled, or if they are enabled but the read or write inputs
are not enabled, the memory is inhibited and its data bus is in a high-impedance state.
When CS1 = 1 and CS2 = 0, the memory can be placed in a write or read mode.
When the WR input is enabled, the memory stores a byte from the data bus into a location
specified by the address input lines.
When the RD input is enabled, the content of the selected byte is placed onto the data bus. The
RD and WR signals control the memory operation as well as the bus buffers associated with the
bidirectional data bus.
Q7. How many 128 × 8 RAM chips are needed to provide a memory capacity of 2048 bytes? How
many lines of the address bus must be used to access 2048 bytes of memory? How many of these
lines will be common to all chips? How many lines must be decoded for chip select? Specify the
size of the decoder.
Ans.
Number of chips required = 2048 / 128 = 16 chips.
Since 2048 = 2^11, 11 address lines are needed to access 2048 bytes of memory.
Each 128 × 8 chip has 128 = 2^7 words, so 7 of the address lines are common to all chips.
The remaining 11 − 7 = 4 lines must be decoded for chip select, which requires a 4 × 16 decoder.
Q8. What is Cache Memory? Explain cache mapping techniques.


Ans. The data or contents of the main memory that are used frequently by the CPU are stored in the
cache memory so that the processor can easily access that data in a shorter time. Whenever the
CPU needs to access memory, it first checks the cache memory. If the data is not found in cache
memory, then the CPU moves into the main memory.
Cache memory is placed between the CPU and the main memory. The block diagram for a cache
memory can be represented as.

The performance of cache memory is frequently measured in terms of a quantity called hit ratio.
When the CPU refers to memory and finds the word in cache, it is said to produce a hit. If the
word is not found in cache, it is in main memory and it counts as a miss. The ratio of the number
of hits divided by the total CPU references to memory (hits plus misses) is the hit ratio.
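For example (with illustrative numbers): if the CPU makes 1000 memory references and 950 of them
are found in the cache, the hit ratio is 950 / 1000 = 0.95, i.e., 95% of references are served at
cache speed.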
Three types of mapping procedures are:
1. Associative mapping
2. Direct mapping
3. Set-associative mapping
For example, consider a main memory that can store 32K words of 12 bits each, with a cache
capable of storing 512 of these words at any given time. The CPU communicates with both
memories. It first sends a 15-bit address to the cache. If there is a hit, the CPU accepts the
12-bit data from the cache. If there is a miss, the CPU reads the word from main memory, and the
word is then transferred to the cache.
Associative Mapping:
The fastest and most flexible cache organization uses an associative memory. The associative
memory stores both the address and content (data) of the memory word. This permits any
location in cache to store any word from main memory. The address value of 15 bits is shown as
a five-digit octal number and its corresponding 12 -bit word is shown as a four-digit octal
number.
A CPU address of 15 bits is placed in the argument register and the associative memory is
searched for a matching address. If the address is found, the corresponding 12-bit data is read
and sent to the CPU. If no match occurs, the main memory is accessed for the word. The
address-data pair is then transferred to the associative cache memory. If the cache is full, an
address-data pair must be displaced to make room for a pair that is needed and not presently in
the cache. The decision as to what pair is replaced is determined from the replacement algorithm
that the designer chooses for the cache.

Direct Mapping:
Associative memories are expensive compared to random-access memories because of the added
logic associated with each cell. The possibility of using a random-access memory for the cache is
investigated in the figure. The CPU address of 15 bits is divided into two fields. The nine least
significant bits constitute the index field, and the remaining six bits form the tag field.
In the general case, there are 2^k words in cache memory and 2^n words in main memory. The n-bit
memory address is divided into two fields: k bits for the index field and n − k bits for the tag
field. The direct-mapping cache organization uses the n-bit address to access the main memory
and the k-bit index to access the cache.
Each word in cache consists of the data word and its associated tag. When a new word is first
brought into the cache, the tag bits are stored alongside the data bits. When the CPU generates a
memory request, the index field is used for the address to access the cache. The tag field of the
CPU address is compared with the tag in the word read from the cache. If the two tags match,
there is a hit and the desired data word is in cache. If there is no match, there is a miss and the
required word is read from main memory. It is then stored in the cache together with the new tag,
replacing the previous value. The disadvantage of direct mapping is that the hit ratio can drop
considerably if two or more words whose addresses have the same index but different tags are
accessed repeatedly.
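
Using the 15-bit address and 512-word cache from this example (9-bit index, 6-bit tag), the address split can be expressed in C as follows. This is a sketch of the address arithmetic only, not a full cache model:

#include <stdint.h>
#include <stdbool.h>

#define INDEX_BITS 9                      /* 512-word cache = 2^9 */
#define CACHE_WORDS (1u << INDEX_BITS)

static uint16_t tags[CACHE_WORDS];        /* 6-bit tag per cache word */
static uint16_t data[CACHE_WORDS];        /* 12-bit data word         */
static bool     valid[CACHE_WORDS];

/* Direct-mapped lookup of a 15-bit main-memory address */
bool cache_lookup(uint16_t address, uint16_t *word_out) {
    uint16_t index = address & (CACHE_WORDS - 1);  /* low 9 bits  */
    uint16_t tag   = address >> INDEX_BITS;        /* high 6 bits */
    if (valid[index] && tags[index] == tag) {      /* hit */
        *word_out = data[index];
        return true;
    }
    return false;   /* miss: the word must be fetched from main memory */
}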
Set-Associative Mapping:
The disadvantage of direct mapping is that two words with the same index in their address but
with different tag values cannot reside in cache memory at the same time. A third type of cache
organization, called set-associative mapping, is an improvement over the direct mapping
organization in that each word of cache can store two or more words of memory under the same
index address. Each data word is stored together with its tag and the number of tag-data items in
one word of cache is said to form a set.
Q9. What do you mean by 2.5 D memory organization? Explain with example.
Ans. The conventional memory organization used for RAMs and ROMs suffers from a problem
of scale: it works fine when the number of words in the memory is relatively small but quickly
mushrooms as the memory is scaled up or increased in size. This happens because the number of
word select wires is an exponential function of the size of the address. Suppose that the MAR is
10 bits wide, which means there are 1024 words in the memory. The decoder will need to output
1024 separate lines. While this is not necessarily terrible, increasing the MAR to 15 bits means
there will be 32,768 wires, and 20 bits would be over a million.
One way to tackle the exponential explosion of growth in the decoder and word select wires is to
organize memory cells into a two-dimension grid of words instead of a one- dimensional
arrangement. Then the MAR is broken into two halves, which are fed separately into smaller
decoders. One decoder addresses the rows of the grid while the other decoder addresses the
columns. Figure given below shows a 2.5D memory of 16 words, each word having 5 bits:
Each memory cell has an AND gate that represents the intersection of a vertical wire from one
decoder and a horizontal wire from the other. The output of this AND gate is the line select
wire. In the above example, the total number of word select lines goes down from 16 to 8 (four
wires come from each of the two decoders). If the MAR had 10 bits, there would be 1024 word
select wires in the traditional organization, but only 64 in the 2.5D organization, because each
half of the MAR contributes 5 address bits, and 2^5 = 32.
The usual terminology for a 2.5D memory is 2½D memory, but this is hard to write. Nobody is
sure why it is called a two-and-a-half-dimensional thing, unless it is perhaps because an ordinary
memory is obviously two dimensional and this one is not quite three dimensional.
In a real circuit, the wires are cleverly laid out so that they go around, not through,
flip-flops, unlike our schematic diagram.
2.5D memory organization is almost always used on real memory chips today because the
savings in wiring and gates is so dramatic. Real computers use a combination of banks of
memory units, and each memory unit uses 2.5D organization.
Q10 What is auxiliary memory? Write short notes on magnetic disks and magnetic tapes.
Ans. Auxiliary memory is known as the lowest-cost, highest-capacity, and slowest-access storage
in a computer system. It is where programs and data are kept for long-term storage or when not
in immediate use. The most common examples of auxiliary memories are magnetic tapes and
magnetic disks.

Magnetic Disks
A magnetic disk is a type of memory constructed using a circular plate of metal or plastic coated
with magnetized materials. Usually, both sides of the disks are used to carry out read/write
operations.
However, several disks may be stacked on one spindle with a read/write head available on each
surface. The following image shows the structural representation of a magnetic disk.

o The memory bits are stored in the magnetized surface in spots along the concentric
circles called tracks.
o The concentric circles (tracks) are commonly divided into sections called sectors.
Magnetic Tape
Magnetic tape is a storage medium that allows data archiving, collection, and backup for
different kinds of data. The magnetic tape is constructed using a plastic strip coated with a
magnetic recording medium.
The bits are recorded as magnetic spots on the tape along several tracks. Usually, seven or nine
bits are recorded simultaneously to form a character together with a parity bit.
Magnetic tape units can be halted, started to move forward or in reverse, or can be rewound.
However, they cannot be started or stopped fast enough between individual characters. For this
reason, information is recorded in blocks referred to as records.
Q11. What is Virtual Memory? Explain the concept of address space and memory space.
Ans. In a memory hierarchy system, programs and data are first stored in auxiliary
memory. Portions of a program or data are brought into main memory as they are needed by the
CPU.
Virtual memory is a concept used in some large computer systems that permit the user to
construct programs as though a large memory space were available, equal to the totality of
auxiliary memory.

Each address that is referenced by the CPU goes through an address mapping from the so-called
virtual address to a physical address in main memory. Virtual memory is used to give
programmers the illusion that they have a very large memory at their disposal, even though the
computer actually has a relatively small main memory. A virtual memory system provides a
mechanism for translating program-generated addresses into correct main-memory
locations. This is done dynamically, while programs are being executed in the CPU. The
translation or mapping is handled automatically by the hardware by means of a mapping table.
Address Space and Memory Space
An address used by a programmer will be called a virtual address, and the set of such addresses
the address space. An address in main memory is called a location or physical address. The set of
such locations is called the memory space. In most computers the address and memory spaces
are identical. The address space is allowed to be larger than the memory space in computers with
virtual memory.

Consider a computer with a main-memory capacity of 32K words (K = 1024). Fifteen bits are
needed to specify a physical address in memory since 32K = 2^15. Suppose that the computer has
available auxiliary memory for storing 2^20 = 1024K words. Thus the auxiliary memory has a
capacity for storing information equivalent to the capacity of 32 main memories. Denoting the
address space by N and the memory space by M, we then have for this example N = 1024K and
M = 32K.
In a multiprogram computer system, programs and data are transferred to and from auxiliary
memory and main memory based on demands imposed by the CPU. Suppose that program 1 is
currently being executed in the CPU. Program 1 and a portion of its associated data are moved
from auxiliary memory into main memory, as shown in the figure.

Portions of programs and data need not be in contiguous locations in memory since information
is being moved in and out, and empty spaces may be available in scattered location in memory.
In our example, the address field of an instruction code will consist of 20 bits, but physical
memory addresses must be specified with only 15 bits. Thus the CPU will reference instructions and
data with a 20-bit address, but the information at this address must be taken from physical
memory, because access to auxiliary storage for individual words would be prohibitively long.
A table is then needed, as shown in figure, to map a virtual address of 20 bits to a physical
address of 15 bits. The mapping is a dynamic operation, which means that every address is
translated immediately as a word is referenced by CPU.
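
A minimal sketch of this dynamic mapping in C, using the 20-bit virtual / 15-bit physical figures from the example, and assuming 2K-word (2^11) pages — the page size is an assumption, since the text does not fix one here:

#include <stdint.h>

#define OFFSET_BITS 11                          /* 2K words per page (assumed) */
#define NUM_PAGES   (1u << (20 - OFFSET_BITS))  /* 2^9 = 512 virtual pages     */

/* Mapping table: virtual page number -> physical block number (filled by the OS) */
static uint16_t page_table[NUM_PAGES];

uint16_t translate(uint32_t virtual_addr) {      /* 20-bit virtual address */
    uint32_t page   = virtual_addr >> OFFSET_BITS;
    uint32_t offset = virtual_addr & ((1u << OFFSET_BITS) - 1);
    uint16_t block  = page_table[page];          /* physical block number  */
    return (uint16_t)((block << OFFSET_BITS) | offset);  /* 15-bit physical */
}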

Q12. An address space is specified by 24 bits and the corresponding memory space by 16
bits.
a. How many words are there in the address space?
b. How many words are there in the memory space?
c. If a page consists of 2K words, how many pages and blocks are there in the system?
Ans.
a. Words in the address space = 2^24 = 16M = 16,777,216 words.
b. Words in the memory space = 2^16 = 64K = 65,536 words.
c. A page of 2K words needs 11 address bits (2K = 2^11). Number of pages in the address space =
2^24 / 2^11 = 2^13 = 8192 pages; number of blocks in the memory space = 2^16 / 2^11 = 2^5 = 32
blocks.
Q13. Explain the different methods of writing in Cache.
Ans. When the CPU finds a word in cache during a read operation, the main memory is not
involved in the transfer. However, if the operation is a write, there are two ways that the system
can proceed.
Write-Through:
The simplest and most commonly used procedure is to update main memory with every memory
write operation, with cache memory being updated in parallel if it contains the word at the
specified address. This is called the write-through method. This method has the advantage that
the main memory always contains the same data as the cache.
Write-Back:
The second procedure is called the write-back method. In this method, only the cache location is
updated during a write operation. The location is then marked by a flag so that later when the
word is removed from the cache it is copied into the main memory. The reason for the write-back
method is that during the time a word resides in the cache, it may be updated several times;
however, as long as the word remains in the cache, it does not matter whether the copy in the
main memory is out of date since requests from the word are filled from the cache. It is only
when the word is displaced from the cache that an accurate copy needs to be rewritten into
main memory.
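
The two policies differ only in when main memory is updated, which the following C sketch makes explicit. It is a toy one-word-per-line cache model; the structures are assumptions for illustration:

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t tag;
    uint32_t data;
    bool     valid;
    bool     dirty;   /* used only by the write-back policy */
} CacheLine;

static void memory_write(uint32_t addr, uint32_t value) {
    (void)addr; (void)value;  /* stand-in for an actual main-memory access */
}

/* Write-through: update the cache and main memory together. */
void write_through(CacheLine *line, uint32_t addr, uint32_t value) {
    line->data = value;
    memory_write(addr, value);      /* main memory always stays consistent */
}

/* Write-back: update only the cache and mark the line dirty; main memory
   is updated later, when the line is displaced from the cache. */
void write_back(CacheLine *line, uint32_t value) {
    line->data  = value;
    line->dirty = true;
}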
UNIT 5 IMPORTANT QUESTION AND ANSWERS

QUESTION BANK

1. Explain the role of peripheral devices in a computer system.

Answer: Peripheral devices are external hardware components that are connected to the
computer system to expand its functionality. These devices can be classified into input
devices (e.g., keyboard, mouse), output devices (e.g., printer, monitor), and storage devices
(e.g., hard drives, flash drives). They allow users to interact with the computer and store data.
Communication between the computer and peripheral devices occurs via ports and interfaces.

2. What is a port in computer systems, and what are the types of ports commonly
used?

Answer: A port is a physical or logical connection interface through which data is transferred
between the computer and peripheral devices. Common types of ports include:

Serial Ports: Used for data transmission one bit at a time (e.g., RS-232).

Parallel Ports: Transfers multiple bits at once (e.g., printer port).

USB (Universal Serial Bus): A versatile port used for data transfer and charging.

Ethernet Ports: Used for network communication.

3. Describe the function of an interrupt in a computer system.

Answer: An interrupt is a mechanism by which a peripheral device can notify the CPU that it
requires attention. When an interrupt occurs, the CPU temporarily halts its current operations
and jumps to a special interrupt service routine (ISR) to handle the interrupt. After the
interrupt is serviced, control is returned to the CPU's original task. Interrupts are used to
manage real-time events, like keyboard presses or data availability from peripherals.

4. What is Direct Memory Access (DMA), and how does it work?

Answer: Direct Memory Access (DMA) is a technique that allows peripheral devices to
directly transfer data to and from the system's memory, bypassing the CPU. This improves
the system's efficiency by freeing up the CPU from handling large data transfers. DMA
involves a DMA controller that manages memory addresses and controls the data transfer
between the peripheral and memory.

5. What is the difference between polling and interrupts in handling peripheral


devices?
Answer: Polling and interrupts are two methods for handling communication between the
CPU and peripheral devices.

Polling: The CPU periodically checks the status of the peripheral to see if it requires
attention. It is inefficient as it wastes CPU time.

Interrupts: The peripheral device interrupts the CPU to request attention, allowing the CPU
to perform other tasks until an interrupt occurs. Interrupts are more efficient and responsive.
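
A busy-wait polling loop looks like the following in C. The device register addresses and ready bit are hypothetical, as on a memory-mapped device:

#include <stdint.h>

#define DEV_STATUS ((volatile uint8_t *)0x4000F000u)  /* hypothetical address */
#define DEV_DATA   ((volatile uint8_t *)0x4000F004u)
#define READY_BIT  0x01u

/* Polling: the CPU spins, repeatedly checking the status register */
uint8_t poll_read(void) {
    while ((*DEV_STATUS & READY_BIT) == 0) {
        /* busy-wait: CPU time is wasted here - the inefficiency
           that interrupts avoid */
    }
    return *DEV_DATA;
}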

Long Answer Questions

6. Describe the various types of peripheral devices, their functions, and provide examples for each.

Answer: Peripheral devices are hardware components connected to the central processing
unit (CPU) of a computer system, which enhance its capabilities by enabling input, output,
storage, and communication. They are essential for the computer to interact with the user and
other systems.

Input Devices: These devices allow users to input data or commands into the computer
system. Examples include:

Keyboard: One of the most essential input devices, the keyboard allows the user to type text
and execute commands through various keys. It is used for tasks like typing documents,
browsing the web, and controlling computer programs.

Mouse: A pointing device that translates physical movement into screen cursor movement. It
is used for interacting with graphical user interfaces (GUIs) to select items, drag files, and
interact with applications.

Scanner: A device that converts physical documents (like images and text) into digital
format for easy editing, storing, or sharing.

Microphone: Used for capturing audio, the microphone records sound and converts it into
digital signals for processing in audio applications, video conferencing, or voice recognition
systems.

Output Devices: These devices display or produce information from the computer for user
consumption. Examples include:

Monitor: The monitor displays visual output, including text, images, and videos, allowing
users to interact with and visualize data generated by the computer.

Printer: Printers convert digital data into physical documents, including text, images, or
photographs. Common types include inkjet, laser, and 3D printers.
Speakers: Output audio signals, converting digital sound data into audible sound. Speakers
are crucial for media playback, system notifications, and communication.

Storage Devices: Storage devices are used to save data for long-term or temporary use. They
store operating system files, application software, user data, and other digital information.
Examples include:

Hard Disk Drive (HDD): A traditional storage device that uses spinning disks coated with
magnetic material to read and write data. It provides large storage capacity at a lower cost but
is slower compared to newer technologies.

Solid-State Drive (SSD): A newer form of storage that uses flash memory chips to store
data. SSDs are faster, more durable, and consume less power than HDDs, making them a
preferred choice for high-performance computing.

USB Flash Drive: A portable device that connects via USB ports, providing easy and fast
data transfer and storage. USB flash drives are widely used for transferring files between
computers and devices.

Communication Devices: Communication devices facilitate data exchange between


computers and other systems, enabling networking and internet access. Examples include:

Network Interface Card (NIC): A hardware component that connects a computer to a


network, allowing data to flow between the computer and other devices on the network, such
as servers, printers, and other workstations.

Modem: A device that converts digital signals from a computer into analog signals for
transmission over telephone lines and vice versa, enabling internet connectivity via
broadband or dial-up connections.

Bluetooth Adapter: This device enables wireless communication between the computer and
other Bluetooth-enabled devices, such as headphones, smartphones, or wireless mice.

Peripheral devices are typically connected to the computer via various ports such as USB,
HDMI, Ethernet, and audio jacks. These ports manage the communication between the
devices and the computer, ensuring smooth operation and data transfer.

7. Explain the process of handling interrupts, including the different types of interrupts
and the interrupt handling mechanism in a computer system.

Answer: Interrupts are critical for efficient multitasking in computer systems. They allow
peripheral devices, hardware components, or software to temporarily interrupt the CPU’s
current operations and request attention. Interrupt handling enables the CPU to respond to
real-time events promptly without constantly checking the status of all devices. The
mechanism of interrupt handling is carefully designed to ensure the system operates
smoothly.
Interrupt Handling Process:

Interrupt Request (IRQ): When a peripheral device (e.g., keyboard, disk drive) requires
attention from the CPU, it sends an interrupt request (IRQ). This request can be triggered by
an event like a button press, data arrival, or completion of a task.

Interrupt Acknowledgment: Upon receiving an interrupt, the CPU temporarily halts its
current task and acknowledges the interrupt. The CPU then identifies which device or event
generated the interrupt through an interrupt vector.

Interrupt Service Routine (ISR): After identifying the source, the CPU transfers control to
the interrupt service routine (ISR), a specific block of code designed to handle the interrupt.
The ISR executes the necessary operations, such as processing data, sending a signal, or
updating system states.

Context Saving and Restoration: Before executing the ISR, the CPU saves its current
execution context, such as register values and the program counter, to ensure it can resume
its previous task after the interrupt is handled. Once the ISR completes, the context is
restored, and the CPU resumes its interrupted task.

Types of Interrupts:

Hardware Interrupts: Generated by hardware devices like the keyboard or network


interface card. These interrupts are often triggered by external events, such as a key press or
completion of a disk read operation.

Maskable Interrupts (IRQ): These interrupts can be delayed or ignored by the CPU if
necessary. This allows the CPU to prioritize critical tasks and ignore non-urgent interrupts.

Non-Maskable Interrupts (NMI): These interrupts cannot be ignored or delayed and are
typically used for critical hardware errors, such as power failures or memory errors. NMIs
demand immediate attention to prevent system failures.

Software Interrupts: Generated by software programs when they need to request a system
service, such as file I/O or memory allocation. These are typically used for system calls and
other high-level operations.

External Interrupts: These occur due to external factors, like a power failure, system
overheat, or hardware malfunction. External interrupts may require urgent intervention to
protect data integrity.

The interrupt system enables the CPU to manage multiple tasks concurrently, ensuring that
time-sensitive operations, such as responding to user input or handling real-time data, can be
executed without delay.
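
The dispatch step — mapping an interrupt number to its ISR — is often implemented as a table of function pointers. A minimal C sketch follows; the vector count and registration interface are hypothetical:

#include <stdint.h>
#include <stddef.h>

#define NUM_VECTORS 16

typedef void (*Isr)(void);

static Isr vector_table[NUM_VECTORS];   /* interrupt vector table */

void register_isr(unsigned irq, Isr handler) {
    if (irq < NUM_VECTORS)
        vector_table[irq] = handler;
}

/* Called when interrupt `irq` fires; in a real system the context
   save/restore described above happens around this call. */
void dispatch_interrupt(unsigned irq) {
    if (irq < NUM_VECTORS && vector_table[irq] != NULL)
        vector_table[irq]();            /* run the ISR */
}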
8. Explain Direct Memory Access (DMA) and its advantages over traditional data transfer
methods. Include the types of DMA and real-world applications.

Answer: Direct Memory Access (DMA) is a technique that allows peripherals to directly
transfer data to and from memory, bypassing the CPU. This improves efficiency, as the CPU
is freed from managing each data transfer, allowing it to perform other tasks while data is
being transferred.

In traditional data transfer methods, such as programmed I/O (PIO), the CPU is responsible
for reading and writing data from peripheral devices, which can be inefficient. DMA
optimizes this process by allowing peripherals to directly access the system’s memory
through a DMA controller.

DMA Process:

The DMA controller manages the data transfer process, which involves specifying source
and destination addresses, the amount of data to transfer, and other parameters. Once the
transfer is complete, the DMA controller sends an interrupt to notify the CPU.

Advantages of DMA:

Efficiency: DMA reduces the CPU's workload by offloading data transfer tasks. This enables
the CPU to focus on other computational tasks, improving overall system performance.

Speed: Since DMA transfers data directly between the peripheral and memory, it is faster
than traditional methods, which involve CPU intervention at each step.
Lower Latency: DMA allows for faster data transfers, minimizing delays and ensuring quick
response times, which is especially important for real-time applications like multimedia or
data acquisition systems.

Reduced CPU Overhead: By handling data transfer autonomously, DMA significantly


reduces the number of instructions the CPU needs to execute for peripheral communication.

Types of DMA:

Burst Mode DMA: In this mode, the DMA controller takes control of the system bus for a
brief period and transfers a block of data at once. The CPU is paused during this time.

Cycle Stealing DMA: The DMA controller transfers one word of data at a time, releasing
control of the system bus to the CPU after each transfer. The CPU is interrupted frequently
but for a very short duration.

Block Mode DMA: The DMA controller transfers data in blocks and only releases control of
the system bus after completing a block. The CPU can resume normal operation during block
transfers.

Demand Mode DMA: DMA accesses the bus only when necessary, allowing the CPU to
control the bus during idle periods.

Real-World Applications of DMA:

Multimedia Systems: Video and audio data transfers require high-speed data movement.
DMA is used in video streaming, audio recording, and multimedia playback to move large
amounts of data efficiently.

Networking: Network interface cards (NICs) use DMA to transfer packets of data from the
network directly to memory, enabling faster network communication without overwhelming
the CPU.

Data Acquisition Systems: Devices such as sensors or oscilloscopes rely on DMA to


transfer real-time data directly to memory for analysis, ensuring minimal delay and high
throughput.
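
Programming a DMA controller typically means filling a handful of registers and setting a start bit. The register layout and base address below are hypothetical — real controllers differ — but the shape of the operation is representative:

#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers */
typedef struct {
    volatile uint32_t src;      /* source address      */
    volatile uint32_t dst;      /* destination address */
    volatile uint32_t count;    /* number of bytes     */
    volatile uint32_t control;  /* bit 0 = start       */
    volatile uint32_t status;   /* bit 0 = done        */
} DmaRegs;

#define DMA ((DmaRegs *)0x40001000u)    /* hypothetical base address */

void dma_copy(uint32_t src, uint32_t dst, uint32_t nbytes) {
    DMA->src     = src;
    DMA->dst     = dst;
    DMA->count   = nbytes;
    DMA->control = 1u;                  /* start the transfer */
    /* The CPU is now free; completion is signalled by an interrupt
       or by polling the status register. */
}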

9. Discuss the different types of ports used in modern computer systems, their purposes,
and data transfer capabilities.

Answer: Ports are essential connectors used in modern computer systems to link various
peripherals and allow data exchange between devices. Each type of port has specific
functions and varying data transfer speeds. As technology evolves, newer ports offer faster
speeds and more versatile functionalities.

Types of Ports:
USB Ports: The Universal Serial Bus (USB) is one of the most common types of ports used
for connecting peripheral devices such as keyboards, mice, printers, storage devices, and
smartphones.

USB 2.0: Provides data transfer speeds of up to 480 Mbps. It is widely used for devices that
do not require high data transfer rates.

USB 3.0/3.1: Offers much higher speeds, up to 5–10 Gbps, allowing for faster data transfers,
especially for external storage devices like hard drives and SSDs.

USB-C: The newest version of USB, featuring a reversible connector and support for fast
data transfer up to 40 Gbps (with Thunderbolt 3). USB-C is also used for charging devices,
video output, and connecting external displays.

HDMI Ports: The High-Definition Multimedia Interface (HDMI) is commonly used for
connecting monitors, televisions, projectors, and other displays.

HDMI 2.0: Supports 4K video at 60 Hz with bandwidth up to 18 Gbps.

HDMI 2.1: An updated version supporting 8K resolution and higher data rates, up to 48
Gbps, to accommodate high-definition video and audio.

Ethernet Ports: Used for network connections, Ethernet ports allow wired internet and LAN
connections. They are commonly found in computers, routers, and switches. Ethernet speeds
range from 10 Mbps (old standards) to 100 Gbps (modern standards).

Audio Jacks: These ports are used to connect audio devices such as speakers, headphones,
microphones, and other sound-related equipment. They typically include a 3.5mm jack for
analog audio, optical audio ports, and digital audio interfaces for high-fidelity sound
output.

Thunderbolt Ports: Thunderbolt is a high-speed connection standard primarily used for


high-performance peripherals like external hard drives, displays, and docking stations.
Thunderbolt 3 and 4 can transfer data at up to 40 Gbps, offering fast speeds for both data and
video output.

These ports connect a variety of devices to the computer, with each port having different data
transfer capabilities based on its intended use. The choice of port depends on factors like the
type of device, required speed, and compatibility with other devices.

10. Explain the concept of Ports and Addressing in Direct Memory Access (DMA), and
how it facilitates efficient data transfer in modern computer systems.

Answer: Direct Memory Access (DMA) is a method that allows peripherals to transfer data
directly to memory, bypassing the CPU to increase the efficiency of data transfers. A crucial
part of this system is the DMA controller, which manages the data transfer
process by controlling the bus and memory addressing.

In the DMA system, both ports and addressing are essential components for facilitating data
transfers. Ports refer to the physical or logical connectors used by devices to communicate
with the system, while addressing refers to how the memory locations are identified for data
transfer. Together, they streamline data exchange by minimizing CPU intervention and
enabling higher performance in data processing.

Ports in DMA:

In DMA systems, ports act as interfaces between the CPU, the memory, and external
peripherals. These ports manage communication, allowing data to flow in and out of the
system. For example, the DMA controller interfaces with memory through specific system
ports that are configured for direct communication, such as the memory bus or specific I/O
ports for the DMA channel. These channels are set up to transmit data from a peripheral
device directly to memory, and vice versa, using these ports.

I/O Ports: DMA uses I/O ports for communication with devices like hard drives, network
adapters, or sound cards. For example, when a network interface card (NIC) is transferring
data to memory, it uses specific I/O ports for data transfer. DMA-controlled ports reduce
CPU overhead by managing these transfers autonomously, freeing the CPU for other tasks.

Addressing in DMA:

Memory Addressing: Addressing in DMA refers to how memory locations are specified for
data transfer. The DMA controller, when initiated, knows the source address (where the data
is located) and the destination address (where the data is to be stored). The DMA controller
manages the memory addresses involved in the transfer without involving the CPU, which
significantly speeds up the data transfer process.

The address bus and data bus play crucial roles in DMA. The DMA controller sets up the
memory addresses using these buses to directly write or read data from memory. It accesses
the system’s memory, selects the appropriate addresses, and transfers data efficiently.

How DMA Facilitates Efficient Data Transfer:

Minimizes CPU Involvement: By offloading the data transfer task to the DMA controller,
the CPU is free to perform other operations, thus improving the overall efficiency of the
system.

High-Speed Data Transfer: Since DMA allows direct access to memory, data can be
transferred faster than traditional methods, which require the CPU to handle every read/write
operation.
Real-Time Data Handling: In scenarios like video streaming, network data transfers, or
sensor data collection, DMA enables faster and more efficient data handling, which is crucial
for maintaining real-time performance.

Types of DMA (Burst Mode, Cycle Stealing, etc.): Depending on the nature of the data
transfer (high priority or low priority), different types of DMA modes are employed. For
example, burst mode allows quick data transfer in large chunks, while cycle stealing
permits the CPU to take over the bus after each data cycle, enabling more balanced CPU
usage.

Real-World Applications:

Networking: DMA is extensively used in network cards to directly transfer incoming and
outgoing data packets to memory, bypassing the CPU. This is essential for high-speed
network communication in servers, routers, and other devices.

Multimedia Processing: In multimedia systems, such as video streaming or audio


processing, DMA allows for the direct transfer of large video or audio files from storage
devices to memory, ensuring high data throughput and smooth playback.

Embedded Systems: Many embedded systems, such as sensors, medical devices, and
industrial equipment, rely on DMA for fast, real-time data collection and transfer.

In conclusion, ports and addressing in DMA work in tandem to provide efficient and high-
speed data transfers in modern computer systems. This leads to optimized CPU performance,
reduced data transfer times, and the ability to handle more complex operations, making DMA
an essential component in high-performance computing systems.

(Rest of the questions will be provided soon.)
