COA 2
Interrupts are signals sent to the processor by hardware devices (e.g., keyboard, printer) when they require attention. Unlike program-controlled I/O, where the processor
continuously checks the status of a device (wasting resources), interrupts allow the processor to perform other tasks while waiting for the device to be ready.
1. Interrupt Request (IRQ): An I/O device sends a signal to the processor (via the interrupt-request line) when it needs service.
2. Interrupt Service Routine (ISR): When an interrupt occurs, the processor stops executing its current task and jumps to a special piece of code called the Interrupt
Service Routine (ISR) to handle the interrupt.
3. Saving Program State: Before transferring control to the ISR, the processor saves the current state (Program Counter, register values) to memory or a stack. This is
important because once the ISR is completed, the processor will need to resume its previous task.
4. Return from Interrupt: After the ISR completes, the processor restores the saved state and continues from where it left off.
o An interrupt request (IRQ) is a signal sent by an I/O device to the processor when it requires attention. The processor can continue performing other tasks until
an interrupt request is received, at which point it needs to handle the request.
o This allows the processor to avoid continuously checking the device status, freeing it up to perform other operations.
o The Interrupt Service Routine (ISR) is a special piece of code that handles the interrupt once it occurs. When an interrupt is triggered, the processor stops
executing its current instructions and transfers control to the ISR.
o The ISR is specific to the type of interrupt and handles the appropriate task (e.g., reading data from an I/O device, handling an error, etc.).
o Before the processor transfers control to the ISR, it saves the program state (including the Program Counter, register values, and condition flags) to memory or a
stack. This is critical because when the ISR completes, the processor must resume the task it was executing prior to the interrupt.
o By saving the program state, the processor ensures that it can return to its exact state after interrupt handling, maintaining the integrity of the original program
execution.
o After the ISR completes its task, the processor retrieves the saved state from memory or the stack and restores it. This allows the processor to resume
execution from the exact point where it left off before the interrupt occurred.
o The processor typically uses a Return from Interrupt (RTI) instruction to complete the process and return to normal program execution.
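To make the save/restore sequence concrete, here is a minimal C sketch that simulates the state saving and the RTI step; the cpu_state layout, stack, and isr_keyboard handler are invented for illustration, not a real architecture's mechanism.

```c
/* Minimal sketch of interrupt state save/restore, simulating the CPU's
 * stack-based mechanism. All names (cpu_state, stack, isr_keyboard) are
 * hypothetical illustrations, not a real architecture's interface. */
#include <stdio.h>

struct cpu_state {
    unsigned pc;        /* Program Counter */
    unsigned regs[4];   /* general-purpose registers */
    unsigned flags;     /* condition flags */
};

static struct cpu_state stack[8];   /* simulated memory stack */
static int sp = 0;                  /* stack pointer */

static void isr_keyboard(void) {
    printf("ISR: servicing keyboard interrupt\n");
}

int main(void) {
    struct cpu_state cpu = { .pc = 0x100, .regs = {1, 2, 3, 4}, .flags = 0x2 };

    /* 1. Interrupt arrives: save the full state before entering the ISR. */
    stack[sp++] = cpu;

    /* 2. Jump to the ISR (its address would come from hardware/vector). */
    cpu.pc = 0x8000;
    isr_keyboard();

    /* 3. Return from interrupt: restore the saved state (the RTI step). */
    cpu = stack[--sp];
    printf("Resumed at PC=0x%X with flags=0x%X\n", cpu.pc, cpu.flags);
    return 0;
}
```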
In an interrupt system, hardware is used to notify the processor when a device needs attention. Here's how it works:
• Interrupt-Request Line:
o A single line is used to communicate interrupt requests from multiple devices to the processor. This line is called the interrupt-request line.
o Each device is connected to this line via a switch that can connect the line to ground (low voltage).
o Normally, the interrupt-request line is high (Vdd) when no device is requesting an interrupt.
o When a device needs attention, it closes its switch, pulling the line to low (0). This triggers an interrupt signal, alerting the processor.
o The interrupt line works like an OR function: if any device sends a request, the line becomes low.
o To control the interrupt line, open-drain (for MOS circuits) or open-collector (for bipolar circuits) gates are used.
o These gates act like a switch: when the output is low (0), the gate closes the switch, pulling the interrupt line low. When the output is high (1), the switch is
open, allowing the pull-up resistor to keep the line high.
• Pull-Up Resistor:
o The pull-up resistor ensures that the interrupt-request line is high (Vdd) when no devices are requesting an interrupt.
o When no device is pulling the line low, the pull-up resistor pulls the line back to its normal high state (Vdd).
Summary:
• Devices pull the line low when they need attention, triggering an interrupt.
• The pull-up resistor ensures the line is high when no device is requesting an interrupt.
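The wired-line behavior can be summarized in a few lines of C; this is a simulation sketch, with the requesting[] array standing in for the devices' switches.

```c
/* Sketch of the open-drain interrupt-request line: the line stays high
 * (pulled up to Vdd) unless at least one device closes its switch and
 * pulls it low. Device states here are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_DEVICES 3

/* Each entry is true when that device is requesting an interrupt. */
static bool requesting[NUM_DEVICES] = { false, true, false };

static bool irq_line_level(void) {
    for (int i = 0; i < NUM_DEVICES; i++)
        if (requesting[i])
            return false;   /* any closed switch pulls the line low (0) */
    return true;            /* pull-up resistor keeps the line high (Vdd) */
}

int main(void) {
    printf("INTR line is %s\n", irq_line_level() ? "high (idle)" : "low (interrupt!)");
    return 0;
}
```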
1. Polling
Concept:
Polling is a technique where the processor repeatedly checks each device to see if it has an interrupt request (IRQ). It is a straightforward method but can be inefficient, especially
when dealing with a large number of devices or when interrupts are rare.
Process:
• Each device in the system has a status register with an IRQ bit.
• When a device needs attention, it sets its IRQ bit to "1," signaling an interrupt request.
• The processor regularly checks the IRQ bit for each device, one by one, in a loop.
• Once the processor finds that a device has raised an interrupt (i.e., its IRQ bit is set), it invokes the corresponding interrupt service routine (ISR) to handle the interrupt.
• After the ISR completes, the processor continues checking other devices for interrupts.
Example:
If there are three devices connected to the system, the processor will check the IRQ status of each device in a loop:
1. If Device 1’s IRQ bit is set, service the interrupt and call its ISR.
2. If Device 2’s IRQ bit is set, service the interrupt and call its ISR.
3. If Device 3’s IRQ bit is set, service the interrupt and call its ISR.
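The polling loop itself is short; the following C sketch assumes hypothetical memory-mapped status registers (modeled here as a plain array) and an IRQ_BIT mask.

```c
/* Sketch of a polling loop. The device table, IRQ_BIT mask, and the ISR
 * are hypothetical; real hardware would expose memory-mapped status
 * registers instead of these plain variables. */
#include <stdio.h>

#define IRQ_BIT 0x01
#define NUM_DEVICES 3

static unsigned status_reg[NUM_DEVICES] = { 0x00, IRQ_BIT, 0x00 };

static void isr(int dev) {
    printf("Servicing device %d\n", dev);
    status_reg[dev] &= ~IRQ_BIT;   /* clear the IRQ bit after service */
}

int main(void) {
    /* The processor checks each device's IRQ bit in turn. */
    for (int dev = 0; dev < NUM_DEVICES; dev++) {
        if (status_reg[dev] & IRQ_BIT)
            isr(dev);   /* invoke the matching service routine */
    }
    return 0;
}
```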
Pros:
• Low Hardware Requirements: Does not require complex hardware or special configurations.
Cons:
• Inefficient: The processor wastes cycles checking devices that do not need attention, leading to wasted processing time.
• Slow Response Time: If interrupts are frequent, the system could become slow in responding to urgent interrupts, as the processor is busy polling other devices.
• Waste of Resources: The processor is tied up checking the status of devices even when no interrupts are pending.
2. Vectored Interrupts
Concept:
In a vectored interrupt system, each device has a unique identifier, often referred to as an interrupt vector, which allows the processor to directly jump to the appropriate
interrupt service routine (ISR) when an interrupt occurs. The interrupt vector is typically a number that points to the starting address of the ISR.
Vectored Interrupts:
• Device Identification: The device identifies itself to the processor.
• Special Code: The device sends a special code to the processor over the bus.
• Single Interrupt Line: Multiple devices can share one interrupt line and still be recognized.
• ISR Address: The code may indicate the starting address of the device's interrupt service routine (ISR).
• Code Length: The code is typically 4 to 8 bits long.
• Processor's Role: The processor supplies the remaining address bits from its memory area reserved for ISRs.
Process:
• When a device raises an interrupt, it also sends an interrupt vector to the processor.
• The interrupt vector identifies which device has raised the interrupt.
• The processor uses the vector to look up the address of the ISR associated with that device.
• The processor then immediately jumps to the ISR, handling the interrupt.
• There is no need for the processor to poll each device. The interrupt vector allows the processor to directly execute the correct ISR for the interrupting device.
Example:
If Device 1 raises an interrupt, it might send the vector 0x01 to the processor, indicating that the ISR for Device 1 is located at address 0x1000. The processor will then jump to
address 0x1000 to handle the interrupt.
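A minimal sketch of vectored dispatch in C, assuming a hypothetical vector table of function pointers; real hardware stores ISR starting addresses rather than C function pointers.

```c
/* Sketch of vectored dispatch: the vector indexes a table of ISR
 * addresses, so the processor jumps directly to the right handler.
 * The table contents and vector value are illustrative only. */
#include <stdio.h>

static void isr_device1(void) { printf("Handling Device 1 at its ISR\n"); }
static void isr_device2(void) { printf("Handling Device 2 at its ISR\n"); }

/* The interrupt vector table: entry i holds the ISR address for vector i. */
static void (*vector_table[])(void) = { 0, isr_device1, isr_device2 };

int main(void) {
    unsigned vector = 0x01;        /* vector sent by the interrupting device */
    vector_table[vector]();        /* direct jump: no polling needed */
    return 0;
}
```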
Advantages:
• Faster Response Time: Since the processor directly jumps to the appropriate ISR, it avoids the overhead of polling each device.
• Efficient Handling: The processor doesn't need to check each device, making interrupt handling more efficient.
• Flexibility: Devices can identify themselves, and the processor can service interrupts from different devices more effectively.
Process Details:
• When the interrupt request is raised, the processor sends an interrupt-acknowledge (INTA) signal to the interrupting device.
• The device responds by sending the interrupt vector over the data bus to the processor.
• The processor uses this vector to jump directly to the correct ISR.
• The interrupt vector typically includes the device ID or some form of addressing scheme to directly access the ISR location.
Interrupt Nesting
Concept: Interrupt nesting allows an interrupt service routine (ISR) for a higher-priority interrupt to preempt the execution of a lower-priority ISR. This technique ensures that
critical tasks are processed immediately, even if the processor is already handling another interrupt.
How It Works:
o When an interrupt occurs, the processor stops executing its current instructions and jumps to the ISR for the interrupting device.
o Typically, interrupts are disabled during ISR execution to avoid simultaneous interruptions. With interrupt nesting, however, the processor can temporarily suspend the current ISR, service a higher-priority interrupt, and then resume the suspended routine.
Implementation of Interrupt Priority Using Individual Interrupt Request and Acknowledgment Lines
In systems with interrupt nesting, a priority scheme is used to organize I/O devices based on priority levels. The processor can adjust its priority dynamically while executing an
ISR. This mechanism helps avoid delays in servicing time-sensitive devices, like real-time clocks, while another interrupt is being processed.
Key Concepts:
1. Interrupt Request:
o Each device has its own interrupt-request (IRQ) line and asserts it when it needs service.
2. Priority Check:
o The processor checks the priority of the incoming interrupt request against its current priority level.
o If the request has a higher priority than the processor's current level, the interrupt is accepted and the processor’s priority is raised.
3. Interrupt Acknowledgment:
o The processor acknowledges the interrupt by asserting the corresponding interrupt-acknowledge (IAK) line for that device.
4. ISR Execution:
o The processor executes the ISR for the device with the highest priority.
5. Priority Arbitration:
o The processor adjusts its priority during ISR execution based on the interrupt being processed.
o Interrupts from devices with higher priority than the current processor priority can preempt the ongoing ISR.
o If another higher-priority interrupt occurs during the ISR, the processor suspends the current ISR and starts servicing the new interrupt; once the higher-priority ISR is complete, the processor returns to the previous ISR.
6. End of Interrupt:
o The processor deactivates the interrupt-acknowledgment line and the interrupt-request line for the device that initiated the interrupt, signaling the end of the
interrupt.
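The priority check in step 2 can be sketched in C as follows; the priority levels and device numbers are invented for illustration.

```c
/* Sketch of the priority check in interrupt nesting: a request is
 * accepted only if its priority exceeds the processor's current level,
 * and accepting it raises that level. Numbers are illustrative. */
#include <stdio.h>

static int cpu_priority = 0;   /* current processor priority level */

static void request_interrupt(int device, int priority) {
    if (priority > cpu_priority) {
        int saved = cpu_priority;   /* save level (part of saved state) */
        cpu_priority = priority;    /* raise priority while in the ISR */
        printf("Preempting: servicing device %d at priority %d\n",
               device, priority);
        cpu_priority = saved;       /* restore on return from interrupt */
    } else {
        printf("Device %d (priority %d) must wait\n", device, priority);
    }
}

int main(void) {
    cpu_priority = 2;              /* pretend an ISR at level 2 is running */
    request_interrupt(1, 1);       /* lower priority: deferred */
    request_interrupt(2, 3);       /* higher priority: preempts */
    return 0;
}
```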
Interrupt Nesting (Summary):
• Execution Continuity: Without nesting, once an ISR starts, it runs to completion before another interrupt is accepted.
• Preventing Delays: Nesting avoids long delays that could lead to errors.
o Example: Important for accurate timekeeping by a real-time clock.
• Priority Structure: Devices are organized by priority levels.
o High-Priority Handling: High-priority interrupts can interrupt lower-priority tasks.
Advantages:
• Critical Task Handling: Interrupt nesting ensures that high-priority tasks are not delayed by lower-priority tasks. It guarantees that critical operations are handled as soon
as they arise.
• Efficient Processor Use: It allows the processor to focus on more urgent tasks while still completing lower-priority tasks once the higher-priority ones are finished.
Considerations:
• State Preservation: The processor must save its state (like registers and program counter) before switching between ISRs to avoid losing data.
• Interrupt Disabling: Interrupts may be temporarily disabled during the execution of an ISR to prevent multiple interruptions. However, interrupt nesting allows for
urgent interrupts (e.g., from time-sensitive devices) to be handled without delay.
Example:
1. Device 1 raises an interrupt, and the processor begins executing ISR 1.
2. While ISR 1 is running, higher-priority Device 2 raises an interrupt.
3. The processor pauses ISR 1, saves its state, and runs ISR 2 for Device 2.
4. Once ISR 2 finishes, the processor resumes ISR 1 from where it was interrupted.
Daisy-Chaining is a method used to handle interrupt requests from multiple devices by connecting them in series, like a chain. The devices pass the interrupt signal along the chain to the processor in an orderly manner, allowing the processor to prioritize and manage interrupt requests efficiently.
How It Works:
• Devices are arranged in a sequence, with each device connected to the next.
• The processor starts by checking the device closest to it (the highest priority).
• If this device does not require service, the interrupt signal is passed along to the next device in the chain.
• When a device signals that it needs service (i.e., an interrupt request), the processor handles the request and stops checking the remaining devices.
• Priority grouping: Devices are grouped into categories based on their priority level (e.g., high, medium, low).
• Each group of devices is connected in a daisy chain, ensuring that devices within the same group are processed sequentially.
• The processor can then handle interrupts based on the priority of the device groups, ensuring that higher-priority devices are addressed before lower-priority ones.
• This approach allows for a more organized and efficient way to manage interrupts, preventing lower-priority devices from interrupting critical tasks.
Key Points:
• Efficiency: Devices that don't need service pass the interrupt along, reducing unnecessary processing for the processor.
• Priority Management: The daisy-chaining scheme allows devices to be prioritized, ensuring critical devices are handled first.
• Polling and Passing: The interrupt signal is passed through the chain until it reaches a device that needs service.
1. Daisy-Chaining Technique:
In a daisy-chaining setup:
o The processor's acknowledge signal travels along the chain, reaching each device in turn.
o If the device does not need service, it passes the interrupt signal to the next device in the chain.
o If the device needs service, it holds the signal and informs the processor to service it.
o This process continues until the processor finds the device that requires service.
Key Features:
• Devices closer to the processor have higher priority because they are checked first.
• The chain stops at the first device needing service, making the system efficient.
2. Priority Groups in Daisy-Chaining:
To handle simultaneous interrupt requests from devices with varying importance, devices are grouped based on priority levels.
1. Grouping Devices:
o Devices are grouped by priority (e.g., a high-priority group and a low-priority group).
o If no device in the high-priority group needs service, the processor moves to the next (lower-priority) group.
2. Within-Group Checking:
o Within a group, devices follow the standard daisy-chaining process (checked sequentially).
3. Arbitration Mechanism:
o Priority arbitration ensures that high-priority interrupts are serviced before lower-priority ones, regardless of their position in the chain.
Example Scenario:
• If both groups send interrupt requests simultaneously, the processor first checks and handles the high-priority group before moving to the low-priority group.
Hardware Daisy Chain (Interrupt Acknowledge):
• Concept: Devices are connected in a physical chain, with the priority determined by their position in the chain.
• Process (see the sketch after this list):
1. When an interrupt occurs, the processor sends an Interrupt Acknowledge (INTA) signal down the chain.
2. Each device checks whether it has a pending interrupt request.
▪ If yes, it responds to the INTA signal and identifies itself to the processor.
▪ If no, it passes the INTA signal to the next device in the chain.
3. The first device in the chain with a pending interrupt gets serviced.
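A minimal C simulation of the INTA propagation described above; the pending[] array is a stand-in for the devices' interrupt-request flip-flops.

```c
/* Sketch of daisy-chained interrupt acknowledgment: the INTA signal
 * travels device by device; the first device with a pending request
 * claims it. The pending[] array is an invented stand-in for hardware. */
#include <stdbool.h>
#include <stdio.h>

#define CHAIN_LEN 4

/* pending[i] is true when device i (i = 0 is closest to the processor,
 * i.e. highest priority) has an interrupt pending. */
static bool pending[CHAIN_LEN] = { false, false, true, true };

int main(void) {
    /* The processor sends INTA to device 0; each device either claims
     * it or passes it along to the next device in the chain. */
    for (int dev = 0; dev < CHAIN_LEN; dev++) {
        if (pending[dev]) {
            printf("Device %d claims INTA and identifies itself\n", dev);
            break;                 /* chain stops at the first requester */
        }
        printf("Device %d passes INTA along\n", dev);
    }
    return 0;
}
```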
• Advantages:
o No need for additional software routines to determine the source of the interrupt.
o Fast identification of the interrupting device.
• Disadvantages:
o Priority is fixed by physical position in the chain; devices farther from the processor always have lower priority.
Software Polling of the Shared Line:
• Concept: The processor queries devices on the shared interrupt line in a predefined order to identify the source of the interrupt.
• Process:
1. The processor detects an interrupt on the shared line.
2. It reads each device's status register in the predefined order.
3. The first device to respond positively is serviced, and the rest are ignored for that interrupt cycle.
• Advantages:
o Simple hardware; the polling order (and hence the priority) can be changed in software.
• Disadvantages:
o Slower response time, as the processor must query each device sequentially.
Interrupts allow devices to alert the processor for attention asynchronously. However, during certain critical operations, interrupts might need to be disabled temporarily to
maintain system stability and avoid repeated or conflicting requests. Here's a detailed explanation of enabling and disabling interrupts, along with the associated steps:
1. Enabling Interrupts
Interrupts are enabled when the system needs to handle requests from devices. The processor is then ready to receive and process interrupt requests.
• Processor Readiness:
o The interrupt-enable bit in the Processor Status Register (PSR) is set to 1. This allows the processor to respond to interrupt requests.
• Device Request:
o Devices connected to the system can raise interrupt requests when they need attention.
o When an interrupt occurs, the processor saves the current execution state, suspends its current task, and jumps to the ISR to service the interrupt.
• Post-ISR Execution:
o Once the ISR completes its task, interrupts remain enabled, and the processor can handle subsequent requests.
2. Disabling Interrupts
Interrupts are disabled when the processor must perform critical tasks without interruptions or when an ISR is already in progress.
• During ISR Entry:
o The processor may disable interrupts automatically when it starts executing an ISR, by clearing the interrupt-enable bit in the PSR.
o Alternatively, the first instruction in the ISR disables further interrupts using an Interrupt-disable command.
o Either way, this prevents new interrupts from interfering while the current ISR is being executed.
• Re-Enabling:
o Interrupts are re-enabled after the ISR completes execution, using an Interrupt-enable command or automatically during the return-from-interrupt instruction.
Typical sequence (a minimal sketch follows):
1. A device raises an interrupt request.
2. The processor temporarily disables further interrupts, ensuring nothing interferes with the current ISR execution.
3. The ISR services the device.
4. The processor re-enables interrupts to allow handling of new requests.
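A minimal sketch of this sequence in C, assuming a hypothetical PSR with the interrupt-enable bit at position 0.

```c
/* Sketch of the interrupt-enable bit in a Processor Status Register.
 * The PSR layout and bit position are invented for illustration. */
#include <stdio.h>

#define IE_BIT (1u << 0)           /* hypothetical interrupt-enable bit */

static unsigned psr = IE_BIT;      /* interrupts initially enabled */

static void isr(void) {
    psr &= ~IE_BIT;                /* entering ISR: disable interrupts */
    printf("In ISR, IE=%u (further interrupts masked)\n", psr & IE_BIT);
    /* ... service the device ... */
    psr |= IE_BIT;                 /* return-from-interrupt re-enables */
}

int main(void) {
    printf("Before ISR, IE=%u\n", psr & IE_BIT);
    isr();
    printf("After ISR, IE=%u (ready for new requests)\n", psr & IE_BIT);
    return 0;
}
```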
Summary:
• Edge-triggered interrupts simplify handling by responding only to the leading edge of the signal, eliminating repeated requests.
• Proper state saving and restoration ensure that the interrupted task resumes without data loss or corruption.
• Interrupts are disabled temporarily to prevent conflicts during ISR execution but are always re-enabled afterward to maintain system responsiveness.
Interrupt Enabling:
• Ensures that external devices or internal events can notify the CPU.
• After handling the interrupt, the CPU resumes the original task.
Interrupt Disabling:
• The CPU disables interrupts while performing critical tasks that must not be interrupted.
• Once the critical task is complete, interrupts are re-enabled to allow normal processing.
An exception is an event that disrupts normal program execution, requiring the processor to execute a special routine (exception-service routine). Exceptions are essential for
handling errors, debugging, and system management.
Types of Exceptions
An exception is any event that causes an interruption in the normal flow of program execution. Interrupts are a subset of exceptions. Exceptions can occur for several reasons,
including I/O requests, errors, debugging, or privilege violations. Below are the key types of exceptions:
1. Error Handling
• When an error is detected (e.g., an illegal OP-code or division by zero), the control hardware signals the processor with an interrupt.
• The processor suspends the current program and executes an exception-service routine to handle or report the error.
2. Debugging
• Debugging tools like debuggers rely on exceptions to identify and fix program errors. There are two debugging facilities:
o Trace Mode:
▪ In trace mode, an exception occurs after each instruction is executed, invoking the debugging routine so the program can be examined step by step.
o Breakpoints:
▪ The debugging routine modifies the next instruction to a software interrupt, allowing the program to stop at the desired point.
3. Privilege Exception
• Certain instructions, called privileged instructions, are restricted to the supervisor mode to prevent user programs from corrupting the system.
• Examples include altering processor priority or accessing memory allocated to other users.
• Attempting to execute a privileged instruction in user mode causes a privilege exception, switching the processor to supervisor mode to execute a corrective routine.
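A small C sketch of the mode check that triggers a privilege exception; the mode names, toy opcode rule, and corrective action are illustrative assumptions.

```c
/* Sketch of a privilege check: executing a privileged instruction in
 * user mode raises a privilege exception and switches the processor to
 * supervisor mode. Mode names and the opcode rule are illustrative. */
#include <stdbool.h>
#include <stdio.h>

enum mode { USER, SUPERVISOR };
static enum mode cpu_mode = USER;

static bool is_privileged(const char *op) {
    /* e.g. instructions that alter processor priority */
    return op[0] == 'P';           /* toy rule for this sketch */
}

static void execute(const char *op) {
    if (cpu_mode == USER && is_privileged(op)) {
        printf("Privilege exception on '%s': entering supervisor mode\n", op);
        cpu_mode = SUPERVISOR;     /* corrective routine runs here */
        return;
    }
    printf("Executed '%s'\n", op);
}

int main(void) {
    execute("ADD");                /* ordinary instruction: fine */
    execute("PRI");                /* privileged: raises the exception */
    return 0;
}
```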
1. I/O Interrupts
• Handling: Processor completes the current instruction, saves its state, and executes an interrupt-service routine.
2. Error Recovery
• Examples: an illegal OP-code or division by zero (as described above).
• Handling Steps:
1. The control hardware detects the error and signals the processor.
2. Processor saves the program state and switches to the exception-service routine.
3. Routine attempts recovery (e.g., data correction) or informs the user if recovery is not possible.
Note: The interrupted instruction often cannot be completed.
3. Debugging Exceptions
• Trace Exception:
o Raised after each instruction when trace mode is active.
o Enables inspection of registers, memory, and program flow after each step.
• Breakpoint Exception:
o A trap instruction pauses execution, allowing inspection and debugging before resuming.
4. Privilege Exceptions
• Cause:
o Occurs when a user-mode program attempts to execute privileged instructions (e.g., modifying priority levels, accessing protected memory).
• Handling:
o The OS executes an exception-service routine to address the violation and protect the system.
In each case, the processor executes the appropriate exception-service routine to resolve the issue before the program resumes (or is terminated).
Direct Memory Access (DMA) is a system that allows certain hardware components, like I/O devices, to transfer data directly to or from main memory without involving the
processor. This reduces the workload on the processor and speeds up data transfers, especially for large blocks of data.
1. DMA Controller (DMAC):
o It takes over data transfer tasks from the processor, managing memory access and bus control.
2. System Bus:
o The DMA controller uses the bus to transfer data directly between memory and I/O devices.
Direct Memory Access (DMA) is a technique used in computer systems to transfer data between an external device and the main memory without requiring continuous
involvement from the processor. This method is highly efficient for transferring large blocks of data at high speeds, as it reduces the overhead typically caused by the processor
managing each data transfer step.
The DMA process is managed by a specialized hardware component called the DMA Controller (DMAC), which is embedded in the I/O device interface. The DMA controller takes
over the responsibility of data transfer, which would otherwise require the processor’s attention. Below is a step-by-step explanation of how DMA operates:
1. Initialization by the Processor:
o The processor programs the DMA controller with the parameters of the transfer:
▪ The starting memory address where data will be read or written.
▪ The number of words to be transferred.
▪ The direction of the transfer: whether data should be read from memory or written to memory.
o Once these parameters are set, the DMA controller takes over the task.
2. Data Transfer:
o The DMA controller manages the actual data transfer between the device and the main memory. It performs the following:
▪ Automatically increments memory addresses to ensure data is stored or read in the correct sequence.
▪ Keeps track of the transfer count to ensure all specified data blocks are processed.
3. Interrupt Notification
o After the entire block of data is successfully transferred, the DMA controller signals the processor by raising an interrupt.
o This interrupt notifies the processor that the transfer is complete, allowing it to continue or resume its tasks.
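The controller's inner loop can be sketched in C; the fake device_read() source, addresses, and counts are invented for illustration.

```c
/* Sketch of the DMA controller's transfer loop: it increments the
 * memory address, decrements the word count, and raises an interrupt
 * when the count reaches zero. All names and the fake device are
 * illustrative, not a real DMAC's interface. */
#include <stdio.h>

static unsigned memory[16];

static unsigned device_read(void) {        /* fake I/O device data source */
    static unsigned next = 100;
    return next++;
}

int main(void) {
    unsigned addr = 4;      /* starting address set by the processor */
    unsigned count = 3;     /* word count set by the processor */

    while (count > 0) {
        memory[addr] = device_read();   /* one word per stolen bus cycle */
        addr++;                         /* auto-increment the address */
        count--;                        /* track remaining words */
    }
    printf("Transfer done: raising interrupt to notify the processor\n");
    printf("memory[4..6] = %u %u %u\n", memory[4], memory[5], memory[6]);
    return 0;
}
```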
Advantages of DMA
o Since the DMA controller handles the data transfer, the processor is free to execute other programs or tasks during the transfer.
o The processor can focus on more complex computations or tasks, improving the overall performance of the system.
o DMA is particularly beneficial for transferring large blocks of data, such as file copying, multimedia streaming, or buffering.
DMA Operation (Summary):
1. Initialization: The processor initiates the transfer by:
o Sending the starting memory address where data will be read or written.
o Sending the word count and the direction of the transfer.
2. Transfer: The DMA controller manages:
▪ Memory Addressing: Supplies memory addresses for the data being transferred.
▪ Counting: Tracks the number of words remaining.
3. Interrupt Notification:
o When the transfer completes, the controller interrupts the processor, which can then resume the program that requested the transfer.
o The program that initiated the transfer remains in a blocked state until the transfer is complete.
1. Starting Address Register: Holds the starting memory address for the transfer.
2. Word Count Register: Keeps track of how many words or bytes need to be transferred.
3. Status and Control Register: Manages and monitors the DMA operation:
o R/W Bit: Specifies the transfer direction (1 = read from memory, 0 = write to memory).
o Interrupt Enable (IE) Bit: Enables the DMA controller to send an interrupt after completion.
DMA Controller Registers (see the diagram in the module questions PDF)
DMA controllers use a set of registers to manage and monitor data transfers between memory and I/O devices efficiently.
1. Starting Address Register:
o The DMA controller updates this address during the transfer to ensure the correct memory location is accessed.
2. Word Count Register:
o The count decreases as the transfer progresses, and the transfer completes when it reaches zero.
3. Status and Control Register:
o Contains status flags and control bits to manage and monitor the DMA process.
• Done Bit:
o Set to 1 by the DMA controller when the transfer is complete.
• Interrupt Enable (IE) Bit:
o When set to 1, the DMA controller raises an interrupt to notify the processor after completing the transfer.
• Error/Status Flags:
o Record whether the transfer was successful or if any errors occurred during the operation.
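One way to picture this register set is as a C struct with the control bits packed into the status/control word; the exact bit positions here are assumptions, not a real DMAC's layout.

```c
/* Sketch of a DMA controller's register set as a C struct, with the
 * control bits packed into one status/control word. The exact layout
 * (bit positions, widths) is an assumption for illustration. */
#include <stdint.h>
#include <stdio.h>

#define CTRL_RW   (1u << 0)   /* 1 = read from memory, 0 = write to memory */
#define CTRL_IE   (1u << 1)   /* raise an interrupt when the transfer ends */
#define CTRL_DONE (1u << 2)   /* set by the DMAC when the transfer is done */

struct dmac_regs {
    uint32_t start_addr;      /* Starting Address Register */
    uint32_t word_count;      /* Word Count Register */
    uint32_t status_ctrl;     /* Status and Control Register */
};

int main(void) {
    /* The processor initializes the controller for a 512-word write. */
    struct dmac_regs dmac = {
        .start_addr  = 0x2000,
        .word_count  = 512,
        .status_ctrl = CTRL_IE,          /* write to memory, interrupt on */
    };

    dmac.status_ctrl |= CTRL_DONE;       /* DMAC sets Done at completion */
    printf("Done=%d, IE=%d -> interrupt raised\n",
           !!(dmac.status_ctrl & CTRL_DONE), !!(dmac.status_ctrl & CTRL_IE));
    return 0;
}
```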
DMA Transfer Modes:
1. Cycle Stealing:
o The DMA controller temporarily "steals" memory cycles from the processor.
o The processor's operations may slow down slightly, but it can still work in parallel.
2. Burst (Block) Mode:
o The DMA controller takes exclusive control of the system bus for a short time.
o This is faster than cycle stealing but temporarily halts processor access to memory.
Advantages of DMA
• Reduces Processor Overhead: Frees the processor from directly managing data transfers.
• Increases Speed: Transfers large blocks of data more efficiently than processor-driven transfers.
• Parallel Processing: Allows the processor to perform other tasks during data transfers.
Bus arbitration is the process that determines which device can gain control of the system bus to initiate data transfers when multiple devices (like the processor and DMA
controllers) require access to the bus simultaneously. The device that is granted control is called the bus master, and once it completes its task, the bus mastership is transferred.
Types of Bus Arbitration
1. Centralized Arbitration
In centralized arbitration, a single bus arbiter (which could be the processor or a separate unit) decides which device gets access to the bus.
Working Mechanism:
• Bus Request (BR): A device requests bus mastership by activating the BR line.
• Bus Grant (BG): The bus arbiter responds by sending a BG signal, passed through a daisy-chain configuration.
• Bus-Busy (BBSY): The current bus master activates the BBSY signal to indicate bus usage. Other devices must wait until this signal is deactivated.
• Priority Handling:
o Fixed Priority: Devices are given a predetermined priority (e.g., BR1 has the highest priority).
o Rotating Priority: The priority rotates after each bus access, ensuring fairness over time.
Centralized Arbitration
• Bus Arbiter: Can be the processor or a separate unit connected to the bus.
• Processor: Normally the bus master, but it can give mastership to a DMA controller when needed.
1. Bus-Request (BR):
o A DMA controller activates the BR line to request bus mastership.
2. Bus-Grant (BG):
o The processor responds by activating the BG signal, which propagates through the daisy chain.
o If DMA controller 1 is requesting the bus, it blocks the signal to other controllers.
o Other DMA controllers can receive the BG signal if controller 1 is not requesting the bus.
3. Bus-Busy (BBSY):
o Once a DMA controller gets the BG signal, it waits for BBSY to become inactive.
o Afterward, the DMA controller takes control and activates BBSY to prevent other devices from using the bus.
4. Bus Usage and Release:
o The DMA controller may perform data transfer operations while it has control of the bus.
o After finishing its tasks, it releases the bus, and the processor resumes control.
5. Priority Schemes:
o Fixed Priority: Devices have a set priority order (e.g., BR1 gets highest, BR4 gets lowest).
o Rotating Priority: Priority rotates after each bus grant. For example, after BR1 is granted, the order becomes 2, 3, 4, 1.
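The two priority schemes can be contrasted in a short C sketch; the request pattern and scan logic are illustrative.

```c
/* Sketch of the two priority schemes for centralized arbitration:
 * fixed priority always scans from BR1, while rotating priority starts
 * the scan just past the last winner. Request flags are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define N 4
static bool br[N] = { true, false, true, false }; /* BR1..BR4 requests */

static int fixed_grant(void) {
    for (int i = 0; i < N; i++)
        if (br[i]) return i;              /* BR1 always wins ties */
    return -1;
}

static int rotating_grant(int last) {
    for (int k = 1; k <= N; k++) {        /* scan starts after last winner */
        int i = (last + k) % N;
        if (br[i]) return i;
    }
    return -1;
}

int main(void) {
    printf("Fixed priority grants BR%d\n", fixed_grant() + 1);
    /* If BR1 was granted last, rotating priority now favors BR2, 3, 4, 1. */
    printf("Rotating priority grants BR%d\n", rotating_grant(0) + 1);
    return 0;
}
```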
Advantages:
• Simple to implement and manage.
Disadvantages:
• Single point of failure (if the arbiter fails, the whole system is impacted).
2. Distributed Arbitration
In distributed arbitration, there is no central arbiter. All devices involved participate equally in the arbitration process.
Working Mechanism:
• Start-Arbitration Signal: Devices wishing to access the bus assert this signal.
• Logical OR Mechanism: The arbitration lines carry a logical OR of the requesting devices' IDs.
• Arbitration Process:
o Devices compare their ID with the pattern on the bus, starting from the most significant bit (MSB).
o If a mismatch is found, the device outputs 0s for that bit and all lower bits, effectively withdrawing from the contest.
Distributed Arbitration
• In distributed arbitration, all devices waiting to use the bus share the responsibility for the arbitration process, without relying on a central arbiter.
1. Arbitration Process:
o Each device places its 4-bit ID number on the open-collector lines ARB0 through ARB3.
2. Open-Collector Drivers:
o If one device outputs a 1 and another outputs a 0 on the same line, the line is pulled to the low-voltage state, which here represents a logical 1 (the lines realize an OR of the drivers' outputs).
3. Resolving the Winner:
o Each device compares the resulting pattern with its own ID, starting from the most significant bit. On detecting a difference, the losing device disables its drivers for the conflicting bit and all lower-order bits.
o In the example below, Device A detects the difference on ARB1 and disables its drivers, causing the arbitration pattern to change to 0110, so Device B wins the bus.
Advantages:
o Higher reliability: The bus operation is not dependent on a single device, so if one device fails, others can still function.
o Multiple schemes: Many schemes have been proposed to implement distributed arbitration in practice.
Example:
• Device A (ID = 5, 0101) and Device B (ID = 6, 0110) request the bus.
• Device A detects a mismatch at the second bit and ceases contention, leaving Device B (ID = 6) as the winner.
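The bit-by-bit resolution can be simulated in C; this sketch follows the A = 0101, B = 0110 example, with the mask[] array modeling which bits each device still drives.

```c
/* Sketch of distributed arbitration on 4-bit IDs: the bus carries the
 * OR of all competing IDs; a device that sees a 1 where it drove a 0
 * withdraws that bit and all lower bits. IDs follow the A=5, B=6 example. */
#include <stdio.h>

#define N 2
static unsigned id[N]   = { 0x5, 0x6 };  /* Device A = 0101, B = 0110 */
static unsigned mask[N] = { 0xF, 0xF };  /* bits each device still drives */

int main(void) {
    for (int bit = 3; bit >= 0; bit--) {          /* MSB first */
        unsigned bus = 0;
        for (int d = 0; d < N; d++)               /* wired-OR of drivers */
            bus |= id[d] & mask[d];
        for (int d = 0; d < N; d++) {
            unsigned mine = (id[d] >> bit) & 1;
            unsigned line = (bus   >> bit) & 1;
            if (mine == 0 && line == 1)           /* mismatch: withdraw */
                mask[d] = ~((1u << (bit + 1)) - 1) & 0xF;
        }
    }
    for (int d = 0; d < N; d++)
        if (mask[d] == 0xF)                       /* still fully driving */
            printf("Device with ID %u wins the bus\n", id[d]);
    return 0;
}
```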
Advantages:
• Fairer Access: All devices have equal opportunity to acquire bus access based on their IDs.
Disadvantages:
• More complex interface logic: each device needs its own comparison and arbitration circuitry.
Comparison: Centralized vs. Distributed Arbitration
• Reliability: Centralized arbitration depends on the arbiter (a single point of failure); distributed arbitration offers higher reliability (no central point of failure).
• Bus Request Signal: In centralized arbitration it is handled by a single arbiter; in distributed arbitration all devices assert and compare signals.
• Centralized Arbitration:
o Simplicity: Simpler to implement and manage, with a single arbiter making decisions.
• Distributed Arbitration:
o Fairness: All devices are treated equally, and access is granted based on the device’s ID, ensuring a balanced allocation of bus time.
Conclusion
Bus arbitration is essential to ensure that multiple devices can access the bus without conflict. The decision between centralized and distributed arbitration depends on factors
such as:
• System Reliability: Distributed arbitration is more reliable, especially in systems where device failure is a concern.
• Complexity: Centralized arbitration is simpler to implement and manage, whereas distributed arbitration requires additional logic for comparison and arbitration.
• Fairness: Distributed arbitration provides a fairer distribution of bus access among devices, based on their IDs.
Both approaches have their place depending on system requirements, and understanding these trade-offs helps in choosing the most suitable method for a given system.
Bus Structure:
A bus is a communication pathway that interconnects the processor, main memory, and I/O devices in a computer system. It serves as the primary medium for transferring data
between these components.
The bus consists of various lines that perform distinct functions, which can be categorized as:
1. Data Lines:
o Purpose: These are used to transfer the actual data between devices on the bus (e.g., from memory to the processor or from an I/O device to memory).
o Role: These lines carry the data values being transmitted during read or write operations.
2. Address Lines:
o Purpose: These lines specify the address of the data's source or destination in memory or I/O devices.
o Role: They direct the data to the correct location (either in memory or in an I/O device).
3. Control Lines:
o Purpose: These lines carry the command and timing signals that coordinate activity on the bus.
o Role: Control signals specify when and how the data transfer occurs, such as read/write operations and synchronization timing.
Synchronous Bus:
A synchronous bus is a bus system where all devices involved in data transfer operate based on a common clock signal. The devices derive their timing information from this
clock, ensuring that the data transfers and other operations are synchronized across the bus. Here's an overview of how a synchronous bus operates:
Basic Operation:
1. Clock Signal:
o The devices on the bus operate in sync with a shared clock signal.
o The clock defines equally spaced pulses that represent discrete time intervals (clock cycles).
o Each clock cycle represents one bus cycle, during which one data transfer can occur.
2. Bus Cycle:
o During each cycle, data is transferred across the bus, and the address and control signals are handled.
o The address and data lines show the specific values being transmitted (high or low signals), and these can change at specific times during the cycle.
o The data and address patterns on the bus are typically synchronized with the clock pulse, which helps avoid timing conflicts.
Read Operation:
In a read operation, the master device (typically the processor) requests data from a slave device (such as memory or an I/O device).
1. Address and Command:
o The master places the address of the slave device and control signals (e.g., a "read" command) on the bus.
o The control lines may also specify the length of the operand to be read.
2. Slave Response:
o The slave device receives and decodes the address to identify the requested data.
o The slave device places the requested data onto the data bus.
3. Data Strobing:
o The master "strobes" (captures) the data from the data bus into its input buffer at the end of the clock cycle.
o This ensures that the data is securely stored in the master’s register for further use.
4. Data Validity:
o The data must remain valid long enough for the master’s input buffer to capture it (this time is longer than the setup time of the buffer).
o This ensures that the data is correctly loaded into the master’s buffer without timing errors.
Write Operation:
The write operation is similar to the read operation, but with the following differences:
• The master places the data on the bus instead of requesting it.
• The slave device receives the data at the appropriate time, as determined by the clock.
Limitations of the Synchronous Bus:
1. Fixed Clock Period:
o In synchronous buses, the entire bus operation must fit within a single clock cycle.
o The clock period (t2 - t0) must accommodate the longest delays and slowest device interfaces, which can slow down the overall system if one device is much
slower than others.
2. Device Synchronization:
o Devices must synchronize with the clock signal, meaning that all devices must operate at the same speed or within a compatible range. If a device is slower, it
will limit the speed of the entire bus.
3. Error Handling:
o The processor assumes that data is available at the end of each clock cycle (t2) and doesn’t have a way to check whether the addressed device responded
correctly.
o If the slave device fails to respond or malfunctions, the error may go undetected unless special error detection mechanisms are in place.
Multiple-Cycle Transfers:
To overcome some limitations of the synchronous bus (such as device speed disparities), multiple-cycle transfers are introduced:
1. Slave-Ready Signal:
o Instead of transferring data in one clock cycle, a Slave-ready signal allows devices to take more time for data transfer. This signal acknowledges that the slave
device is ready to participate in the data transfer.
2. Variable Transfer Duration:
o The number of clock cycles involved in a data transfer can vary from one device to another, making the bus more adaptable to devices with different speeds.
3. Error Detection:
o If the slave device does not respond within a predefined number of clock cycles, the master may abort the operation after waiting for the maximum number of
clock cycles.
Example:
1. Clock Cycle 1: The master sends the address and control information to the bus.
2. Clock Cycle 2: The slave receives the address and starts decoding it.
3. Clock Cycle 3: The slave places the data on the bus and asserts the Slave-ready signal to indicate that the data is valid.
4. Clock Cycle 3 (end): The master strobes the data into its buffer.
5. Clock Cycle 4: The master may begin a new data transfer with a new address.
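A C sketch of the master's side of a multiple-cycle transfer, assuming an invented SLAVE_DELAY and MAX_CYCLES; it shows both the Slave-ready wait and the timeout-abort path.

```c
/* Sketch of a multiple-cycle synchronous read: the master waits each
 * clock cycle for Slave-ready, and aborts after a maximum count. The
 * cycle limit and slave delay are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_CYCLES 8
#define SLAVE_DELAY 3              /* slave needs 3 cycles to respond */

static bool slave_ready(int cycle) {
    return cycle >= SLAVE_DELAY;   /* Slave-ready asserted on cycle 3 */
}

int main(void) {
    printf("Cycle 1: master puts address and command on the bus\n");
    for (int cycle = 2; cycle <= MAX_CYCLES; cycle++) {
        if (slave_ready(cycle)) {
            printf("Cycle %d: Slave-ready asserted, master strobes data\n",
                   cycle);
            return 0;
        }
        printf("Cycle %d: waiting for Slave-ready\n", cycle);
    }
    printf("No response within %d cycles: abort the transfer\n", MAX_CYCLES);
    return 0;
}
```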
Conclusion:
A synchronous bus is efficient and simple but can be limited by the need for synchronization between devices and the fixed clock period. The introduction of control signals like
Slave-ready and the use of multiple clock cycles allow for more flexible data transfers, especially in systems with devices operating at different speeds.
Asynchronous Bus:
An asynchronous bus operates without relying on a common clock signal. Instead, it uses a handshake mechanism between the master and the slave devices to control the
timing of data transfers. The master and slave devices communicate readiness through control lines, making this approach more flexible than synchronous buses.
Handshake Mechanism:
1. Master-Ready (MReady):
o This line is asserted by the master to indicate that it is ready to start a data transfer.
2. Slave-Ready (SReady):
o This line is asserted by the slave to signal that it is ready to respond, either by providing data (in a read operation) or accepting data (in a write operation).
Consider the sequence of events for a read operation (input data transfer) from the perspective of the master and slave devices (a simulation sketch follows this list):
1. Time t0 (Master Places Address and Command):
o The master places the address and command information on the bus.
2. Time t1 (Master-Ready Asserted):
o The master asserts the Master-ready (MReady) line, signaling that the address and command information is ready for the slave to decode.
o The time between t0 and t1 accounts for bus skew, where signals arrive at different times due to varying propagation speeds along the bus lines.
3. Time t2 (Slave Places Data):
o The slave that decoded the address and command places the required data on the data bus.
o The slave also asserts the Slave-ready (SReady) signal to indicate that data is available.
o If there is a delay in placing the data, the slave delays asserting SReady accordingly.
4. Time t3 (Master Strobes Data):
o The master receives the SReady signal, indicating that the data is ready.
o Due to bus skew, the master waits for the SReady signal to propagate, then waits for the setup time to ensure data stability.
o The master then captures (or "strobes") the data into its input buffer and drops the MReady signal, signaling that it has received the data.
5. Time t4 (Master Removes Address and Command):
o The master removes the address and command information from the bus, completing the transfer.
o The delay between t3 and t4 accounts for any additional bus skew.
6. Time t5 (Slave Completes):
o The slave detects the MReady signal's transition from 1 to 0, indicating that the master has completed the transaction and is no longer expecting data.
o The slave removes the data and the SReady signal from the bus, marking the end of the transfer.
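The handshake can be walked through in a short C simulation; both devices are collapsed into one program here, illustrating the signal ordering rather than real bus hardware.

```c
/* Sketch of the full asynchronous read handshake as a sequence of
 * Master-ready/Slave-ready transitions. The two "devices" are plain
 * code blocks; real hardware would use bus control lines. */
#include <stdbool.h>
#include <stdio.h>

static bool mready = false, sready = false;
static unsigned data_bus = 0;

int main(void) {
    /* t0/t1: master places address+command, then asserts Master-ready. */
    printf("Master: address and command on the bus\n");
    mready = true;

    /* t2: slave decodes, places data, asserts Slave-ready. */
    if (mready) {
        data_bus = 0xAB;
        sready = true;
        printf("Slave: data 0x%X on the bus, Slave-ready asserted\n", data_bus);
    }

    /* t3: master sees Slave-ready, strobes data, drops Master-ready. */
    if (sready) {
        printf("Master: strobed data 0x%X, dropping Master-ready\n", data_bus);
        mready = false;
    }

    /* t5: slave sees Master-ready fall, removes data and Slave-ready. */
    if (!mready) {
        sready = false;
        printf("Slave: transfer complete, bus released\n");
    }
    return 0;
}
```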
For a write operation (output data transfer), the sequence is similar:
1. The master places the output data on the bus at the same time as the address and command information.
2. The slave receives the address and command and "strobes" the data into its output buffer when it receives the Master-ready (MReady) signal.
3. The slave asserts the Slave-ready (SReady) signal to notify the master that the data has been successfully written.
4. Signal Removal:
o The rest of the process (removal of signals from the bus) follows the same sequence as the read operation.
Key Features:
1. No Common Clock:
o In contrast to a synchronous bus, there is no common clock to synchronize data transfers. Instead, the handshake mechanism ensures that the devices are
ready before any data is transmitted.
2. Control Lines:
o The timing control is managed by the Master-ready and Slave-ready lines, allowing more flexible data transfer durations.
3. Flexible Timing:
o Since data transfers occur based on readiness signals rather than fixed clock cycles, an asynchronous bus can accommodate devices with varying speeds and
timing requirements, leading to more efficient data transfers.
Advantages:
• Flexibility: Devices can operate at different speeds without being constrained by a common clock cycle.
• Error Recovery: The handshake mechanism allows for easier handling of data transfer errors and ensures that each device is ready before transmitting data.
• Reduced Timing Issues: Asynchronous buses avoid the need for all devices to be synchronized to the same clock, which can be a limitation in synchronous systems.
Disadvantages:
• Complexity: The handshake protocol adds complexity to the bus design, as both the master and slave must manage timing and synchronization through additional
control signals.
• Slower Data Transfers: Without the rigid timing of a clock signal, data transfer speed can be slower compared to a synchronous bus in certain scenarios.
Conclusion:
The asynchronous bus is more adaptable and efficient for systems where devices have varying speeds or timing requirements. However, it introduces complexity with the
handshake mechanism and requires careful management of control signals to ensure reliable data transfers.