CC-04 Unit5

Computer System Architecture

Unit 5: Bus Architecture


Bus width
Clock Pulse Generator
DMA controller
PIC (Programmable Interrupt Controller)
Memory
Ports
Communications
Architecture – ISA, EISA (Brief Description)
#Bus Width-
In computer architecture, bus width refers to the number of parallel lines or
wires that can transmit data simultaneously within the computer's system bus.
It is typically measured in bits. A wider bus allows more data to be transferred
in a single clock cycle, thus increasing the overall throughput and speed of data
transfer between different components of the computer, such as the CPU,
memory, and peripherals.

For example, a 32-bit bus can transmit 32 bits of data at once, while a 64-bit
bus can transmit 64 bits of data simultaneously. The bus width directly impacts
the maximum amount of data that can be transferred between components in
a single operation, influencing the overall performance and efficiency of the
computer system.
The Significance of Bus Width in Computer Systems
Now that we understand bus width and its practical applications, let's explore
its significance in computer systems. The bus width directly impacts the speed
and efficiency of data transfer between various components within a computer.
A wider bus width allows for transmitting larger data chunks, reducing the
number of bus cycles required to transfer a given amount of data.
Consequently, systems with wider bus widths can achieve faster data transfer
rates and more efficient processing.
Moreover, the bus width affects the compatibility between different hardware
components. For example, if a peripheral device uses a wider bus than the
motherboard can support, the device will not function optimally. Therefore, it
is crucial to ensure compatibility between the bus widths of various
components to avoid performance bottlenecks.

#Clock Pulse Generator-


In computer architecture, a clock pulse generator is a crucial component
responsible for generating regular and precise timing signals, known as clock
pulses or clock signals. These signals are used to synchronize the operations of
various components within the computer system, such as the CPU, memory,
and input/output devices.
The clock pulse generator typically consists of a crystal oscillator or an
electronic circuit that generates a stable periodic signal. This signal serves as
the system's reference clock, providing a consistent timing mechanism for
coordinating the execution of instructions and the transfer of data between
different components.

The frequency of the clock pulses determines the speed at which the computer
system operates, measured in Hertz (Hz). Modern computer systems operate at
frequencies ranging from a few megahertz (MHz) to several gigahertz (GHz),
with higher frequencies corresponding to faster processing speeds.

The clock pulses produced by the generator are distributed throughout the
system via the system bus or other dedicated lines. Components within the
system are designed to respond to these pulses, initiating their operations at
specific intervals synchronized with the clock signal.

Overall, the clock pulse generator plays a fundamental role in ensuring the
proper functioning and coordination of the various components within a
computer system by providing a reliable timing reference for their operations.
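The relationship between clock frequency and the duration of each pulse (T = 1/f) can be sketched as:

```python
def clock_period_ns(frequency_hz):
    """Period of one clock pulse in nanoseconds: T = 1 / f."""
    return 1e9 / frequency_hz

# A 1 MHz clock ticks every 1000 ns; a 1 GHz clock every 1 ns.
print(clock_period_ns(1_000_000))       # 1000.0
print(clock_period_ns(1_000_000_000))   # 1.0
```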

#DMA and DMA controller-


DMA- DMA stands for Direct Memory Access. It's a feature of computer
systems that allows certain hardware subsystems to access system memory
independently of the central processing unit (CPU). This capability is
particularly useful for high-speed data transfer between peripheral devices
(such as disk drives, network adapters, and graphics cards) and system memory
without burdening the CPU with managing the data transfer directly.
DMA Controller- A DMA controller is a specialized hardware component
responsible for orchestrating DMA operations within a computer system. It acts
as an intermediary between peripheral devices and the system memory,
facilitating the efficient transfer of data between them.

Here's how DMA and DMA controllers work together in a typical scenario:

1. Initiation: When a peripheral device (e.g., a disk drive) needs to transfer data
to or from system memory, it sends a request to the DMA controller.

2. Configuration: The CPU configures the DMA controller with parameters such
as the starting memory address, the amount of data to transfer, and the
direction of the transfer (read from device to memory or write from memory to
device).

3. Arbitration: If multiple devices are competing for access to system memory
through DMA, the DMA controller arbitrates between them based on priority
or a predefined scheduling algorithm.

4. Transfer: Once the DMA controller gains control of the system bus, it
initiates the data transfer between the peripheral device and system memory
directly, without involving the CPU. This allows the CPU to continue executing
other tasks without being interrupted by the data transfer process.

5. Completion: After the data transfer is complete, the DMA controller typically
generates an interrupt to notify the CPU, allowing it to perform any necessary
post-processing tasks or to handle errors that may have occurred during the
transfer.
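The steps above can be sketched as a toy model (the class and method names are invented for illustration; a real controller operates at the bus-signal level, not through Python calls):

```python
class DMAController:
    """Toy model of the configure/transfer/complete steps described above."""
    def __init__(self, memory):
        self.memory = memory          # system memory modeled as a mutable list
        self.addr = 0
        self.count = 0
        self.done = False

    def configure(self, start_addr, count):
        # Step 2: the CPU programs the start address and transfer length.
        self.addr, self.count, self.done = start_addr, count, False

    def transfer(self, device_data):
        # Step 4: device-to-memory copy with no CPU involvement.
        for i, byte in enumerate(device_data[:self.count]):
            self.memory[self.addr + i] = byte
        # Step 5: a real controller would raise an interrupt here.
        self.done = True

memory = [0] * 16
dma = DMAController(memory)
dma.configure(start_addr=4, count=3)
dma.transfer([0xAA, 0xBB, 0xCC])
print(memory[4:7])   # [170, 187, 204]
```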

By offloading data transfer tasks from the CPU to the DMA controller, DMA
significantly improves system performance and efficiency, especially for I/O-
bound operations involving large volumes of data. It reduces CPU overhead,
minimizes latency, and enables concurrent processing of tasks, thereby
enhancing overall system throughput.

DMA and DMA controllers are critical components of computer architecture
that enable efficient and high-speed data transfer between peripheral devices
and system memory, freeing up the CPU to focus on computation-intensive
tasks.

Direct Memory Access Advantages and Disadvantages


Q. What are the modes of transfer in DMA?
Answer: There are three modes of transfer in DMA:
1. Burst Mode (the DMA controller takes control of the memory bus and
releases it only after the entire data transfer is complete)
2. Cycle Stealing Mode (the DMA controller forces the processor to pause and
release the bus for one transfer at a time, "stealing" short bus cycles)
3. Transparent Mode (the DMA controller takes the system bus only when the
processor does not actually require it)
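A rough, illustrative way to compare the three modes is by how long the CPU can be locked out of the bus at a stretch (the function and its numbers are our own simplification, not a hardware specification):

```python
def cpu_stall_cycles(mode, words, cycles_per_word=1):
    """Worst-case contiguous bus cycles the CPU is locked out, per mode."""
    if mode == "burst":           # controller holds the bus for the whole block
        return words * cycles_per_word
    if mode == "cycle_stealing":  # one word at a time, CPU runs in between
        return cycles_per_word
    if mode == "transparent":     # only otherwise-idle bus cycles are used
        return 0
    raise ValueError(f"unknown mode: {mode}")

for m in ("burst", "cycle_stealing", "transparent"):
    print(m, cpu_stall_cycles(m, words=64))
```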

The internal registers of a Direct Memory Access (DMA) Controller are:-


1. Base Address Register (16 bit)
2. Base Word Count Register (16 bit)
3. Current Address Register (16 bit)
4. Current Word Count Register (16 bit)
5. Temporary Address Register (16 bit)
6. Temporary Word Count Register (16 bit)
7. Status Register (8 bit)
8. Command Register (8 bit)
9. Temporary Register (8 bit)
10. Mode Register (8 bit)
11. Mask Register (4 bit)
12. Request Register (4 bit)
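As a sketch, a subset of the register list above can be modeled as a simple structure, with base registers holding the programmed values and current registers counting down during a transfer (field names are illustrative, loosely following the Intel 8237-style register set):

```python
from dataclasses import dataclass

@dataclass
class DMARegisters:
    """Illustrative subset of the DMA controller register set listed above."""
    base_address: int = 0        # 16 bit: programmed start address
    base_word_count: int = 0     # 16 bit: programmed transfer length
    current_address: int = 0     # 16 bit: advances during the transfer
    current_word_count: int = 0  # 16 bit: counts down during the transfer
    status: int = 0              # 8 bit
    command: int = 0             # 8 bit
    mode: int = 0                # 8 bit
    mask: int = 0                # 4 bit

    def load(self, addr, count):
        # Base registers keep the programmed values; current registers
        # are initialized from them and then change as words move.
        self.base_address = self.current_address = addr & 0xFFFF
        self.base_word_count = self.current_word_count = count & 0xFFFF

regs = DMARegisters()
regs.load(0x1234, 5)
print(regs.current_address, regs.current_word_count)   # 4660 5
```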

Advantages of DMA Controller


• Direct Memory Access speeds up memory operations and data transfer.
• CPU is not involved while transferring data.
• DMA requires very few clock cycles while transferring data.
• DMA distributes workload very appropriately.
• DMA helps the CPU in decreasing its load.
Disadvantages of DMA Controller
• Direct Memory Access is a costly feature because it requires additional hardware and control logic.
• DMA suffers from Cache-Coherence Problems.
• DMA Controller increases the overall cost of the system.
• DMA Controller increases the complexity of the software.

#PIC (Programmable Interrupt Controller)-

Also known as the priority interrupt controller; the classic implementation is
the Intel 8259 Programmable Interrupt Controller.

In computer architecture, a Programmable Interrupt Controller (PIC) is a
hardware component responsible for managing and prioritizing interrupt
requests from various peripheral devices in a computer system. It acts as an
intermediary between these devices and the central processing unit (CPU),
facilitating the handling of interrupts efficiently.

Here's how a Programmable Interrupt Controller typically operates:

1. Interrupt Requests: When a peripheral device requires attention from the
CPU, it sends an interrupt request (IRQ) to the Programmable Interrupt
Controller.
2. Interrupt Prioritization: The PIC prioritizes the interrupt requests based on
their assigned priority levels. Each interrupt request has a specific priority level,
allowing the PIC to determine the order in which interrupts are serviced.

3. Interrupt Masking: The PIC can be programmed to mask (ignore) certain
interrupt requests temporarily. This feature allows the CPU to focus on critical
tasks without being interrupted by lower-priority interrupts.

4. Interrupt Vectoring: Once the PIC selects the highest-priority interrupt, it
provides the CPU with an interrupt vector. An interrupt vector is a unique
identifier that points to the memory location of the interrupt service routine
(ISR) associated with the interrupting device.

5. Interrupt Handling: The CPU then executes the ISR corresponding to the
interrupt vector provided by the PIC. The ISR performs the necessary actions to
handle the interrupt, such as servicing the device that triggered the interrupt
and saving the CPU's state before returning to the interrupted program.

6. Acknowledgment: After the CPU completes the ISR, it sends an
acknowledgment signal to the PIC to indicate that the interrupt has been
serviced successfully.

7. Cascade Mode: In systems with multiple PICs, a cascade mode can be used
to chain multiple PICs together. This allows the system to support a larger
number of peripheral devices while maintaining efficient interrupt handling.
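The request/prioritize/mask/vector flow described in the steps above can be sketched as a toy model (the class and method names are invented; a real 8259 is programmed through I/O ports, not method calls):

```python
class PIC:
    """Toy priority interrupt controller following the steps above."""
    def __init__(self, num_lines=8):
        self.num_lines = num_lines
        self.pending = set()   # IRQ lines that have requested service (step 1)
        self.mask = set()      # IRQ lines currently masked off (step 3)

    def request(self, irq):
        # Step 1: a device raises its IRQ line.
        if 0 <= irq < self.num_lines:
            self.pending.add(irq)

    def next_vector(self, base=0x08):
        """Steps 2 and 4: pick the highest-priority unmasked request
        (lowest IRQ number, as on the 8259) and return its vector."""
        ready = sorted(self.pending - self.mask)
        if not ready:
            return None
        irq = ready[0]
        self.pending.discard(irq)   # the request is now being serviced
        return base + irq           # vector handed to the CPU

pic = PIC()
pic.mask.add(1)                     # temporarily ignore IRQ 1
pic.request(3); pic.request(1); pic.request(5)
print(hex(pic.next_vector()))       # 0xb  (IRQ 3 wins: 1 is masked, 3 < 5)
```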

The Programmable Interrupt Controller plays a crucial role in managing
interrupt-driven I/O operations in computer systems, ensuring that interrupt
requests are handled in a timely and orderly manner, and facilitating efficient
communication between the CPU and peripheral devices.
Applications of PICs
PICs are used in a wide range of applications, including:
• Personal Computers: PICs are used in personal computers to manage
interrupt requests from various devices, such as keyboards, mice, network
adapters, and storage devices.
• Industrial Control Systems: PICs are used in industrial control systems to
manage and control various processes and devices, such as sensors, motors,
and valves.
• Medical Devices: PICs are used in medical devices to monitor vital signs,
control pumps and motors, and perform other functions.
• Automotive Systems: PICs are used in automotive systems to manage and
control various functions, such as engine management, climate control, and
entertainment systems.

Advantages of PIC
 Interrupt Management: The 8259 PIC is designed to handle interrupts
efficiently and effectively, allowing for faster and more reliable processing of
interrupts in a system.
 Flexibility: The 8259 PIC is programmable, meaning that it can be
customized to suit the specific needs of a given system, including the number
and type of interrupts that need to be managed.
 Compatibility: The 8259 PIC is compatible with a wide range of
microprocessors, making it a popular choice for managing interrupts in many
different systems.
 Multiple Interrupt Inputs: The 8259 PIC can manage up to 8 interrupt
inputs, allowing for the management of complex systems with multiple devices.
 Ease of Use: The 8259 PIC includes simple interface pins and registers,
making it relatively easy to use and program.

Disadvantages of PIC
 Cost: While the 8259 PIC is relatively affordable, it does add cost to a
system, particularly if multiple PICs are required.
 Limited Number of Interrupts: The 8259 PIC can manage up to 8 interrupt
inputs, which may be insufficient for some applications.
 Complex Programming: Although the interface pins and registers of the
8259 PIC are relatively simple, programming the 8259 can be complex,
requiring careful attention to interrupt prioritization and other parameters.
 Limited Functionality: While the 8259 PIC is a useful peripheral for interrupt
management, it does not include more advanced features, such as DMA (direct
memory access) or advanced error correction.

#Memory-
In computer architecture, memory refers to the electronic components used to
store data and instructions that are actively being processed or awaiting
processing by the CPU (Central Processing Unit) of a computer system. Memory
is essential for the proper functioning of a computer, as it provides the CPU
with fast access to data and instructions needed for executing programs and
performing various tasks.

Here's an overview of the different types of memory commonly found in
computer architecture:

1. Primary Memory:
- RAM (Random Access Memory): RAM is volatile memory used by the CPU
to store data and instructions temporarily during program execution. It allows
for fast read and write operations, making it suitable for storing actively used
data. RAM is typically cleared when the computer is powered off.
- Cache Memory: Cache memory is a smaller, faster type of memory located
closer to the CPU. It serves as a buffer between the CPU and main memory
(RAM), storing frequently accessed data and instructions to speed up access
times.
- Registers: Registers are small, high-speed memory units located within the
CPU itself. They hold data and instructions directly accessible by the CPU for
immediate processing. Registers are the fastest form of memory in a computer
system.

2. Secondary Memory:
- ROM (Read-Only Memory): ROM is non-volatile memory that stores
firmware and essential system instructions required for booting up the
computer. Unlike RAM, ROM retains its contents even when the power is
turned off.
- Hard Disk Drives (HDDs): HDDs are non-volatile storage devices used for
long-term data storage. They provide high-capacity storage at a relatively low
cost but have slower access times compared to RAM.
- Solid State Drives (SSDs): SSDs are storage devices that use flash memory
technology to store data persistently. They offer faster read and write speeds
compared to HDDs, making them ideal for improving overall system
performance.
- Optical Drives: Optical drives, such as CD-ROMs and DVDs, use optical
technology to read and write data onto optical discs for storage and retrieval.

Memory in computer architecture is organized hierarchically, with faster and
smaller memory types (such as registers and cache) located closer to the CPU,
while larger and slower memory types (such as RAM and secondary storage)
are located farther away. The memory hierarchy is designed to optimize
performance by balancing speed, capacity, and cost considerations to meet the
demands of modern computing tasks.
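The hierarchy's speed trade-off is often summarized by the average memory access time: a fast level (e.g. cache) services most requests, and only misses pay the cost of the slower level below. A sketch with illustrative numbers:

```python
def avg_access_time_ns(hit_rate, hit_time_ns, miss_penalty_ns):
    """Average access time: hits pay hit_time; misses also pay the penalty."""
    return hit_rate * hit_time_ns + (1 - hit_rate) * (hit_time_ns + miss_penalty_ns)

# 95% of accesses hit a 1 ns cache; the other 5% pay a 100 ns trip to RAM.
print(avg_access_time_ns(0.95, 1, 100))   # 6.0
```

Even a small miss rate dominates the average, which is why each level of the hierarchy tries hard to keep its hit rate high.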
#Ports-
In computer architecture, a port typically refers to a physical or virtual interface
through which data is transferred between a computer and external devices.
Ports serve as connection points that allow peripherals, such as keyboards,
mice, monitors, printers, storage devices, and networking equipment, to
communicate with the computer system.

Here are a few key aspects of ports in computer architecture:

1. Physical Ports: Physical ports are hardware interfaces located on the exterior
of a computer or device. These ports come in various shapes and sizes, each
designed for specific types of connections. Common physical ports include USB
ports, HDMI ports, Ethernet ports, audio jacks, VGA ports, and serial ports.
Each type of port serves a particular purpose, such as data transfer,
audio/video output, or network connectivity.

2. Virtual Ports: Virtual ports, also known as logical ports or software ports, are
software-based communication endpoints used within the operating system to
facilitate inter-process communication (IPC) or network communication. These
ports are typically represented by numeric values or names and are used by
software applications to send and receive data streams. Examples of virtual
ports include TCP/IP ports used for network communication and inter-process
communication (IPC) mechanisms like pipes and sockets.
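As a small demonstration of virtual ports, the standard-library socket API can bind a TCP endpoint to an OS-assigned port number and exchange bytes over it:

```python
import socket

# Bind a TCP listening socket to a virtual port; port 0 asks the OS for any free one.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()
print(f"listening on port {port}")   # an OS-assigned port number > 0

# A client connects to that (address, port) endpoint and sends a few bytes.
client = socket.create_connection((host, port))
conn, _ = server.accept()
client.sendall(b"hello")
data = conn.recv(5)
print(data)                          # b'hello'
client.close(); conn.close(); server.close()
```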

3. Data Transfer: Ports enable the bidirectional transfer of data between the
computer system and external devices. Data can be transmitted in various
forms, such as digital signals, analog signals, or wireless signals, depending on
the type of port and the nature of the connected devices.

4. Protocol Support: Each port typically supports one or more communication
protocols that dictate how data is formatted, transmitted, and interpreted. For
example, USB ports support the Universal Serial Bus (USB) protocol, which
defines standards for data transfer and device connectivity. Similarly, Ethernet
ports support networking protocols like TCP/IP for communication over local
area networks (LANs) or the internet.

5. Plug and Play: Many modern physical ports support plug-and-play
functionality, allowing devices to be hot-swapped or connected and
disconnected from the computer system without requiring a system reboot.
This feature enhances convenience and ease of use for users when connecting
peripherals to their computers.

Ports play a crucial role in computer architecture by providing the necessary
interfaces for connecting external devices and enabling data transfer between
the computer system and the outside world. They facilitate the interoperability
of hardware components and support various types of communication
protocols to meet the diverse needs of users and applications.

#Communications-
In computer architecture, communications refer to the exchange of data and
information between different components within a computer system or
between multiple computer systems. This exchange of data can occur through
various communication channels and protocols, enabling collaboration,
coordination, and information sharing among interconnected devices.

Here are key aspects of communications in computer architecture:
1. Inter-Component Communication: Within a single computer system,
communications involve the exchange of data between different hardware
components, such as the CPU, memory, storage devices, input/output (I/O)
devices, and peripheral devices. These components communicate with each
other using buses, registers, and other communication pathways to perform
tasks and execute instructions.
2. Inter-Process Communication (IPC): IPC involves communication between
different processes or programs running concurrently within the same
computer system. IPC mechanisms allow processes to share data, synchronize
their activities, and coordinate their interactions. Common IPC mechanisms
include pipes, sockets, shared memory, message passing, and remote
procedure calls (RPC).
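One of the IPC mechanisms listed above, the anonymous pipe, can be demonstrated with the standard library:

```python
import os

# An anonymous pipe: a one-way channel with a read end and a write end.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"ping")       # producer end writes
message = os.read(read_fd, 4)     # consumer end reads
os.close(read_fd)
os.close(write_fd)
print(message)                    # b'ping'
```

In practice the two ends usually belong to different processes (e.g. a parent and its fork), which is what makes the pipe an inter-process mechanism.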

3. Networking Communication: Networking communication involves the
exchange of data between multiple computer systems over a network. This
includes communication between devices connected within a local area
network (LAN), wide area network (WAN), or the internet. Networking
communication relies on networking protocols such as TCP/IP, UDP, HTTP, FTP,
and SMTP to establish connections, transmit data packets, and manage
network traffic.

4. Client-Server Communication: In client-server architectures, communication
occurs between client devices (such as computers, smartphones, or IoT
devices) and server systems that provide resources or services. Clients send
requests to servers, which process the requests and return responses
accordingly. This communication is typically facilitated through client-server
protocols such as HTTP, HTTPS, and SSH.

5. Parallel and Distributed Computing: In parallel and distributed computing
environments, communication plays a critical role in coordinating tasks and
sharing data among multiple computing nodes or processors. These systems
use communication mechanisms such as message passing interfaces (MPI),
remote procedure calls (RPC), and distributed shared memory (DSM) to
facilitate collaboration and data exchange among distributed components.

6. Real-Time Communication: Real-time systems require communication
mechanisms that guarantee timely and predictable delivery of data. Real-time
communication protocols, such as Real-Time Transport Protocol (RTP) and
Message Queuing Telemetry Transport (MQTT), prioritize low latency, high
reliability, and deterministic behaviour to support applications such as
industrial control systems, telecommunications, and multimedia streaming.

Communications in computer architecture encompass a wide range of
interactions and protocols that enable data exchange and collaboration among
hardware components, processes, and networked systems. Effective
communication mechanisms are essential for the efficient operation and
connectivity of modern computer systems.

#Architecture – ISA, EISA (Brief Description)-

ISA-
In computer architecture, ISA stands for "Instruction Set Architecture," which
defines the set of instructions that a processor can execute and the behaviour
of those instructions. It serves as an interface between software and hardware,
allowing software developers to write programs that can run on a particular
processor architecture. The ISA specifies the instruction format, addressing
modes, registers, and other architectural features that software interacts with,
while hiding the underlying hardware implementation details.

There are different types of ISAs, including:

1. Complex Instruction Set Computer (CISC): CISC ISAs include a large and
diverse set of instructions that can perform complex operations in a single
instruction. Examples include the x86 architecture used in many desktop and
laptop processors.

2. Reduced Instruction Set Computer (RISC): RISC ISAs focus on a smaller set of
simple and frequently used instructions, aiming for simplicity and efficiency in
instruction execution. Examples include the ARM architecture used in
smartphones, tablets, and embedded systems.

3. Very Long Instruction Word (VLIW): VLIW ISAs pack multiple instructions
into a single long instruction word, allowing for parallel execution of
instructions. This architecture is often used in specialized computing
environments.
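To make the idea of an instruction set concrete, here is a toy interpreter for an invented three-operand, RISC-style ISA (the opcodes and encoding are our own for illustration, not any real architecture):

```python
def run(program, num_regs=4):
    """Execute a toy program over a small register file.

    Each instruction is a tuple (opcode, dst, a, b); only three simple,
    fixed-format opcodes exist, in the spirit of a reduced instruction set."""
    regs = [0] * num_regs
    for op, dst, a, b in program:
        if op == "li":      # load immediate: regs[dst] = a
            regs[dst] = a
        elif op == "add":   # regs[dst] = regs[a] + regs[b]
            regs[dst] = regs[a] + regs[b]
        elif op == "sub":   # regs[dst] = regs[a] - regs[b]
            regs[dst] = regs[a] - regs[b]
        else:
            raise ValueError(f"unknown opcode {op}")
    return regs

# r0 = 7; r1 = 3; r2 = r0 + r1; r3 = r2 - r1
final = run([("li", 0, 7, None), ("li", 1, 3, None),
             ("add", 2, 0, 1), ("sub", 3, 2, 1)])
print(final)   # [7, 3, 10, 7]
```

A CISC-style ISA, by contrast, might fold a memory load, an add, and a store into one complex instruction rather than composing them from simple ones.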

EISA- EISA stands for "Extended Industry Standard Architecture." It is an
enhanced version of the ISA bus architecture commonly used in IBM-compatible
personal computers during the late 1980s and early 1990s. EISA
extended the original 16-bit ISA bus to 32 bits, allowing for faster data transfer
rates and supporting larger memory capacities and more peripherals.

Key features of EISA include:

- Increased data throughput: EISA's 32-bit bus width doubled the data
throughput compared to the original 16-bit ISA bus, improving system
performance.
- Backward compatibility: EISA maintained compatibility with existing ISA
devices, allowing users to continue using their older peripherals while taking
advantage of the enhanced capabilities of EISA-compatible devices.
- Support for more peripherals: EISA supported a greater number of expansion
slots compared to the original ISA architecture, accommodating more
peripheral devices such as network cards, SCSI controllers, and graphics cards.
- Enhanced system configuration and management: EISA introduced features
for automatic configuration of expansion cards and enhanced system
management capabilities, making it easier to install and maintain peripherals in
a computer system.
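The "doubled throughput" claim above follows directly from the width arithmetic. An idealized calculation at a nominal 8 MHz bus clock (ignoring bus protocol overhead):

```python
def peak_bus_mb_per_s(width_bits, clock_mhz):
    """Peak rate: bytes per cycle x million cycles per second (idealized)."""
    return width_bits // 8 * clock_mhz

# ISA: 16 bits wide vs EISA: 32 bits wide, at the same nominal clock.
print(peak_bus_mb_per_s(16, 8))   # 16  MB/s
print(peak_bus_mb_per_s(32, 8))   # 32  MB/s -- twice the peak throughput
```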
Despite its improvements, EISA ultimately lost popularity with the emergence
of other bus standards such as PCI (Peripheral Component Interconnect), which
offered even higher performance and greater flexibility. However, EISA played a
significant role in the evolution of PC architecture by pushing the boundaries of
system expansion and performance during its time.
