COA-Unit 1 (Introduction)

Unit 1 notes

Uploaded by Pragati Upadhyay

Computer Organization And Architecture

Unit 1
Computer organization is about understanding how different parts of a digital
system work together. It describes how various components interact to
process data and complete tasks. Let’s break down the parts of a typical
computer system in simple terms.

What is a Digital System?


A digital system is any device that processes information in digital form. The
most common example is a computer, but other examples include digital
calculators, digital displays, telephone exchanges, and even digital
voltmeters. All these systems use digital signals to perform their functions.

Computer Architecture
Computer architecture is a bit like the blueprint for the computer. It specifies
the instructions a computer can perform and the hardware that enables these
instructions. Think of it as the "plan" that guides the hardware design.

Computer Hardware
Hardware is the physical part of a computer. It includes electronic circuits,
screens, memory and storage devices (like hard drives and CDs), and
communication components (like network cards).

Functional Units in a Computer


A computer is made up of several functional units—parts that carry out the
tasks of a program. Here’s a look at the five main components of a computer
system:

1. Input Unit
- The input unit helps a computer get data. Input devices include keyboards,
mice, microphones, and other gadgets that let us interact with a computer.
- The keyboard is the most commonly used input device. When you press a
key, the computer converts it into a code that it can understand and sends it
to the memory or processor.

2. Central Processing Unit (CPU)


The CPU is the "brain" of the computer. It reads and carries out instructions
given by a program. It can perform calculations, make logical decisions, and
control other parts of the computer. Inside the CPU, there are smaller units
like the Arithmetic and Logic Unit (ALU) and Control Unit (CU), which help
with specific tasks.

3. Memory Unit
The memory unit is where the computer stores programs and data that are
in use. Memory comes in two types: primary memory and secondary memory.
Primary Memory: This includes RAM (Random Access Memory) and is the
fastest type of memory, but it's temporary (volatile), meaning it loses its data
when the computer turns off. Cache memory is also part of primary memory,
which helps the CPU access data quickly.
Secondary Memory: This is used for long-term storage and includes hard
drives, USB drives, and CDs. This memory is non-volatile, so it keeps data
even when the computer is turned off.

4. Arithmetic and Logic Unit (ALU)


The ALU is a part of the CPU that handles all the calculations and logical
operations. It can perform basic math (like addition and subtraction) and
logical tasks (like AND, OR, and NOT operations).
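These ALU operations can be sketched in a few lines of Python. This is an illustrative model only; the operation names (`ADD`, `SUB`, etc.) are invented for the example, not taken from any real instruction set.

```python
# Minimal sketch of ALU-style operations; opcode names are illustrative.
def alu(op: str, a: int, b: int = 0) -> int:
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "AND":
        return a & b
    if op == "OR":
        return a | b
    if op == "NOT":
        return ~a & 0xFF  # NOT on an 8-bit value
    raise ValueError(f"unknown operation: {op}")

print(alu("ADD", 5, 3))            # 8
print(alu("AND", 0b1100, 0b1010))  # 8 (binary 1000)
print(alu("NOT", 0b00001111))      # 240 (binary 11110000)
```

Note that the `NOT` case masks with `0xFF` so the result stays within 8 bits, mirroring how a fixed-width ALU behaves.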

5. Control Unit (CU)


The Control Unit directs how the CPU, memory, and input/output devices
should work together. It "instructs" each part to perform its tasks, acting as
the "manager" for the CPU. For example, if you want to add two numbers, the
CU will guide the ALU to do the calculation.

6. Output Unit
The output unit sends information to the user in a readable form. Output
devices include monitors, printers, and speakers, and they display or produce
the results of the computer’s work.

Each part of the computer has a specific job, and together they make it
possible for us to interact with and use digital systems in our everyday lives.
Buses:
In computer architecture, a bus is a system of electrical pathways that
connects different parts of a computer, allowing data transfer and
communication between them. It’s a crucial component for connecting the
CPU, memory, and other peripherals, as it enables these parts to
communicate efficiently.

What is a Bus?
- Bus is a communication pathway made up of wires or lines that carry signals
representing data, addresses, or control commands.
- It connects multiple devices and allows for data transfer between the main
components, such as the CPU, memory, and I/O devices.
- System Bus: A bus that connects the major components (like CPU, memory,
and I/O) is called a system bus.

Types of Buses
Buses are categorized based on the type of signals they carry. These
categories include the Data Bus, Address Bus, and Control Bus.

1. Data Bus
- Purpose: Transmits data between the CPU, memory, and I/O devices.
- Bidirectional: The data bus can send data in both directions (from CPU to
memory or from memory to CPU).
- Width: The number of lines in the data bus determines its width, which
affects how much data can be transferred simultaneously. Each line in the
data bus can carry one bit at a time.

2. Address Bus
- Purpose: Carries the address of the data location in memory where the
CPU wants to read from or write to.
- Unidirectional: The address bus is one-way, with addresses typically going
from the CPU to memory or an I/O device.
- Width: The width of the address bus determines the maximum amount of
memory a system can address. For example, a 32-bit address bus can access
up to 4 GB of memory.
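The relationship between address bus width and addressable memory can be checked with a couple of lines of Python (a sketch of the arithmetic in the text, assuming byte-addressable memory):

```python
# Addressable memory grows as 2**width (assuming one byte per address).
def addressable_bytes(address_bus_width: int) -> int:
    return 2 ** address_bus_width

print(addressable_bytes(16))           # 65536 bytes = 64 KB (as in the 8085)
print(addressable_bytes(32) // 2**30)  # 4, i.e. 4 GB for a 32-bit address bus
```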

3. Control Bus
- Purpose: Carries control signals that manage the use and access of the
data and address buses.
- Bidirectional: Control signals flow in both directions between the CPU and
other components.
- Signals: Control lines handle timing and command signals to ensure data
is sent, received, and processed correctly.
- Timing Signals: Indicate when data and addresses on the bus are valid.
- Command Signals: Direct specific operations like Memory Read, Memory
Write, I/O Read, and I/O Write.

Common control lines include:


- Memory Read and Memory Write for accessing memory.
- I/O Read and I/O Write for accessing I/O devices.
- Interrupt Request and Interrupt Acknowledge for handling interrupts.
- Bus Request and Bus Grant for managing device access to the bus.
- Clock to synchronize operations.
Bus Architecture:
In computer systems, a bus is a communication channel that allows different
hardware components (like the CPU, memory, and input/output devices) to
communicate and share data. The bus architecture of a computer system is
how these communication pathways are structured to connect components.
This structure can differ in width (how much data it can transfer at once),
speed, and protocols (communication rules).
In the 8085 microprocessor, the bus organization refers to the structure and
role of different buses that allow the microprocessor to communicate with
other devices in the system. The 8085 microprocessor has three primary types
of buses:
1. Address Bus (16-bit, Unidirectional)
2. Data Bus (8-bit, Bidirectional)
3. Control Bus (Bidirectional)
Here's an explanation of each:

1. Address Bus (16-bit, Unidirectional)


• The 16-bit address bus is used by the 8085 microprocessor to specify the
address of the memory location or I/O device it wants to communicate
with.
• Being 16 bits wide allows the 8085 to address up to 64 KB of memory
(since 2^16 = 65,536 memory locations).
• It is unidirectional, meaning it only sends addresses from the
microprocessor to memory or I/O devices.

2. Data Bus (8-bit, Bidirectional)


• The 8-bit data bus transfers actual data between the microprocessor and
other components like memory and I/O devices.
• Since the 8085 is an 8-bit microprocessor, it processes data in 8-bit
chunks.
• It is bidirectional, meaning data can flow both to and from the
microprocessor.

3. Control Bus (Bidirectional)


• The control bus manages the operation and flow of data between the
microprocessor and external devices.
• Some primary control signals are:
• RD (Read): Tells memory or an I/O device to place data on the
data bus so the microprocessor can read it.
• WR (Write): Tells memory or an I/O device to accept the data the
microprocessor has placed on the data bus and store it at the
specified location.
• HLDA (Hold Acknowledge): Indicates that the microprocessor is in
a hold state and temporarily suspends its operations.
• Other control signals, like ALE (Address Latch Enable), INT
(Interrupt Request), and RESET, help manage data flow and
operations.

Diagram: Bus Organization of the 8085 Microprocessor

(The conceptual diagram from the original notes is not reproduced here; it
shows the 16-bit address bus, 8-bit data bus, and control lines connecting
the 8085 to memory and I/O devices.)

Key Parts of Bus Architecture


A typical computer bus is organized into three main types of lines:
1. Data Lines
2. Address Lines
3. Control Lines
1. Data Lines
Data lines are channels specifically dedicated to carrying actual data between
system components:
- These lines move information like numbers and instructions in the form of
binary signals.
- They allow parallel data transfer, which means multiple bits of data can be
sent at once.
- The width of the data bus (the number of data lines) determines how much
data can be sent in a single transfer. For example, a 32-bit data bus can send
32 bits simultaneously.

2. Address Lines
Address lines are channels that carry memory addresses:
- They indicate where data should be read from or written to in memory or I/O
devices.
- The number of address lines determines the range of memory that the
computer can access.
- When the CPU needs specific data from memory, it places the data's address
on these lines so memory knows exactly where to look.
3. Control Lines
Control lines manage the flow of data and instructions across the system:
- These lines carry control signals that organize data exchanges, ensuring
components work in the right order.
- They perform tasks like memory read/write, I/O read/write, and interrupt
handling.

Examples of Control Signals:


- Memory Write: Transfers data from the CPU to a specific memory location.
- Memory Read: Brings data from a memory location to the CPU.
- Bus Request and Grant: Used when a device needs to take control of the bus
for communication.
- Interrupt Request and Acknowledge: Manages requests from I/O devices for
the CPU’s attention.

Timing in Bus Architecture


Bus communication requires coordination, which is achieved through timing.
There are two main timing methods:

1. Synchronous Bus: Uses a clock signal to keep all components in sync.


- Each clock cycle starts an operation.
- This method supports high-speed data transfer because the system runs in
predictable cycles.

2. Asynchronous Bus: Doesn’t use a central clock; instead, it uses a
“handshake” between components.
- A master device initiates communication, sending a “ready” signal along
with the data.
- The receiving device responds when it’s ready to complete the transfer.
- This approach is more flexible and better for systems with devices that
operate at different speeds.
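The asynchronous handshake can be sketched as a toy model in Python. The class names and method here are invented for illustration and do not follow any real bus protocol specification:

```python
# Toy model of an asynchronous handshake: the master asserts "ready"
# along with the data; the slave latches the data and acknowledges.
class Slave:
    def __init__(self):
        self.latched = None

    def acknowledge(self, data):
        self.latched = data  # the slave latches the data...
        return True          # ...then raises its acknowledge line

class Master:
    def transfer(self, slave, data):
        # master places data on the bus and asserts "ready"
        ack = slave.acknowledge(data)
        # the transfer completes only once the slave acknowledges
        return ack

m, s = Master(), Slave()
assert m.transfer(s, 0x2A)
print(s.latched)  # 42
```

The point of the model: no clock appears anywhere; the transfer finishes when the acknowledge comes back, however long that takes.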

Example: Bus Structure for I/O Interface with Input Devices


Each I/O device (like a keyboard) has a unique address on the bus. When the
CPU needs data from an I/O device, it:
1. Sends the device’s address over the Address Lines.
2. Uses the Control Lines to initiate a read operation.
3. Receives the data through the Data Lines.

For example, when reading data from a keyboard:


- The CPU sends the keyboard’s address.
- The bus's control lines initiate a read.
- Data from the keyboard travels over the data lines to the CPU, where it’s
processed.
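The three-step read sequence above can be sketched in Python. The device address and key code are invented example values, not real hardware addresses:

```python
# Sketch of the three-step I/O read described above.
KEYBOARD_ADDR = 0x10
devices = {KEYBOARD_ADDR: ord('A')}  # the keyboard "holds" the key code for 'A'

def bus_read(address: int) -> int:
    # 1. The CPU drives the device's address onto the address lines.
    selected = address
    # 2. The control lines signal an I/O read operation.
    control = "IO_READ"
    # 3. The addressed device places its data on the data lines.
    if control == "IO_READ":
        return devices[selected]

print(bus_read(KEYBOARD_ADDR))  # 65, the ASCII code for 'A'
```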

By organizing these lines and signals, bus architecture enables seamless
communication and coordination across all hardware components in a
computer system. This structure forms the backbone of how computers
operate efficiently, connecting all essential parts to work as a unified system.
Bus Arbitration in Computer Organization
In computer systems, multiple devices—such as the CPU, memory, and
input/output (I/O) controllers—are connected to a common communication
pathway called a bus. Each device needs access to the bus to transfer data.
However, if multiple devices attempt to use the bus at the same time, it can
lead to data conflicts and system instability. Bus arbitration is the mechanism
used to manage and prioritize which device can control the bus at any given
time, ensuring smooth data flow and preventing data corruption.

Key Concepts of Bus Arbitration


• Bus Master: The device that currently controls the bus and manages
data transfers.
• Bus Request: The signal used by a device to request control of the bus.
• Bus Grant: The signal from the arbiter, allowing a device to take control
of the bus.

Types of Bus Arbitration


Bus arbitration can be managed through centralized and distributed methods.

1. Centralized Arbitration
In centralized arbitration, a single arbiter, or bus controller, manages which
device has access to the bus. Common centralized methods include:
• Daisy Chaining: In this method, bus requests are connected in a chain,
and the arbiter grants access sequentially. The first device that requests
access blocks others from receiving a bus grant, giving priority based on
position in the chain.
• Pros: Simple and inexpensive; devices are easy to add.
• Cons: Devices further along the chain have lower priority, and a
failure in one device can block bus grants for those beyond it.
• Polling (Rotating Priority): The arbiter assigns addresses to each device
and checks them in a rotating sequence. The device that matches the
address receives the bus grant.
• Pros: Fairer allocation without favoritism and maintains system
stability if one device fails.
• Cons: Adding devices increases complexity.
• Fixed Priority (Independent Request): Each device has a unique bus
request line, and the arbiter assigns priority based on the device’s
assigned priority level.
• Pros: Fast response and efficient for high-priority devices.
• Cons: Requires a large number of control lines, increasing
hardware complexity.

2. Distributed Arbitration
In distributed arbitration, all devices collectively determine which one will
control the bus. Each device has an ID number, and the priority is based on
these IDs. Devices send requests and check their priority against others in
real-time, granting the highest-priority device access.
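Distributed arbitration can be sketched as a one-line rule: among the devices currently requesting, the one with the highest ID wins. The IDs below are arbitrary example values:

```python
# Toy model of distributed arbitration: every requesting device compares
# IDs, and the highest ID wins control of the bus.
def arbitrate(requesting_ids):
    return max(requesting_ids) if requesting_ids else None

print(arbitrate([3, 7, 1]))  # 7 wins the bus
print(arbitrate([]))         # None: no requests, so the bus stays idle
```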

Applications of Bus Arbitration


Bus arbitration has several practical uses across different system types:
1. Shared Memory Systems: Allows multiple devices to access a shared
memory without interference.
2. Multi-Processor Systems: Enables multiple processors to communicate
and share memory resources.
3. Input/Output Devices: Allows various I/O devices (e.g., printers,
keyboards) to communicate with the CPU without causing system
conflicts.
4. Real-Time Systems: Ensures timely data transfer to meet strict time
constraints, essential for high-priority tasks.
5. Embedded Systems: Allows sensors, actuators, and controllers in
embedded systems to share the bus for real-time monitoring and control.

Advantages of Bus Arbitration


• Efficient Resource Use: Ensures fair access to the bus, preventing any
device from monopolizing resources.
• Minimizes Data Corruption: Allows only one device at a time to access
the bus, reducing data conflict risks.
• Supports Multiple Devices: Essential for systems with multiple
peripherals, enabling smooth data transfer.
• Real-Time Support: Provides prioritized access, especially useful in real-
time systems to avoid delays.
• Enhanced System Stability: Prevents device conflicts, reducing the risk
of system crashes.

Challenges of Bus Arbitration


• Complexity: As the number of devices increases, managing priorities and
granting access become more complex.
• Potential Delays: Some arbitration methods, like polling, may lead to
delays for devices lower in priority.
• Cost and Hardware Requirements: Methods like fixed priority
arbitration require additional control lines, increasing the cost and
complexity of the system.

Bus Arbitration Techniques:

Bus arbitration techniques are crucial for managing access to a shared bus
in computer systems. Here’s an in-depth explanation of the main bus
arbitration methods:
1. Daisy Chaining Arbitration
In the Daisy Chaining method, multiple devices are connected in series, or
"daisy-chained," along a common bus. Each device has a specific priority
based on its position in the chain, with the highest priority device located
nearest to the controller.

• How it Works:
• The arbiter (or bus controller) sends a bus grant signal down the
chain.
• The first device in the chain that needs the bus will “capture” this
grant signal, effectively blocking it from passing to the next
devices in line.
• This device then becomes the bus master for the duration of the
bus transaction, during which it completes its data transfer.
• Once the transaction is complete, the device releases the bus,
allowing the grant signal to propagate to the next device in the
chain if it has a request.
• Advantages:
• Simple and Cost-Effective: Only one control line is required for the
bus grant signal, which makes the method relatively inexpensive
and straightforward to implement.
• Easy Expansion: Devices can be added to the chain without
complex rewiring or additional control lines.
• Disadvantages:
• Position-Based Priority: Devices closer to the bus arbiter have
higher priority, which can lead to unfair access for lower-priority
devices further down the chain.
• Propagation Delay: The signal must travel through each device in
the chain, which can create delays, especially if the chain is long.
• Single Point of Failure: If one device in the chain fails, it can
disrupt access for devices further down the chain.
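The grant-capture behaviour of daisy chaining can be sketched in Python, with devices listed closest-to-the-arbiter first (an illustrative model, not a hardware description):

```python
# Sketch of daisy-chain arbitration: the grant signal travels down the
# chain and the first device with a pending request captures it.
def daisy_chain_grant(requests):
    """requests[i] is True if device i (device 0 is nearest the arbiter)
    wants the bus; returns the index of the device that captures the grant."""
    for position, wants_bus in enumerate(requests):
        if wants_bus:
            return position  # grant captured; it never reaches later devices
    return None              # grant passes through the whole chain unused

print(daisy_chain_grant([False, True, True]))  # 1: device 1 blocks device 2
```

This makes the fairness problem visible: device 2 can only ever win when device 1 (and device 0) are idle.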

2. Polling (Rotating Priority) Arbitration


The Polling method (also called Rotating Priority) is a more balanced
technique where the arbiter checks each device’s request in a sequential
order.
• How it Works:
• The arbiter sends a polling address to each device in sequence.
• When a device receives its designated address and has a request
pending, it acknowledges the arbiter and takes control of the bus.
• If the device does not need access, the arbiter moves on to poll the
next device in the list.
• Once a device has completed its transaction, the arbiter continues
polling the remaining devices.
• Advantages:
• Fair Access: Since each device is checked in turn, no device is
favored. This method provides equal opportunity for all devices to
access the bus.
• Fault Tolerance: If one device fails, it does not disrupt polling for
other devices, making the system more robust.
• Disadvantages:
• Latency: Each device must wait for its turn in the polling cycle,
which can lead to delays, especially in systems with many devices.
• Complexity for Larger Systems: Adding devices requires modifying
the polling sequence, and larger systems may experience
increased address line requirements and complexity.
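Rotating-priority polling can be sketched as a loop that starts just after the last device served, so no device is permanently favoured (an illustrative model; the start position and request pattern are example values):

```python
# Sketch of rotating-priority polling: the arbiter polls each device in
# turn, beginning after the device that was served last.
def poll(requests, start):
    n = len(requests)
    for i in range(n):
        device = (start + i) % n  # wrap around the polling sequence
        if requests[device]:
            return device
    return None  # no device is requesting the bus

# Devices 0 and 2 are requesting; polling starts after device 0 was served.
print(poll([True, False, True], start=1))  # 2: device 2 is reached before 0
```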

3. Fixed Priority (Independent Request) Arbitration


In the Fixed Priority Arbitration method, each device is assigned a fixed
priority level. This is done using separate bus request and bus grant lines for
each device.
• How it Works:
• Devices with pending requests send a bus request signal to the
arbiter.
• The arbiter contains a priority encoder that assigns the bus to the
device with the highest priority.
• When multiple requests are received, the arbiter selects the device
with the highest fixed priority and sends a bus grant signal to that
device.
• The selected device then becomes the bus master, completes its
transaction, and releases the bus.
• Advantages:
• Fast Response Time: The priority-based decision process is quick,
making this method well-suited for systems with time-sensitive
tasks.
• Predictable Performance: Devices with higher priority always gain
access first, making it easier to ensure critical operations occur
without delay.
• Disadvantages:
• Unfair Access: Devices with lower priorities may experience delays
or even be “starved” if higher-priority devices constantly request
the bus.
• Increased Hardware Cost: The need for individual request and
grant lines for each device requires more hardware, increasing
complexity and cost.
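The arbiter's priority-encoder decision can be sketched in Python. The device names and priority values below are invented examples:

```python
# Sketch of fixed-priority (independent request) arbitration: each device
# has its own request line and a fixed priority; the arbiter grants the
# bus to the highest-priority requester.
def grant(requests):
    """requests maps device name -> (priority, is_requesting)."""
    pending = [(prio, name)
               for name, (prio, asking) in requests.items() if asking]
    if not pending:
        return None
    return max(pending)[1]  # the highest priority value wins

devices = {"disk": (3, True), "keyboard": (1, True), "printer": (2, False)}
print(grant(devices))  # disk: highest priority among the pending requests
```

The starvation risk is easy to see here: as long as "disk" keeps requesting, "keyboard" never wins.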
Bus And Memory Transfer:

In digital systems with multiple registers, transferring data directly
between registers would require extensive wiring, which can become
inefficient and complex. A bus structure streamlines this by providing a
shared pathway that allows registers to transfer information in a more
organized and economical way. Here’s a breakdown of the key points:

Bus System Overview


• Bus Structure: A bus consists of a set of lines, with each line
corresponding to one bit. For instance, an 8-bit bus would have 8 lines.
Registers use these shared lines to transfer binary data.
• Control Signals: To select which register’s data goes onto the bus,
control signals (such as select lines) are employed. These signals control
multiplexers (MUXes) that direct the data from the chosen register to
the bus.

Example: 4-Register Bus System


Imagine a system with four registers (A, B, C, D), each with 4 bits. Here’s how
a bus transfer works with these registers:
• Multiplexer Setup: The system uses four 4-to-1 MUXes (one for each bit
position across the registers). Each multiplexer has four inputs (0–3) for
the data from each register and two select inputs (S1, S0) that
determine which register’s data appears on the bus.
• Select Lines: The select lines S1 and S0 connect to all MUXes and
determine which register’s data is routed to the bus based on their
binary values.

Operation with Select Lines


Here’s how data transfer occurs for different select line values:
1. S1S0 = 00: The MUXes select input 0 from each MUX, corresponding to
Register A, placing Register A’s contents on the bus.
2. S1S0 = 01: Input 1 is selected, routing Register B’s contents to the bus.
3. S1S0 = 10: Input 2 is selected, so Register C’s data goes onto the bus.
4. S1S0 = 11: Input 3 is selected, transferring Register D’s data to the bus.

Function Table for Selection Lines


Here’s a table that shows which register is selected based on the binary
values of S1 and S0:

S1  S0  Register Selected
0   0   Register A
0   1   Register B
1   0   Register C
1   1   Register D

This configuration is essential for simplifying data transfer in a
multi-register system, reducing wiring complexity and enhancing modularity
in digital designs.
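The multiplexer-based selection can be sketched in Python, with the select lines combined into one index exactly as in the function table. The 4-bit register contents are example values:

```python
# Sketch of the 4-register bus: the select lines S1 S0 steer 4-to-1
# multiplexers so that one register's contents drive the bus.
registers = {
    0b00: 0b1010,  # Register A
    0b01: 0b0011,  # Register B
    0b10: 0b1111,  # Register C
    0b11: 0b0101,  # Register D
}

def bus_value(s1: int, s0: int) -> int:
    select = (s1 << 1) | s0   # combine the two select lines into an index
    return registers[select]  # the selected register drives the bus

print(format(bus_value(0, 1), "04b"))  # 0011: Register B is on the bus
```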

Memory Transfer:
In computer systems, memory transfer operations involve moving data
between memory and registers, typically classified as read and write
operations. These operations are fundamental for data access and storage.

Memory Transfer Operations


1. Read Operation: This operation transfers data from memory into a
register.
• Notation: Read: DR←M[AR]
• Here:
• M represents the memory word.
• AR (Address Register) holds the address of the memory word
being accessed.
• DR (Data Register) temporarily holds the data being
transferred.
• Explanation: The Read operation directs the data from the memory
word M[AR] (located at the address specified in AR) to be loaded
into the data register DR.
2. Write Operation: This operation transfers new information from a
register into memory.
• Notation: Write: M[AR]←R1
• Here:
• M is the memory word where data is stored.
• AR (Address Register) specifies the memory address to
which data is being written.
• R1 is the register holding the data to be written to memory.
• Explanation: The Write operation takes the data from register R1
and stores it in the memory word M at the address specified by
AR.
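Both transfers can be sketched in Python, modelling memory as a list of words. The address and data values are example numbers:

```python
# Sketch of the Read (DR <- M[AR]) and Write (M[AR] <- R1) operations.
M = [0] * 16   # a tiny 16-word memory
AR = 5         # address register: the location to access
R1 = 99        # register holding the data to be written

# Write: M[AR] <- R1  (store R1's contents at address AR)
M[AR] = R1

# Read: DR <- M[AR]  (load the word at address AR into the data register)
DR = M[AR]

print(DR)  # 99: the value written is read back from the same address
```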

Summary of Notations and Registers


• Memory Word (M): Represents a unit of data in memory.
• Address Register (AR): Holds the address of the memory location to
access.
• Data Register (DR): Holds data being transferred during a read
operation.
• Register R1: Holds data to be stored during a write operation.
These operations enable the CPU to interact with memory, facilitating data
processing and storage in the system.
