Unit 1
Computer Architecture
Computer architecture defines the functional behavior of a computer system. It outlines the fundamental
design principles and specifications that govern the system's operation. Key aspects of computer
architecture include:
• Instruction Set Architecture (ISA): Defines the set of instructions a processor can understand and
execute.
• Memory Organization: Specifies how memory is structured and accessed, including memory
hierarchy (cache, main memory, secondary storage).
• Data Representation: Determines how data is encoded and stored, such as integer, floating-point,
and character formats.
• Input/Output (I/O) Mechanisms: Defines how the system interacts with external devices.
Computer Organization
Computer organization deals with the structural implementation of a computer system, focusing on how
the architectural components are physically realized. It addresses the hardware details and
interconnections that bring the architectural design to life. Key aspects of computer organization include:
• Hardware Components: Specifies the physical components used, such as processors, memory
modules, and I/O devices.
• Control Unit: The design of the circuitry that controls the execution of instructions.
• Data Path: Defines the data flow between components, including registers, buses, and functional
units.
• Timing and Control Signals: Coordinates the activities of different components.
Differentiation
While computer architecture provides the blueprint, computer organization is responsible for the actual
construction of the building.
• Level of Abstraction: Computer architecture is high-level; computer organization is low-level.
Comparison Across Computer Generations (First to Fifth)
• Size: Large (1st), Smaller (2nd), Smaller (3rd), Miniaturized (4th), Very small (5th)
• Speed: Slow (1st), Faster (2nd), Much faster (3rd), Very fast (4th), Extremely fast (5th)
• Power Consumption: High (1st), Less (2nd), Less (3rd), Very low (4th), Minimal (5th)
• Programming Languages: Machine language (1st), Assembly language and high-level languages (2nd), High-level languages (3rd), High-level languages (4th), AI-specific languages (5th)
Primary and Secondary Memory
• Volatility: Primary memory is volatile (data is lost when power is off); secondary memory is non-volatile (data persists even when power is off).
• Access Method: Primary memory uses direct access; secondary memory uses sequential or direct access.
Primary Memory is used for immediate access to data and instructions during program execution. It is
faster but more expensive and has limited capacity. Secondary Memory is used for long-term storage of
data and programs. It is slower but cheaper and has a larger capacity.
Registers
Registers are high-speed storage locations within the CPU that are used to store data temporarily during
processing. They are essential for the efficient execution of instructions.
Types of Registers:
1. General-Purpose Registers:
o Used for various purposes like storing data, addresses, and intermediate results.
o Examples: AX, BX, CX, DX in x86 architecture.
2. Special-Purpose Registers:
o Dedicated to specific functions.
o Examples:
▪ Program Counter (PC): Stores the memory address of the next instruction to be
fetched.
▪ Instruction Register (IR): Holds the instruction currently being executed.
▪ Memory Address Register (MAR): Stores the memory address to be accessed.
▪ Memory Buffer Register (MBR): Holds data to be written to or read from memory.
▪ Accumulator (AC): Stores intermediate results of arithmetic and logical operations.
▪ Status Register (SR): Stores information about the status of the CPU, such as carry,
zero, and overflow flags.
Operations Performed by Registers in Fetch-Decode-Execute Cycle:
1. Fetch:
o The PC holds the address of the next instruction to be fetched.
o The control unit sends the address in the PC to the Memory Address Register (MAR).
o The memory unit fetches the instruction from the specified address and stores it in the
Memory Buffer Register (MBR).
o The instruction is then transferred to the Instruction Register (IR).
2. Decode:
o The control unit decodes the instruction in the IR to determine the operation to be
performed and the operands involved.
3. Execute:
o The control unit generates control signals to execute the instruction.
o The appropriate registers (e.g., MAR, MBR, AC) are used to access memory, fetch operands,
and store results.
o The ALU performs arithmetic and logical operations on the operands.
o The results are stored in registers or written back to memory.
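The register transfers described above can be made concrete with a small simulation. The sketch below models a hypothetical machine invented purely for illustration: the 16-bit instruction format, the opcode values, and the 256-word memory are assumptions, not part of any real ISA. It steps the PC, MAR, MBR, IR, and AC through repeated fetch-decode-execute cycles.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical machine: each 16-bit word holds an opcode in the high byte
   and a memory address in the low byte. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint16_t memory[256] = {
        [0]  = (OP_LOAD  << 8) | 10,  /* AC <- M[10]      */
        [1]  = (OP_ADD   << 8) | 11,  /* AC <- AC + M[11] */
        [2]  = (OP_STORE << 8) | 12,  /* M[12] <- AC      */
        [3]  = (OP_HALT  << 8),
        [10] = 7, [11] = 5
    };
    uint16_t PC = 0, MAR = 0, MBR = 0, IR = 0, AC = 0;

    for (;;) {
        /* Fetch: PC -> MAR, M[MAR] -> MBR -> IR, then advance the PC. */
        MAR = PC;
        MBR = memory[MAR];
        IR  = MBR;
        PC++;

        /* Decode: split the IR into opcode and operand address. */
        uint8_t opcode  = IR >> 8;
        uint8_t address = IR & 0xFF;

        /* Execute: reuse MAR/MBR for the memory operand, AC for results. */
        if (opcode == OP_HALT) break;
        MAR = address;
        if      (opcode == OP_LOAD)  { MBR = memory[MAR]; AC = MBR;  }
        else if (opcode == OP_ADD)   { MBR = memory[MAR]; AC += MBR; }
        else if (opcode == OP_STORE) { MBR = AC; memory[MAR] = MBR;  }
    }

    printf("M[12] = %d\n", memory[12]);  /* prints 12, i.e. 7 + 5 */
    return 0;
}

Running the sketch stores 7 + 5 = 12 at location 12, with each register playing the role described in the steps above.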
Opcode, Operands, and Instruction Statements
• Opcode (Operation Code): The part of an instruction that specifies the operation to be performed
(e.g., add, subtract, load, store).
• Operand: The data on which the operation is performed. Operands can be immediate values,
register values, or memory addresses.
• Instruction Statement: A complete instruction, consisting of an opcode and one or more operands.
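For example, in the x86 assembly statement ADD AX, BX, the opcode is ADD and the operands are the registers AX and BX; the instruction adds the contents of BX to AX. In MOV AX, 5 the second operand is an immediate value, while in MOV AX, [2000] it is a memory address.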
How the Memory and the Processor Can Be Connected
The primary method of connecting the processor and main memory is through a system bus. A system bus
is a collection of wires that carry data, addresses, and control signals between the processor and memory.
Key Components of a System Bus:
1. Address Bus: Carries memory addresses from the processor to the memory.
2. Data Bus: Carries data between the processor and memory.
3. Control Bus: Carries control signals to coordinate the transfer of data and instructions.
Memory Access Process:
1. Address Generation: The processor generates the memory address of the instruction or data to be
accessed.
2. Address Placement: The address is placed on the address bus.
3. Read/Write Signal: A control signal (read or write) is sent on the control bus to indicate the desired
operation.
4. Memory Access: The memory, upon receiving the address and control signal, either reads the data
from the specified memory location or writes data to it.
5. Data Transfer: The data is transferred between the memory and the processor via the data bus.
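The five steps above can be sketched in code by treating the three buses as fields of a structure. The signal names, the single read/write line, and the 1024-word memory below are simplifications invented for illustration and do not follow any particular bus standard.

#include <stdint.h>
#include <stdio.h>

#define MEM_SIZE 1024

/* Invented signal names: an address bus, a bidirectional data bus,
   and a single read/write line standing in for the control bus. */
typedef struct {
    uint32_t address_bus;
    uint32_t data_bus;
    int      write_enable;   /* 0 = read, 1 = write */
} Bus;

static uint32_t memory[MEM_SIZE];

/* Memory side: respond to whatever the processor placed on the bus. */
static void memory_cycle(Bus *bus) {
    if (bus->write_enable)
        memory[bus->address_bus] = bus->data_bus;   /* write */
    else
        bus->data_bus = memory[bus->address_bus];   /* read  */
}

/* Processor side: address generation/placement, control signal,
   memory access, data transfer. */
static uint32_t cpu_read(Bus *bus, uint32_t addr) {
    bus->address_bus  = addr;
    bus->write_enable = 0;
    memory_cycle(bus);
    return bus->data_bus;
}

static void cpu_write(Bus *bus, uint32_t addr, uint32_t value) {
    bus->address_bus  = addr;
    bus->data_bus     = value;
    bus->write_enable = 1;
    memory_cycle(bus);
}

int main(void) {
    Bus bus = {0};
    cpu_write(&bus, 42, 0xCAFE);
    printf("M[42] = 0x%X\n", (unsigned)cpu_read(&bus, 42));  /* prints 0xCAFE */
    return 0;
}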
Additional Considerations:
• Memory Hierarchy: Modern computer systems often employ a memory hierarchy to improve
performance. This hierarchy includes:
o Registers: Fastest but smallest storage.
o Cache Memory: High-speed memory that stores frequently accessed data.
o Main Memory: Larger and slower than cache, but faster than secondary storage.
o Secondary Storage: Slowest but largest storage (e.g., hard disk, SSD).
• Memory Interleaving: This technique divides memory into banks to allow simultaneous access to multiple memory locations, improving performance (a small addressing sketch follows this list).
• Memory Mapping: The process of assigning specific memory addresses to different devices and
software components.
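For the interleaving technique mentioned above, a common low-order scheme maps consecutive addresses to different banks so that sequential accesses can overlap in time. The four-bank arrangement in the sketch below is an arbitrary illustrative choice.

#include <stdio.h>

#define NUM_BANKS 4   /* arbitrary example: 4-way low-order interleaving */

int main(void) {
    /* Consecutive addresses fall in different banks, so sequential
       accesses can proceed in parallel. */
    for (unsigned addr = 0; addr < 8; addr++)
        printf("address %u -> bank %u, offset %u\n",
               addr, addr % NUM_BANKS, addr / NUM_BANKS);
    return 0;
}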
By understanding the fundamental principles of memory-processor communication, we can appreciate the
efficiency and complexity of modern computer systems.
Number Representation and Arithmetic Operations
Binary Number System
• Binary Addition:
o Similar to decimal addition, but with base 2.
o Rules:
▪ 0+0=0
▪ 0+1=1
▪ 1+0=1
▪ 1 + 1 = 10 (carry over 1 to the next bit)
• Binary Subtraction:
o Can be done directly or using 2's complement method.
o 2's complement method (a worked 8-bit example follows this list):
▪ Find the 2's complement of the subtrahend (invert its bits and add 1).
▪ Add it to the minuend.
▪ If there is a carry-out, discard it; the remaining bits are the result.
• Binary Multiplication:
o Similar to decimal multiplication, but with binary digits.
o Multiply each digit of the multiplier with the multiplicand.
o Shift the partial products and add them up.
• Binary Division:
o Similar to decimal division.
o Divide the dividend by the divisor, bit by bit.
o Subtract the divisor from the dividend or a partial remainder.
o If the subtraction leaves a non-negative result, write a 1 in the quotient; otherwise write a 0 and keep the previous remainder.
o Shift the divisor (or bring down the next dividend bit) and repeat the process.
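Here is the worked 8-bit example referred to above: 13 - 9 computed with the 2's complement method. The 8-bit width and the operand values are arbitrary choices made for illustration.

#include <stdint.h>
#include <stdio.h>

/* Print the low 8 bits of a value as a binary string. */
static void print_bits(const char *label, unsigned v) {
    printf("%-18s", label);
    for (int i = 7; i >= 0; i--)
        putchar((v >> i) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void) {
    uint8_t minuend = 13, subtrahend = 9;             /* arbitrary 8-bit example */

    uint8_t complement = (uint8_t)(~subtrahend + 1);  /* invert bits, add 1      */
    unsigned sum = minuend + complement;              /* bit 8 is the carry-out  */

    print_bits("minuend", minuend);
    print_bits("subtrahend", subtrahend);
    print_bits("2's complement", complement);
    print_bits("sum (low 8 bits)", sum & 0xFF);

    printf("carry-out = %u (discarded), result = %u\n",
           (sum >> 8) & 1, sum & 0xFF);               /* result = 4 */
    return 0;
}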
Signed and Unsigned Representation
• Unsigned Numbers:
o All bits represent the magnitude of the number.
o Range: 0 to 2^n - 1 for an n-bit number.
• Signed Numbers:
o Sign-Magnitude:
▪ First bit represents the sign (0 for positive, 1 for negative).
▪ Remaining bits represent the magnitude.
o 1's Complement:
▪ Invert all bits of the positive number to get the negative number.
o 2's Complement:
▪ Invert all bits of the positive number and add 1.
▪ More efficient for arithmetic operations and widely used.
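The three signed encodings can be compared directly by printing their bit patterns. The sketch below uses an arbitrary 8-bit example (-5); the value and the width are the only assumptions.

#include <stdint.h>
#include <stdio.h>

/* Print the low 8 bits of a value as a binary string. */
static void print_bits(const char *label, unsigned v) {
    printf("%-20s", label);
    for (int i = 7; i >= 0; i--)
        putchar((v >> i) & 1 ? '1' : '0');
    putchar('\n');
}

int main(void) {
    uint8_t magnitude = 5;   /* arbitrary example: encode -5 in 8 bits */

    print_bits("+5", magnitude);
    print_bits("-5 sign-magnitude", 0x80u | magnitude);          /* set the sign bit */
    print_bits("-5 1's complement", (uint8_t)~magnitude);        /* invert every bit */
    print_bits("-5 2's complement", (uint8_t)(~magnitude + 1));  /* invert, add 1    */

    /* Ranges for 8 bits: unsigned 0..255, 2's complement -128..127. */
    printf("8-bit unsigned range: 0 to 255\n");
    printf("8-bit 2's complement range: -128 to 127\n");
    return 0;
}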
Floating-Point Representation
• Used to represent real numbers with a wide range of values.
• IEEE 754 Standard:
o Sign Bit: Determines the sign of the number.
o Exponent: Encodes the scale of the number as a power of 2, stored with a bias.
o Mantissa (Significand): Holds the significant digits and determines the precision of the number.
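In the 32-bit single-precision format these fields occupy 1, 8, and 23 bits respectively, and the exponent is stored with a bias of 127. The sketch below separates the fields of one arbitrarily chosen value.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float value = -6.25f;                 /* arbitrary example value */
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);   /* reinterpret the 32-bit pattern */

    unsigned sign     = bits >> 31;            /* 1 bit                        */
    unsigned exponent = (bits >> 23) & 0xFF;   /* 8 bits, stored with bias 127 */
    unsigned mantissa = bits & 0x7FFFFF;       /* 23 fraction bits             */

    printf("sign = %u, biased exponent = %u (actual %d), mantissa = 0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    /* -6.25 = -1.5625 * 2^2, so sign = 1, exponent = 129, mantissa = 0x480000 */
    return 0;
}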
Character Representation
• ASCII: American Standard Code for Information Interchange.
o Uses 7-bit codes to represent characters.
• Unicode: A more comprehensive character encoding standard that supports a wider range of
characters from different languages.
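A minimal illustration: in ASCII (and in the compatible low range of Unicode), a character is simply a small integer, so 'A' prints as 65.

#include <stdio.h>

int main(void) {
    char c = 'A';
    printf("'%c' = %d (0x%X)\n", c, c, (unsigned)c);   /* 'A' = 65 (0x41)             */
    printf("'%c' + 1 = '%c'\n", c, c + 1);             /* 'B' immediately follows 'A' */
    return 0;
}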
Performance
Several factors influence the performance of a computer system:
• Technology: Smaller transistors and advanced fabrication techniques lead to faster and more
efficient processors.
• Parallelism:
o Instruction-Level Parallelism: Overlapping the execution of instructions.
o Multicore Processors: Multiple cores on a single chip for parallel processing.
o Multiprocessors: Multiple processors working together.
o Message-Passing Multicomputers: Interconnected computers that cooperate by exchanging messages to solve problems.
Additional Notes:
• Overflow: Occurs when the result of an arithmetic operation exceeds the maximum representable
value for a given number of bits.
• Underflow: Occurs when the result of an arithmetic operation is too small in magnitude to be represented, for example a floating-point result closer to zero than the smallest representable nonzero value.
• Fixed-Point Representation: Used to represent fractional numbers with a fixed number of digits after the radix (binary or decimal) point.
• Error Detection and Correction: Techniques like parity checking and checksums are used to detect
and correct errors in data transmission and storage.
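To make the overflow note concrete, the sketch below adds two 8-bit signed values whose true sum does not fit in 8 bits; the operand values are arbitrary, and the wrap-around result assumes the usual two's-complement conversion behavior.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 8-bit signed values range from -128 to 127. */
    int8_t a = 100, b = 50;
    int8_t sum = (int8_t)(a + b);   /* true sum 150 does not fit in 8 bits */

    /* Overflow test for addition: the operands share a sign but the
       result's sign differs from theirs. */
    int overflow = ((a >= 0) == (b >= 0)) && ((sum >= 0) != (a >= 0));

    printf("100 + 50 as int8_t = %d, overflow = %d\n", sum, overflow);
    /* typically prints -106, overflow = 1: the result wrapped around */
    return 0;
}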