
Computer Architecture

Lecture 1
By
Ms. Maryam Arshad
What is an architecture?

• Architecture refers to the design and structure of a system, whether physical or abstract, that defines how its components are organized, interact, and function together to achieve specific goals.
• In computer science and computer engineering, computer architecture is a
description of the structure of a computer system made from component parts.

Set of Instructions + Computer Organization = Computer Architecture
Impact of an Architecture:
Computer Architecture

• Computer Architecture is the design and organization of a computer system. It focuses on the structure and interaction of hardware components and the logical framework that defines how software interacts with hardware to perform tasks. Computer architecture ensures that systems are efficient, scalable, and capable of executing the desired operations.
Components of Computer Architecture

• Processor (CPU)
  • The brain of the computer that executes instructions.
  • Includes control units, registers, and ALUs.
• Memory (Primary & Secondary)
  • Hierarchical structure for storing data and instructions.
  • Includes cache, main memory (RAM), and secondary storage.
• Input/Output (I/O) Systems
  • Interfaces for external communication and data exchange.
  • Examples: keyboards, monitors, and network devices.
• Interconnects
  • Communication channels between components (e.g., buses, bridges).
Types of Computer Architectures:

1. Von Neumann Architecture
• Single memory for instructions and data.
• Sequential instruction execution.
2. Harvard Architecture
• Separate memory for instructions and data.
• Allows parallel processing.
3. RISC vs. CISC Architectures
• RISC (Reduced Instruction Set Computing): Simple instructions, faster
execution.
• CISC (Complex Instruction Set Computing): More complex instructions,
fewer per program.
Fetch-Decode-Execute Cycle
The operation of a computer architecture can be described in
the following steps:
• Fetch:
The control unit fetches the next instruction from memory
using the program counter (PC).
• Decode:
The fetched instruction is decoded to determine the
operation and required data.
• Execute:
The decoded instruction is executed, which may involve the
ALU, memory, or I/O devices.
• Store:
The result of the execution is written back to a register or memory, and the program counter is updated to point to the next instruction.
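To make the cycle concrete, here is a minimal sketch (not from the slides) of a hypothetical toy machine driven by a fetch-decode-execute loop; the opcodes (LOAD, ADD, STORE, HALT), the instruction format, and the memory layout are invented purely for illustration.

```python
# A minimal sketch of the fetch-decode-execute cycle on a hypothetical toy machine.
# The instruction set (LOAD, ADD, STORE, HALT) and encoding are invented for illustration.

def run(memory):
    """Run a toy program stored in `memory`; each instruction is an (opcode, operand) tuple."""
    pc = 0    # program counter
    acc = 0   # single accumulator register
    while True:
        # Fetch: read the instruction at the address held in the program counter.
        opcode, operand = memory[pc]
        # Decode and execute: determine the operation and carry it out.
        if opcode == "LOAD":       # acc <- memory[operand]
            acc = memory[operand]
        elif opcode == "ADD":      # acc <- acc + memory[operand]
            acc += memory[operand]
        elif opcode == "STORE":    # memory[operand] <- acc
            memory[operand] = acc
        elif opcode == "HALT":
            break
        # Store / update: results are already written back above; advance the program counter.
        pc += 1
    return memory

# Von Neumann style: instructions and data share one memory
# (addresses 0-3 hold the program, addresses 4-6 hold data).
memory = {
    0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
    4: 2, 5: 3, 6: 0,
}
print(run(memory)[6])   # prints 5
```

Note that the toy machine keeps instructions and data in the same memory dictionary, which previews the Von Neumann model discussed next.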
Von Neumann Architecture
• The Von Neumann Architecture is a foundational computer architecture model that describes how a computer processes instructions and data. Proposed by John von Neumann in 1945, this architecture is still the basis for most modern computers. It organizes a computer into specific functional components and defines how these components interact.
Key Features of Von Neumann Architecture
• Single Storage for Data and Instructions:
Both data and program instructions are stored in the same memory unit. This is in contrast to other
architectures like Harvard Architecture, where data and instructions are stored separately.
• Sequential Instruction Execution:
Instructions are fetched from memory and executed one at a time, in a sequential manner (Fetch-Decode-Execute Cycle).
• Centralized Control:
A single control unit manages program execution and orchestrates the operation of other components.
• Use of Binary Data:
Data and instructions are represented in binary form.
• Memory Addressability:
Each memory location is uniquely addressable, which allows data and instructions to be retrieved
efficiently.
Components of Von Neumann Architecture
• Memory (Storage):
• Stores both data and instructions.
• Memory is typically organized in a linear sequence and accessed by addresses.
• Central Processing Unit (CPU):
• The CPU is the brain of the computer and consists of two main subcomponents:
• Arithmetic Logic Unit (ALU):
• Performs arithmetic operations (addition, subtraction, etc.).
• Performs logical operations (AND, OR, NOT, etc.).
• Control Unit (CU):
• Fetches instructions from memory.
• Decodes and executes instructions by generating control signals.
• Input/Output Devices:
• Input devices (e.g., keyboard, mouse) send data to the CPU.
• Output devices (e.g., monitor, printer) receive processed data from the CPU.
• Bus System: The system bus facilitates communication between components and consists of:
• Data Bus: Transfers data between the CPU, memory, and I/O devices.
• Address Bus: Transfers memory addresses to identify where data is stored.
• Control Bus: Transfers control signals (e.g., read/write operations).
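As a rough illustration of how the three buses cooperate during a memory access, the sketch below models a single read/write transaction; the Bus class and signal names are assumptions made for this example, since real buses are hardware signal lines rather than software objects.

```python
# A rough sketch of the roles of the data, address, and control buses during a memory access.
# The Bus class and the READ/WRITE signal names are invented for illustration.

class Bus:
    def __init__(self, size=16):
        self.memory = [0] * size   # unified memory shared by instructions and data

    def transfer(self, control, address, data=None):
        # Control bus: carries the operation type (READ or WRITE).
        # Address bus: carries the memory location involved.
        # Data bus: carries the value being read or written.
        if control == "WRITE":
            self.memory[address] = data
            return None
        elif control == "READ":
            return self.memory[address]
        raise ValueError(f"unknown control signal: {control}")

bus = Bus()
bus.transfer("WRITE", address=3, data=42)   # CPU writes 42 to address 3
print(bus.transfer("READ", address=3))      # CPU reads it back: prints 42
```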
Advantages & Disadvantages:
Advantages of Von Neumann Architecture
• Simplifies the design and construction of computers by using a single memory for both data and instructions.
• Allows flexibility in programming, as instructions and data are stored in the same memory.
• Provides a straightforward and well-defined structure for implementing algorithms.

Disadvantages of Von Neumann Architecture
• Von Neumann Bottleneck: The shared memory for data and instructions causes a bottleneck, as the CPU has to wait for data or instructions to be fetched sequentially.
• Performance Limitation: The sequential nature of instruction processing limits the system's performance.
• Memory Organization: Since data and instructions are stored in the same memory, bugs or malicious code can overwrite instructions.
Harvard Architecture
• The Harvard Architecture is a computer architecture model that contrasts with the Von Neumann
Architecture by providing separate memory spaces for instructions and data. This separation
allows for simultaneous access to both instructions and data, potentially enhancing performance
and efficiency. Originating from the Harvard Mark I relay-based computer developed in the 1940s during World War II, the Harvard Architecture has evolved and found applications in various modern computing systems.
Key Features of Harvard Architecture
• Separate Memory for Instructions and Data:
• Instruction Memory: Stores the program instructions.
• Data Memory: Stores the data to be processed.
• Independent Buses:
• Utilizes separate buses for fetching instructions and accessing data, enabling simultaneous
transmission and reducing bottlenecks.
• Parallelism:
• Allows parallel access to instructions and data, potentially increasing the throughput and
performance of the system.
• Distinct Instruction and Data Paths:
• Differentiates the pathways for instruction and data processing, enhancing security and
integrity by preventing data from inadvertently modifying instructions.
Components of Harvard Architecture
• Instruction Memory (Program Memory):
• Dedicated memory space that exclusively stores the sequence of instructions to be executed
by the CPU.
• Data Memory:
• Separate memory space for storing data operands required by the instructions.
• Arithmetic Logic Unit (ALU):
• Performs arithmetic and logical operations on the data fetched from the data memory.
• Control Unit (CU):
• Manages the fetching of instructions from instruction memory and the execution of those
instructions.
• Separate Buses:
• Instruction Bus: Transfers instructions from instruction memory to the CPU.
• Data Bus: Transfers data between data memory and the CPU.
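The following sketch is an illustrative assumption rather than anything from the slides: it contrasts the Harvard model with the unified-memory example earlier by keeping instructions and data in separate memories, so that an instruction fetch and a data access could proceed in parallel in hardware.

```python
# A minimal sketch of the Harvard memory model: separate instruction and data memories,
# each with its own access path. The class and its fields are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class HarvardMachine:
    instruction_memory: list = field(default_factory=list)          # program only
    data_memory: list = field(default_factory=lambda: [0] * 8)      # operands only

    def step(self, pc):
        # In hardware, the two accesses below could happen in the same cycle,
        # because each memory has its own bus; here they are simply two reads.
        opcode, operand = self.instruction_memory[pc]   # instruction fetch
        value = self.data_memory[operand]               # data fetch
        return opcode, value

m = HarvardMachine(instruction_memory=[("LOAD", 2)])
m.data_memory[2] = 7
print(m.step(0))   # ('LOAD', 7)
```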
Advantages & Disadvantages
Advantages of Harvard Architecture
• Increased Performance: Simultaneous access to instructions and data can lead to faster execution times and higher throughput.
• Reduced Bottleneck: Separate buses eliminate the Von Neumann bottleneck, where instruction and data fetches contend for the same bus.
• Enhanced Security and Reliability: Protects instructions from being accidentally or maliciously modified by data operations.
• Optimized Memory Access: Instruction and data memories can be optimized independently for speed and size based on their specific requirements.

Disadvantages of Harvard Architecture
• Increased Complexity and Cost: Managing separate memory systems and buses adds complexity to the hardware design, potentially increasing costs.
• Flexibility Limitations: Fixed separation between instruction and data memory can limit flexibility in program execution and data handling.
• Memory Utilization Inefficiency: May lead to inefficient memory usage if one memory (instruction or data) is underutilized while the other is overutilized.
• Design Complexity: More complex control logic is required to manage separate memory pathways and synchronization between them.
Difference between Von Neumann & Harvard

Feature | Harvard Architecture | Von Neumann Architecture
Memory Structure | Separate memories for instructions and data | Unified memory for both instructions and data
Buses | Separate buses for instructions and data | Single bus for both instructions and data
Parallelism | Supports simultaneous access to instructions and data | Access is sequential; potential bottleneck
Performance | Generally higher due to parallel access | Can be limited by the Von Neumann bottleneck
Complexity | More complex due to separate memory systems | Simpler design with unified memory
Typical Use Cases | Embedded systems, digital signal processors (DSPs) | General-purpose computers, most traditional PCs
RISC

• RISC (Reduced Instruction Set Computing) is a CPU design philosophy that emphasizes simplicity and efficiency by using a small, highly optimized set of instructions. Each instruction is designed to execute in a single clock cycle, which simplifies the processor's design and allows for faster performance.
• Characteristics of RISC
• Simple instructions, hence simple instruction decoding.
• Instructions fit within one word.
• Instructions take a single clock cycle to execute.
• More general-purpose registers.
• Simple addressing modes.
• Fewer data types.
• Pipelining is easy to achieve.
Advantages vs. Disadvantages of RISC

Advantages of RISC
• Simpler instructions: RISC processors use a smaller set of simple instructions, which makes them easier to decode and execute quickly. This results in faster processing times.
• Faster execution: Because RISC processors have a simpler instruction set, they can execute instructions faster than CISC processors.
• Lower power consumption: RISC processors consume less power than CISC processors, making them ideal for portable devices.

Disadvantages of RISC
• More instructions required: RISC processors require more instructions to perform complex tasks than CISC processors.
• Increased memory usage: RISC processors require more memory to store the additional instructions needed to perform complex tasks.
• Higher cost: Developing and manufacturing RISC processors can be more expensive than CISC processors.
CISC
• CISC (Complex Instruction Set Computing) is a CPU design philosophy that emphasizes a rich set of
instructions, where each instruction can perform multiple tasks in a single operation, such as memory access,
arithmetic, and logic. This reduces the number of instructions required for complex operations, resulting in
compact code that saves memory. However, the complexity of the instructions means that execution often
requires multiple clock cycles and a more intricate processor design.

• Characteristics of CISC
• Complex instructions, hence complex instruction decoding.
• Instructions may be larger than one word.
• Instructions may take more than a single clock cycle to execute.
• Fewer general-purpose registers, since operations can be performed directly on memory.
• Complex addressing modes.
• More data types.
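A minimal sketch of the code-size trade-off between the two philosophies is shown below, assuming invented instruction names and a single task (adding two memory words); real RISC and CISC instruction sets differ in many more ways.

```python
# A rough sketch of the RISC vs. CISC trade-off for the task "memory[b] = memory[a] + memory[b]".
# The instruction names (LOAD, ADD, STORE, ADDM) are invented for illustration.

# RISC style: only LOAD/STORE touch memory, so the task needs several simple instructions,
# each of which typically executes in one clock cycle.
risc_program = [
    ("LOAD",  "r1", "a"),         # r1 <- memory[a]
    ("LOAD",  "r2", "b"),         # r2 <- memory[b]
    ("ADD",   "r2", "r1", "r2"),  # r2 <- r1 + r2
    ("STORE", "r2", "b"),         # memory[b] <- r2
]

# CISC style: one complex, multi-cycle instruction reads memory, adds, and writes memory.
cisc_program = [
    ("ADDM", "b", "a"),           # memory[b] <- memory[b] + memory[a]
]

# The trade-off in a nutshell: RISC needs more (but simpler) instructions and more
# program memory; CISC needs fewer (but slower-to-decode) instructions.
print(len(risc_program), "RISC instructions vs", len(cisc_program), "CISC instruction")
```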
Advantages vs. Disadvantages of CISC

Advantages of CISC
• Reduced code size: CISC processors use complex instructions that can perform multiple operations, reducing the amount of code needed to perform a task.
• More memory efficient: Because CISC instructions are more complex, they require fewer instructions to perform complex tasks, which can result in more memory-efficient code.
• Widely used: CISC processors have been in use for a longer time than RISC processors, so they have a larger user base and more available software.

Disadvantages of CISC
• Slower execution: CISC processors take longer to execute instructions because they have more complex instructions and need more time to decode them.
• More complex design: CISC processors have more complex instruction sets, which makes them more difficult to design and manufacture.
• Higher power consumption: CISC processors consume more power than RISC processors because of their more complex instruction sets.
