Com 314 Handout

The document provides an overview of computer systems, detailing their historical evolution through five generations, from vacuum tubes to microprocessors and artificial intelligence. It discusses computer architecture, including functional units, performance measures, and instruction sets, as well as memory hierarchy and types of memory. Key concepts such as the fetch-execute cycle, RISC and CISC architectures, and number systems are also covered.

CHAPTER ONE

INTRODUCTION TO COMPUTER SYSTEM

Historical Background of Computer Systems

Computer systems have evolved through several generations:


1️⃣ First Generation (1940-1956):

 Used vacuum tubes for computation.


 Large, expensive, and consumed a lot of power.
 Example: ENIAC.

2️⃣ Second Generation (1956-1963):

 Transistors replaced vacuum tubes.


 Smaller, faster, and more reliable.

3️⃣ Third Generation (1964-1971):

 Introduced Integrated Circuits (ICs).


 Allowed multiple users to access the same computer simultaneously.

4️⃣ Fourth Generation (1971-Present):

 Microprocessors integrated the CPU onto a single chip.


 Led to the development of personal computers.

5️⃣ Fifth Generation (Present and Beyond):

 Focuses on artificial intelligence, quantum computing, and parallel processing.

1.2 Architectural Development and Styles


Computer architecture refers to the design and organization of a computer’s components.

 Von Neumann Architecture: Single memory for data and instructions.


 Harvard Architecture: Separate memories for data and instructions.
 Modern Architectures: Incorporate parallelism and pipelining for better performance.

1.3 Technological Developments

Significant advancements include:

 Moore’s Law: The number of transistors on a chip doubles every 18-24 months.
 Miniaturization: Chips have become smaller and more powerful.
 Emergence of GPUs: Specialized for parallel processing tasks.

1.4 Performance Measures

Computer performance is evaluated using metrics such as:

 Clock Speed: Measured in GHz, determines how fast a processor executes instructions.
 Throughput: The number of instructions executed per unit time.
 Latency: Time taken to complete a single task.
 Benchmarking: Testing systems under specific workloads.
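As a rough illustration of how these metrics relate, the following Python sketch computes latency and throughput for a simplified single-core model; the clock speed, instruction count, and average cycles-per-instruction value are hypothetical numbers chosen only to show the arithmetic.

# Illustrative sketch of basic CPU performance arithmetic.
# All numbers are hypothetical, chosen only to demonstrate the formulas.

clock_hz = 3.0e9          # 3 GHz clock speed
cpi = 1.5                 # assumed average clock cycles per instruction
instructions = 6.0e9      # instructions in the hypothetical workload

latency_s = instructions * cpi / clock_hz   # time to finish the whole task
throughput_ips = clock_hz / cpi             # instructions completed per second

print(f"Latency:    {latency_s:.2f} s")                     # 3.00 s
print(f"Throughput: {throughput_ips:.2e} instructions/s")   # 2.00e+09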

CHAPTER TWO
COMPUTER SYSTEM ARCHITECTURE

Functional Units of a Computer System

A computer system consists of functional units that work together to process data and execute
tasks.

Key Functional Units:

1️⃣ Input/Output Units:

 Input Unit: Accepts data from the external environment and converts it into a format the
computer can process (e.g., keyboard, mouse).
 Output Unit: Converts processed data into a human-readable format (e.g., monitor,
printer).

2️⃣ Arithmetic and Logic Unit (ALU):

 Performs arithmetic operations (e.g., addition, subtraction) and logical operations (e.g.,
AND, OR).
 Acts as the brain of computation.

3️⃣ Control Unit:

 Directs and coordinates the activities of the computer by interpreting instructions.


 Manages the flow of data between the CPU, memory, and I/O devices.

4️⃣ Memory Unit:

 Stores data, instructions, and intermediate results.


 Divided into primary memory (RAM, ROM) and secondary memory (HDD, SSD).

5️⃣ Registers:

 Small, high-speed storage locations within the CPU.


 Used to store temporary data and instructions during processing.

2.2 Basic Processor Architecture

A processor architecture defines how the CPU is designed to execute instructions.


Key components include:

 Control Unit (CU): Fetches and decodes instructions.


 Arithmetic Logic Unit (ALU): Executes arithmetic and logical operations.
 Registers: Temporary storage for operands and results.
 Cache: High-speed memory for frequently accessed data.

2.3 Fetch and Execute Cycle

The fetch and execute cycle describes how the CPU processes instructions.

Steps in the Cycle:

1️⃣ Fetch: The CPU fetches the next instruction from memory.
2️⃣ Decode: The control unit interprets the instruction.
3️⃣ Execute: The instruction is executed by the ALU or relevant unit.
4️⃣ Store: The result is written back to memory or a register.
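The following is a minimal Python sketch of this cycle on a toy machine. The tuple-based instruction format and the register names are invented purely for illustration; a real CPU fetches binary-encoded instructions.

# Minimal sketch of the fetch-decode-execute cycle on a toy machine.
memory = [                      # toy "program memory"
    ("LOAD", "R1", 5),          # R1 <- 5
    ("LOAD", "R2", 7),          # R2 <- 7
    ("ADD",  "R3", "R1", "R2"), # R3 <- R1 + R2
    ("HALT",),
]
registers = {"R1": 0, "R2": 0, "R3": 0}
pc = 0                          # program counter

while True:
    instruction = memory[pc]    # Fetch: read the next instruction
    pc += 1
    opcode = instruction[0]     # Decode: inspect the opcode
    if opcode == "LOAD":        # Execute and Store
        _, reg, value = instruction
        registers[reg] = value
    elif opcode == "ADD":
        _, dst, src1, src2 = instruction
        registers[dst] = registers[src1] + registers[src2]
    elif opcode == "HALT":
        break

print(registers)                # {'R1': 5, 'R2': 7, 'R3': 12}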

2.4 Types of Computer Architectures

1️⃣ Von Neumann Architecture:

 Uses a single memory for both data and instructions.


 Simple and cost-effective.
 Limitation: the Von Neumann bottleneck (instructions and data share a single memory path, which limits the transfer rate between memory and CPU).

2️⃣ RISC Architecture (Reduced Instruction Set Computing):

 Focuses on a small set of simple instructions.


 Optimized for faster execution.
 Example: ARM processors in mobile devices.

3️⃣ CISC Architecture (Complex Instruction Set Computing):

 Supports a large set of complex instructions.


 Suitable for complex operations requiring fewer lines of assembly code.
 Example: Intel x86 processors.

2.5 RISC Design Principles and Performance Merits

RISC Design Principles:

 Simple instructions that can execute in a single clock cycle.


 Load and store architecture (only load/store instructions access memory).
 Large number of general-purpose registers.

Performance Merits:

 Faster execution due to fewer clock cycles per instruction.


 Easier to optimize and pipeline.
 Reduces hardware complexity, lowering cost.

CHAPTER THREE
COMPUTER ARITHMETIC AND OPERATORS

Concepts of Number Systems

Computers operate using binary numbers (0s and 1s) as they rely on electronic signals (on/off
states). Other number systems include decimal, octal, and hexadecimal.

Types of Number Systems:

1️⃣ Binary (Base-2): Uses two symbols: 0 and 1.

 Example: 1011₂ = 11₁₀ (decimal).

2️⃣ Decimal (Base-10): Standard number system humans use.

 Example: 45₁₀

3️⃣ Octal (Base-8): Uses digits 0-7.

 Example: 23₈ = 19₁₀

4️⃣ Hexadecimal (Base-16): Uses digits 0-9 and letters A-F.

 Example: 1A₁₆ = 26₁₀

Conversions Between Number Systems:

 Binary to Decimal: Multiply each bit by 2ⁿ (where n is its position from the right, starting at 0) and add the results.
 Decimal to Binary: Divide the number by 2 and record the remainders.
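The following Python sketch spells out both procedures step by step (Python's built-in int(s, 2) and bin(n) perform the same conversions directly).

def binary_to_decimal(bits: str) -> int:
    """Multiply each bit by 2**n (n counted from the right) and add the results."""
    total = 0
    for n, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** n
    return total

def decimal_to_binary(value: int) -> str:
    """Repeatedly divide by 2 and collect the remainders."""
    if value == 0:
        return "0"
    remainders = []
    while value > 0:
        remainders.append(str(value % 2))
        value //= 2
    return "".join(reversed(remainders))

print(binary_to_decimal("1011"))   # 11
print(decimal_to_binary(19))       # 10011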

3.2 Integer Arithmetic


Integer arithmetic involves operations like addition, subtraction, multiplication, and division
using integers.

Binary Subtraction (Using Borrow):

Similar to decimal subtraction, but borrows occur from the next higher bit.

Two’s Complement Representation

Two’s complement is used to represent negative integers in binary.


 The most significant bit (MSB) is the sign bit:
o 0 for positive numbers.
o 1 for negative numbers.

How to Find Two’s Complement:

1️⃣ Invert all bits (1 to 0, 0 to 1).


2️⃣ Add 1 to the inverted binary number.

Example:

Find the two’s complement of 5₁₀

1. 5₁₀ = 00000101₂ (in 8 bits).
2. Invert: 11111010₂.
3. Add 1: 11111011₂ (equals −5₁₀).
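A short Python sketch of the same two steps for an 8-bit word:

def twos_complement(value: int, bits: int = 8) -> str:
    """Return the bits-wide two's complement pattern representing -value."""
    inverted = value ^ ((1 << bits) - 1)         # step 1: invert all bits
    result = (inverted + 1) & ((1 << bits) - 1)  # step 2: add 1, keep 8 bits
    return format(result, f"0{bits}b")

print(format(5, "08b"))        # 00000101  (+5)
print(twos_complement(5))      # 11111011  (−5)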

Two’s Complement Arithmetic

Two’s complement simplifies binary arithmetic for signed integers.

 Addition and Subtraction: Perform normal binary addition, ignoring any carry out of the most significant bit.

Example:
Adding 5₁₀ and −3₁₀:

 5 = 00000101₂
 −3 = 11111101₂ (two’s complement of 3).
 Sum: 00000010₂ = 2₁₀.
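The same addition, sketched in Python for an 8-bit word; the carry out of the most significant bit is simply masked off.

BITS = 8
MASK = (1 << BITS) - 1                # 0xFF keeps only the low 8 bits

five = 0b00000101                     # +5
minus_three = 0b11111101              # −3 (two's complement of 3)

total = (five + minus_three) & MASK   # carry out of bit 7 is ignored
print(format(total, "08b"))           # 00000010
print(total)                          # 2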

3.5 Floating-Point Arithmetic


Floating-point representation is used for real numbers, allowing for fractional components (e.g.,
3.14).

Structure of Floating-Point Numbers (IEEE 754 Standard):

1. Sign Bit (1 bit): Indicates the sign (0 for positive, 1 for negative).
2. Exponent (8 bits): Stores the exponent in a biased form.
3. Mantissa (23 bits): Represents the significant digits.

Example:
3.25₁₀ in IEEE 754 format:

1. Convert 3.25 to binary: 11.01₂
2. Normalize: 1.101 × 2¹.
3. Store as: Sign = 0, Exponent = 127 + 1 = 128, Mantissa = 10100000... (101 followed by zeros to fill 23 bits).
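A short Python sketch (using only the standard struct module) packs 3.25 into IEEE 754 single precision and pulls the three fields apart, confirming the values above.

import struct

bits = struct.unpack(">I", struct.pack(">f", 3.25))[0]   # the 32-bit pattern

sign     = bits >> 31                 # 1 bit
exponent = (bits >> 23) & 0xFF        # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF            # 23 bits (implicit leading 1 not stored)

print(sign)                           # 0
print(exponent)                       # 128  (127 + 1)
print(format(mantissa, "023b"))       # 10100000000000000000000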
CHAPTER FOUR

DESIGN OF CONTROL UNITS & PROCESSING UNIT

Define the Control Unit and Its Structure

The Control Unit (CU) is a component of the CPU responsible for directing and coordinating
the activities of the computer. It ensures the correct execution of instructions by managing the
flow of data between the CPU, memory, and I/O devices.

Structure of the Control Unit:

1. Instruction Decoder: Interprets machine language instructions fetched from memory.


2. Control Signals Generator: Sends control signals to other parts of the system, such as
the ALU and memory.
3. Timing Unit: Synchronizes operations based on the clock.
4. Registers: Temporary storage for intermediate data used by the CU.

4.2 Explain Hardwired and Microprogrammed Control Units

Hardwired Control Unit:

 Built using fixed hardware circuits that generate control signals.


 Faster but inflexible, as changes require modifying the hardware.

Example: Used in RISC processors for speed.

Microprogrammed Control Unit:

 Uses a control memory that stores a set of microinstructions.


 Each microinstruction generates specific control signals.
 Easier to modify or debug compared to hardwired units.
Example: Used in CISC processors for flexibility.

4.3 Explain the Functions of a Control Unit

1. Fetch: Retrieves the instruction from memory.


2. Decode: Interprets the instruction.
3. Control Signal Generation: Sends signals to execute the instruction.
4. Execution Monitoring: Ensures the instruction executes correctly.

4.4 Describe CPU Components and Data Path Organization

Components of the CPU:

1. Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.


2. Control Unit (CU): Manages the execution of instructions.
3. Registers: Temporary storage for data and instructions.

Data Path Organization:

The data path refers to the flow of data within the CPU. It includes:

 Buses: Pathways for data transfer between CPU components.


 Multiplexers (MUX): Selects data from multiple inputs.
 Arithmetic Circuit: Executes arithmetic operations.
 Logic Circuit: Executes logical operations.

Example:
For an addition operation, the data path involves fetching operands from registers, sending them
to the ALU, performing the addition, and storing the result back.
4.5 Explain the CPU Instruction Cycle

The instruction cycle is the sequence of steps the CPU follows to execute an instruction.

Steps in the Instruction Cycle:

1️⃣ Fetch:
 The control unit retrieves the next instruction from memory, using the Program Counter (PC).
2️⃣ Decode:
 The control unit interprets the instruction to determine the operation and the operands.
3️⃣ Execute:
 The ALU or another component performs the operation.
4️⃣ Store:
 The result is written back to a register or memory.

Example: Executing ADD R1, R2, R3:

 Fetch: Instruction is fetched from memory.


 Decode: CU interprets the instruction as R1 = R2 + R3.
 Execute: ALU performs the addition.
 Store: Result is stored in R1.
CHAPTER FIVE

STRUCTURE OF COMPUTER INSTRUCTION SET

Define an Instruction Set and Its Design

An instruction set is the collection of commands or instructions that a CPU can execute. It
serves as the interface between the software (programs) and the hardware (CPU).

Components of an Instruction:

1. Opcode (Operation Code): Specifies the operation to be performed (e.g., ADD, SUB,
MOV).
2. Operands: Specifies the data to be operated on (e.g., registers, memory locations)
3. Addressing Mode: Specifies how the operand is accessed (e.g., direct, indirect).

Instruction Set Design:

 Length: Determines the number of bits in an instruction.


 Complexity: Balances between simple (RISC) and complex (CISC) instructions.
 Registers: Number and type of registers supported by the CPU.
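As a concrete illustration of how these components fit into an instruction word, the following Python sketch packs an opcode and three register operands into 16 bits. The 4-bit field layout and the opcode values are invented for illustration; real instruction formats vary by architecture.

OPCODES = {"ADD": 0x1, "SUB": 0x2, "MOV": 0x3}   # hypothetical opcode values

def encode(opcode: str, rd: int, rs1: int, rs2: int) -> int:
    """Pack opcode | rd | rs1 | rs2 into a 16-bit word (4 bits each)."""
    return (OPCODES[opcode] << 12) | (rd << 8) | (rs1 << 4) | rs2

word = encode("ADD", 1, 2, 3)          # ADD R1, R2, R3
print(format(word, "016b"))            # 0001000100100011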

5.2 List Types of Instruction Sets

1. Data Movement Instructions:

 Transfers data between memory, registers, or I/O devices.


 Example: MOV R1, R2 (Copy contents of R2 to R1).

2. Arithmetic and Logical Instructions:

 Performs mathematical or logical operations.


 Examples: ADD R1, R2, R3 (Add R2 and R3, store in R1), AND R1, R2.
3. Control Instructions:

 Alters the sequence of execution (branching, looping).


 Examples: JMP LABEL (Jump to a label), CALL FUNCTION.

4. Input/Output Instructions:

 Manages interaction with peripheral devices.


 Examples: IN R1, PORT (Read data from a port into R1), OUT PORT, R1.

5.3 Explain the Operation of an Instruction Set

An instruction set operates by breaking down high-level tasks into smaller operations executable
by the CPU.

Example: Execution of ADD Instruction

Instruction: ADD R1, R2, R3

1. Fetch: CPU retrieves the instruction from memory.


2. Decode: Control Unit interprets it as “Add R2 and R3, store in R1.”
3. Execute: ALU performs the addition.
4. Store: Result is saved in R1.

5.4 Describe Memory Locations and Addressing Modes

Memory Locations:

Memory is divided into locations identified by unique addresses. Each location can store a fixed
amount of data (e.g., 1 byte).
Addressing Modes:

Addressing modes define how the CPU locates operands.

1. Immediate Mode: Operand is specified directly in the instruction.

 Example: ADD R1, #5 (Add 5 to R1).

2. Direct Mode: Address of the operand is given in the instruction.

 Example: LOAD R1, 100 (Load data from memory address 100 to R1).

3. Indirect Mode: Address of the operand is stored in a register.

 Example: LOAD R1, (R2) (Load data from the address stored in R2).

4. Indexed Mode: Combines a base address and an offset.

 Example: LOAD R1, 100(R2) (Load data from address 100 + contents of R2).
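A compact Python sketch of how the operand is located in each of the four modes, using a toy register file and memory whose contents are hypothetical.

registers = {"R1": 0, "R2": 40}
memory = {100: 7, 40: 9, 140: 3}                  # address -> contents (hypothetical)

registers["R1"] = 5                               # Immediate: operand 5 is part of the instruction
print(registers["R1"])                            # 5
registers["R1"] = memory[100]                     # Direct:    LOAD R1, 100
print(registers["R1"])                            # 7
registers["R1"] = memory[registers["R2"]]         # Indirect:  LOAD R1, (R2)    -> memory[40]
print(registers["R1"])                            # 9
registers["R1"] = memory[100 + registers["R2"]]   # Indexed:   LOAD R1, 100(R2) -> memory[140]
print(registers["R1"])                            # 3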

5.5 Explain Instruction Types

1. Data Movement Instructions:

Transfer data between registers, memory, or I/O.

 Example: MOV R1, R2.

2. Arithmetic and Logical Instructions:

Perform calculations and logic operations.

 Example: ADD R1, R2, R3 (Arithmetic), AND R1, R2 (Logical).


3. Sequencing Instructions:

Change the flow of execution.

 Example: JMP LABEL, CALL FUNCTION.

4. Input/Output Instructions:

Control communication with peripheral devices.

 Example: IN R1, PORT, OUT PORT, R1.


CHAPTER SIX

ORGANIZATION AND MANAGEMENT OF COMPUTER MEMORY SYSTEM

6.1 Explain the Concept of Memory Hierarchy

The memory hierarchy is a structured organization of different types of memory based on


speed, size, and cost. The goal is to optimize performance by storing frequently accessed data in
the fastest memory.

Levels in the Memory Hierarchy:

1️⃣ Registers:

 Located inside the CPU.


 Fastest and smallest memory.
 Stores immediate data for processing.

2️⃣ Cache Memory:

 High-speed memory located between the CPU and main memory.


 Reduces the time needed to access frequently used data.

3️⃣ Main Memory (RAM):

 Larger and slower than cache memory.


 Temporarily stores data and instructions for active processes.

4️⃣ Secondary Storage (HDD, SSD):

 Used for long-term storage.


 Much larger and slower than main memory.

5️⃣ Tertiary Storage (e.g., Cloud, Backup Disks):

 Used for archival purposes.


 Slowest and cheapest storage option.

6.2 Explain Memory Structures, Backing Store, and Cache Memory

Memory Structures:

1️⃣ Volatile Memory:

 Loses data when power is turned off.


 Examples: RAM (Random Access Memory).

2️⃣ Non-Volatile Memory:

 Retains data even when power is turned off.


 Examples: ROM (Read-Only Memory), Flash Storage.

Backing Store:

 Refers to secondary storage devices like hard drives (HDDs) or solid-state drives
(SSDs).
 Used to store data that is not actively being used.

Cache Memory:

 Acts as a buffer between the CPU and main memory.


 Stores copies of frequently accessed data to reduce access time.
 Divided into levels:
o L1 Cache: Closest to the CPU, smallest, and fastest.
o L2 Cache: Larger than L1 but slower.
o L3 Cache: Shared by multiple CPU cores, larger and slower than L2.
6.3 Describe Memory Mapping Techniques

Memory mapping refers to how data in the main memory is mapped to the cache for faster
access.

Memory Mapping Techniques:

1️⃣ Direct Mapping:

 Each block of main memory maps to a specific cache line.


 Simple but may lead to frequent overwriting if multiple blocks compete for the same cache line (a short sketch follows this list).

2️⃣ Fully Associative Mapping:

 Any block in main memory can be placed in any cache line.


 Offers flexibility but requires more hardware for searching.

3️⃣ Set Associative Mapping:

 A hybrid of direct and fully associative mapping.


 Memory is divided into sets, and each block maps to a specific set but can occupy any
line within the set.
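As a small sketch of the direct-mapped case, the Python snippet below splits an address into a cache line index and a tag; the block size and number of lines are hypothetical. The two addresses shown map to the same line, illustrating how competing blocks overwrite each other.

BLOCK_SIZE = 64        # bytes per block (assumed)
NUM_LINES = 128        # number of cache lines (assumed)

def direct_map(address: int):
    block = address // BLOCK_SIZE
    line = block % NUM_LINES          # which cache line the block must use
    tag = block // NUM_LINES          # identifies which block occupies that line
    return line, tag

print(direct_map(0x0000))             # (0, 0)
print(direct_map(0x2000))             # (0, 1) -> evicts the block loaded from 0x0000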

6.4 Explain Main Memory, Virtual Memory, and One-Level Store

Main Memory (RAM):

 Holds data and instructions that the CPU actively uses.


 Faster than secondary storage but volatile.
Virtual Memory:

 Extends the size of physical memory by using disk space as additional memory.
 Allows programs to use more memory than physically available.
 Implemented using paging or segmentation.

One-Level Store:

 Combines main memory and secondary storage into a single addressable memory space.
 Abstracts the difference between fast (main) and slow (secondary) storage.

6.5 Explain Memory Management Techniques

1. Paging:

 Divides memory into fixed-size blocks called pages.


 Logical memory is divided into pages, and physical memory is divided into frames.
 The OS maintains a page table to map logical addresses to physical addresses (a short sketch of this translation follows this list).

2. Segmentation:

 Divides memory into variable-sized segments based on the logical divisions of a program
(e.g., code, data, stack).
 Each segment has a base address and a limit.

3. Paged Segmentation:

 Combines paging and segmentation for efficient memory use.


 Each segment is divided into pages, and these pages are mapped to physical memory.
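A short Python sketch of the paging translation described above, using a toy page table; the page size, table contents, and logical address are all hypothetical.

PAGE_SIZE = 4096                      # 4 KiB pages (assumed)
page_table = {0: 5, 1: 9, 2: 1}       # logical page number -> physical frame number

def translate(logical_address: int) -> int:
    """Split the address into (page, offset) and look the page up in the table."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]          # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(8300))                # page 2, offset 108 -> frame 1 -> 4204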
CHAPTER SEVEN

LOW LEVEL PARALLELISM IN PROCESSORS

Explain the Concept of Parallel Computing

Parallel computing refers to the simultaneous execution of multiple processes or tasks to


improve performance and efficiency. It divides a problem into smaller sub-problems, solving
them concurrently.

Key Features of Parallel Computing:

1️⃣ Task Decomposition: Dividing a program into smaller, independent tasks.


2️⃣ Concurrent Execution: Tasks are executed at the same time on multiple processors or cores.
3️⃣ Scalability: Performance improves as more processors are added.
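A minimal Python sketch of task decomposition and concurrent execution using the standard-library ProcessPoolExecutor; the prime-counting workload and the chunk boundaries are chosen purely for illustration.

from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [start, end) -- one small, independent sub-problem."""
    start, end = bounds
    count = 0
    for n in range(max(start, 2), end):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Task decomposition: split the range into independent chunks.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    # Concurrent execution: each chunk runs on its own worker process.
    with ProcessPoolExecutor() as pool:
        print(sum(pool.map(count_primes, chunks)))   # 9592 primes below 100,000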

7.2 Describe How Parallel Computing Can Be Achieved

Parallel computing can be achieved through:

1️⃣ Multicore Processors:

 A single CPU contains multiple cores, each capable of executing tasks independently.
 Example: Modern Intel and AMD processors.

2️⃣ Multiprocessing:

 A computer has multiple CPUs (processors) working together.


 Example: High-performance servers.

3️⃣ Cluster Computing:

 Multiple interconnected computers work as a single system.


 Example: Supercomputers.
4️⃣ Grid Computing:

 Distributes tasks across geographically dispersed computers.


 Example: Scientific research projects like SETI@home.

7.3 Explain the Benefits of Parallel Computing

1️⃣ Speed: Tasks are completed faster by dividing them among processors.
2️⃣ Efficiency: Maximizes the utilization of available resources.
3️⃣ Scalability: Can handle larger and more complex problems as resources grow.
4️⃣ Energy Efficiency: Multicore processors can achieve better performance per watt.

7.4 Explain the Concept of Pipelining

Pipelining is a technique where multiple instructions are overlapped during execution. Each
instruction is divided into stages, and each stage is processed concurrently.

Stages in Pipelining (Example: 5-Stage Pipeline):

1️⃣ Fetch: Retrieve the instruction from memory.


2️⃣ Decode: Interpret the instruction.
3️⃣ Execute: Perform the operation in the ALU.
4️⃣ Memory Access: Read or write data from/to memory.
5️⃣ Write Back: Store the result in a register.

Example:

If one instruction takes 5 cycles, a pipeline allows the CPU to overlap the execution of multiple instructions, completing roughly one instruction per cycle once the pipeline is full.
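A quick Python sketch of the cycle-count arithmetic behind this claim, assuming an ideal pipeline with no stalls or hazards.

def pipeline_cycles(n_instructions: int, stages: int) -> int:
    """Cycles needed when each stage takes one clock and the pipeline never stalls."""
    return stages + (n_instructions - 1)

n, k = 100, 5
unpipelined = n * k                       # every instruction takes all 5 cycles: 500
pipelined = pipeline_cycles(n, k)         # 5 + 99 = 104 cycles
print(unpipelined, pipelined)             # 500 104
print(round(unpipelined / pipelined, 2))  # speed-up of about 4.81, approaching 5 for large n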
7.5 Describe a Basic Pipeline for a Computer System

A basic pipeline consists of:


1️⃣ Instruction Fetch Stage: Reads the instruction from memory.
2️⃣ Instruction Decode Stage: Decodes the fetched instruction.
3️⃣ Execute Stage: Executes the operation or computation.
4️⃣ Memory Access Stage: Reads or writes data as required by the instruction.
5️⃣ Write-Back Stage: Writes the result back to a register.

7.6 Discuss Problems Associated with Pipeline Operations

1. Structural Hazards:

 Occur when multiple instructions need the same hardware resource at the same time.
 Example: Two instructions require access to memory simultaneously.

2. Data Hazards:

 Occur when an instruction depends on the result of a previous instruction that hasn’t
completed.
 Types:
o RAW (Read After Write): Reading data before it’s written.
o WAR (Write After Read): Writing data before it’s read.

3. Control Hazards:

 Occur during branch instructions when the pipeline doesn’t know which instruction to
fetch next.
 Example: Conditional jumps.

7.7 Explain Performance Optimization Using Pipelining


1️⃣ Instruction-Level Parallelism (ILP): Optimizes the execution of independent instructions.
2️⃣ Pipeline Depth: Increasing the number of stages in a pipeline to handle more instructions
concurrently.
3️⃣ Branch Prediction: Reduces control hazards by predicting the outcome of branch
instructions.
4️⃣ Superscalar Execution: Issues more than one instruction per clock cycle to multiple execution units (multiple pipelines), allowing instructions to execute in parallel.
