Assignment 1
Summary of Lecture 1
FETCH, DECODE, EXECUTE CYCLE EXPLANATION:
https://www.youtube.com/watch?v=Z5JC9Ve1sfI
**************************
Computer organization refers to the way different parts of a computer are arranged and how
they interact with each other. It includes the hardware components and the way they
communicate to perform tasks.
When you input something into a computer (like typing a document), several steps happen to
execute that input. Here’s a simplified overview:
1. Input: You provide data using input devices (like a keyboard or mouse).
2. Processing: The computer processes this data. This is where the microprocessor (or
CPU, Central Processing Unit) comes into play.
3. Storage: The data and results can be temporarily stored in RAM (Random Access
Memory) or saved to long-term storage like a hard drive or SSD (Solid State Drive).
4. Output: Finally, the results are sent to output devices (like a monitor or printer).
A microprocessor is the brain of the computer. Here are its main parts:
1. ALU (Arithmetic Logic Unit): This part performs all arithmetic (addition, subtraction)
and logical operations (comparisons).
2. Control Unit: This unit directs the operations of the computer. It tells the other parts of
the microprocessor what to do and when.
3. Registers: These are small storage locations within the CPU that hold data temporarily
while it is being processed. They are very fast and help speed up operations.
4. Cache Memory: This is a small amount of very fast memory located inside or very close
to the CPU. It stores frequently used data so the CPU can access it quickly.
5. Bus: The bus is a system of pathways used for communication between different parts of
the computer, such as between the CPU, memory, and input/output devices.
A microprocessor is like the brain of a computer or any electronic device. It controls all
the tasks and calculations that need to happen for the device to work. Let’s break it down
in simple terms:
What is a Microprocessor?
A microprocessor is a small chip that can process information and execute instructions
given by a computer program. It reads data, performs operations on it (like adding,
subtracting, or comparing), and then gives out results.
1. ALU (Arithmetic Logic Unit):
This part performs the arithmetic and logical operations, acting as the calculator of the
microprocessor.
2. Control Unit:
This acts like a manager inside the microprocessor, telling the other parts what to do and
when.
3. Registers:
These are tiny memory locations inside the microprocessor. They hold data temporarily
while the microprocessor is working on it. Think of them like scratchpads, where the
microprocessor quickly jots down information while solving problems.
4. Cache:
This is super-fast memory inside the microprocessor. It stores frequently accessed data
and instructions so that the microprocessor doesn’t have to wait to fetch them from the
slower main memory (RAM). It’s like a “short-term memory” to keep things moving
quickly.
5. Buses:
Buses are the communication lines that connect different parts of the microprocessor.
There are three main types:
o Address Bus: Carries memory addresses, telling the microprocessor where to find
data.
o Data Bus: Carries the actual data being transferred between components.
o Control Bus: Sends signals to manage various functions of the microprocessor.
6. Clock:
The clock keeps everything running in sync. It generates a steady stream of pulses that
coordinate the microprocessor’s operations. The speed of the clock determines how fast
the microprocessor can process data.
In Summary:
The Control Unit acts like a manager, telling other parts what to do.
Registers and Cache store temporary data to help the processor work faster.
The Clock keeps everything in rhythm, determining the speed at which the processor
works.
The instruction cycle proceeds in three basic steps:
1. Fetch: The CPU retrieves the next instruction from memory.
2. Decode: The CPU interprets the instruction to understand what action to take.
3. Execute: The CPU performs the action using the ALU, control unit, and registers.
Open Architecture:
Open architecture refers to a computer system design that allows for easy integration and
compatibility with other systems or components. This means:
Flexibility: Users can add or change hardware and software without restrictions.
Examples of open architecture include many PC systems, where you can swap out parts like the
CPU, RAM, or graphics card.
Closed Architecture:
Closed architecture, on the other hand, refers to a system design that is more restrictive. In this
case:
Limited Flexibility: Users cannot easily add or replace components; the system is
usually proprietary.
Control: The original manufacturer retains control over the design and specifications,
often limiting access to them.
An example of closed architecture is Apple's early Macintosh computers, where hardware and
software were tightly integrated.
Key Differences
Feature            Open Architecture                      Closed Architecture
Flexibility        High; users can easily upgrade         Low; upgrades and customizations
                   and customize                          are limited
Access to Design   Open; designs are publicly available   Closed; designs are proprietary
Computer architecture refers to the attributes of a system visible to the programmer, such as
the instruction set, while computer organization refers to how those attributes are implemented.
For instance, while architecture determines if a computer has a specific instruction (like
multiplication), organization decides how that instruction is implemented—whether through
specialized hardware or by using existing components. Many manufacturers, such as IBM with
its System/370, design families of computers that share the same architecture but differ in
organization to offer varied price and performance options.
The chapter also covers the structure and function of computers. Computers are hierarchical
systems made up of subsystems. At every level of this hierarchy, two elements are important: the
structure (how components are connected) and the function (what each component does). The
four basic functions of a computer are data processing, data storage, data movement, and
control. These functions work together to allow computers to perform a wide range of tasks,
such as processing information, storing data, and interacting with external devices.
1. CPU: The central processing unit that performs calculations and controls the system.
2. Main memory: Stores data and program instructions.
3. I/O: Moves data between the computer and its external environment.
4. System interconnection: The mechanisms (such as buses) that provide communication
among the CPU, main memory, and I/O.
Within the CPU, the control unit manages operations, while the ALU (arithmetic and logic unit)
handles data processing. Small, fast registers temporarily store data, and the CPU
interconnection links all these components together, enabling smooth operation.
This chapter lays the foundation for understanding the more advanced topics discussed in later
chapters.
Assignment 2
Summary of Lecture 2
Functional view of Computer System:
The Functional View of a Computer System breaks down how a computer operates by
focusing on its essential tasks. At the highest level, a computer performs four key functions:
1. Data Processing
This is the main function of a computer. It processes data according to instructions provided by a
program. The CPU (Central Processing Unit) is responsible for performing mathematical
calculations, logical comparisons, and other operations on the input data. This processed data is
then transformed into meaningful output.
2. Data Storage
A computer must store data, both temporarily and permanently. Primary storage (like RAM)
holds data temporarily while it’s being processed. Secondary storage (like hard drives or SSDs)
is used for long-term storage of files, programs, and data that can be retrieved later when needed.
3. Data Movement
A computer needs to move data between its components and with external devices. This includes
transferring data between memory, the CPU, and I/O devices (Input/Output), like keyboards,
monitors, and printers. Data movement is handled through internal buses and external
connections (e.g., USB, network cables).
4. Control
The control function coordinates the operations of the computer. The control unit within the
CPU directs the sequence of operations, ensuring that data is processed, moved, and stored in the
correct order. It interprets program instructions and orchestrates the actions of the other
components.
Together, these four functions work seamlessly to allow a computer to perform complex tasks,
handle large amounts of data, and respond to user commands efficiently.
1. Central Processing Unit (CPU)
The CPU is the brain of the computer. It is responsible for processing data and controlling the
system's operations. It consists of two main parts:
Arithmetic Logic Unit (ALU): Handles all mathematical calculations and logical
operations.
Control Unit (CU): Directs the operation of the computer by interpreting instructions
and managing the flow of data between different components.
2. Main Memory
This is the system’s short-term memory, where data and instructions are stored while the CPU
processes them. RAM (Random Access Memory) is volatile, meaning its data is lost when the
computer is turned off. It allows for fast access to data, which is critical for performance.
3. Input/Output (I/O) Devices
These are the components that allow the computer to interact with the outside world. Input
devices (like keyboards, mice, and scanners) bring data into the system, while output devices
(like monitors, printers, and speakers) display or produce the results of the computer's
processing.
4. Secondary Storage
Unlike RAM, secondary storage provides long-term storage for data and programs. Devices like
hard drives, SSDs (Solid-State Drives), and optical drives store data even when the system is
powered off. This type of storage is slower than RAM but provides much larger capacity.
5. System Bus
The system bus is the communication pathway that connects the CPU, memory, and I/O devices.
It transfers data between these components and ensures they work together. The bus can be
thought of as the computer’s nervous system, enabling the flow of information.
Together, these structural components form the physical makeup of a computer system. Each
part plays a crucial role in ensuring that the computer can execute programs, store data, and
interact with users and other devices effectively.
Fetch Cycle
Process: The control unit fetches the instruction that needs to be executed by:
1. Using the Program Counter (PC) to determine the address of the next
instruction.
2. Loading the instruction from that memory address into the Instruction Register
(IR).
Execute Cycle
Process: The control unit decodes the instruction in the IR and carries out the necessary
actions, such as reading or writing data, performing an ALU operation, or altering the
control flow.
2. Performance Design: The chapter emphasizes the ongoing effort to improve computer
performance. Factors like microprocessor speed, which has grown due to miniaturization and
efficient design, are crucial. Techniques such as pipelining (processing multiple instructions
simultaneously) and parallel execution have greatly enhanced speed. However, the imbalance
between processor speed and memory access times has posed challenges. To address this,
designers use techniques like caching and wider memory data paths.
3. The Evolution of Intel's x86 Architecture: Intel’s x86 microprocessor family is highlighted
as an example of computer architecture evolution. Starting with the 8086 in the 1970s, Intel
gradually increased speed, memory addressing capabilities, and added advanced features like
floating-point operations and multicore processing. The Pentium series and later processors
introduced techniques like superscalar execution, which allows multiple instructions to be
executed in parallel.
4. Embedded Systems and ARM Architecture: The chapter also introduces embedded
systems, which are specialized computing systems found within other devices (e.g., cars,
appliances). The ARM architecture, based on RISC (Reduced Instruction Set Computer)
principles, dominates the embedded systems market due to its efficiency and low power
consumption. ARM's processors are widely used in smartphones and other compact devices.
Assignment 3
Chapter 3 Summary: A Top-Level View of Computer Function and
Interconnection
Reference Book: Computer Organization and Architecture
Designing for Performance
Author: William Stallings
Chapter 3 of Computer Organization and Architecture: Designing for Performance by William
Stallings covers the basic functions and structures within a computer system, focusing on how
different components interact and communicate.
1. Computer Components
The chapter begins by revisiting the von Neumann architecture, which is based on the idea that
both data and instructions are stored in a single, read-write memory. It emphasizes that
computers operate in a sequential manner, with the CPU fetching and executing instructions
stored in memory. The primary components of a computer are the CPU, main memory, and I/O
modules (Input/Output). These components work together through interconnections that allow
for efficient data movement and control.
2. Computer Function
The fundamental task of a computer is to execute programs. The instruction cycle is central to
this process, which consists of two main stages:
Fetch Cycle: The CPU fetches the next instruction from memory, storing it in the
instruction register.
Execute Cycle: The CPU interprets and executes the instruction, performing tasks like
reading/writing data, processing, or altering the control flow.
This cycle continues until the program ends or an interrupt occurs. Interrupts allow other
devices or processes to signal the CPU, improving efficiency by enabling the CPU to handle I/O
operations while continuing with other tasks.
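The fetch-execute cycle described above can be sketched as a small simulation. The accumulator-style instruction set (LOAD, ADD, STORE, HALT) below is invented for illustration; real instruction formats are far richer.

```python
# A minimal fetch-decode-execute loop for a hypothetical accumulator
# machine. Instructions and data share one memory, as in the von
# Neumann model; the opcodes here are invented for illustration.

memory = {
    0: ("LOAD", 10),   # acc <- memory[10]
    1: ("ADD", 11),    # acc <- acc + memory[11]
    2: ("STORE", 12),  # memory[12] <- acc
    3: ("HALT", None),
    10: 5,
    11: 7,
}

pc = 0           # Program Counter: address of the next instruction
acc = 0          # accumulator register
running = True

while running:
    ir = memory[pc]           # Fetch: copy the instruction into the IR
    pc += 1                   # PC now points to the following instruction
    opcode, operand = ir      # Decode: split into opcode and operand
    if opcode == "LOAD":      # Execute: act according to the opcode
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(memory[12])  # 12
```

Note how the PC is incremented during the fetch step, so a jump instruction could later overwrite it to alter control flow.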
3. Interconnection Structures
The interconnection structure must support several types of transfers:
Memory to/from CPU: The processor reads instructions and data from memory and
writes data back.
I/O to CPU or Memory: The CPU sends or receives data from external devices via the
I/O module.
Direct Memory Access (DMA): In some cases, I/O devices directly exchange data with
memory without CPU involvement.
4. Bus Interconnection
Buses serve as the communication pathway between the system’s components. A typical bus has
three key elements: data lines, which carry the information being transferred; address lines,
which designate the source or destination of the data; and control lines, which manage access
to and use of the bus.
Buses operate under specific rules to ensure that multiple devices can communicate without
conflict. The chapter also introduces the concept of bus hierarchies, where high-performance
devices use a faster local bus while lower-priority devices use an expansion bus to avoid system
bottlenecks.
The chapter concludes with a discussion of PCI, a popular bus standard for connecting
peripheral devices to the main computer. PCI supports high-speed data transfer and is designed
to be flexible, allowing for expansion and the integration of various devices such as graphics
cards, network controllers, and storage devices.
This chapter gives a broad overview of the fundamental functions and structures of computer
systems, emphasizing the importance of efficient data flow and component interconnection.
Assignment 4
Chapter 4 Summary: Cache Memory
Reference Book: Computer Organization and Architecture
Designing for Performance
Author: William Stallings
Chapter 4 of Computer Organization and Architecture: Designing for Performance by William
Stallings discusses the role of cache memory in modern computer systems, focusing on its
principles, design, and performance.
The chapter begins with a look at the memory hierarchy, which organizes memory into levels
based on speed, cost, and size. At the top are registers, followed by cache memory, main
memory (RAM), and finally external memory such as hard disks. Each level trades off speed
for capacity, with faster memory being more expensive and of smaller size.
This hierarchy helps balance cost and performance. The challenge is to keep the data that the
CPU needs frequently in the faster, smaller memory levels, while less frequently accessed data
remains in slower memory.
Cache memory is a small, fast type of memory located closer to the CPU than main memory. It
temporarily stores copies of frequently accessed data from main memory, reducing the time the
CPU spends waiting for data. When the CPU requests data, it first checks the cache, and if the
data is there, it's called a cache hit. If not, it’s a cache miss, and the data is fetched from main
memory, causing a delay.
The principle of locality of reference underpins cache memory design. This principle suggests
that programs tend to access the same data repeatedly (temporal locality) or data near recently
accessed data (spatial locality). Therefore, cache memory is designed to store data that is likely
to be reused soon.
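A minimal sketch of a direct-mapped cache shows how temporal locality turns repeated accesses into hits. The 4-line capacity and the access pattern below are arbitrary choices for the demonstration, not parameters from the text.

```python
# A toy direct-mapped cache: each memory address maps to exactly one
# cache line (address mod number-of-lines), and the tag distinguishes
# which block currently occupies that line.

NUM_LINES = 4
cache = {}            # line index -> (tag, data)
hits = misses = 0

def read(address, main_memory):
    global hits, misses
    line = address % NUM_LINES        # mapping function
    tag = address // NUM_LINES
    entry = cache.get(line)
    if entry is not None and entry[0] == tag:
        hits += 1                     # cache hit: data already present
        return entry[1]
    misses += 1                       # cache miss: fetch from main memory
    cache[line] = (tag, main_memory[address])
    return main_memory[address]

main_memory = list(range(100, 200))   # pretend RAM contents

# Temporal locality: the same four addresses are reused, so after the
# first (compulsory) misses, every repeat access is a hit.
for _ in range(3):
    for addr in (0, 1, 2, 3):
        read(addr, main_memory)

print(hits, misses)  # 8 hits, 4 misses
```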
Cache size: Larger caches can store more data, reducing cache misses, but they are also
more expensive and slower.
Mapping function: This determines how blocks of memory are mapped to cache lines.
Common methods include direct mapping, fully associative mapping, and set-
associative mapping.
Replacement algorithms: When the cache is full, the system must decide which data to
replace. Common strategies include Least Recently Used (LRU), First-In-First-Out
(FIFO), and random replacement.
Write policy: This defines how changes in the cache are written back to main memory,
with options like write-through (updates are made to both cache and memory) or write-
back (updates are only made to memory when the data is removed from the cache).
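The LRU strategy mentioned above can be sketched with an ordered dictionary that tracks recency. The 3-line capacity and the block names below are invented for illustration.

```python
from collections import OrderedDict

# Sketch of an LRU (Least Recently Used) replacement policy. The
# OrderedDict keeps blocks in recency order: the front is the least
# recently used, the back the most recently used.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def access(self, block):
        if block in self.lines:
            self.lines.move_to_end(block)    # mark as most recently used
            return "hit"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict the LRU block
        self.lines[block] = True
        return "miss"

cache = LRUCache(3)
results = [cache.access(b) for b in ["A", "B", "C", "A", "D", "B"]]
print(results)  # ['miss', 'miss', 'miss', 'hit', 'miss', 'miss']
```

When "D" arrives, the cache is full and "B" is the least recently touched block, so "B" is evicted; the later access to "B" therefore misses again.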
The chapter also delves into how cache memory is implemented in specific processors, such as
Pentium 4 and ARM architectures. These modern processors often use multi-level caches (L1,
L2, and even L3), each offering a different balance of speed and capacity. The goal is to reduce
memory access time and improve overall system performance.
This chapter outlines the importance of cache memory in improving the speed of data access and
how various design elements play a role in optimizing its efficiency.
Assignment 5
Reference Book: Computer Organization and Architecture
Designing for Performance
Author: William Stallings
Chapter 5 Summary: Internal Memory
Chapter 5 of Computer Organization and Architecture: Designing for Performance by William
Stallings focuses on internal memory, particularly semiconductor memory technologies like
DRAM, SRAM, and ROM, as well as advanced memory systems and error correction
techniques.
1. Semiconductor Main Memory
The chapter opens by discussing the two main types of semiconductor memory: Dynamic
RAM (DRAM) and Static RAM (SRAM). DRAM stores data as electrical charges on
capacitors, which need regular refreshing due to leakage. SRAM, on the other hand, uses flip-
flops to store data, making it faster but more expensive and less dense than DRAM. DRAM is
typically used for main memory, while SRAM is used in cache memory for quicker access.
The chapter also covers Read-Only Memory (ROM), a non-volatile memory that retains data
even when the power is off. Types of ROM include PROM (Programmable ROM), EPROM
(Erasable PROM), and EEPROM (Electrically Erasable PROM). Flash memory is another
key type of non-volatile memory that is widely used due to its flexibility and faster erasure time
compared to EPROM.
2. Error Correction
Memory systems are prone to errors, and the chapter explains how error correction techniques
are applied to increase reliability. Errors can be categorized into hard failures (permanent
defects) and soft errors (temporary glitches). Error-correcting codes (ECC), like the
Hamming code, are introduced to detect and correct these errors. ECC adds extra bits to data,
allowing the system to detect and fix single-bit errors and, in some cases, detect two-bit errors.
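As a sketch of how ECC works, the Hamming(7,4) code below protects 4 data bits with 3 parity bits, letting a single flipped bit be located and corrected. Bit positions are numbered 1..7, with parity bits at positions 1, 2, and 4; this small Python model is for illustration only.

```python
# Hamming(7,4): each parity bit covers the positions whose binary index
# has a particular bit set. Recomputing the checks on a received word
# yields a "syndrome" that equals the position of any single flipped bit.

def encode(d):                      # d = [d1, d2, d3, d4]
    c = [0] * 8                     # index 0 unused; positions 1..7
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]       # covers positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]       # covers positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]       # covers positions with bit 2 set
    return c[1:]

def correct(word):                  # word = list of 7 bits
    c = [0] + list(word)
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]  # recompute each parity check
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    syndrome = s1 + 2 * s2 + 4 * s4 # position of the erroneous bit, or 0
    if syndrome:
        c[syndrome] ^= 1            # flip the bad bit back
    return c[1:]

code = encode([1, 0, 1, 1])
corrupted = code[:]
corrupted[4] ^= 1                   # flip one bit (position 5)
assert correct(corrupted) == code   # the single-bit error is repaired
```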
3. Advanced DRAM Organization
To address the slower speeds of traditional DRAM, advanced types of DRAM, like
Synchronous DRAM (SDRAM) and Rambus DRAM (RDRAM), have been developed.
SDRAM synchronizes with the system clock, allowing faster and more efficient data transfer.
DDR SDRAM (Double Data Rate SDRAM) improves on this by doubling the data transfer rate,
sending data on both the rising and falling edges of the clock cycle.
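DDR's doubled transfer rate can be turned into a peak bandwidth figure. The numbers below (a 200 MHz bus clock and a 64-bit module) are illustrative values, not figures from the text.

```python
# Peak transfer rate for DDR SDRAM: data moves on both clock edges, so
# the effective transfer rate is twice the bus clock frequency.

bus_clock_mhz = 200
bus_width_bytes = 64 // 8            # a 64-bit module moves 8 bytes at once

transfers_per_second = bus_clock_mhz * 1_000_000 * 2   # both clock edges
peak_mb_s = transfers_per_second * bus_width_bytes / 1_000_000
print(peak_mb_s)  # 3200.0 MB/s
```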
Rambus DRAM is another high-speed memory technology, primarily used in older high-
performance systems. It features a specialized bus architecture that enables faster data transfers
but has largely been overtaken by DDR technologies.
4. Cache DRAM
Another development is Cache DRAM (CDRAM), which integrates a small amount of SRAM
onto a DRAM chip. This allows the chip to function both as a traditional DRAM and as a cache,
improving performance for random and sequential data access.
5. Interleaved Memory
The concept of interleaved memory is also introduced. It organizes memory into multiple banks
that can handle multiple data requests simultaneously, increasing throughput and improving
memory access times.
Assignment 6
Reference Book: Computer Organization and Architecture
Designing for Performance
Author: William Stallings
Chapter 6 Summary: External Memory
Chapter 6 of Computer Organization and Architecture: Designing for Performance by William
Stallings provides an overview of external memory systems, including magnetic disks, RAID
configurations, optical memory, and magnetic tapes.
1. Magnetic Disks
Magnetic disks are fundamental to external storage. A magnetic disk is composed of platters
with a magnetizable surface, organized in concentric rings called tracks. Data is read and written
by a head that magnetizes small areas on the platter. Disk performance depends on factors like
seek time (moving the head to the right track), rotational delay (waiting for the disk to spin to
the right spot), and transfer rate (how fast data moves between the disk and memory). Disk
systems use multiple zone recording, increasing storage by dividing the disk into zones, each
with a different density of bits.
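A back-of-the-envelope calculation combines the three performance factors above. The figures (4 ms seek, 7200 RPM, 100 MB/s transfer, a 4 KB request) are typical illustrative values, not numbers from the book.

```python
# Average disk access time = seek time + rotational delay + transfer time.

seek_time_ms = 4.0
rpm = 7200
transfer_rate_mb_s = 100.0
request_kb = 4.0

# Rotational delay: on average the disk must spin half a revolution.
rotational_delay_ms = 0.5 * (60_000 / rpm)            # about 4.17 ms

# Transfer time for the requested block.
transfer_ms = (request_kb / 1024) / transfer_rate_mb_s * 1000

total_ms = seek_time_ms + rotational_delay_ms + transfer_ms
print(round(total_ms, 2))  # 8.21 ms, dominated by seek and rotation
```

The transfer itself costs well under a tenth of a millisecond; mechanical positioning dominates, which is why sequential access so greatly outperforms random access on disks.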
2. RAID (Redundant Array of Independent Disks)
RAID is a method for organizing data across multiple disks to improve performance and
reliability. Stallings outlines different RAID levels:
RAID 0: Stripes data across multiple disks for higher performance, but provides no
redundancy.
RAID 1: Mirrors data on two disks, offering high reliability at the cost of storage.
RAID 2 & 3: Use error-correcting codes or parity for data redundancy and are efficient
for large transfers but are rarely used commercially.
RAID 4 & 5: Employ block-level striping with parity; RAID 5 spreads parity blocks
across all drives, balancing reliability and performance.
RAID 6: Similar to RAID 5 but adds an extra parity block for increased fault tolerance,
allowing the system to function even if two disks fail.
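RAID 5's parity idea can be shown in miniature: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. The 3-disk stripe below is a minimal sketch.

```python
# XOR parity: parity = d0 XOR d1, so losing any one block leaves enough
# information to reconstruct it from the other two.

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

disk0 = b"\x12\x34"
disk1 = b"\xab\xcd"
parity = xor_blocks(disk0, disk1)     # stored on a third disk

# Simulate losing disk1: rebuild it from the surviving disk and parity.
rebuilt = xor_blocks(disk0, parity)
assert rebuilt == disk1
```

In real RAID 5 the parity blocks rotate across all drives so that no single disk becomes a parity bottleneck.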
3. Optical Memory
Optical disks store data using laser technology and include CDs, DVDs, and Blu-ray discs:
CD-ROMs store data permanently, while CD-Rs are write-once, and CD-RWs are
rewritable.
DVDs offer much larger storage than CDs by packing data more densely, supporting dual
layers and dual sides.
4. Magnetic Tape
Magnetic tape is an older, cost-effective storage medium, often used for archival storage. Data
is stored in long tracks and read in a sequential manner, making it slower but highly reliable for
large data volumes. Modern tape systems use techniques like serpentine recording, which
allows more efficient storage.
Assignment 7
Reference Book: Computer Organization and Architecture
Designing for Performance
Author: William Stallings
Chapter 7 Summary: Input/Output (I/O)
1. External Devices
External devices are connected to a computer system through I/O modules and can be classified
into three types:
Human-readable devices, such as monitors, keyboards, and printers, communicate
directly with the user.
Machine-readable devices, such as disk drives and sensors, enable data exchange with
equipment.
Communication devices, like network interfaces, support data transfer with remote
systems.
Each device interacts with the computer through an I/O module that manages the control, data,
and status signals required for communication.
2. I/O Modules
I/O modules serve as intermediaries between the CPU and peripheral devices. They handle:
Data buffering to bridge speed gaps between the CPU and external devices.
I/O modules vary in complexity, from simple controllers to more advanced I/O channels that
manage data transfers autonomously.
3. Programmed I/O
In programmed I/O, the CPU directly controls the I/O process, issuing commands and checking
device status until data transfers complete. This method is simple but inefficient, as the CPU
remains idle while waiting for the I/O operation to finish.
4. Interrupt-Driven I/O
To improve efficiency, interrupt-driven I/O allows the CPU to issue commands and continue
executing other tasks. When the I/O operation completes, the device sends an interrupt signal to
notify the CPU. This method reduces CPU waiting time but still requires processor intervention
for each data transfer.
5. Direct Memory Access (DMA)
DMA is used for transferring large blocks of data without continuous CPU involvement. The
DMA controller manages the data transfer between memory and peripherals, allowing the CPU
to perform other operations. This approach is more efficient than interrupt-driven I/O and
significantly improves performance.
6. I/O Channels
Advanced systems use I/O channels, which are capable of executing I/O commands
independently. These channels can handle multiple devices and manage data transfers without
CPU intervention, freeing the processor to focus on more critical tasks. This development
evolved into I/O processors, which function like a specialized computer for managing complex
I/O operations.
7. External Interfaces
FireWire (IEEE 1394) is a high-speed serial bus supporting daisy-chain connections, hot
plugging, and automatic configuration.
InfiniBand is a scalable high-speed interface used for data centers, supporting both
point-to-point and multipoint configurations to facilitate communication between systems
and devices.
This chapter provides a comprehensive overview of how I/O systems are designed and managed,
from basic programmed I/O to sophisticated DMA and I/O channels, highlighting their
importance in overall system performance.
Assignment 8
Reference Book: Computer Organization and Architecture
Designing for Performance
Author: William Stallings
Chapter 8 Summary: Operating System Support
Chapter 8 of Computer Organization and Architecture: Designing for Performance by William
Stallings explores the relationship between operating systems (OS) and computer hardware,
focusing on how OS manages resources and supports the execution of programs efficiently.
1. Operating System Overview
The OS is the interface between users and computer hardware. It acts as:
A resource manager, handling memory, CPU, and I/O devices to coordinate tasks
effectively.
2. Types of Operating Systems
Batch Systems: Programs are processed sequentially with little user interaction.
Interactive Systems: Allow user interaction during program execution, ideal for real-
time tasks.
Time-sharing Systems: Multiple users interact concurrently, sharing CPU time in small
intervals.
3. Scheduling
Long-term scheduling: Decides which programs are admitted into the system.
Short-term scheduling: Selects which task the CPU executes next, based on algorithms
like round-robin or priority-based selection.
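Round-robin scheduling can be sketched as a ready queue and a fixed time quantum. The process names and burst times below are invented for illustration.

```python
from collections import deque

# Round-robin short-term scheduling: each process runs for at most one
# time quantum, then rejoins the back of the ready queue if unfinished.

def round_robin(bursts, quantum):
    ready = deque(bursts.items())        # (name, remaining time) pairs
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)               # CPU runs this process
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))  # not done: back of the queue
    return order

print(round_robin({"P1": 3, "P2": 1, "P3": 2}, quantum=1))
# ['P1', 'P2', 'P3', 'P1', 'P3', 'P1']
```

No process waits more than one full pass through the queue, which is why round-robin suits interactive, time-sharing workloads.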
The concept of process states is introduced, where tasks transition between states: New, Ready,
Running, Waiting, and Terminated. Each process has a Process Control Block (PCB) to track
its execution details, such as memory location and I/O status.
4. Memory Management
Swapping: Temporarily moving processes between memory and disk to free up space.
Partitioning: Dividing memory into fixed or variable segments for different processes.
Virtual Memory: Enables large programs to execute even when they exceed physical
memory, loading only necessary pages on demand.
Hardware like the Translation Lookaside Buffer (TLB) helps speed up virtual memory by
caching page table entries.
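Paged address translation with a TLB can be sketched as a small dictionary lookup placed in front of the page table. The page size and mappings below are illustrative, not values from the text.

```python
# Virtual address = page number * PAGE_SIZE + offset. The TLB caches
# recent page -> frame translations so most lookups skip the page table.

PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 3}        # virtual page -> physical frame
tlb = {}                               # cached subset of the page table
tlb_hits = tlb_misses = 0

def translate(virtual_address):
    global tlb_hits, tlb_misses
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page in tlb:                    # TLB hit: no page-table walk
        tlb_hits += 1
        frame = tlb[page]
    else:                              # TLB miss: walk the page table
        tlb_misses += 1
        frame = page_table[page]
        tlb[page] = frame              # cache the translation
    return frame * PAGE_SIZE + offset

translate(100)             # page 0: TLB miss
translate(4196)            # page 1: TLB miss
physical = translate(200)  # page 0 again: TLB hit
print(physical, tlb_hits, tlb_misses)  # 20680 1 2
```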
To avoid conflicts:
Privileged instructions restrict critical tasks (e.g., I/O operations) to the OS.
Pentium: Supports segmentation and paging with flexible memory views, including
linear or segmented models.
Assignment 9
Reference Book: Computer Organization and Architecture
Designing for Performance
Author: William Stallings
Chapter 9 Summary: Computer Arithmetic
Chapter 9 of Computer Organization and Architecture: Designing for Performance by William
Stallings focuses on arithmetic operations in computers, exploring integer and floating-point
representation and their associated operations.
1. The Arithmetic and Logic Unit (ALU)
The ALU is the core part of a computer responsible for performing arithmetic (addition,
subtraction, multiplication, division) and logical operations. Data flows to and from the ALU via
registers, and the control unit oversees these operations. The ALU also sets flags to indicate
conditions like overflow during computations.
2. Integer Representation
Computers use binary numbers for arithmetic. The three main ways to represent integers are:
Sign-Magnitude Representation: Uses the most significant bit (MSB) as the sign (0 for
positive, 1 for negative). However, it has limitations like two representations for zero.
Two's-Complement Representation: The standard method in modern computers; it has a
single representation of zero and lets the same adder hardware handle both addition and
subtraction of signed numbers.
Fixed-Point Representation: Assumes a fixed position for the binary point, useful for
representing fractions.
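Two's complement, the representation most modern machines actually use, can be explored with a short sketch; the 8-bit width is an arbitrary choice for the demonstration.

```python
# Two's complement in 8 bits: the MSB carries weight -128, giving a
# single zero and letting addition "just work" modulo 2^8.

def to_twos_complement(value, bits=8):
    """Return the bit pattern of `value` as an unsigned integer."""
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern, bits=8):
    """Interpret a bit pattern as a signed two's-complement value."""
    if pattern & (1 << (bits - 1)):          # MSB set: negative number
        return pattern - (1 << bits)
    return pattern

assert format(to_twos_complement(-5), "08b") == "11111011"
assert from_twos_complement(0b11111011) == -5
# (-5) + 7 computed on the raw bit patterns still gives the right answer.
assert from_twos_complement((to_twos_complement(-5) + 7) & 0xFF) == 2
```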
3. Integer Arithmetic
Division: Similar to long division in decimal, binary division shifts and subtracts
repeatedly to compute the quotient and remainder. Algorithms like restoring and non-
restoring division simplify implementation.
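The restoring division algorithm mentioned above shifts the dividend in one bit at a time and undoes ("restores") any trial subtraction that goes negative. This sketch works on unsigned integers; the operand values are arbitrary.

```python
# Restoring division: for each bit, shift the partial remainder left,
# bring in the next dividend bit, and try subtracting the divisor. If
# the result goes negative, restore it and record a 0 quotient bit.

def restoring_divide(dividend, divisor, bits=8):
    remainder = 0
    quotient = 0
    for i in range(bits - 1, -1, -1):
        # Shift in the next bit of the dividend.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor               # trial subtraction
        if remainder < 0:
            remainder += divisor           # restore: this quotient bit is 0
            quotient = (quotient << 1) | 0
        else:
            quotient = (quotient << 1) | 1
    return quotient, remainder

assert restoring_divide(13, 3) == (4, 1)   # 13 / 3 = 4 remainder 1
assert restoring_divide(100, 7) == (14, 2)
```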
4. Floating-Point Representation
5. Floating-Point Arithmetic
Rounding: Due to limited precision, rounding methods ensure approximate results stay
within acceptable error bounds.