
Taxonomies in Computer Architecture: Flynn's and Feng's Classifications

Presented By: Soujanya Subedi


Introduction
• Parallelism in computer architecture refers to the simultaneous
execution of multiple tasks or instructions to improve performance
and efficiency.
• It enables systems to handle larger workloads, process data faster,
and solve complex problems.
• Parallelism Models: Flynn’s Classification and Feng's Classification
Flynn’s Classification
Flynn's Classification/ Flynn’s Taxonomy
• Proposed by Michael J. Flynn in 1966.
• Classifies computer systems based on how they handle instruction
streams and data streams.
• Instruction Streams: The sequence of instructions fetched, decoded, and
executed by the processor.
• Data Streams: The flow of data operated on by the instructions during
execution.
• Still applicable for categorizing modern computing systems such as AI accelerators, cloud infrastructures, and distributed systems.
Flynn's Classification
Four Categories:
1. SISD: Single Instruction, Single Data
2. SIMD: Single Instruction, Multiple Data
3. MISD: Multiple Instruction, Single Data
4. MIMD: Multiple Instruction, Multiple Data
SISD (Single Instruction, Single Data)
• A uniprocessor system where a single instruction operates on a single
data stream at any given time.
• Often referred to as sequential computers due to their step-by-step
processing
• Characteristics:
• Only one Control Unit (CU) and one Processing Unit (PU).
• Instructions are processed in a strict, sequential manner.
• Primary memory stores both instructions and data.
• Processing bottlenecks can occur due to the sequential nature of execution.
SISD (Single Instruction, Single Data)
• How It Works: Fetch-Decode-Execute Cycle (a toy sketch follows below)
• Use Case: Suited for general-purpose computing, such as word
processing, simple calculations, and basic operating systems.
• Examples:
• Von Neumann Architecture Based Computers
• Intel 8085 Microprocessor
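
To make the strictly sequential execution concrete, here is a minimal toy sketch in Python (a hypothetical three-instruction machine, not the 8085 instruction set): one control unit fetches one instruction per cycle and applies it to a single data item.

# Toy SISD machine: one instruction stream, one data stream, one step per cycle.
program = [("LOAD", 5), ("ADD", 3), ("STORE", "result")]   # hypothetical mini-ISA
accumulator = 0
memory = {}

for opcode, operand in program:        # fetch one instruction per cycle
    if opcode == "LOAD":               # decode and execute strictly in order
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        memory[operand] = accumulator

print(memory["result"])                # -> 8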
SIMD (Single Instruction, Multiple Data)
• A computing system where a single instruction is executed across
multiple data streams simultaneously.
• Also Known as Vector Processing
• Enables data-level parallelism, making it efficient for repetitive
computations over large datasets
• Characteristics:
• One Control Unit (CU) issues a single instruction to all Processing Units (PUs)
simultaneously.
• Each Processing Unit works on a separate piece of data from memory.
• Operations occur in lock-step: all processors execute the same
operation at the same time
SIMD (Single Instruction, Multiple Data)
• How It Works (Example):
• A single instruction, such as ADD, is applied to an entire array.
• If an array contains the elements [1, 2, 3], SIMD can compute [1+1, 2+1, 3+1] in a single step, with each processing unit handling one element (see the sketch below).
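
A minimal sketch of the same idea in Python using NumPy, whose array operations are typically executed with SIMD/vector instructions on modern CPUs (NumPy is used here purely for illustration):

import numpy as np

data = np.array([1, 2, 3])   # multiple data elements
result = data + 1            # one ADD applied to every element at once
print(result)                # -> [2 3 4]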
SIMD (Single Instruction, Multiple Data)
• Examples:
• Cray Vector Processors: Early machines designed for high-performance
scientific computing.
• Modern GPUs (Graphics Processing Units): Specialized hardware for parallel
processing in gaming, AI, and graphics rendering.
• Use Cases: Scientific Simulations, Image Processing, Machine Learning
MISD (Multiple Instruction, Single Data)
• A system where multiple instructions are executed on the same data
stream simultaneously.
• Represents instruction-level parallelism rather than data-level
parallelism
• Characteristics:
• Multiple Control Units (CUs), each fetching and executing different
instructions.
• All instructions operate on the same input data stream.
• Focuses on redundancy and fault tolerance rather than computational
performance.
MISD (Multiple Instruction, Single Data)
• How It Works (Example):
• For the same input value x, different processors perform operations such as:
• Processor 1: sin(x)
• Processor 2: cos(x)
• Processor 3: tan(x)
• The system can produce multiple results from the same input or cross-check them for reliability (a rough sketch follows below).
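
A rough illustration of the idea in Python: several workers each apply a different instruction stream (sin, cos, tan) to the same input value. Threads are used here only to suggest concurrency; a real MISD arrangement would be implemented in hardware.

import math
import threading

x = 0.5                                 # the single shared data stream
results = {}

def worker(name, instruction):
    results[name] = instruction(x)      # each "control unit" runs a different instruction

threads = [threading.Thread(target=worker, args=(name, fn))
           for name, fn in (("sin", math.sin), ("cos", math.cos), ("tan", math.tan))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)                          # three different results for one input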

• Use Cases:
• Fault Tolerance: spacecraft control or nuclear
reactor monitoring.
MISD (Multiple Instruction, Single Data)
• Examples:
• Aerospace systems for flight control and navigation, where accuracy and fault
tolerance are crucial.
• Specialized embedded systems in military or safety-critical industries.
MIMD (Multiple Instruction, Multiple Data)
• A computing system where multiple processors execute different
instructions on separate data streams simultaneously.
• Enables task-level parallelism and asynchronous operation for diverse
workloads.
• Characteristics:
• Each processor has its own Control Unit (CU) and Processing Unit (PU).
• Instructions and data streams are independent, allowing for flexibility and
high performance.
• Tasks can execute in parallel without interference, making MIMD suitable for
a wide range of applications.
MIMD (Multiple Instruction, Multiple Data)
• How It Works (Example):
• Processor 1: Processes database queries.
• Processor 2: Performs machine learning computations.
• Processor 3: Handles user interface tasks.
• Each processor operates asynchronously, improving resource utilization and throughput (a rough sketch follows below).
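
A minimal sketch using Python's multiprocessing module: three independent processes run different (placeholder) functions on different data at the same time, mirroring the example above.

from multiprocessing import Process

def query_database(query):          # placeholder database task
    print("running query:", query)

def train_model(num_samples):       # placeholder machine learning task
    print("training on", num_samples, "samples")

def render_ui(num_widgets):         # placeholder user interface task
    print("rendering", num_widgets, "widgets")

if __name__ == "__main__":
    processes = [Process(target=query_database, args=("SELECT 1",)),
                 Process(target=train_model, args=(1000,)),
                 Process(target=render_ui, args=(12,))]
    for p in processes:
        p.start()                   # each process has its own instructions and data
    for p in processes:
        p.join()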

• Two Subtypes:
• Shared Memory Systems
• Distributed Memory Systems
MIMD (Multiple Instruction, Multiple Data)
• Shared Memory Systems:
• Processors share access to a global memory space.
• Communication occurs via shared memory, simplifying data exchange.
• Examples: Symmetric Multi-Processing (SMP) systems like IBM SMP.

• Distributed Memory Systems:
• Each processor has its own local memory, and communication happens over a network (e.g., message passing).
• Scalable architecture used in clusters and grid computing.
• Examples: Systems connected in mesh, tree, or hypercube networks (a sketch contrasting the two communication styles follows below).
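
A rough Python sketch of the two communication styles: threads of one process read and write a shared list (shared memory), while a separate process with its own private memory sends data back through a message queue (distributed memory). Real systems would typically use OpenMP or pthreads for the former and MPI for the latter.

import threading
from multiprocessing import Process, Queue

shared = []                            # one address space visible to every thread

def shared_memory_producer():
    shared.append("data")              # communication = writing shared memory

def message_passing_producer(q):
    q.put("data")                      # communication = sending a message

if __name__ == "__main__":
    # Shared memory: threads of the same process see `shared` directly.
    t = threading.Thread(target=shared_memory_producer)
    t.start()
    t.join()
    print("shared memory holds:", shared)

    # Distributed memory: the child process has private memory; data arrives as a message.
    q = Queue()
    p = Process(target=message_passing_producer, args=(q,))
    p.start()
    print("message received:", q.get())
    p.join()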
MIMD (Multiple Instruction, Multiple Data)

[Diagram: Shared Memory Systems vs. Distributed Memory Systems]


MIMD (Multiple Instruction, Multiple Data)
• Use Cases:
• Multicore CPUs: Found in personal computers and mobile devices for
multitasking.
• Distributed Systems: Used in cloud computing and big data processing.
• Supercomputers: Solve complex problems like climate modeling, genome
sequencing, and AI training.

• Examples:
• Multicore Processors: Intel Core, AMD Ryzen.
• Distributed Systems: HPC clusters, cloud data centers.
• Supercomputers: Tianhe, Summit, Fugaku.
Feng’s Classification
Feng’s Classification
• Proposed by Tse-Yun Feng in 1972.
• Classifies computer architectures based on their degree of parallelism.

Key Concepts:
• Maximum degree of parallelism P:
• The maximum number of binary digits (bits) that can be processed within a unit time.
• Parameters:
• Let i = 1, 2, ..., T be the timing instants, and let P_1, P_2, ..., P_T be the number of bits processed at each instant.
• Average parallelism: P_a = (P_1 + P_2 + ... + P_T) / T
• Utilization rate: μ = P_a / P
• For full utilization: P_i = P for every i, so μ = 1 (see the worked sketch below).
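
As a quick check of the definitions, a small Python sketch that computes the average parallelism and utilization rate for a hypothetical trace of bits processed per timing instant (the numbers are made up):

bits_per_instant = [16, 32, 32, 8]   # P_1 ... P_T for a hypothetical run
P = 32                               # maximum degree of parallelism

T = len(bits_per_instant)
P_a = sum(bits_per_instant) / T      # average parallelism
mu = P_a / P                         # utilization rate

print(P_a, mu)                       # -> 22.0 0.6875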
Feng’s Classification
• Four Categories
• Word Serial Bit Serial (WSBS)
• Word Serial Bit Parallel (WSBP)
• Word Parallel Bit Serial (WPBS)
• Word Parallel Bit Parallel (WPBP)
• The classification is based on the content stored in memory
• The contents can be data or instructions.
Word Serial Bit Serial (WSBS)
• Bit Serial Processing: One bit of one word is processed at a time (a toy sketch follows below).
• Slowest among all categories due to its fully serial operation.
• Characteristics:
• Single Processing Unit (PU) operates sequentially.
• High processing time for complex tasks.
• Example:
• First-Generation Computers such as ENIAC and UNIVAC, which processed data
in a strictly sequential manner.
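
For intuition, a toy Python sketch of bit-serial addition: two 8-bit words are added one bit per time step, the way a WSBS machine works, whereas a bit-parallel machine would process all bits of the word in a single step.

def bit_serial_add(a, b, width=8):
    """Add two unsigned integers one bit per 'clock' tick, LSB first (toy WSBS model)."""
    result, carry = 0, 0
    for i in range(width):                          # one time step per bit position
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        result |= (abit ^ bbit ^ carry) << i        # full-adder sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))
    return result

print(bit_serial_add(23, 42))                       # -> 65, after 8 serial steps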
Word Serial Bit Parallel (WSBP)
• Word Slice Processing: All bits of one word are processed simultaneously, one word at a time.
• Moderate parallelism, offering improved speed over WSBS.
• Characteristics:
• Allows for faster computation than purely serial processing.
• Suitable for tasks that can be executed word by word.
• Examples:
• IBM 370, Cray-1, PDP-11: These systems utilized word serial bit parallel
processing to enhance efficiency.
Word Parallel Bit Serial (WPBS)
• Bit Slice Processing: One bit from multiple words is processed
simultaneously.
• Moderate parallelism; emphasizes concurrency at the word level.
• Characteristics:
• Multiple processing units (PUs) handle different words but focus on a single
bit.
• Useful for operations requiring bit-level manipulation across multiple data
points.
• Examples:
• STARAN and MPP (Massively Parallel Processors)
Word Parallel Bit Parallel (WPBP)
• Fully Parallel Processing: All bits of all words are processed
simultaneously.
• Maximum parallelism; the fastest processing mode available.
• Characteristics:
• High throughput due to simultaneous operations across multiple processing
units.
• Optimal for tasks requiring extensive computation over large datasets.
• Examples: C.mmp (Carnegie Mellon multiprocessor), PEPE (Parallel Element Processing Ensemble), Cray-2 supercomputer, NVIDIA GPUs
Feng's Classification
Flynn’s vs Feng’s Classification
Aspect | Flynn's Classification | Feng's Classification
Proposed by | Michael J. Flynn in 1966 | Tse-Yun Feng in 1972
Focus | Instruction streams vs. data streams | Degree of parallelism based on bit and word processing
Parallelism Type | Focuses on instruction-level and data-level parallelism | Focuses on bit-level and word-level parallelism
Utilization Rate | Not explicitly defined | Defined from the average degree of parallelism (μ = P_a / P)
Execution Model | Based on how instructions and data are processed | Based on the combination of serial and parallel processing at the bit and word levels
Application Scope | General categories applicable to various computing systems | Focused on parallel processing architectures
Performance Effects | Higher categories (SIMD, MIMD) provide better performance for parallel tasks | Higher degrees of parallelism (WPBP) lead to maximum processing efficiency
Example Systems | SISD: Intel 8085; SIMD: modern GPUs; MISD: aerospace systems; MIMD: multicore CPUs, distributed systems | WSBS: first-generation computers; WSBP: IBM 370, Cray-1; WPBS: STARAN, MPP; WPBP: C.mmp, PEPE
The End
