
Task #1: Types of Parallel Processing - Sequential Matrix Multiplication Code

1. Task Parallelism: Also known as functional parallelism, this involves dividing a task
into smaller sub-tasks or functions that can be executed concurrently. Each sub-task
is performed independently, and the results are combined at the end. Task
parallelism is commonly used in applications such as multimedia processing, where
different parts of an image or video can be processed simultaneously.
2. Data Parallelism: In data parallelism, the same operation is performed on multiple
pieces of data simultaneously. This can be achieved by distributing the data across
multiple processing units, such as CPU cores or GPU cores, and executing the
operation in parallel on each piece of data. Data parallelism is often used in
applications such as scientific computing, where the same computation needs to be
performed on large datasets.
3. Bit-Level Parallelism: This comes from processing more bits of data per
instruction, historically by widening the processor word size (for example,
moving from 8-bit to 64-bit words, so fewer instructions are needed to operate
on large values). Related hardware such as SIMD (Single Instruction, Multiple
Data) units and vector processors extends the idea by applying one operation to
several data elements at once, greatly increasing throughput for certain types
of computations.
4. Instruction-Level Parallelism (ILP): ILP involves executing multiple instructions from
a single instruction stream simultaneously. This can be achieved through techniques
such as pipelining, superscalar execution, and out-of-order execution. ILP is
commonly used in modern processors to exploit parallelism at the instruction level
and improve performance.
5. Task Farming: Task farming is a master-worker scheme in which tasks are
dynamically distributed among a pool of worker threads or processes, often
combined with work-stealing schedulers. This allows for efficient utilization
of resources by adjusting the workload distribution to the availability of
processing units. Task farming is commonly used in parallel computing
frameworks such as OpenMP and Intel Threading Building Blocks (TBB).
6. Pipeline Parallelism: In pipeline parallelism, different stages of a computation are
executed concurrently, with each stage processing a different part of the data. The
output of one stage serves as the input to the next stage, creating a pipeline of
processing stages. Pipeline parallelism is commonly used in applications such as
digital signal processing and image processing, where data can be processed
sequentially through multiple stages.

2. Sequential Matrix Multiplication Code:

#include <stdio.h>

#define SIZE 3

/* Multiply two SIZE x SIZE matrices: result = matA * matB. */
void multiplyMatrix(int matA[][SIZE], int matB[][SIZE], int result[][SIZE]) {
    for (int i = 0; i < SIZE; ++i) {
        for (int j = 0; j < SIZE; ++j) {
            result[i][j] = 0;
            for (int k = 0; k < SIZE; ++k) {
                result[i][j] += matA[i][k] * matB[k][j];
            }
        }
    }
}

/* Print a SIZE x SIZE matrix, one row per line. */
void printMatrix(int mat[][SIZE]) {
    for (int i = 0; i < SIZE; ++i) {
        for (int j = 0; j < SIZE; ++j) {
            printf("%d ", mat[i][j]);
        }
        printf("\n");
    }
}

int main() {
    int matA[SIZE][SIZE] = {{1, 2, 3},
                            {4, 5, 6},
                            {7, 8, 9}};
    int matB[SIZE][SIZE] = {{9, 8, 7},
                            {6, 5, 4},
                            {3, 2, 1}};
    int result[SIZE][SIZE];

    multiplyMatrix(matA, matB, result);

    printf("Matrix A:\n");
    printMatrix(matA);
    printf("\nMatrix B:\n");
    printMatrix(matB);
    printf("\nResultant Matrix:\n");
    printMatrix(result);

    return 0;
}
