Simplified Parallelism enhances computer performance by dividing large tasks into smaller, concurrent operations, allowing for faster processing. It is exemplified in scenarios like car assembly lines and image editing, where multiple tasks are executed simultaneously. The approach offers benefits such as increased speed, efficiency, and scalability, while also utilizing tools like the Task Parallel Library (TPL) for easier implementation in programming.

Uploaded by ivan emmanuel


Tasks & Simplified Parallelism

INTRODUCTION

Simplified Parallelism is a way of making computers work more efficiently. It splits large jobs into smaller parts that can run at the same time, so the overall work finishes faster.

EXAMPLE 1
Imagine a car factory where there is an assembly line. Instead of one person
building a car from start to finish, the process is divided into several stages:

Stage 1: One person puts in the chassis.

Stage 2: Another person installs the engine.

Stage 3: A third person assembles the doors.

Stage 4: Finally, another person paints the car.

These stages are done in parallel for different cars at the same time, allowing
multiple cars to be assembled simultaneously, speeding up the overall process.

EXAMPLE 2

When editing an image or applying filters, simplified parallelism lets you split the image into several parts (e.g. blocks of pixels) and process each part in parallel, making editing faster.
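The pixel-block idea can be sketched with Parallel.For. This is a minimal illustration, not code from the original deck: the flat byte array standing in for a grayscale image and the brightness filter are assumptions chosen for simplicity.

```csharp
using System;
using System.Threading.Tasks;

static class ImageFilterDemo
{
    // Brightens every pixel of a grayscale image stored row-major in a
    // flat byte array, processing one row per parallel iteration.
    public static void Brighten(byte[] pixels, int width, int height, int delta)
    {
        // Each iteration owns one row, so no two iterations
        // ever write to the same array element.
        Parallel.For(0, height, y =>
        {
            for (int x = 0; x < width; x++)
            {
                int i = y * width + x;
                pixels[i] = (byte)Math.Min(pixels[i] + delta, 255);
            }
        });
    }

    static void Main()
    {
        byte[] image = new byte[4 * 4]; // tiny 4x4 all-black image
        Brighten(image, 4, 4, 40);
        Console.WriteLine(image[0]); // 40
    }
}
```

Partitioning by row (rather than by single pixel) keeps each unit of work large enough that the scheduling overhead does not swamp the benefit.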
BENEFITS OF SIMPLIFIED PARALLELISM

Speed: Completes tasks in less time by distributing work across multiple processing cores.

Efficiency: Optimises the use of hardware resources, avoiding waste.

Scalability: Facilitates the handling of larger and more complex tasks as hardware resources are added.
DATA PARALLELISM
Data Parallelism & TPL

Data Parallelism: Concurrently performs the same operation on different elements of a collection or array. The source is partitioned for multi-threaded processing.

Task Parallel Library (TPL): Simplifies parallel operations using Parallel.For and Parallel.ForEach, and automatically handles threads, locks, and workload balancing.

Features & Customization

Task Scheduler: Automatically partitions and balances workloads.

Advanced Controls: ParallelOptions for concurrency control; ParallelLoopState and CancellationToken for state management and cancellation.
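These controls can be combined in one loop. A minimal sketch, with an artificial workload chosen for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class TplOptionsDemo
{
    static void Main()
    {
        using var cts = new CancellationTokenSource();

        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = 2,   // concurrency control
            CancellationToken = cts.Token // cooperative cancellation
        };

        int processed = 0;
        try
        {
            Parallel.For(0, 100, options, (i, state) =>
            {
                Interlocked.Increment(ref processed);
                // ParallelLoopState lets an iteration request an early stop.
                if (i == 50) state.Stop();
            });
        }
        catch (OperationCanceledException)
        {
            // Thrown if cts.Cancel() is called while the loop is running.
        }

        Console.WriteLine($"Processed {processed} items before stopping.");
    }
}
```

Note the Interlocked.Increment: even with the TPL managing threads for you, shared mutable state inside the loop body still needs synchronization.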

PARALLEL PROGRAMMING IN C#
Pros and Cons of Parallel Programming

Advantages:
✅ Faster Execution: Enables multiple actions to run simultaneously, reducing operation time.
✅ Efficient Resource Utilization: Optimizes hardware usage when parallelism is well-implemented.

Disadvantages:
❌ Complexity: Harder to write, read, and debug.
❌ Hardware Dependency: Behavior may vary based on hardware configurations.

Parallel Programming in C#

Scenario: Execute heavy computations using both sequential and parallel approaches.
Sequential Example: Actions run one after another. Total time: the sum of individual task times.

C#
HeavyComputation("A");
HeavyComputation("B");
// ... other tasks

Using Parallel.Invoke for Improvement

Parallel Execution: Tasks run simultaneously (hardware-dependent). Result: total time is less than the sum of task times.

C#
Parallel.Invoke(
    () => HeavyComputation("A"),
    () => HeavyComputation("B")
    // ... other tasks
);

PARALLEL PROGRAMMING IN C#
Key Notes on Parallel.Invoke

Execution Time: Reduces overall time, but parallelism isn't guaranteed.
Order: Execution order is not preserved.

Other Parallel Methods in TPL

Parallel.For: A for-loop with parallel iterations.
Parallel.ForEach: Parallel iteration over collections.
Parallel.ForEachAsync: Added in .NET 6 for asynchronous execution.
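A small sketch of Parallel.ForEach over a collection, echoing the key note that completion order is not preserved. The file names are hypothetical inputs, not from the original deck:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ForEachDemo
{
    static void Main()
    {
        string[] files = { "a.txt", "b.txt", "c.txt", "d.txt" }; // hypothetical inputs

        // ConcurrentBag is thread-safe, so parallel iterations
        // can add results without extra locking.
        var results = new ConcurrentBag<string>();

        // Iterations run concurrently; the order in which items
        // finish (and land in the bag) is not guaranteed.
        Parallel.ForEach(files, file =>
        {
            results.Add(file.ToUpperInvariant());
        });

        Console.WriteLine($"Processed {results.Count} files.");
    }
}
```

If output order matters, collect results into an indexed array keyed by the item's position instead of relying on completion order.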

COMPARISON OF SEQUENTIAL VS PARALLEL EXECUTION

Parallel Execution: HeavyComputationAsync(string name) Method

Simulates intensive tasks, just like the original method, but is designed as an asynchronous method (async Task<int>). Using async allows this method to be executed concurrently with other asynchronous methods.

Parallel Execution: Main Method

Parallel Tasks: The HeavyComputationAsync tasks for "A", "B", "C", "D", and "E" are started simultaneously and grouped into an array (tasks).
Task.WhenAll: Coordinates the execution of all tasks in parallel and waits for all of them to finish before continuing.
Measuring Total Time: Timing with Stopwatch captures the duration of the entire process (all tasks running in parallel).

Results

The results show that the sequential run takes much longer than the parallel run.
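The pattern described above can be sketched as follows. The one-second Task.Delay and the return value (the name's length) are stand-ins chosen for illustration, since the deck does not show the method body:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class WhenAllDemo
{
    // Stand-in for the slide's HeavyComputationAsync: simulates
    // intensive work asynchronously and returns a small result.
    public static async Task<int> HeavyComputationAsync(string name)
    {
        await Task.Delay(1000); // simulated intensive work
        return name.Length;
    }

    static async Task Main()
    {
        var sw = Stopwatch.StartNew();

        // Start all five tasks at once, then await them together.
        Task<int>[] tasks =
        {
            HeavyComputationAsync("A"),
            HeavyComputationAsync("B"),
            HeavyComputationAsync("C"),
            HeavyComputationAsync("D"),
            HeavyComputationAsync("E"),
        };
        int[] results = await Task.WhenAll(tasks);

        sw.Stop();
        // The delays overlap, so total time is near 1 s, not 5 s.
        Console.WriteLine($"Total: {sw.ElapsedMilliseconds} ms");
    }
}
```

Calling the method five times before the first await starts all the delays concurrently; awaiting inside the loop instead would serialize them and give the sequential timing.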

RUNTIME COMPARISON: SEQUENTIAL VS PARALLEL

[Chart: Execution time (ms), 0 to 6000, for the sequential vs the parallel run.]

[Chart: CPU usage comparison, sequential vs parallel, as a percentage over time (s), 1 to 5.]

THANK YOU
