
CPU BUSINESS AND TECHNOLOGY COLLEGE

Design and Analysis of Algorithms

GROUP MEMBERS ID
1. KALKIDAN ZINABU RCS/2011/12
2. ANTENEH BEZA RCS/1991/12
3. SAMUEL KASAHUN RCS/1929/12
4. MINTESNOTE GEZAHEGN RCS/1977/12
5. DAGEM ENEYEW RCS/1976/12

SECTION 2

Probabilistic algorithms
A randomized algorithm is an algorithm that employs a degree of randomness as part of its
logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to
guide its behavior, in the hope of achieving good performance in the "average case" over all
possible choices of randomness determined by the random bits; thus, either the running time,
the output, or both are random variables.
Probabilistic algorithms: ‘Monte Carlo’ methods
Algorithms which always return a result, but the result may not always be correct. We attempt
to minimize the probability of an incorrect result; because of the random element, multiple
independent runs of the algorithm reduce the probability of an incorrect result.
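As a sketch of the Monte Carlo idea, consider the classical Fermat primality test: it always returns an answer, but a "probably prime" verdict can occasionally be wrong, and raising the number of trials shrinks the error probability. The function name and default trial count below are illustrative choices, not part of the notes above.

```python
import random

def fermat_is_prime(n, trials=25):
    """Monte Carlo primality test: always returns an answer, but 'probably
    prime' may occasionally be wrong; more trials shrink the error probability."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        # A Fermat witness proves that n is composite with certainty.
        if pow(a, n - 1, n) != 1:
            return False
    return True  # probably prime; each extra trial reduces the error chance
```

Note that the "composite" answer is always correct; only the "prime" answer carries a small, controllable probability of error.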
Probabilistic algorithms: ‘Las Vegas’ methods
Algorithms that never return an incorrect result, but may produce no result at all on some
runs. Again, we wish to minimize the probability of obtaining no result, and, because of the
random element, multiple runs reduce that probability. Las Vegas algorithms may produce
tractable computations for tasks for which deterministic algorithms are intractable even on
average. However, we cannot guarantee a result, and there is no upper bound on the time for a
result to appear, although the expected time may in fact be small.
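A minimal Las Vegas sketch: randomly probe an array for a "marked" item. Any index returned is guaranteed correct; only the running time is random, and if no answer appears within the allowed tries the algorithm reports failure rather than guess. The function name and the retry bound are illustrative assumptions.

```python
import random

def find_marked(items, is_marked, max_tries=1000):
    """Las Vegas search by random probing: any index returned is guaranteed
    to point at a marked item; only the running time is random."""
    n = len(items)
    for _ in range(max_tries):
        i = random.randrange(n)
        if is_marked(items[i]):
            return i
    return None  # gave up -- an incorrect result is never returned
```

If half of the items are marked, the expected number of probes is only 2, even though no fixed upper bound on the probe count can be guaranteed.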
Probabilistic algorithms: ‘Sherwood’ methods
Algorithms which always return a result, and always the correct result, but where a random
element increases efficiency by avoiding or reducing the probability of worst-case behavior. This
is useful for algorithms which have poor worst-case but good average-case behavior, and in
particular where embedding an algorithm in an application might otherwise lead to repeated
worst-case behavior.
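Quicksort with a random pivot is the textbook Sherwood example: the output is always the correctly sorted list, and randomization only makes the worst-case running time unlikely for any fixed input. The sketch below uses extra lists for clarity rather than in-place partitioning.

```python
import random

def sherwood_quicksort(a):
    """Quicksort with a uniformly random pivot (a Sherwood technique): the
    output is always correct; randomization only defends against inputs
    that would trigger worst-case behavior with a fixed pivot rule."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return sherwood_quicksort(less) + equal + sherwood_quicksort(greater)
```

With a fixed pivot (say, the first element), an already-sorted input forces quadratic time; the random pivot makes that outcome improbable regardless of the input.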

Parallel Algorithm
Parallel Algorithm - Analysis
Analysis of an algorithm helps us determine whether the algorithm is useful or not. Generally,
an algorithm is analyzed based on its execution time (Time Complexity) and the amount of
space (Space Complexity) it requires.
Since large memory is now available at reasonable cost, storage space is rarely the limiting
factor, so space complexity is given less importance.
Parallel algorithms are designed to improve the computation speed of a computer. For
analyzing a Parallel Algorithm, we normally consider the following parameters −

 Time complexity (execution time),
 Total number of processors used, and
 Total cost.
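These three parameters are commonly combined into the standard derived measures of speedup, cost, and efficiency; a small sketch (the function name is an illustrative choice):

```python
def parallel_metrics(t_serial, t_parallel, p):
    """Standard derived measures for a parallel algorithm run on p processors."""
    speedup = t_serial / t_parallel   # how much faster than the serial run
    cost = p * t_parallel             # total processor-time consumed
    efficiency = speedup / p          # fraction of ideal linear speedup achieved
    return speedup, cost, efficiency
```

For example, a task taking 100 time units serially and 25 units on 8 processors achieves speedup 4, cost 200, and efficiency 0.5, showing that half the processor-time is spent on overhead.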
Parallel Algorithm - Models
The model of a parallel algorithm is developed by considering a strategy for dividing the data
and the processing, and by applying a suitable strategy to reduce interactions. In this chapter,
we will discuss the following parallel algorithm models −

 Data parallel model
 Task graph model
 Work pool model
 Master-slave model
 Producer-consumer or pipeline model
 Hybrid model

Data Parallel
In the data-parallel model, tasks are assigned to processes and each task performs similar types
of operations on different data. Data parallelism is a consequence of a single operation being
applied to multiple data items.
The data-parallel model can be applied to both shared-address-space and message-passing
paradigms. Interaction overheads can be reduced by selecting a locality-preserving
decomposition, by using optimized collective interaction routines, or by overlapping
computation and interaction.
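A minimal illustration of the model, assuming Python threads stand in for the processes: every worker applies the same operation (here, sum) to its own slice of the data, and the partial results are then combined.

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_sum(data, workers=4):
    """Data-parallel model: every worker performs the same operation (sum)
    on a different slice of the data; partial results are then combined."""
    chunk = max(1, (len(data) + workers - 1) // workers)
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, slices))  # same operation, different data
    return sum(partials)
```

Splitting the data into contiguous slices is one simple locality-preserving decomposition: each worker touches only its own region of the input.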

Task Graph Model
In the task graph model, parallelism is expressed by a task graph, which can be either trivial or
nontrivial. In this model, the interrelationships among the tasks are exploited to promote
locality or to minimize interaction costs. This model is used to solve problems in which the
quantity of data associated with the tasks is large compared to the amount of computation
associated with them. Tasks are assigned so as to reduce the cost of data movement among
them.
Examples − Parallel quick sort, sparse matrix factorization, and parallel algorithms derived via
the divide-and-conquer approach.
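A toy sequential sketch of the task graph idea: tasks run only after all of their prerequisites in the dependency graph. The dictionary representation and function names are illustrative assumptions; a real parallel implementation would additionally run independent tasks concurrently.

```python
def run_task_graph(graph, actions):
    """Execute tasks in an order consistent with a dependency (task) graph.
    graph maps task -> list of prerequisite tasks; actions maps task -> a
    zero-argument callable that performs the task's work."""
    done, order = set(), []

    def visit(task):
        if task in done:
            return
        for dep in graph.get(task, []):
            visit(dep)        # satisfy prerequisites first
        actions[task]()
        done.add(task)
        order.append(task)

    for task in graph:
        visit(task)
    return order
```

Tasks with no dependency path between them (such as two subproblems of a divide-and-conquer split) could be dispatched to different processors at the same time.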

Work Pool Model
In the work pool model, tasks are dynamically assigned to processes to balance the load;
therefore, any process may potentially execute any task. This model is used when the quantity
of data associated with the tasks is comparatively smaller than the computation associated with
them.
There is no pre-assignment of tasks onto the processes. The assignment of tasks may be
centralized or decentralized. Pointers to the tasks may be stored in a physically shared list,
priority queue, hash table, or tree, or in a physically distributed data structure.
Example − Parallel tree search
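A small thread-based sketch of the work pool model: tasks wait in one shared queue and idle workers pull the next task dynamically, so any worker may execute any task and the load balances automatically. The squaring step is a stand-in for real work.

```python
import queue
import threading

def work_pool(tasks, workers=3):
    """Work pool model: a shared queue of tasks is drained dynamically by
    whichever worker happens to be idle."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return            # pool drained: this worker is finished
            r = t * t             # stand-in for the real computation
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return sorted(results)        # completion order is nondeterministic
```

Because assignment happens at run time, a worker that draws a cheap task simply returns to the queue for another, which is exactly the load-balancing property the model is chosen for.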
Master-Slave Model
In the master-slave model, one or more master processes generate tasks and allocate them to
slave processes. The tasks may be allocated beforehand if −

 the master can estimate the volume of the tasks, or
 a random assignment can do a satisfactory job of balancing the load, or
 slaves are assigned smaller pieces of the task at different times.
This model is generally equally suitable to shared-address-space or message-passing
paradigms, since the interaction is naturally two-way.
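A minimal sketch of static assignment in this model, assuming Python threads as the slaves: the master pre-divides the tasks, each slave processes its share, and the master merges the results. The names and the stand-in work function are illustrative.

```python
import threading

def master_slave(tasks, slaves=3):
    """Master-slave model with static assignment: the master pre-divides
    the tasks, each slave processes its share, and the master merges the
    results in order."""
    chunk = max(1, (len(tasks) + slaves - 1) // slaves)
    assignments = [tasks[i:i + chunk] for i in range(0, len(tasks), chunk)]
    results = [None] * len(assignments)

    def slave(rank, work):
        results[rank] = [t + 1 for t in work]   # stand-in for the real work

    threads = [threading.Thread(target=slave, args=(r, w))
               for r, w in enumerate(assignments)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return [x for part in results for x in part]  # master collects results
```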

Precautions in using the master-slave model

Care should be taken to ensure that the master does not become a bottleneck, which may
happen if the tasks are too small or the workers are comparatively fast.
The tasks should be selected so that the cost of performing a task dominates the cost of
communication and the cost of synchronization.
Asynchronous interaction may help overlap interaction and the computation associated with
work generation by the master.
Pipeline Model
It is also known as the producer-consumer model. Here, a stream of data is passed through a
series of processes, each of which performs some task on it. The arrival of new data triggers the
execution of a new task by a process in the queue. The processes may form linear or
multidimensional arrays, trees, or general graphs, with or without cycles.
This model is a chain of producers and consumers. Each process in the queue can be considered
a consumer of the sequence of data items produced by the process preceding it, and a producer
of data for the process following it. The queue does not need to be a linear chain; it can be a
directed graph. The most common interaction-minimization technique applicable to this model
is overlapping interaction with computation.
Example − Parallel LU factorization algorithm.
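The producer-consumer chain can be sketched with Python generators: each stage consumes items from the stage before it and produces items for the stage after it. The stage functions and the specific operations are illustrative choices.

```python
def produce(n):
    """First stage: the producer emits raw data items."""
    for i in range(n):
        yield i

def square(stream):
    """Middle stage: consumes from the previous stage, produces for the next."""
    for x in stream:
        yield x * x

def pipeline(n):
    """A chain of producer-consumer stages; the final stage collects results."""
    return [x + 1 for x in square(produce(n))]
```

In a true parallel pipeline each stage would run on its own processor, so stage k can work on item i while stage k+1 is still processing item i−1; the generator chain shows only the data-flow structure.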
Hybrid Models
A hybrid algorithm model is used when more than one model is needed to solve a
problem.
A hybrid model may be composed of either multiple models applied hierarchically or multiple
models applied sequentially to different phases of a parallel algorithm.
Example − Parallel quick sort
