Unit 4 HPC Part4
Prof. B. J. Dange
Assistant Professor
E-mail : [email protected]
Contact No: 91301 91301 Ext :145, 9604146122
Isoefficiency Metric of Scalability
• For a given problem size, as we increase the number of processing elements, the overall
efficiency of the parallel system goes down. This phenomenon is common to all parallel systems.
• For some parallel systems, the efficiency increases if the problem size is increased while
keeping the number of processing elements constant.
Figure: Variation of efficiency (a) as the number of processing elements is increased for a given
problem size, and (b) as the problem size is increased for a given number of processing
elements. The phenomenon illustrated in graph (b) is not common to all parallel systems.
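To make trends (a) and (b) concrete, the sketch below is a minimal Python illustration, assuming the textbook's adding-n-numbers example with parallel runtime T_P = n/p + 2 log2 p time units (this runtime model is an assumption for illustration, not given on these slides). It computes efficiency for a fixed problem size as p grows, and for a fixed p as n grows.

from math import log2

def efficiency(n, p):
    t_parallel = n / p + 2 * log2(p)   # assumed parallel runtime model
    speedup = n / t_parallel           # best serial algorithm takes n time units
    return speedup / p                 # efficiency E = S / p

# (a) fixed problem size, increasing p: efficiency falls
print([round(efficiency(4096, p), 2) for p in (1, 4, 16, 64)])
# (b) fixed number of processing elements, increasing problem size: efficiency rises
print([round(efficiency(n, 32), 2) for n in (512, 4096, 32768)])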
• What is the rate at which the problem size must increase with respect to the number of
processing elements to keep the efficiency fixed?
• This rate determines the scalability of the system. The slower this rate, the better.
• Before we formalize this rate, we define the problem size W as the asymptotic number of
operations associated with the best serial algorithm to solve the problem.
• The parallel runtime can be written as T_P = (W + T_o(W, p)) / p, so the speedup is
S = W / T_P = W p / (W + T_o(W, p)) (9)
• Finally, we write the expression for efficiency as
E = S / p = W / (W + T_o(W, p)) = 1 / (1 + T_o(W, p) / W) (10)
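Holding the efficiency E fixed, Equation 10 can be rearranged into the relation that the later bullets refer to as Equation 12 (the intermediate algebra is reconstructed here, following the cited textbook):

E = 1 / (1 + T_o(W, p) / W)
T_o(W, p) / W = (1 - E) / E
W = (E / (1 - E)) T_o(W, p) = K T_o(W, p),  where K = E / (1 - E) is a constant (12)

Solving this relation for W as a function of p (where possible) gives the isoefficiency function of the parallel system.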
• This function, the isoefficiency function, determines the ease with which a parallel system
can maintain a constant efficiency and hence achieve speedups increasing in proportion to the
number of processing elements.
• Consider a parallel system for which the overhead function is T_o = p^(3/2) + p^(3/4) W^(3/4).
Using only the first term of T_o in Equation 12, we get
W = K p^(3/2) (14)
• Using only the second term, Equation 12 yields the following relation between W and p:
W = K p^(3/4) W^(3/4)
W^(1/4) = K p^(3/4)
W = K^4 p^3 (15)
• The larger of these two asymptotic rates determines the isoefficiency. This is given
by Θ(p^3).
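A quick numerical check of this result, as a minimal Python sketch using the assumed overhead T_o = p^(3/2) + p^(3/4) W^(3/4) from the example above: growing W as Θ(p^3) holds the efficiency roughly constant, while growing it only as Θ(p^(3/2)) does not.

def efficiency(W, p):
    T_o = p**1.5 + p**0.75 * W**0.75   # assumed overhead function from the example
    return 1.0 / (1.0 + T_o / W)       # efficiency, Equation 10

for p in (16, 64, 256, 1024):
    W_iso = p**3       # scale W at the isoefficiency rate: E stays level (about 0.5)
    W_slow = p**1.5    # scale W only at the first-term rate: E decays toward zero
    print(p, round(efficiency(W_iso, p), 3), round(efficiency(W_slow, p), 3))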
Cost-Optimality and the Isoefficiency Function
• A parallel system is cost-optimal if and only if
p T_P = Θ(W) (16)
• Substituting the expression for T_P, this is equivalent to
W + T_o(W, p) = Θ(W) (17)
T_o(W, p) = O(W), i.e., W = Ω(T_o(W, p)) (18)
• If we have an isoefficiency function f(p), then it follows that the relation W = Ω(f(p))
must be satisfied to ensure the cost-optimality of a parallel system as it is scaled up.
• For a problem consisting of W units of work, no more than W processing elements can
be used cost-optimally.
• The problem size must increase at least as fast as Θ(p) to maintain fixed efficiency;
hence, Ω(p) is the asymptotic lower bound on the isoefficiency function.
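To tie Equations 16-18 back to the earlier adding-n-numbers sketch (which assumed T_o(W, p) = 2 p log2 p): cost-optimality requires W = Ω(p log p), which is also that system's isoefficiency function. The minimal Python check below shows the cost ratio p T_P / W staying bounded when W grows at that rate, and drifting upward when W grows only linearly in p.

from math import log2

def cost_ratio(W, p):
    return (W + 2 * p * log2(p)) / W   # p*T_P divided by W; stays Θ(1) iff cost-optimal

print([round(cost_ratio(64 * p * log2(p), p), 3) for p in (16, 256, 4096)])  # stays near 1.03
print([round(cost_ratio(64 * p, p), 3) for p in (16, 256, 4096)])            # keeps growing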
• Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, "Introduction to
Parallel Computing", 2nd edition, Addison-Wesley, 2003, ISBN: 0-201-64865-2.