Pairwise summation: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 18:59, 25 January 2024

In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x1, ..., xn works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each xi one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

The algorithm

In pseudocode, the pairwise summation algorithm for an array x of length n ≥ 0 can be written:

s = pairwise(x[1...n])
      if n ≤ N     base case: naive summation for a sufficiently small array
          s = 0
          for i = 1 to n
              s = s + x[i]
      else         divide and conquer: recursively sum two halves of the array
          m = floor(n / 2)
          s = pairwise(x[1...m]) + pairwise(x[m+1...n])
      end if
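The pseudocode above translates directly into, for example, Python (the function name, zero-based slicing, and the default base-case size N = 128 are illustrative choices, not part of the algorithm):

```python
def pairwise_sum(x, lo=0, hi=None, N=128):
    """Sum x[lo:hi] by recursive halving; naive loop below the base-case size N."""
    if hi is None:
        hi = len(x)
    n = hi - lo
    if n <= N:                     # base case: naive summation of a small slice
        s = 0.0
        for i in range(lo, hi):
            s += x[i]
        return s
    m = lo + n // 2                # divide and conquer: sum the two halves
    return pairwise_sum(x, lo, m, N) + pairwise_sum(x, m, hi, N)
```

Passing index bounds instead of copying subarrays keeps the recursion cheap; the base-case loop is what makes the recursion overhead negligible for large N, as discussed below.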

For some sufficiently small N, this algorithm switches to a naive loop-based summation as a base case, whose error bound is O(Nε).[4] The entire sum has a worst-case error that grows asymptotically as O(ε log n) for large n, for a given condition number (see below).

In an algorithm of this sort (as for divide and conquer algorithms in general[5]), it is desirable to use a larger base case in order to amortize the overhead of the recursion. If N = 1, then there is roughly one recursive subroutine call for every input, but more generally there is one recursive call for (roughly) every N/2 inputs if the recursion stops at exactly n = N. By making N sufficiently large, the overhead of recursion can be made negligible (precisely this technique of a large base case for recursive summation is employed by high-performance FFT implementations[3]).

Regardless of N, exactly n−1 additions are performed in total, the same as for naive summation, so if the recursion overhead is made negligible then pairwise summation has essentially the same computational cost as for naive summation.

A variation on this idea is to break the sum into b blocks at each recursive stage, summing each block recursively, and then summing the results, which was dubbed a "superblock" algorithm by its proposers.[6] The above pairwise algorithm corresponds to b = 2 for every stage except for the last stage which is b = N.
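The superblock idea can be sketched as follows (the function name and the parameter choices b = 4, N = 32 are illustrative assumptions, not values from the source):

```python
def superblock_sum(x, b=4, N=32):
    """Recursively split the sum into b blocks per stage; naive sum below size N."""
    n = len(x)
    if n <= N:                       # last stage: naive summation (effectively b = N)
        s = 0.0
        for v in x:
            s += v
        return s
    step = -(-n // b)                # ceiling division: elements per block
    s = 0.0
    for i in range(0, n, step):      # sum each block recursively, then combine
        s += superblock_sum(x[i:i + step], b, N)
    return s
```

With b = 2 this reduces to the pairwise algorithm above, apart from the naive final stage.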

Accuracy

Suppose that one is summing n values xi, for i = 1, ..., n. The exact sum is:

    Sn = x1 + x2 + ... + xn

(computed with infinite precision).

With pairwise summation for a base case N = 1, one instead obtains Sn + En, where the error En is bounded above by:[1]

    |En| ≤ [ε log2 n / (1 − ε log2 n)] (|x1| + |x2| + ... + |xn|)

where ε is the machine precision of the arithmetic being employed (e.g. ε ≈ 10−16 for standard double precision floating point). Usually, the quantity of interest is the relative error |En|/|Sn|, which is therefore bounded above by:

    |En|/|Sn| ≤ [ε log2 n / (1 − ε log2 n)] (Σ|xi| / |Σxi|)

In the expression for the relative error bound, the fraction (Σ|xi|/|Σxi|) is the condition number of the summation problem. Essentially, the condition number represents the intrinsic sensitivity of the summation problem to errors, regardless of how it is computed.[7] The relative error bound of every (backwards stable) summation method by a fixed algorithm in fixed precision (i.e. not those that use arbitrary-precision arithmetic, nor algorithms whose memory and time requirements change based on the data) is proportional to this condition number.[1] An ill-conditioned summation problem is one in which this ratio is large, and in this case even pairwise summation can have a large relative error. For example, if the summands xi are uncorrelated random numbers with zero mean, the sum is a random walk and the condition number will grow proportional to √n. On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as n → ∞. If the inputs are all non-negative, then the condition number is 1.

Note that the 1 − ε log2 n denominator is effectively 1 in practice, since ε log2 n is much smaller than 1 until n becomes of order 2^(1/ε), which is roughly 10^(10^15) in double precision.

In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as O(εn) multiplied by the condition number.[1] In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a root mean square relative error that grows as O(ε√n) and pairwise summation has an error that grows as O(ε√(log n)) on average.[2]
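The difference in error growth is easy to observe in single precision, where ε ≈ 6×10⁻⁸ makes the effect visible at modest n. A minimal demonstration, assuming NumPy (whose default sum is pairwise, per the implementations noted in this article); the seed and array size are arbitrary:

```python
import math

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(10**6, dtype=np.float32)        # 10^6 single-precision values in [0, 1)

exact = math.fsum(float(v) for v in x)         # correctly rounded reference sum

s = np.float32(0.0)
for v in x:                                    # naive sequential float32 accumulation
    s += v
naive_err = abs(float(s) - exact) / exact

pairwise_err = abs(float(x.sum()) - exact) / exact   # np.sum: pairwise summation

# The pairwise relative error is ordinarily several orders of magnitude
# smaller than the naive one at this n.
print(naive_err, pairwise_err)
```

All inputs here are non-negative, so the condition number is 1 and the observed errors reflect the summation methods themselves rather than ill-conditioning.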

Software implementations

Pairwise summation is the default summation algorithm in NumPy[8] and the Julia technical-computing language,[9] where in both cases it was found to have comparable speed to naive summation (thanks to the use of a large base case).

Other software implementations include the HPCsharp library[10] for the C# language and the standard library summation[11] in D.

References

  1. ^ a b c d e f g Higham, Nicholas J. (1993), "The accuracy of floating point summation", SIAM Journal on Scientific Computing, 14 (4): 783–799, CiteSeerX 10.1.1.43.3535, doi:10.1137/0914050
  2. ^ a b c Manfred Tasche and Hansmartin Zeuner, Handbook of Analytic-Computational Methods in Applied Mathematics (Boca Raton, FL: CRC Press, 2000).
  3. ^ a b S. G. Johnson and M. Frigo, "Implementing FFTs in practice," in Fast Fourier Transforms, edited by C. Sidney Burrus (2008).
  4. ^ Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2 ed). SIAM. pp. 81–82.
  5. ^ Radu Rugina and Martin Rinard, "Recursion unrolling for divide and conquer programs," in Languages and Compilers for Parallel Computing, chapter 3, pp. 34–48. Lecture Notes in Computer Science vol. 2017 (Berlin: Springer, 2001).
  6. ^ Anthony M. Castaldo, R. Clint Whaley, and Anthony T. Chronopoulos, "Reducing floating-point error in dot product using the superblock family of algorithms," SIAM J. Sci. Comput., vol. 32, pp. 1156–1174 (2008).
  7. ^ L. N. Trefethen and D. Bau, Numerical Linear Algebra (SIAM: Philadelphia, 1997).
  8. ^ ENH: implement pairwise summation, github.com/numpy/numpy pull request #3685 (September 2013).
  9. ^ RFC: use pairwise summation for sum, cumsum, and cumprod, github.com/JuliaLang/julia pull request #4039 (August 2013).
  10. ^ HPCsharp: NuGet package of high-performance C# algorithms, github.com/DragonSpit/HPCsharp.
  11. ^ "std.algorithm.iteration - D Programming Language". dlang.org. Retrieved 2021-04-23.