
Polynomial Multiplication and Fast Fourier Transform

(Com S 477/577 Notes)

Yan-Bin Jia

Sep 20, 2022

In this lecture we will describe the famous algorithm of fast Fourier transform (FFT), which
has revolutionized digital signal processing and in many ways changed our life. It was listed by the
Science magazine as one of the ten greatest algorithms in the 20th century. Here we will learn FFT
in the context of polynomial multiplication, and later on into the semester reveal its connection to
Fourier transform.
Suppose we are given two polynomials:

p(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1},

q(x) = b_0 + b_1 x + \cdots + b_{n-1} x^{n-1}.

Their product is defined by

p(x) \cdot q(x) = c_0 + c_1 x + \cdots + c_{2n-2} x^{2n-2},

where

c_i = \sum_{\max\{0,\, i-(n-1)\} \le k \le \min\{i,\, n-1\}} a_k b_{i-k}.

In computing the product polynomial, every a_i is multiplied with every b_j, for 0 ≤ i, j ≤ n − 1, so there are at most n^2 multiplications (fewer if some of the coefficients are zero). Obtaining each c_i takes one fewer addition than the number of products being summed, so there are at most n^2 − (2n − 1) = n^2 − 2n + 1 additions in total. In short, the number of arithmetic operations is O(n^2). This is hardly efficient.
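For reference, the quadratic-time method is a direct transcription of the coefficient formula above (a minimal Python sketch; the function name is ours):

```python
def poly_mult_naive(a, b):
    """Multiply two polynomials given as coefficient lists, lowest order first.

    Computes c_i = sum of a_k * b_{i-k} using Theta(n^2) operations.
    """
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj   # every a_i meets every b_j
    return c

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_mult_naive([1, 2], [3, 4]))  # → [3, 10, 8]
```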
But can we obtain the product more efficiently? The answer is yes, by the use of a well-known
method called fast Fourier transform, or simply, FFT.

1 Discrete Fourier Transform


Let us start by introducing the discrete Fourier transform (DFT) problem. Denote by \omega_n a primitive nth complex root of 1, that is, \omega_n = e^{2\pi i/n}, where i^2 = -1. The DFT is the mapping between two vectors:

a = (a_0, a_1, \ldots, a_{n-1})^T \longmapsto \hat{a} = (\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_{n-1})^T

such that

\hat{a}_j = \sum_{k=0}^{n-1} a_k \omega_n^{jk}, \qquad j = 0, \ldots, n-1.

It can also be written as a matrix equation:

\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega_n & \omega_n^2 & \cdots & \omega_n^{n-1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega_n^{n-1} & \omega_n^{2(n-1)} & \cdots & \omega_n^{(n-1)^2}
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{pmatrix}
=
\begin{pmatrix} \hat{a}_0 \\ \hat{a}_1 \\ \vdots \\ \hat{a}_{n-1} \end{pmatrix}.

The matrix above is a Vandermonde matrix, denoted by V_n.
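In code, the definition can be evaluated directly, at a cost of Θ(n^2) operations — one inner sum per output entry (a Python sketch using the convention \omega_n = e^{2\pi i/n} of these notes; the function name is ours):

```python
import cmath

def dft_direct(a):
    """Direct O(n^2) DFT: a_hat[j] = sum_k a[k] * w_n**(j*k), w_n = e^(2*pi*i/n)."""
    n = len(a)
    wn = cmath.exp(2j * cmath.pi / n)
    return [sum(a[k] * wn ** (j * k) for k in range(n)) for j in range(n)]

# The DFT of (1, 1, 1, 1) is (4, 0, 0, 0): p(x) = 1 + x + x^2 + x^3
# vanishes at i, -1, -i and equals 4 at 1.
```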


Essentially, the DFT evaluates the polynomial

p(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1}

at the n points \omega_n^0, \omega_n^1, \ldots, \omega_n^{n-1}; in other words, \hat{a}_k = p(\omega_n^k) for 0 \le k \le n-1. From now on we assume that n is a power of 2. If not, we can always pad with higher-order terms whose coefficients are zero: a_n = a_{n+1} = \cdots = a_{2^{\lceil \log_2 n \rceil} - 1} = 0. The powers of \omega_n are illustrated in the complex plane in the following figure.

[Figure: the n powers of \omega_n, equally spaced on the unit circle in the complex plane, with \omega_n^0 = 1, \omega_n^{n/4} = i, \omega_n^{n/2} = -1, and \omega_n^{3n/4} = -i.]

The fast Fourier transform algorithm cleverly makes use of the following properties of \omega_n:

\omega_n^n = 1,
\omega_n^{n+k} = \omega_n^k,
\omega_n^{n/2} = -1,
\omega_n^{n/2+k} = -\omega_n^k.

It uses a divide-and-conquer strategy. More specifically, it divides p(x) into two polynomials p_0(x) and p_1(x), both of degree n/2 - 1; namely,

p_0(x) = a_0 + a_2 x + \cdots + a_{n-2} x^{n/2-1},

p_1(x) = a_1 + a_3 x + \cdots + a_{n-1} x^{n/2-1}.

Hence

p(x) = p_0(x^2) + x\,p_1(x^2).    (1)
In this way the problem of evaluating p(x) at \omega_n^0, \ldots, \omega_n^{n-1} breaks down into two steps:

1. evaluating p_0(x) and p_1(x) at (\omega_n^0)^2, (\omega_n^1)^2, \ldots, (\omega_n^{n-1})^2;

2. combining the results according to (1).

Note that the list (\omega_n^0)^2, (\omega_n^1)^2, \ldots, (\omega_n^{n-1})^2 consists of only n/2 distinct complex roots of unity, namely \omega_n^0, \omega_n^2, \ldots, \omega_n^{n-2}, each appearing twice. So the subproblems of evaluating p_0(x) and p_1(x) have exactly the same form as the original problem of evaluating p(x), only at half the size. This decomposition forms the basis for the recursive FFT algorithm presented below.
Recursive-DFT(a, n)
 1   if n = 1
 2       then return a
 3   \omega_n \leftarrow e^{2\pi i/n}
 4   \omega \leftarrow 1
 5   a^{[0]} \leftarrow (a_0, a_2, \ldots, a_{n-2})
 6   a^{[1]} \leftarrow (a_1, a_3, \ldots, a_{n-1})
 7   \hat{a}^{[0]} \leftarrow Recursive-DFT(a^{[0]}, n/2)
 8   \hat{a}^{[1]} \leftarrow Recursive-DFT(a^{[1]}, n/2)
 9   for k = 0 to n/2 - 1 do
10       \hat{a}_k \leftarrow \hat{a}_k^{[0]} + \omega \hat{a}_k^{[1]}
11       \hat{a}_{k+n/2} \leftarrow \hat{a}_k^{[0]} - \omega \hat{a}_k^{[1]}
12       \omega \leftarrow \omega \omega_n
13   return (\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_{n-1})
To verify correctness, consider line 11 in the procedure Recursive-DFT:

\hat{a}_{k+n/2} = \hat{a}_k^{[0]} - \omega \hat{a}_k^{[1]}.

At the kth iteration of the for loop of lines 9–12, \omega = \omega_n^k. We have

\hat{a}_{k+n/2} = \hat{a}_k^{[0]} - \omega_n^k \hat{a}_k^{[1]}
              = \hat{a}_k^{[0]} + \omega_n^{k+n/2} \hat{a}_k^{[1]}
              = p_0(\omega_n^{2k}) + \omega_n^{k+n/2} p_1(\omega_n^{2k})
              = p_0(\omega_n^{2k+n}) + \omega_n^{k+n/2} p_1(\omega_n^{2k+n})
              = p(\omega_n^{k+n/2}),   by (1),

where the second step uses \omega_n^{n/2} = -1 and the last step uses (\omega_n^{k+n/2})^2 = \omega_n^{2k+n}.
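The pseudocode translates almost line for line into Python (a sketch assuming n is a power of 2; slicing with a[0::2] and a[1::2] plays the role of lines 5–6):

```python
import cmath

def recursive_dft(a):
    """FFT of a coefficient list whose length n is a power of 2.

    Returns a_hat with a_hat[j] = sum_k a[k] * w_n**(j*k), in Theta(n log n) time.
    """
    n = len(a)
    if n == 1:                               # lines 1-2: base case
        return list(a)
    wn = cmath.exp(2j * cmath.pi / n)        # line 3: principal nth root of unity
    w = 1.0
    y0 = recursive_dft(a[0::2])              # lines 5, 7: even coefficients -> p0
    y1 = recursive_dft(a[1::2])              # lines 6, 8: odd coefficients -> p1
    y = [0j] * n
    for k in range(n // 2):                  # lines 9-12: combine using (1)
        y[k] = y0[k] + w * y1[k]             #   p(w_n^k)
        y[k + n // 2] = y0[k] - w * y1[k]    #   p(w_n^(k + n/2))
        w *= wn
    return y
```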

Let T(n) be the running time of Recursive-DFT. Steps 1–6 take time Θ(n). Steps 7 and 8 each take time T(n/2). Steps 9–13 take time Θ(n). So we end up with the recurrence

T(n) = 2T(n/2) + Θ(n),

which has the solution

T(n) = Θ(n log_2 n).
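The solution follows by unrolling the recurrence: the recursion has log_2 n levels, and the combined Θ(n) work at each level sums to

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + cn + cn
     = \cdots
     = n\,T(1) + cn\log_2 n
     = \Theta(n \log_2 n).
```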

2 Inverse DFT
Suppose we need to compute the inverse Fourier transform

a = V_n^{-1} \hat{a}.

Namely, we would like to determine the coefficients of the polynomial p(x) = a_0 + \cdots + a_{n-1} x^{n-1} given its values at \omega_n^0, \ldots, \omega_n^{n-1}. Can we do it with the same efficiency, that is, in time Θ(n log n)?
The answer is yes. To see why, note that the Vandermonde matrix V_n has inverse

V_n^{-1} = \frac{1}{n}
\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \omega_n^{-1} & \omega_n^{-2} & \cdots & \omega_n^{-(n-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \omega_n^{-(n-1)} & \omega_n^{-2(n-1)} & \cdots & \omega_n^{-(n-1)^2}
\end{pmatrix}.

To verify the above, make use of the equation \sum_{j=0}^{n-1} (\omega_n^k)^j = 0 for any non-negative integer k not divisible by n.

Based on the above observation, we can still apply Recursive-DFT by replacing a with \hat{a}, \hat{a} with a, and \omega_n with \omega_n^{-1} (that is, \omega_n^{n-1}), and scaling the result by 1/n.
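As a sketch, the substitution amounts to flipping the sign of the exponent and scaling by 1/n (shown here with a direct Θ(n^2) transform for brevity; a production version would substitute Recursive-DFT, and the function names are ours):

```python
import cmath

def transform(a, sign=+1):
    """Direct DFT with root w_n**sign; sign=+1 matches the convention of these notes."""
    n = len(a)
    wn = cmath.exp(sign * 2j * cmath.pi / n)
    return [sum(a[k] * wn ** (j * k) for k in range(n)) for j in range(n)]

def inverse_dft(a_hat):
    """Recover a from a_hat: the same transform with w_n^(-1), scaled by 1/n."""
    n = len(a_hat)
    return [v / n for v in transform(a_hat, sign=-1)]
```

A quick round trip, inverse_dft(transform(a)), returns a up to floating-point error.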

3 Fast Multiplication of Two Polynomials


Let us now go back to the two polynomials at the beginning:

p(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1},

q(x) = b_0 + b_1 x + \cdots + b_{n-1} x^{n-1}.

Their product

(p \cdot q)(x) = p(x) \cdot q(x) = c_0 + c_1 x + \cdots + c_{2n-2} x^{2n-2}

can be computed by combining FFT with interpolation. The computation takes time Θ(n log n) and consists of the following three steps:

1. Evaluate p(x) and q(x) at the 2n points \omega_{2n}^0, \ldots, \omega_{2n}^{2n-1} using DFT. This step takes time Θ(n log n).

2. Obtain the values of p(x)q(x) at these 2n points through pointwise multiplication:

   (p \cdot q)(\omega_{2n}^0) = p(\omega_{2n}^0) \cdot q(\omega_{2n}^0),
   (p \cdot q)(\omega_{2n}^1) = p(\omega_{2n}^1) \cdot q(\omega_{2n}^1),
   \vdots
   (p \cdot q)(\omega_{2n}^{2n-1}) = p(\omega_{2n}^{2n-1}) \cdot q(\omega_{2n}^{2n-1}).

   This step takes time Θ(n).

3. Interpolate the polynomial p \cdot q at the product values using inverse DFT to obtain the coefficients c_0, c_1, \ldots, c_{2n-2}. This last step requires time Θ(n log n).
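The three steps can be sketched with NumPy's FFT routines (this sketch assumes numpy is available; note that numpy.fft uses the opposite sign convention, e^{-2\pi i/n}, which cancels in the forward–inverse pair and does not affect the product):

```python
import numpy as np

def poly_mult_fft(a, b):
    """Multiply coefficient lists a and b in Theta(n log n) time via FFT."""
    m = len(a) + len(b) - 1            # number of coefficients in the product
    size = 1 << (m - 1).bit_length()   # next power of 2, at least m
    fa = np.fft.fft(a, size)           # step 1: evaluate p at the roots of unity
    fb = np.fft.fft(b, size)           #         evaluate q at the same points
    fc = fa * fb                       # step 2: pointwise multiplication
    c = np.fft.ifft(fc)[:m]           # step 3: interpolate by inverse DFT
    return [round(v.real) for v in c]  # integer inputs: round away float error

print(poly_mult_fft([1, 2], [3, 4]))  # → [3, 10, 8]
```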

We can also use FFT to compute the convolution of two vectors

a = (a_0, \ldots, a_{n-1}) and b = (b_0, \ldots, b_{n-1}),

which is defined as the vector c = (c_0, \ldots, c_{n-1}) where

c_j = \sum_{k=0}^{j} a_k b_{j-k}, \qquad j = 0, \ldots, n-1.

The running time is again Θ(n log n).
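Since c consists of the first n coefficients of the product polynomial, the same three-step procedure applies (again a NumPy sketch; the function name is ours):

```python
import numpy as np

def convolve_fft(a, b):
    """c[j] = sum_{k=0}^{j} a[k]*b[j-k] for j = 0..n-1, via FFT in Theta(n log n)."""
    n = len(a)
    size = 1 << (2 * n - 2).bit_length()   # room for the full product, power of 2
    c = np.fft.ifft(np.fft.fft(a, size) * np.fft.fft(b, size))
    return [round(v.real) for v in c[:n]]  # keep only the first n coefficients

print(convolve_fft([1, 2, 3], [4, 5, 6]))  # → [4, 13, 28]
```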

4 History of FFT
Modern FFT is widely credited to the 1965 paper [2] by Cooley and Tukey, but the algorithm had been discovered independently by several people before them. It was only the advent of digital computers and the wide application of signal processing that made people realize the importance of computing large Fourier series fast. An incomplete list of pioneers includes

• Gauss (1805) — the earliest known origin of the FFT algorithm.

• Runge and König (1924) — the doubling algorithm.

• Danielson and Lanczos (1942) — divide-and-conquer on DFTs.

• Rudnick (1960s) — the first computer-program implementation running in O(n log n) time.

References
[1] T. H. Cormen et al. Introduction to Algorithms. McGraw-Hill, Inc., 2nd edition, 2001.

[2] J. W. Cooley and J. W. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301, 1965.
