Fourier
November 26, 2023
Fundamentos de Comunicaciones I URV
The aim of this brief paper is to show an easy and relaxed way to calculate Fourier series, based on the theory of linear algebra and orthogonal bases.
1. Introduction
Definition of Basis
Let V be a vector space over a field F. A subset B of V is called a basis for V if the following conditions hold:
1. B spans V, i.e., every vector in V can be written as a linear combination of vectors in B.
2. B is linearly independent, i.e., no vector in B can be written as a linear combination of the others.
If B is a finite set satisfying these conditions, then V is said to be finite-dimensional, and the number of vectors in B is the dimension of V, denoted dim(V).
Every vector $u \in V$ can then be written as a linear combination of the basis vectors $v_1, \dots, v_n$:
$$u = c_1 v_1 + c_2 v_2 + \dots + c_n v_n$$
For vectors with components $u_i$ and $v_i$, the scalar (inner) product is
$$\langle u, v \rangle = u_1 v_1 + u_2 v_2 + \dots + u_n v_n$$
Two vectors $u$ and $v$ are orthogonal if their scalar product is zero, i.e., $\langle u, v \rangle = 0$.
For functions, the scalar product over an interval $[a, b]$ is defined as
$$\langle f, g \rangle = \int_a^b f(x)\, g(x)\, dx$$
This integral represents the area under the graph of the product $f(x)g(x)$ over the interval $[a, b]$.
Two functions $f(x)$ and $g(x)$ are orthogonal over $[a, b]$ if their scalar product is zero, i.e., $\langle f, g \rangle = 0$.
Intuitively, this tells us how similar (or dissimilar) the two functions are.
For instance, for a basis with $n = 3$, we can take the standard basis
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$
Each vector in this basis corresponds to a unit vector along one of the coordinate axes in three-dimensional
space.
Any vector v in R3 can be expressed as a linear combination of these basis vectors:
$$v = x_1 e_1 + x_2 e_2 + x_3 e_3$$
where x1 , x2 , and x3 are the coordinates of v with respect to the standard basis.
This construction generalizes to any finite-dimensional vector space.
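To make this concrete, here is a minimal numerical sketch (my own illustration using NumPy, not part of the original notes) showing that, because the standard basis is orthonormal, the coordinates of a vector are simply its inner products with the basis vectors:

```python
import numpy as np

# Standard orthonormal basis of R^3 (rows of the identity matrix)
e1, e2, e3 = np.eye(3)

# An arbitrary vector chosen for illustration
v = np.array([2.0, -1.0, 3.0])

# Because the basis is orthonormal, each coordinate x_i is just <v, e_i>
x1, x2, x3 = v @ e1, v @ e2, v @ e3

# Reconstruct v as a linear combination of the basis vectors
v_reconstructed = x1 * e1 + x2 * e2 + x3 * e3
print(np.allclose(v, v_reconstructed))  # True
```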
In tensor notation, the Kronecker delta is often used to represent the identity matrix. For a vector space
V , the identity matrix in terms of the Kronecker delta is given by:
$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$
The Fourier series of a periodic function $f(x)$ on the interval $[a, b]$ is
$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{b-a} x\right) + b_n \sin\left(\frac{2\pi n}{b-a} x\right) \right)$$
In complex exponential form, the same series reads
$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n\, e^{i \frac{2\pi n}{b-a} x}$$
In polar coordinates, an analogous expansion is
$$f(r, \theta) \sim \sum_{n=-\infty}^{\infty} r^n \left( a_n \cos(n\theta) + b_n \sin(n\theta) \right)$$
where r is the radial coordinate, θ is the angular coordinate, and an and bn are the polar coefficients.
Now it looks difficult, but thanks to orthogonality we only need to know the orthogonality integrals listed later in these notes.
These orthogonality relations are fundamental in Fourier series and other areas of mathematical analysis.
$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{b-a} x\right) + b_n \sin\left(\frac{2\pi n}{b-a} x\right) \right)$$
To ensure the convergence and validity of the Fourier series, the following conditions must be satisfied:
1. Periodicity: The function f (x) must be periodic on the interval [a, b] with a period T = b − a.
2. Piece-wise Continuity: f (x) must be piece-wise continuous on [a, b], allowing for a finite number of
jump discontinuities.
3. Finite Number of Extrema: f(x) must have a finite number of extrema within each period T.
4. Finite Number of Discontinuities: The number of discontinuities of f (x) in [a, b] should be finite.
These conditions ensure that the Fourier series provides an accurate representation of the function within
the specified interval.
To deduce the Fourier transform from the Fourier series, we start with the exponential series representation:
$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n\, e^{i \frac{2\pi n}{b-a} x}$$
where $c_n$ are the complex Fourier coefficients. By letting $T = b - a$, $\omega_0 = \frac{2\pi}{T}$, and $\xi = \frac{n}{T}$, we can rewrite this as
$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n\, e^{i 2\pi \xi x}$$
The inverse transform recovers the original function f(x) from its frequency-domain representation F(ξ):
$$f(x) = \int_{-\infty}^{\infty} F(\xi)\, e^{i 2\pi \xi x}\, d\xi$$
3. Shifting: Multiplication by a complex exponential in the frequency domain corresponds to a shift in the time domain.
$$\mathcal{F}^{-1}\{F(\xi)\, e^{-i 2\pi \xi x_0}\} = f(x - x_0)$$
4. Scaling: Scaling in the time domain corresponds to scaling in the frequency domain.
$$\mathcal{F}^{-1}\{F(\alpha\xi)\} = \frac{1}{|\alpha|}\, f\left(\frac{x}{\alpha}\right)$$
Example
Consider the function $F(\xi) = \mathrm{sinc}(\xi)$, where $\mathrm{sinc}(\xi) = \frac{\sin(\pi\xi)}{\pi\xi}$. The inverse Fourier transform of $F(\xi)$ is given by
$$f(x) = \mathcal{F}^{-1}\{\mathrm{sinc}(\xi)\} = \int_{-\infty}^{\infty} \mathrm{sinc}(\xi)\, e^{i 2\pi \xi x}\, d\xi$$
This integral evaluates to $f(x) = \mathrm{rect}(x)$, where $\mathrm{rect}(x)$ is the rectangular function:
$$\mathrm{rect}(x) = \begin{cases} 1 & \text{if } |x| < \frac{1}{2} \\ 0 & \text{otherwise} \end{cases}$$
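As a sanity check, this transform pair can be verified numerically. The sketch below (my own, not from the notes) checks the pair in the forward direction, which is numerically easier because rect(x) has compact support: it integrates rect(x) e^{-i2πξx} over x and compares the result with NumPy's normalized np.sinc, which implements sin(πξ)/(πξ):

```python
import numpy as np

# Forward transform of rect: F(xi) = ∫_{-1/2}^{1/2} e^{-i 2 pi xi x} dx, since rect(x) = 1 there
x, dx = np.linspace(-0.5, 0.5, 20000, endpoint=False, retstep=True)
x += dx / 2                                   # midpoint rule for the integral

for xi in (0.0, 0.3, 1.0, 2.5):
    F = np.sum(np.exp(-1j * 2 * np.pi * xi * x)) * dx
    # np.sinc is the normalized sinc: sin(pi x) / (pi x)
    print(xi, round(F.real, 6), round(np.sinc(xi), 6))
```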
Returning to the Fourier series
$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{b-a} x\right) + b_n \sin\left(\frac{2\pi n}{b-a} x\right) \right)$$
the coefficients are given by
$$a_0 = \frac{2}{b-a} \int_a^b f(x)\, dx$$
$$a_n = \frac{2}{b-a} \int_a^b f(x) \cos\left(\frac{2\pi n}{b-a} x\right) dx$$
$$b_n = \frac{2}{b-a} \int_a^b f(x) \sin\left(\frac{2\pi n}{b-a} x\right) dx$$
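These formulas can be evaluated numerically. The sketch below (my own illustration, assuming the interval [a, b] = [0, 2π]) applies them to the square wave that appears later in these notes; its odd sine coefficients should come out close to 4/(πn):

```python
import numpy as np

a, b = 0.0, 2.0 * np.pi          # assumed interval [a, b]
T = b - a
x, dx = np.linspace(a, b, 200000, endpoint=False, retstep=True)

# Square wave: +1 on [0, pi), -1 on [pi, 2*pi)
f = np.where(x < np.pi, 1.0, -1.0)

a0 = (2.0 / T) * np.sum(f) * dx
print("a0 =", round(a0, 4))
for n in range(1, 6):
    an = (2.0 / T) * np.sum(f * np.cos(2 * np.pi * n * x / T)) * dx
    bn = (2.0 / T) * np.sum(f * np.sin(2 * np.pi * n * x / T)) * dx
    expected_bn = 4.0 / (np.pi * n) if n % 2 == 1 else 0.0
    print(n, round(an, 4), round(bn, 4), "expected b_n:", round(expected_bn, 4))
```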
These formulas rely on the following orthogonality integrals (valid for integers $m, n \ge 1$):
$$\int_a^b \cos\left(\frac{2\pi m}{b-a} x\right) \cos\left(\frac{2\pi n}{b-a} x\right) dx = \begin{cases} 0 & \text{if } m \neq n \\ \frac{b-a}{2} & \text{if } m = n \end{cases}$$
$$\int_a^b \sin\left(\frac{2\pi m}{b-a} x\right) \sin\left(\frac{2\pi n}{b-a} x\right) dx = \begin{cases} 0 & \text{if } m \neq n \\ \frac{b-a}{2} & \text{if } m = n \end{cases}$$
$$\int_a^b \cos\left(\frac{2\pi m}{b-a} x\right) \sin\left(\frac{2\pi n}{b-a} x\right) dx = 0$$
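These relations are easy to confirm numerically. The short sketch below (my own check, assuming [a, b] = [0, 2π]) evaluates the three integrals with scipy.integrate.quad for a few values of m and n:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 2.0 * np.pi
L = b - a
w = 2.0 * np.pi / L   # common angular factor 2*pi/(b-a)

for m in (1, 2, 3):
    for n in (1, 2, 3):
        cc, _ = quad(lambda x: np.cos(w * m * x) * np.cos(w * n * x), a, b)
        ss, _ = quad(lambda x: np.sin(w * m * x) * np.sin(w * n * x), a, b)
        cs, _ = quad(lambda x: np.cos(w * m * x) * np.sin(w * n * x), a, b)
        expected = L / 2 if m == n else 0.0   # (b-a)/2 on the diagonal, 0 otherwise
        print(m, n, round(cc, 6), round(ss, 6), round(cs, 6), expected)
```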
These orthogonality integrals are what allow each term of the series to be isolated, so that every Fourier coefficient can be computed independently.
Consider a periodic function f (x) with period T . The Fourier series expansion involves calculating the
coefficients Cn given by:
$$C_n = \frac{1}{T} \int_a^{a+T} f(x) \cos\left(\frac{2\pi n}{T} x\right) dx$$
For an even function f(x), the formula simplifies to:
$$C_n = \frac{2}{T} \int_a^{a+T} f(x) \cos\left(\frac{2\pi n}{T} x\right) dx$$
The orthogonality of trigonometric functions over one period implies the following relations:
$$\int_a^{a+T} \cos\left(\frac{2\pi m}{T} x\right) \cos\left(\frac{2\pi n}{T} x\right) dx = \frac{T}{2}\, \delta_{mn}$$
$$\int_a^{a+T} \sin\left(\frac{2\pi m}{T} x\right) \sin\left(\frac{2\pi n}{T} x\right) dx = \frac{T}{2}\, \delta_{mn}$$
$$\int_a^{a+T} \cos\left(\frac{2\pi m}{T} x\right) \sin\left(\frac{2\pi n}{T} x\right) dx = 0$$
Using these properties, we can derive the formula for the coefficients:
$$C_n = \frac{2}{T} \int_a^{a+T} f(x) \cos\left(\frac{2\pi n}{T} x\right) dx$$
This formula allows us to calculate the coefficients in the Fourier series expansion of f (x).
We measure the quality of the approximation by the squared error E between f(x) and its truncated Fourier representation. To find the minimum of this error, we differentiate E with respect to the parameters of the approximation and set the derivatives to zero.
Let g(x) be an approximation represented by a Fourier series:
$$g(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{T} x\right) + b_n \sin\left(\frac{2\pi n}{T} x\right) \right)$$
The squared error is
$$E = \int_a^b \left[ f(x) - g(x) \right]^2 dx = \int_a^b \left[ f(x) - \frac{a_0}{2} - \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{T} x\right) + b_n \sin\left(\frac{2\pi n}{T} x\right) \right) \right]^2 dx$$
To find the minimum, we differentiate E with respect to a0 , an , and bn and set the derivatives to zero.
For a0 :
$$\frac{\partial E}{\partial a_0} = -\int_a^b \left[ f(x) - g(x) \right] dx = 0$$
For an :
$$\frac{\partial E}{\partial a_n} = -2 \int_a^b \left[ f(x) - g(x) \right] \cos\left(\frac{2\pi n}{T} x\right) dx = 0$$
For bn :
$$\frac{\partial E}{\partial b_n} = -2 \int_a^b \left[ f(x) - g(x) \right] \sin\left(\frac{2\pi n}{T} x\right) dx = 0$$
Solving these equations provides the coefficients a0 , an , and bn that minimize the squared error.
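The minimization argument can be checked directly: fitting a truncated trigonometric basis to samples of f(x) by least squares should reproduce the same coefficients as the integral formulas. The rough sketch below (my own construction, using the square wave from later in the notes and numpy.linalg.lstsq) makes that comparison:

```python
import numpy as np

T = 2.0 * np.pi
N = 5                                        # number of harmonics kept (arbitrary)
x, dx = np.linspace(0.0, T, 4000, endpoint=False, retstep=True)
f = np.where(x < np.pi, 1.0, -1.0)           # square wave samples

# Design matrix with columns 1/2, cos(n x), sin(n x) for n = 1..N
cols = [0.5 * np.ones_like(x)]
cols += [np.cos(2 * np.pi * n * x / T) for n in range(1, N + 1)]
cols += [np.sin(2 * np.pi * n * x / T) for n in range(1, N + 1)]
A = np.column_stack(cols)

# Least-squares solution of A c ≈ f gives [a0, a1..aN, b1..bN]
coef, *_ = np.linalg.lstsq(A, f, rcond=None)

# Compare the sine coefficients with the integral formula b_n = (2/T) ∫ f sin dx
for n in range(1, N + 1):
    bn_integral = (2.0 / T) * np.sum(f * np.sin(2 * np.pi * n * x / T)) * dx
    print(n, round(coef[N + n], 4), round(bn_integral, 4))
```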
Alternatively, we can use the following normalized expression to compute the error.
$$g(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{T} x\right) + b_n \sin\left(\frac{2\pi n}{T} x\right) \right)$$
The minimum squared error is given by the norm squared of the difference between the signal and its
approximation:
$$E = \frac{1}{T} \int_a^{a+T} \left[ f(x) - g(x) \right]^2 dx = \frac{1}{T} \int_a^{a+T} \left[ f(x) - \frac{a_0}{2} - \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{T} x\right) + b_n \sin\left(\frac{2\pi n}{T} x\right) \right) \right]^2 dx$$
To find the minimum, differentiate E with respect to a0 , an , and bn and set the derivatives to zero.
For a0 :
$$\frac{\partial E}{\partial a_0} = -\int_a^{a+T} \left[ f(x) - g(x) \right] dx = 0$$
For an :
$$\frac{\partial E}{\partial a_n} = -2 \int_a^{a+T} \left[ f(x) - g(x) \right] \cos\left(\frac{2\pi n}{T} x\right) dx = 0$$
For bn :
$$\frac{\partial E}{\partial b_n} = -2 \int_a^{a+T} \left[ f(x) - g(x) \right] \sin\left(\frac{2\pi n}{T} x\right) dx = 0$$
Solving these equations provides the coefficients a0 , an , and bn that minimize the squared error.
For the Fourier approximation
$$g(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{T} x\right) + b_n \sin\left(\frac{2\pi n}{T} x\right) \right)$$
the minimizing coefficients are exactly the usual Fourier coefficients:
$$a_n = \frac{2}{T} \int_a^{a+T} f(x) \cos\left(\frac{2\pi n}{T} x\right) dx$$
$$b_n = \frac{2}{T} \int_a^{a+T} f(x) \sin\left(\frac{2\pi n}{T} x\right) dx$$
Now, let's consider the inner product of f(x) with the basis functions, using the normalized inner product $\langle u, v \rangle = \frac{2}{T} \int_a^{a+T} u(x)\, v(x)\, dx$ and $\omega_0 = \frac{2\pi}{T}$:
$$\langle f, \cos(n\omega_0 x) \rangle = \frac{2}{T} \int_a^{a+T} f(x) \cos\left(\frac{2\pi n}{T} x\right) dx$$
$$\langle f, \sin(n\omega_0 x) \rangle = \frac{2}{T} \int_a^{a+T} f(x) \sin\left(\frac{2\pi n}{T} x\right) dx$$
Now, we can express the coefficients in terms of inner products:
$$a_0 = \langle f, 1 \rangle, \qquad a_n = \langle f, \cos(n\omega_0 x) \rangle, \qquad b_n = \langle f, \sin(n\omega_0 x) \rangle$$
So, each Fourier coefficient can be seen as the ratio of the inner product of f (x) with the corresponding
basis function to the inner product of the basis function with itself.
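This inner-product view is easy to verify numerically. The sketch below (my own, using the square wave as a test signal and the 2/T-normalized inner product written above) shows that ⟨basis, basis⟩ ≈ 1 and that the ratio ⟨f, basis⟩ / ⟨basis, basis⟩ reproduces the known coefficients b_n = 4/(πn) for odd n:

```python
import numpy as np

T = 2.0 * np.pi
x, dx = np.linspace(0.0, T, 100000, endpoint=False, retstep=True)
f = np.where(x < np.pi, 1.0, -1.0)   # square wave, used purely as a test signal

def inner(u, v):
    # Inner product with the 2/T normalization used above: <u, v> = (2/T) ∫_0^T u(x) v(x) dx
    return (2.0 / T) * np.sum(u * v) * dx

for n in range(1, 4):
    s = np.sin(2 * np.pi * n * x / T)
    ratio = inner(f, s) / inner(s, s)            # <f, basis> / <basis, basis>
    known = 4.0 / (np.pi * n) if n % 2 == 1 else 0.0
    print(n, round(inner(s, s), 4), round(ratio, 4), round(known, 4))
```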
Parseval's theorem states that, for the Fourier series
$$g(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\left(\frac{2\pi n}{T} x\right) + b_n \sin\left(\frac{2\pi n}{T} x\right) \right),$$
the energy of the signal over one period is determined by its coefficients:
$$\frac{2}{T} \int_a^{a+T} |g(x)|^2\, dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right)$$
This relation emphasizes the conservation of energy between the time and frequency domains.
In terms of the complex exponential series
$$g(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{i\omega_0 n x},$$
the same identity reads
$$\frac{1}{T} \int_a^{a+T} |g(x)|^2\, dx = \sum_{n=-\infty}^{\infty} |c_n|^2$$
This version relates the energy in the time domain to the magnitudes of the complex coefficients in the frequency domain.
For a $2\pi$-periodic function, the series takes the form
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right)$$
Consider, for example, the function
$$f(x) = \frac{1}{\pi} \sin(x),$$
whose only nonzero coefficient is $b_1 = \frac{1}{\pi}$.
Now, let's verify Parseval's theorem, which for period $2\pi$ reads
$$\frac{1}{\pi} \int_0^{2\pi} |f(x)|^2\, dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right)$$
The left-hand side is
$$\frac{1}{\pi} \int_0^{2\pi} \frac{1}{\pi^2} \sin^2(x)\, dx = \frac{1}{\pi^2}$$
The right-hand side is the sum of the squares of the Fourier coefficients, which is $b_1^2 = \frac{1}{\pi^2}$. Therefore, Parseval's theorem holds for this example.
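The identity is also easy to confirm with a quick quadrature. The sketch below (my own check of the example above) compares the two sides numerically:

```python
import numpy as np

x, dx = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False, retstep=True)
f = np.sin(x) / np.pi                        # the example function f(x) = (1/pi) sin(x)

lhs = (1.0 / np.pi) * np.sum(f ** 2) * dx    # (1/pi) ∫_0^{2pi} |f(x)|^2 dx
rhs = (1.0 / np.pi) ** 2                     # b_1^2, the only nonzero coefficient squared
print(round(lhs, 6), round(rhs, 6))          # both ≈ 0.101321
```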
As another example, consider the square wave
$$f(x) = \begin{cases} 1 & \text{for } 0 \le x < \pi \\ -1 & \text{for } \pi \le x < 2\pi \end{cases}$$
Its Fourier series contains only odd sine harmonics:
$$f(x) = \frac{4}{\pi} \sum_{n=1}^{\infty} \frac{\sin((2n-1)x)}{2n-1}$$
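A quick way to see this series at work is to evaluate a partial sum at a few points: away from the jumps it approaches ±1, while exactly at the discontinuity x = π it converges to the midpoint 0. A small sketch (my own, with N = 50 terms chosen arbitrarily):

```python
import numpy as np

def square_partial_sum(x, N=50):
    # Partial sum of (4/pi) * sum_{n=1}^{N} sin((2n-1)x) / (2n-1)
    n = np.arange(1, N + 1)
    return (4.0 / np.pi) * np.sum(np.sin((2 * n - 1) * x[:, None]) / (2 * n - 1), axis=1)

pts = np.array([0.5, np.pi / 2, 2.0, np.pi, 4.0])
print(square_partial_sum(pts))   # ≈ [1, 1, 1, 0, -1] (up to small Gibbs ripples)
```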
Next, consider the sawtooth wave
$$g(x) = \frac{x}{2\pi} \quad \text{for } 0 \le x < 2\pi,$$
extended periodically. Since g(x) minus its mean value $\tfrac{1}{2}$ is an odd function, its Fourier series contains only sine terms:
$$g(x) = \frac{1}{2} - \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{\sin(nx)}{n}$$
$$h(x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2} \sin(nx)$$
Some common Fourier transform pairs (using angular frequency $\omega$) are listed below.
1. Impulse ($\delta(t)$)
$$\mathcal{F}\{\delta(t)\} = 1$$
2. Unit step ($u(t)$)
$$\mathcal{F}\{u(t)\} = \frac{1}{i\omega} + \pi\delta(\omega)$$
3. Decaying exponential ($e^{-at}u(t)$, with $a > 0$)
$$\mathcal{F}\{e^{-at}u(t)\} = \frac{1}{a + i\omega}$$
4. Rectangular pulse ($\mathrm{rect}(t)$)
$$\mathcal{F}\{\mathrm{rect}(t)\} = \mathrm{sinc}\left(\frac{\omega}{2}\right) = \frac{\sin(\omega/2)}{\omega/2}$$
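Any of these pairs can be verified by evaluating the defining integral numerically. The sketch below (my own illustration, with the arbitrary choice a = 1 and scipy.integrate.quad) compares the transform of e^{-at}u(t) with 1/(a + iω) at a few frequencies:

```python
import numpy as np
from scipy.integrate import quad

a = 1.0                                   # decay rate of e^{-at} u(t), assumed a > 0

for w in (0.0, 1.0, 2.5, 10.0):
    # Split the transform integral ∫_0^∞ e^{-at} e^{-i w t} dt into real and imaginary parts
    re, _ = quad(lambda t: np.exp(-a * t) * np.cos(w * t), 0.0, np.inf)
    im, _ = quad(lambda t: -np.exp(-a * t) * np.sin(w * t), 0.0, np.inf)
    numeric = re + 1j * im
    exact = 1.0 / (a + 1j * w)
    print(w, numeric, exact)
```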