Signal Reconstruction Using Neural Networks
Neural Networks
Aniket Sujay
17122006
Let’s break down the title.
Signal Reconstruction.
● In signal processing, reconstruction usually means determining the original or
expected signal from a sequence of samples.
● Given a type of signal, we train the neural network on it; when some “broken”
input is later provided, the network recovers the original signal with some degree
of accuracy.
Neural Networks
● Neural networks, or artificial neural networks (ANNs), are computing systems
that learn to perform tasks by considering examples, without being explicitly
programmed to do so.
● ANNs are weighted, directed computational graphs. Each node in the graph
performs some computation on its input, multiplies the result by a weight, and
sends it to the next node (a minimal sketch of one such unit follows the diagram
below).
● The network “learns” by examining the difference between the expected output
and the computed output, and using that difference to adjust the computation
parameters.
[Diagram: input layer, hidden layer of processing units, output layer]
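Below is a minimal sketch of one such processing unit in Python; the sigmoid activation and the specific numbers are illustrative assumptions, not values from the demonstration.

import numpy as np

def sigmoid(z):
    # Activation function: squashes the net input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def unit_output(x, w, b):
    # Propagation rule: weighted sum of incoming values plus a bias,
    # passed through the activation function.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # outputs arriving from preceding units
w = np.array([0.1, 0.4, -0.2])   # connection weights (illustrative values)
b = 0.05                         # bias / offset of this unit
print(unit_output(x, w, b))      # this unit's state of activation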
● The first layer is the input layer from which data is fed into the graph.
● Each circle in the graph represents a processing unit.
● Each unit has a state of activation, which is equivalent to the output of the unit.
● Connections within the system denote the influence of a preceding unit on the
input fed into the next unit. This influence is represented by a number called a
weight.
● Associated with each succeeding unit is a bias, or offset, which shifts the
incoming input.
● We define a propagation rule for each unit that determines the output of that
unit.
● A learning rule is defined for the graph as a whole.
● Ex: In most ANNs we define a cost function that represents the error of the
computation. The job of the network is to reduce this error as much as possible,
using algorithms such as stochastic gradient descent.
● After one computation is done, we need to adjust the parameters of the units so
that the error is reduced.
● This is done using an algorithm called backpropagation.
● After repeated iterations we obtain the result (a minimal sketch of this training
loop follows this list).
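As a minimal sketch of this training loop, the toy example below fits a single linear unit with a mean-squared-error cost and plain gradient descent; the data, learning rate, and step count are made up for illustration.

import numpy as np

# Toy data: the unit should learn y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0

w, b = 0.0, 0.0        # parameters to be adjusted
lr = 0.1               # learning rate

for step in range(200):
    y_hat = w * x + b                  # forward computation
    error = y_hat - y
    cost = np.mean(error ** 2)         # cost function (mean squared error)
    # Backpropagation for this one-unit graph: gradients of the cost
    # with respect to each parameter.
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    # Gradient-descent update: move the parameters so the cost shrinks.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b, cost)                      # w -> ~2, b -> ~1, cost -> ~0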
Recurrent Neural Networks?
● Our aim in this demonstration is to showcase how neural networks are adept at
working with sequential data.
● For this purpose a special kind of neural network was developed, called the
recurrent neural network (RNN).
● Recurrent networks are based on the concept of memory: the network remembers
the patterns in the data and, based on those patterns, predicts the next number in
the sequence.
● This memory capability is implemented by a feedback connection in the unit (a
minimal sketch follows the diagram below).
[Diagram: input layer, output layer, and a feedback connection]
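A minimal sketch of such a recurrent unit, where the hidden state h is the feedback path; the sizes and random weights are illustrative assumptions.

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # The new hidden state depends on the current input AND on the previous
    # hidden state fed back into the unit -- this is the "memory".
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

hidden = 4
rng = np.random.default_rng(0)
W_x = rng.normal(0, 0.1, (hidden, 1))        # input-to-hidden weights
W_h = rng.normal(0, 0.1, (hidden, hidden))   # feedback (hidden-to-hidden) weights
b = np.zeros(hidden)

sequence = [np.array([0.0]), np.array([0.5]), np.array([1.0])]
h = np.zeros(hidden)
for x_t in sequence:
    h = rnn_step(x_t, h, W_x, W_h, b)        # same weights reused at every step
print(h)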
Demonstration.
● My demonstration consists of 4 parts.
● First is constructing the input signal. This is done using a Fourier series.
● Second is training the RNN on the generated data.
● Then we have to calibrate the hyperparameters of the RNN for maximum
accuracy and fast convergence.
● Finally, the original and reconstructed signals are compared (a sketch of this
pipeline follows).
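A condensed sketch of this pipeline is given below; the Fourier coefficients, noise model, window length, network size, and epoch count are illustrative assumptions rather than the settings of the actual demonstration, and Keras is assumed as the framework.

import numpy as np
import tensorflow as tf

# 1. Construct the input signal as a truncated Fourier series.
t = np.linspace(0.0, 2.0 * np.pi, 500)
signal = sum((1.0 / k) * np.sin(k * t) for k in range(1, 6))

# A "broken" version of the signal; additive noise stands in for the corruption.
broken = signal + np.random.normal(0.0, 0.1, signal.shape)

# 2. Slice the sequence into (window of broken samples -> next clean sample) pairs.
window = 20
X = np.array([broken[i:i + window] for i in range(len(signal) - window)])
y = np.array([signal[i + window] for i in range(len(signal) - window)])
X = X[..., np.newaxis]                       # shape: (samples, window, 1)

# 3. A small recurrent network; layer size, optimizer, and epoch count are
#    hyperparameters that would be calibrated for accuracy and convergence speed.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)

# 4. Reconstruct and compare against the original signal.
reconstructed = model.predict(X, verbose=0).ravel()
print(np.mean((reconstructed - signal[window:]) ** 2))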
Results: