Super Listener: 2. Signal Processing
SUPER LISTENER
P.MANO
Electronics & Communication Engg.
Renganayagi Varatharaj College of Engineering
Sivakasi, Tamil Nadu

M.DINESH
Electronics & Communication Engg.
Renganayagi Varatharaj College of Engineering
Sivakasi, Tamil Nadu
ABSTRACT
This paper presents a synthesis-based technique to improve human listening through a device. The method used here is a sound-synthesis technique, applied to speech synthesized from the parameters derived by processing the spoken phrase "Digital Signal Processing". At the output of the device, the multiple signals detected at the input are synthesized separately and each is filtered specifically. The device can therefore also be realized as a mobile phone running a dedicated application: the phone performs the required processing, takes its input as audio through a detection circuit, and delivers its output to the listener through a hearing-enhancement device.
1. INTRODUCTION
Speech-processing devices such as digital hearing aids, mobile phones and other man-machine interfaces are part of our daily life, and they must be made more robust under noisy environmental conditions. In today's technically developed world there is a great deal of noise pollution. A sound-proof room may be available at home, but it cannot be used everywhere; a mobile phone, however, can easily be carried and made to act as a kind of sound proofing. Human speech can be modeled as a filter acting on an excitation waveform. There often occur conditions under which we measure and then transform the speech signal into another form in order to enhance our ability to communicate with each other; the bandwidth of interest for such speech signals is about 4 kHz. The detected audio signal is an analog signal; it is then converted to digital form through an analog-to-digital converter. One of the most common noises is background noise, which is present at any location. Other types of noise include channel noise, which affects both digital and analog transmission. The speech signal is thus corrupted by various noises such as Gaussian noise and babble noise.

2. SIGNAL PROCESSING
Humans are the most advanced signal processors for speech and pattern recognition and for speech synthesis, so it is comparatively easy for a human listener to separate a human voice from other sounds. Sounds in general are vibrations: pressure waves made up of compressions and rarefactions.
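As a concrete illustration of the capture and digitization described above, the following MATLAB sketch reads a recorded utterance, band-limits it to roughly the 4 kHz speech band and plays it back. The file name speech.wav and the 8 kHz target sampling rate are assumptions made only for this example, not values fixed by the paper.

% Minimal sketch of the capture and A/D stage (file name and target rate assumed).
[x, fs]  = audioread('speech.wav');      % digitized microphone signal
x        = mean(x, 2);                   % fold stereo to mono
fsTarget = 8000;                         % 8 kHz sampling preserves the ~4 kHz speech band
y        = resample(x, fsTarget, fs);    % low-pass filters and resamples in one step
soundsc(y, fsTarget);                    % listen to the band-limited speech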
3. METHODS
1. Specific-sound filtering method
2. Amplification method
4. AUDIO DETECTION
A sound wave is a pressure wave, so a detector can be used to sense the oscillation in pressure from a high pressure to a low pressure and back to a high pressure.
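In software, such a detector can be approximated by tracking the short-time energy of the sampled pressure wave and flagging the frames whose energy rises above a threshold. The MATLAB sketch below is a minimal illustration; the 20 ms frame length, the relative threshold and the file name are assumptions chosen only for this example.

% Sketch of a short-time-energy audio detector (frame size and threshold assumed).
[x, fs]  = audioread('speech.wav');
x        = x(:, 1);
frameLen = round(0.02 * fs);                 % 20 ms analysis frames
nFrames  = floor(length(x) / frameLen);
energy   = zeros(nFrames, 1);
for k = 1:nFrames
    frame     = x((k-1)*frameLen + 1 : k*frameLen);
    energy(k) = sum(frame .^ 2);             % energy of the pressure oscillation
end
active = energy > 0.1 * max(energy);         % frames where sound is detected
plot(energy); grid on; title('Short-time energy per frame');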
5. AMPLIFICATION METHOD
The problem was explained in the sections above, and the solution is as follows: both speech signals are amplified and compared, and the user finally decides which talker he or she does not want to hear. Both talkers' speech signals are identified, amplified and compared by the device, and the result is then shown to the user so that the user can take the decision. For example, when two people are talking to us at the same time, a device that enhances listening lets us filter out one person's sound and still hear the other person's speech clearly. The same method can also be used to amplify a faint sound: if the user is at a political meeting and cannot hear the leader's speech clearly, the device lets him hear the leader's speech clearly.
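The compare-and-amplify step can be sketched in MATLAB as follows, assuming the two talkers' signals are already available as separate recordings (the separation itself is not shown). The file names, the gain values and the variable keep, which stands in for the user's decision, are all illustrative assumptions.

% Sketch of the compare-and-amplify step (talkers assumed already separated).
[s1, fs] = audioread('talker1.wav');
[s2, ~]  = audioread('talker2.wav');
n  = min(length(s1), length(s2));
s1 = s1(1:n, 1);  s2 = s2(1:n, 1);
level1 = sqrt(mean(s1 .^ 2));            % loudness of talker 1
level2 = sqrt(mean(s2 .^ 2));            % loudness of talker 2
fprintf('Talker 1: %.3f   Talker 2: %.3f\n', level1, level2);
keep = 1;                                % user's decision after seeing the comparison
if keep == 1
    out = 4 * s1 + 0.1 * s2;             % amplify the wanted talker, suppress the other
else
    out = 0.1 * s1 + 4 * s2;
end
soundsc(out, fs);                        % play the enhanced signal to the listener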
Algorithm I:
i) s(n): the .wav file, for a running time of 1 min or 2 min
ii) For the running time
iii) Input signal
iv) Original signal
v) After applying the windowing / filter / LPC algorithms
vi) Synthesized signal
vii) After applying the codec file: (codec, fs, D)
viii) LPC = 16
ix) LPC + codec
x) Input signal
xi) For a running time of 1 sec / 1 min / 2 min
xii) Codec with the file or file name (ref. 16 kb)
xiii) Plot the codec output
xiv) Grid
xv) Run
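A minimal MATLAB sketch of Algorithm I is given below: the speech file is processed frame by frame with a Hamming window, 16th-order LPC analysis, inverse filtering to obtain the excitation, and re-synthesis from that excitation, after which the original and synthesized signals are plotted. The codec step is omitted here, and the file name and 30 ms frame length are assumptions.

% Sketch of Algorithm I: frame-wise LPC analysis and re-synthesis (order 16).
[s, fs]  = audioread('speech.wav');        % assumed input file
s        = s(:, 1);
p        = 16;                             % LPC order, as in the algorithm
frameLen = round(0.03 * fs);               % 30 ms frames (assumed)
nFrames  = floor(length(s) / frameLen);
synth    = zeros(nFrames * frameLen, 1);
for k = 1:nFrames
    idx   = (k-1)*frameLen + 1 : k*frameLen;
    frame = s(idx) .* hamming(frameLen);   % windowing
    a     = lpc(frame, p);                 % LPC coefficients
    e     = filter(a, 1, frame);           % inverse filter -> excitation/residual
    synth(idx) = filter(1, a, e);          % re-synthesize from the residual
    % (a full system would pass e through the codec before re-synthesis)
end
subplot(2,1,1); plot(s);     grid on; title('Original signal');
subplot(2,1,2); plot(synth); grid on; title('Synthesized signal');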
Algorithm II:
Step 1: s(n), the .wav file
Step 2: Input signal
Step 3: Original file
Step 4: After applying the windowing / filter / LPC algorithm
Step 5: Synthesized signal
Step 6: After applying the codec file
Step 7: (codec, fs, D); LPC = 16 + codec
Step 8: Debug, save and run
Step 9: The output signal is received as a voiced or unvoiced signal
Step 10: Expected response: the original plot and the synthesized plot
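The voiced/unvoiced label that Algorithm II expects at its output can be approximated, for illustration, by a simple frame-energy and zero-crossing-rate test, as in the MATLAB sketch below. The thresholds, frame length and file name are assumptions and are not part of the algorithm as stated.

% Sketch of a frame-wise voiced/unvoiced decision (thresholds assumed).
[s, fs]  = audioread('speech.wav');
s        = s(:, 1);
frameLen = round(0.03 * fs);
nFrames  = floor(length(s) / frameLen);
energy   = zeros(nFrames, 1);
zcr      = zeros(nFrames, 1);
for k = 1:nFrames
    frame     = s((k-1)*frameLen + 1 : k*frameLen);
    energy(k) = sum(frame .^ 2);                              % frame energy
    zcr(k)    = sum(abs(diff(sign(frame)))) / (2 * frameLen); % zero-crossing rate
end
voiced = energy > 0.1 * max(energy) & zcr < 0.15;   % voiced: strong, few crossings
stairs(double(voiced)); grid on; title('Voiced (1) / unvoiced (0) per frame');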
Algorithm III: Simulation Procedure
i) Enter the MATLAB environment
ii) Open the MATLAB software
iii) After opening MATLAB, the first screen appears
iv) It shows the existing data
v) Select the project from the computer
vi) Import the project into the MATLAB software
vii) Click on the executable file; the practical values are then observed
Algorithm IV:
Step 1: s(n) = speech signal
Step 2: Windowing
Step 3: Pre-emphasis
Step 4: LPC, y(n)
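A single-frame MATLAB sketch of Algorithm IV follows: a frame of s(n) is windowed, pre-emphasized and analysed with a 16th-order LPC model, and the one-step LPC prediction is taken as y(n). The frame position, the 512-sample frame length and the 0.97 pre-emphasis coefficient are assumptions.

% Sketch of Algorithm IV on a single frame: windowing, pre-emphasis, LPC prediction y(n).
[s, fs] = audioread('speech.wav');
s     = s(:, 1);
frame = s(1:512) .* hamming(512);        % Step 2: windowing (frame position/length assumed)
pre   = filter([1 -0.97], 1, frame);     % Step 3: pre-emphasis (coefficient assumed)
a     = lpc(pre, 16);                    % Step 4: LPC coefficients, order 16
y     = filter([0 -a(2:end)], 1, pre);   % y(n): one-step LPC prediction of the frame
plot([pre y]); grid on; legend('pre-emphasized s(n)', 'predicted y(n)');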
Algorithm V:
Step 1: Windowing and pre-emphasis
Step 2: LPC coefficients
Step 3: Wavelet-based analysis
Step 4: Inverse filtering
Step 5: Voiced/unvoiced detection. If the segment is voiced, a vowel (A, E, I, O, U) is present; the pitch frequency is calculated and the segment is treated as a voiced, high-magnitude signal.
Step 6: Pitch: the frequency at which the signal peak (maximum magnitude) occurs.
The energy/magnitude versus frequency graph of the signal is identified with the help of the pitch values. Pitch calculation: the frequency at which the magnitude (or energy) is maximum is taken as the pitch.
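The pitch step of this algorithm can be sketched in MATLAB as follows: a windowed, pre-emphasized frame is inverse-filtered with its LPC coefficients, and the frequency at which the magnitude spectrum of the residual peaks, within an assumed 50 to 500 Hz search band, is taken as the pitch. The wavelet-based analysis step is not shown, and the frame position, file name and search band are assumptions.

% Sketch of the pitch step: frequency of the maximum magnitude of the residual spectrum.
[s, fs] = audioread('speech.wav');
s     = s(:, 1);
frame = s(1:1024) .* hamming(1024);            % windowed frame (position/length assumed)
pre   = filter([1 -0.97], 1, frame);           % pre-emphasis
a     = lpc(pre, 16);                          % LPC coefficients
res   = filter(a, 1, pre);                     % inverse filtering -> residual
mag   = abs(fft(res));                         % magnitude vs. frequency
f     = (0:length(res)-1) * fs / length(res);  % frequency axis in Hz
band  = f >= 50 & f <= 500;                    % plausible pitch range (assumed)
[pk, i] = max(mag(band));
fBand = f(band);
pitch = fBand(i);                              % pitch = frequency of maximum magnitude
fprintf('Estimated pitch: %.1f Hz (peak magnitude %.2f)\n', pitch, pk);
plot(f(band), mag(band)); grid on; xlabel('Frequency (Hz)'); ylabel('Magnitude');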