Summer Internship Report
MATLAB Signal Processing as a Part of Industrial Exposure to the Industry of Electronic Music
Submitted by
KODIGANTI AKASH
Roll No: B092195
BACHELOR OF TECHNOLOGY
IN
ELECTRONICS AND COMMUNICATION ENGINEERING
JULY 2014
ROCKSTUDIOS
Kukatpally, Hyderabad
CERTIFICATE
MR. T. V. SRINIVASA RAJU
DIRECTOR,
Music Technologist,
ROCKSTUDIOS
EmbeddedRF Technologies
Dilshuknagar, Hyderabad
CERTIFICATE
Certified that the summer internship report on “MATLAB Signal Processing as a part of
Industrial Exposure to the Industry of Electronic Music” is the bonafide work of “Kodiganti
Akash, Roll No: B092195”, 3rd Year B.Tech in Electronics & Communication Engineering
at the RGUKT Basar Campus of Rajiv Gandhi University of Knowledge Technologies
(RGUKT), Andhra Pradesh, carried out under my supervision from 26.05.2014 to 30.07.2014.
Place: Dilshuknagar,
Date: 30-07-2014
SANJAY KHATUA
SUPERVISOR
M.Tech IIT Kharagpur,
Dilshuknagar.
ACKNOWLEDGMENT
I express my gratitude and sincere thanks to Mr. Srinivasa Raju, Director and Senior
Faculty at ROCKSTUDIOS and technician and composer at ANNAPURNA STUDIOS, for
giving me the opportunity to pursue this internship. His award-winning background scores
have been highly appreciated and applauded by music critics and industry experts. His
guidance, both technical and non-technical, has given me scope to shape my musical career
and to plan my future in music.
I also want to thank Mr. Anirudh, a highly qualified musician and sound engineer, whose
valuable suggestions and encouragement greatly helped in the completion of this internship.
ABSTRACT
In the music industry:
Signal processing is used extensively in the music industry for digital audio production,
where recording, mixing, editing, and many audio effects (FX) are applied so that the
resulting audio file is of high quality, with improved timbre (tonal accuracy) and other
sound parameters.
An audio file in an instrumental format (not a commercial mp3 song) consists of melodies,
harmonies, and tunes of different instruments played along with vocals (although vocals are
not considered here). A person who wants to learn how a song (a melody or tune, i.e., the
way the wave varies with time) can be played on his instrument must know the pitch and the
scale (and, if necessary, the chords).
A musician with good knowledge of the frequency of every note being played can reproduce
a piece by trial and error: starting from the root note, then finding the chords and the
scale, and finally the exact tune. But for a beginner who wants to play the exact music,
this can be a tough task.
This project therefore proposes software that takes the required audio file, works through
the processing steps, generates the frequencies (pitch), then the chords, then the scale,
and finally displays the notes with respect to time.
Adding audio effects to this project makes it attractive for a musician wanting to perform
advanced signal processing steps. Channel separation, tempo changing, etc., can be included
in later versions.
CONTENTS:
Title Page
Acknowledgment
Abstract
1. INTRODUCTION
2. PHYSICS OF SOUND (AUDIO)
3. ROLE OF ANALOG ELECTRONICS
4. ROLE OF CONTROL SYSTEM IN MUSICAL INSTRUMENTS
5. HARDWARE AND SOFTWARE USED
6. DIGITAL AUDIO PRODUCTION
7. DIGITAL SIGNAL PROCESSING (DSP) - AUDIO EFFECTS
8. CONCLUSION
SCOPE
REFERENCES
1. INTRODUCTION
MUSIC
Technically, the word Music refers to a complex amalgam of melody, harmony, rhythm,
timbre and also includes the silence, in a particular structure.
Mathematically, it is the formulation of Trigonometric and Fourier analysis which includes
complex matrices with complex computations.
Psychologically, it is unique in each person’s life, depending on how they perceive it.
Music stimulates, changes moods and feelings, and sometimes extends to spiritual and mental
therapy and stress reduction.
Scientifically, it is the harmony of waveforms that travel in space with an orderly sequence
of arrangement so as to produce a unified and continuous propagation of waves.
Artistically, music calls on the brain for intelligence and style, the heart for genuine and
believable emotions, and the courage to do something creative and exciting.
Mechanical Music: In earlier days, music could be created through the mechanical triggering
of a musical device.
Tape based Music: When tape-based recording came along in the middle part of the last
century, it became possible to edit two or more problematic performances together into a
single, good take. Magnetic recording is a backbone technology of the electronic age. It is a
fundamental way for permanently storing information.
MIDI ORIGIN
Keyboard synthesizers were commonly monophonic devices (capable of sounding only one
note at a time) and often generated a thin sound quality. These limiting factors caused early
manufacturers to look for ways to combine instruments together to create a thicker, richer
sound texture.
Father of MIDI: Dave Smith, who received a Technical Grammy in January 2013 for the creation of MIDI.
MIDI REVOLUTION
The Musical Instrument Digital Interface is a digital communications protocol. That’s to say,
it is a standardized control language and hardware specification that makes it possible for
electronic instruments, processors, controllers, and other device types to communicate
performance and control-related data in real time.
[Figure: A MIDI device; pressure variation when pressing a key]
The digital word format is the feature at the heart of the MIDI revolution.
A MIDI message is transmitted as a serial stream of digital words.
Typical MIDI channel messages include:
Channel Pressure messages
Polyphonic Key Pressure messages
Pitch Bend messages
MIDI data can only travel in one direction through a single MIDI cable.
In past days MIDI was used for mobile ring tones; today recording studios and many
multimedia applications are based on MIDI data.
The MIDI specification defines two byte types: the status byte and the data byte.
A status byte is used to identify the type of MIDI function.
A data byte is used to associate a value to the event that’s given by the
accompanying status byte.
Up to 16 discrete MIDI channels can be transmitted through a single MIDI cable or
designated port.
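The status/data byte structure described above can be made concrete with a short sketch. The following is a minimal MATLAB example (the message bytes are illustrative, not taken from the report) that decodes a three-byte Note On message into its message type, channel, and data values:

    statusByte = uint8(hex2dec('90'));   % status byte: Note On, channel 1 (illustrative)
    noteByte   = uint8(60);              % data byte 1: note number (middle C)
    velByte    = uint8(100);             % data byte 2: velocity

    msgType = bitshift(bitand(statusByte, hex2dec('F0')), -4);  % high nibble = message type
    channel = double(bitand(statusByte, hex2dec('0F'))) + 1;    % low nibble = channel (1..16)

    if msgType == 9 && velByte > 0       % type 9 (binary 1001) identifies Note On
        fprintf('Note On: channel %d, note %d, velocity %d\n', channel, noteByte, velByte);
    end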
MIDI Modes - Poly/Mono
Poly: An instrument that’s set to respond to MIDI data polyphonically will be
able to play more than one note at a time.
Mono: Conversely, an instrument that’s set to respond to MIDI data
monophonically will only be able to play a single note at any one time.
Controller Values
Channel Volume
Stereo Balance
Sound Variation
Sound Timbre
[Figure: Equalizer with dB values]
MIDI ON PHONE
With the integration of the General MIDI standard into various media devices, one of the
fastest-growing MIDI applications has, surprisingly, been on the device resting comfortably
in our pockets: the ring tone on our cell phones. MIDI helps us reach out and touch someone
through ring tones.
ELECTRONIC MUSIC
With the introduction of electronic music production and MIDI, a musical performance
could be captured in the digital domain and then faithfully played back in a production-type
environment that mimicked the traditional form and functions of multi-track recording. Basic
tracks could be recorded one at a time, allowing a composition to be built up using various
electronic instruments. MIDI finally made it possible for a performance track to be edited,
layered, altered, spindled, mutilated, and improved with relative ease and under completely
automated computer control. If you played a bad note, fix it. If you want to change the key or
tempo of a piece, change it. If you want to change the expressive volume of a phrase in a
song, just do it! Even its sonic character (timbre) can be changed. These capabilities just hint
at the power of MIDI!
FIRST: The introduction of Yamaha’s popular DX-7 synthesizer in the winter of 1983
pioneered Electronic Music Production.
[Figures: Yamaha DX-7 synthesizer; AKAI MPC500 sampler]
Over the course of time, new instruments came onto the market that offered improved sound
and functional capabilities that led to the beginnings of software sound generators, samplers,
and effects devices. With the eventual maturation of software instruments and systems that
could emulate existing devices or create an entirely new range of functions and sounds,
hardware controllers quickly began to spring onto the scene, using MIDI to translate
physical control movements into analogous moves in a program or plug-in software interface.
With the introduction of drum machines, modern day synths, samplers, and powerful
hardware or software instruments, it is not only possible but also relatively easy to build up a
composition using instrument voices that closely mimic virtually any instrument that can be
imagined. In the early days, studio musicians spoke out against MIDI, saying that it would be
the robot that would make them obsolete. Although there was a bit of truth to this, these same
musicians are now using the power of MIDI to expand their own musical palette and create
productions of their own. Today, MIDI is being used by many professional and
nonprofessional musicians alike to perform an expanding range of production tasks, including
music production, audio-for-video and film postproduction, and stage production. Such is
progress.
REQUIREMENTS TO ACHIEVE PROJECT GOAL
The aim of the project is to design an “.exe” application that generates music signals and
provides a better way of meeting the music industry’s requirements for digital audio
production.
2. PHYSICS OF SOUND (AUDIO)
According to physics, Sound consists of three basic elements: frequency, intensity (loudness),
and timbre (overtones).
The frequency of a sound corresponds to the vibrating frequency of the object that produced
the sound. In terms of human physiology, the human ear can perceive frequencies from
around 20 to 20,000 Hz; however, the ear is most sensitive to frequencies between 1000 and
2000 Hz.
The intensity of a sound corresponds to the amount of sound energy transported across a unit
area per second (or W/m2) and depends on the amplitude of oscillation of the vibrating
object. As you move further away from the vibrating object, the intensity drops in proportion
to one over the distance squared. The human ear can perceive an incredible range of
intensities; expressed in decibels, this range runs from about 0 to 120 dB.
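As a rough numerical illustration of the inverse-square law and the decibel range quoted above, here is a minimal MATLAB sketch; the source intensity and listener distances are assumptions chosen for the example:

    I0 = 1e-12;                 % reference intensity, W/m^2 (threshold of hearing)
    I1 = 1e-4;                  % assumed intensity at 1 m from the source, W/m^2
    r  = [1 2 4 8];             % listener distances in metres
    I  = I1 ./ r.^2;            % intensity falls as one over distance squared
    dB = 10 * log10(I / I0);    % sound intensity level in decibels
    disp([r(:) dB(:)])          % each doubling of distance costs about 6 dB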
Timbre (or tonal quality) represents the complex wave pattern that is generated when the
overtones of an instrument, voice, etc., are present along with the fundamental frequency.
The most intense frequency sounded is typically referred to as the fundamental frequency.
The overtones of importance have frequencies up to the nth harmonic, and these harmonics
play an important role in the timbre characteristics.
Every instrument has its own unique tonal quality. An instrument’s unique set of overtones
depends on the construction of the instrument (where its behavior as a physical control
system plays a part).
MAKING CIRCUITS:
The art of synthesizing sounds via electric circuits is a fairly complex business. To accurately
mimic an instrumental sound, train whistle, bird chirp, etc., you must design circuits that can
generate complex waveforms that contain all the overtones and decay and rise time
information. For this purpose, special oscillator and modulator circuits are needed.
[Figure: Fourier series representation of a single tone]
The figure above shows how the sound is perceived by the ear: a frequency of 261.6 Hertz, plus
harmonics at 523.2, 784.8, 1046.4 Hertz, etc. If this note were played on another instrument,
the waveform would look different. However, the ear would still hear a frequency of 261.6
Hertz plus the harmonics. Since the two instruments produce the same fundamental
frequency for this note, they sound similar, and are said to have identical pitch. Since the
relative amplitude of the harmonics is different, they will not sound identical, and will be said
to have different timbre.
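This idea can be tried out in a short MATLAB sketch: the same 261.6 Hz fundamental and its harmonics are mixed with two different amplitude weightings (both weightings are purely illustrative), giving two tones with identical pitch but different timbre:

    fs = 44100; t = 0:1/fs:1;          % one second at CD sampling rate
    f0 = 261.6;                        % fundamental of middle C, as in the text
    h  = f0 * (1:4);                   % fundamental plus harmonics 523.2, 784.8, 1046.4 Hz

    a1 = [1.0 0.5 0.3 0.1];            % harmonic amplitudes, "instrument 1" (illustrative)
    a2 = [1.0 0.1 0.6 0.3];            % a different mix, "instrument 2" (illustrative)

    tone1 = a1 * sin(2*pi*h'*t);       % weighted sum of sinusoids
    tone2 = a2 * sin(2*pi*h'*t);

    soundsc(tone1, fs); pause(1.2);    % same pitch...
    soundsc(tone2, fs);                % ...different timbre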
FOURIER ANALYSIS
If the composite signal is periodic, the decomposition gives a series of signals with
discrete frequencies.
Speech signals are generally non-periodic waveforms whose energy is not confined to discrete
frequencies; instead it is spread over a continuous band of frequencies, roughly 300 to
3500 Hz.
Unlike speech signals, music signals are dynamic in nature; when beats are taken into
account they are largely periodic, which is contrary to vocals.
Hence, keen observation of the frequency spectrum is necessary for audio signals.
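A minimal MATLAB sketch of such spectrum observation (the two-tone test signal is assumed for illustration): a periodic, music-like signal shows discrete spectral lines, which is exactly what the FFT magnitude plot reveals.

    fs = 8000; t = 0:1/fs:1-1/fs;                 % one second of audio
    x  = sin(2*pi*440*t) + 0.5*sin(2*pi*880*t);   % periodic test signal (assumed)

    N  = length(x);
    X  = abs(fft(x)) / N;                         % magnitude spectrum
    f  = (0:N-1) * fs / N;                        % frequency axis in Hz

    plot(f(1:N/2), X(1:N/2));                     % discrete peaks at 440 and 880 Hz
    xlabel('Frequency (Hz)'); ylabel('Magnitude');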
PIANO
Consider the frequencies of the keys on a piano. The keys are arranged in groups of 12, and
each set of 12 keys spans an octave, which is a doubling of frequency. For example, the
frequency of AN is 2^N times A0, i.e., N octaves higher than A0; e.g., A7 = 2^7 x 27.5 =
3520 Hz. Black keys correspond to sharp notes.
The most commonly accepted pitch standard is the note A4 = 440 Hz, also known as the
concert pitch. In equal-tempered chromatic scales, each successive pitch (e.g., piano key) is
related to the previous pitch by a factor of the twelfth root of 2 = 1.05946309436, known as a
half-tone or semi-tone (so the sequence of pitches forms an exponential curve).
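The semitone factor turns directly into a small MATLAB sketch that reproduces one equal-tempered octave from the A4 = 440 Hz reference (the note-name labels are included only for readability):

    semitone = 2^(1/12);                    % ~1.05946, as in the text
    n  = -9:2;                              % C4 (middle C) up to B4, counted from A4
    f  = 440 * semitone.^n;                 % frequencies of one chromatic octave
    names = {'C4','C#4','D4','D#4','E4','F4','F#4','G4','G#4','A4','A#4','B4'};
    for k = 1:numel(n)
        fprintf('%-4s %8.2f Hz\n', names{k}, f(k));   % e.g. C4 = 261.63 Hz
    end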
Major Scales
This is one of the diatonic scales consisting of five whole steps (each whole step is two
semitones) and two half steps between the third and fourth and seventh and eighth notes. It is
often considered to be made up of seven notes (eight if one includes the first note of the next
octave of the scale). A prominent example of a major scale is the C-major (C, D, E, F, G, A,
B, C), reproduced here together with the distances between the notes:
tone-tone-semitone-tone-tone-tone-semitone
Minor Scales
Similarly a natural minor interval pattern would be:
Tone-semitone-tone-tone-semitone-tone-tone
For example, in the key of ‘A’ minor, the natural minor scale is ‘ABCDEFGA’
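A minimal MATLAB sketch of building scale frequencies from an interval pattern; the root note A3 = 220 Hz is chosen only as an example:

    major = [2 2 1 2 2 2 1];               % T T S T T T S, in semitones
    minor = [2 1 2 2 1 2 2];               % natural minor: T S T T S T T
    root  = 220;                           % A3 = 220 Hz, an example root
    steps = [0 cumsum(minor)];             % semitone offsets of the 8 scale notes
    f     = root * 2.^(steps/12);          % A natural minor scale: A B C D E F G A
    disp(f)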
[Figure: Octaves and keys with pitch values]
A NOTE ON TIMBRE
Timbre is more complicated, being determined by the harmonic content of the signal.
Hearing is based on the amplitude of the frequencies, and is very insensitive to their phase.
The shape of the time domain waveform is only indirectly related to hearing, and usually not
considered in audio systems.
Phase detection of the human ear: the ear is very insensitive to the relative phase of the
component sinusoids. For example, two waveforms whose component amplitudes are the same
but whose relative phases differ would sound identical.
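This claim can be tested in a brief MATLAB sketch (the harmonic amplitudes and phase offsets below are illustrative): both tones have the same component amplitudes, but the second has shifted phases; their waveforms look different, yet to the ear they sound essentially identical.

    fs = 44100; t = 0:1/fs:1;
    f0 = 220; a = [1 0.6 0.4];             % assumed harmonic amplitudes
    ph = [0 pi/3 4*pi/5];                  % arbitrary phase offsets for the second tone

    x1 = zeros(size(t)); x2 = zeros(size(t));
    for k = 1:3
        x1 = x1 + a(k) * sin(2*pi*k*f0*t);          % all components in phase
        x2 = x2 + a(k) * sin(2*pi*k*f0*t + ph(k));  % same amplitudes, shifted phases
    end
    soundsc(x1, fs); pause(1.2); soundsc(x2, fs);   % they sound essentially the same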
The ear's insensitivity to phase can be understood by examining how sound propagates
through the environment. Suppose you are listening to a person speaking across a small room.
Much of the sound reaching your ears is reflected from the walls, ceiling and floor. Since
sound propagation depends on frequency (such as: attenuation, reflection, and resonance),
different frequencies will reach your ear through different paths. This means that the relative
phase of each frequency will change as you move about the room. Since the ear disregards
these phase variations, you perceive the voice as unchanging as you move position. From a
physics standpoint, the phase of an audio signal becomes randomized as it propagates through
a complex environment. Put another way, the ear is insensitive to phase because the phase
carries little useful information.
However, it cannot be said that the ear is completely deaf to phase. This is because a
phase change can rearrange the time sequence of an audio signal. An example is a chirp
system that changes an impulse into a much longer duration signal. Although the two signals
differ only in their phase, the ear can distinguish between the two sounds because of their
difference in duration.
For the most part, this is just a curiosity, not something that happens in the normal listening
environment. Suppose that we ask a violinist to play a note, say, the A below middle C.
When the waveform is displayed on an oscilloscope, it appears much like a sawtooth wave.
Since octaves are based on doubling the frequency every fixed number of keys, they are a
logarithmic representation of frequency. This is important because audio information is
generally distributed in this same way. For example, as much audio information is carried in
the octave between 50 hertz and 100 hertz, as in the octave between 10 kHz and 20 kHz.
Even though the piano only covers about 20% of the frequencies that humans can hear (4 kHz
out of 20 kHz), it can produce more than 70% of the audio information that humans can
perceive (7 out of 10 octaves). Likewise, the highest frequency a human can detect drops
from about 20 kHz to 10 kHz over the course of an adult's lifetime. However, this is only a
loss of about 10% of the hearing ability (one octave out of ten). As shown next, this
logarithmic distribution of information directly affects the required sampling rate of audio
signals.
3. ROLE OF ANALOG ELECTRONICS
Audio electronics generally deals with converting sound signals into electrical signals and
vice versa. This conversion is typically accomplished by means of a microphone and a
loudspeaker. Once the sound is converted to electrical form, manipulating it is quite simple:
we can amplify the signal, filter out certain frequencies, combine (mix) the signal with
other signals, transform the signal into a digitally encoded signal that can be stored in
memory, modulate the signal for the purpose of radio-wave transmission, use the signal to
trigger a switch (e.g., a transistor or relay), etc.
Transducers
Frequency response: The range of sound wave frequencies a microphone can transduce
effectively.
Signal-to-Noise Ratio: How much greater the audio signal is compared to the noise inherent
in the microphone system (larger numbers are better).
Audio Amplifiers:
Electrical signals within audio circuits often require amplification to effectively drive other
circuit elements or devices. Perhaps the easiest and most efficient way to amplify a signal is
to use an op amp.
Audio amplifiers have high slew rates, high gain-bandwidth products, high input impedances,
low distortion, high voltage/power operation, and very low input noise.
[Figure: Audio amplifier for guitar amplification]
Inverting Amplifier:
The following two circuits act as inverting amplifiers. The gain for both circuits is
determined by −R2/R1, while the input impedance is approximately equal to R1.
In both amplifier circuits, C1 acts as an AC coupling capacitor: it passes AC signals while
preventing unwanted DC levels from the previous stage from passing through. Without C1, DC
levels would be present at the op amp’s output, which in turn could lead to amplifier
saturation and distortion as the AC portion of the input signal is amplified. C1 also helps
prevent low-frequency noise from reaching the amplifier’s input.
Non-inverting Amplifier:
The preceding inverting amplifier works fine for many applications, but its input impedance
is not incredibly large. To achieve a larger input impedance (useful when bridging a high-
impedance source to the input of an amplifier), you can use one of the following
non-inverting amplifiers. The left amplifier circuit uses a dual power supply, whereas the
right amplifier circuit uses a single power supply. The gain for both circuits is equal to
(R2/R1) + 1.
Components R1, C1, R2, and the biasing resistors serve the same function as was seen in the
inverting amplifier circuits. The non-inverting input offers an exceptionally high input
impedance and can be matched to the source impedance more readily by adjusting C2 and R3.
The input impedance is approximately equal to R3.
Mixer Circuits:
Audio mixers are basically summing amplifiers—they add a number of different input signals
together to form a single superimposed output signal. The circuit below is a simple audio
mixer circuit which uses an op amp. The potentiometer is used as an independent input
volume controller.
Crossover Networks:
To determine the component values needed to get the desired response, use the following:
C1 = 1/(2*pi*f2*Rt), L1 = Rm/(2*pi*f2), C2 = 1/(2*pi*f1*Rm), and L2 = Rw/(2*pi*f1),
where f1 and f2 represent the 3-dB points shown in the graph.
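Plugging example values into these formulas gives a quick MATLAB sketch for sizing the crossover components; the crossover points and driver impedances below are assumptions chosen for illustration, not values from the report:

    f1 = 500; f2 = 5000;       % assumed 3-dB crossover points, Hz
    Rt = 8; Rm = 8; Rw = 8;    % assumed tweeter/midrange/woofer impedances, ohms
    C1 = 1 / (2*pi*f2*Rt);     % tweeter high-pass capacitor
    L1 = Rm / (2*pi*f2);       % midrange upper-edge inductor
    C2 = 1 / (2*pi*f1*Rm);     % midrange lower-edge capacitor
    L2 = Rw / (2*pi*f1);       % woofer low-pass inductor
    fprintf('C1 = %.2f uF, L1 = %.3f mH, C2 = %.2f uF, L2 = %.3f mH\n', ...
            C1*1e6, L1*1e3, C2*1e6, L2*1e3);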
4. ROLE OF CONTROL SYSTEM IN MUSICAL INSTRUMENTS:
In practice, the contribution of any signal can be analyzed in both the time domain and the
frequency domain. Generally, a note played on an instrument corresponds to the case of an
underdamped system. Parameters such as delay time, rise time, peak time, and settling time
can be obtained from the time-domain plot, and from them we can estimate the damping ratio,
damped frequency of oscillation, damping factor, etc.; making a polar plot or any other
frequency plot then minimizes the complexity of the analysis.
Transfer function: In designing a filter response in MATLAB, the transfer function makes the
task easy to accomplish. Analysis of the cut-off frequency, the 3 dB bandwidth, and the
gain-bandwidth product makes the task much easier.
Piano tone generation: Each tone has its own characteristics in both domains (i.e., time
domain and frequency domain); the presence of harmonics (overtones) decides the timbre of
each note.
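A minimal MATLAB sketch of this underdamped view of a tone: an exponentially decaying sinusoid built from an assumed damping ratio, which both plots like a second-order impulse response and plays like a simple struck-string tone.

    fs = 8000; t = 0:1/fs:2;
    f0 = 261.6;                          % middle C fundamental
    zeta = 0.001;                        % assumed damping ratio (very lightly damped)
    wn = 2*pi*f0;                        % undamped natural frequency, rad/s
    wd = wn*sqrt(1 - zeta^2);            % damped frequency of oscillation
    y  = exp(-zeta*wn*t) .* sin(wd*t);   % underdamped impulse-response shape
    plot(t, y); xlabel('Time (s)'); ylabel('Amplitude');
    soundsc(y, fs);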
[Figure: Middle C note time-domain and frequency-domain plots in MATLAB]
5. HARDWARE AND SOFTWARE USED
Hardware:
In addition to the huge number of electronic MIDI Instruments that are currently on the
market, a vast array of supporting MIDI hardware systems also exists for the purpose of
connecting, interfacing, distributing, processing, and diagnosing MIDI data. These systems
are used to integrate all of the individual tools and toys into a working environment that will
hopefully be designed to be powerful, cost effective and easy to use.
MIDI Cable:
FireWire:
The FireWire protocol is similar to the USB standard in that it uses twisted-pair wiring to
communicate bidirectional serial data within a hot-swappable connected chain.
Unlike USB (which can handle up to 127 devices per bus), up to 63 devices can be
connected within a FireWire chain.
[Figures: FireWire-to-USB interfacing; plots for USB vs. FireWire]
Unlike USB, compatibility between the two FireWire modes is mildly problematic, as FireWire
800 ports are configured differently from their earlier predecessors and therefore require
adapter cables to ensure compatibility.
Networking:
Local area network (LAN) connections
Data may be shared between independent computers in a home or workplace
LAN environment.
Computer terminals may be connected to a centralized server, allowing data to
be stored, shared, and distributed from a central location.
5.2 ELECTRONIC INSTRUMENTS
The most common instruments found in almost any MIDI production facility will probably
belong to the keyboard family. This is due to the fact that keyboards were the first
electronic music devices to gain wide acceptance, having initially been developed to record
and control many of their performance and control parameters.
The MIDI keyboard controller is a keyboard device that’s expressly designed to control
hard/software synths, samplers, modules, and other devices within a connected MIDI
production system.
THE SYNTHESIZER
A synthesizer (or synth) is an electronic instrument that uses multiple sound generators, filters
and oscillator blocks to create complex waveforms that can be combined into countless sonic
variations.
Synthesizers generate sounds using a number of different technologies or program
algorithms. The earliest synthesizers were analog in nature and generated sounds using
subtractive synthesis; frequency modulation (FM) synthesis arrived later with digital
instruments such as the DX-7.
Sample-based systems are often called wavetable synthesizers; they control a sample’s
overall sound character through parameters such as sample mixing, envelope, pitch, volume,
pan, and modulation.
Additive synthesis: It makes use of combined waveforms that are generated, mixed, and
varied in level over time to create new timbres that are composed of multiple and complex
harmonics that, like the waveforms, vary over time.
Subtractive synthesis: It makes extensive use of filtering to alter and subtract overtones from
a generated waveform (or series of waveforms).
For example, a device could start with a square or sawtooth waveform that, with the use of
filters, could be altered to approximate an acoustic instrument. These generated sounds can
also be filtered and changed in level over time to more closely approximate a desired sound.
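A hedged MATLAB sketch of subtractive synthesis in this spirit (it requires the Signal Processing Toolbox for sawtooth and butter; the 1.5 kHz cut-off frequency is an assumption chosen for the example):

    fs = 44100; t = 0:1/fs:1;
    f0 = 220;
    raw = sawtooth(2*pi*f0*t);             % harmonically rich source waveform
    [b, a] = butter(4, 1500/(fs/2));       % 4th-order low-pass at an assumed 1.5 kHz
    shaped = filter(b, a, raw);            % subtract high overtones to soften the timbre
    soundsc(raw, fs); pause(1.2);
    soundsc(shaped, fs);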
THE SAMPLER
A sampler is a device that can convert audio into a digital form that is then imported into
internal random access memory (RAM). Once audio has been sampled or loaded into RAM
(from disk, disc, or diskette), segments of sampled audio can then be edited, transposed,
processed, and played in a polyphonic, musical fashion. In short, a sampler can be thought of
as a digital audio memory device that lets you record, edit, looped, modulated, filtered, and
amplified and reload samples into RAM.
Signal processing capabilities, such as basic editing, looping, gain changing, reverse, sample-
rate conversion, pitch change, and digital mixing can also be easily applied to change the
sounds in an almost infinite number of ways.
For example, a single key might be layered so that pressing the key lightly would reproduce a
softly recorded sample, while pressing it harder would produce a louder sample with a sharp,
percussive attack.
Most samplers have extensive edit capabilities that allow the sounds to be modified in much
the same way as a synthesizer, using such modifiers as:
Velocity
Panning
Expression (modulation and user control variations)
Low-frequency oscillation (LFO)
Attack, delay, sustain, and release (ADSR) and other envelope processing parameters
Keyboard scaling
After touch
Many sampling systems will often include such features as integrated signal processing,
multiple outputs (offering isolated channel outputs for added live mixing and signal
processing power or for recording individual voices to a multi-track recording system), and
integrated MIDI sequencing capabilities.
Most drum machines allow drum and percussion voices to be manually assigned to a particular
MIDI note value. As the percussion sounds might not be related to any musical interval,
you’re free to assign a drum voice to any keyboard note and range that you’d like.
SOUND CARD
The sound card, which can be found in almost every home computer, must translate between
sound waves and bits; most cards generate sounds using a simple form of digitally controlled
FM synthesis. Both the software and hardware synth systems almost always conform to the
General MIDI spec, which universally defines the overall patch and drum-sound structure so
that a MIDI file will be played uniformly by all such synths with the correct instrument
voicing and levels.
6. DIGITAL AUDIO PRODUCTION
MIDI is a digital medium and as such can easily be interfaced with devices that output or
control digital audio. Devices such as samplers, digital audio workstations, and hard-disk
recorders are commonly used to record, reproduce, and transfer sound within such an
environment.
In recent years, the way that electronic musicians store, manipulate, and transmit digital audio
has changed dramatically. As with most other media, these changes have been brought about
by the integration of the personal computer into the modern-day project studio environment.
In addition to sequencing MIDI data and controlling production-related devices in a MIDI
system, newer generations of computers and their hardware peripherals have been integrated
into the MIDI environment to receive, edit, manipulate, and reproduce digital audio with
astonishing ease.
DIGITAL AUDIO RECORDING
The encoding and decoding phases of the digitization process center around two processes:
Sampling
Quantization
Sampling is a process that affects the overall bandwidth that can be encoded within a sound
file, while quantization refers to the resolution (overall quality and distortion characteristics)
of an encoded signal compared to the original analog signal at its input.
Encode / Decode: encoding and decoding are the processes that convert the quantized values
to binary code and back again.
PCM or PWM
Delta Modulation
These are the modulation techniques used to process the data over short distances, i.e.,
baseband communication. Generally, audio formats are converted to pulse code modulation
(PCM) or DPCM (a differential variant), after which one works with those samples.
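A small MATLAB sketch of the sampling and uniform-quantization steps named above (the 8-bit word length and the 440 Hz test tone are assumptions for illustration):

    fs = 8000;                         % sampling rate fixes the encodable bandwidth (fs/2)
    bits = 8;                          % word length fixes the resolution
    t = 0:1/fs:0.01;
    x = sin(2*pi*440*t);               % stand-in for the analog input signal
    levels = 2^bits;
    xq = round((x + 1) / 2 * (levels - 1));     % quantize to integer codes 0..255 (PCM words)
    xr = xq / (levels - 1) * 2 - 1;             % decode back to the -1..1 range
    stairs(t, xr); hold on; plot(t, x); hold off;   % quantized steps vs. original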
Programmed algorithms
For DSP processors, C code in Code Composer Studio is used; in MATLAB, MATLAB code is used
for the signal processing.
THE DAW
In recent years, the term digital audio workstation (DAW) has increasingly come to signify
an integrated, computer-based, hard-disk recording system offering a wide range of
production features. The truth of the matter is that, by offering a staggering amount of
production power, these software-based programs and their peripherally connected devices
have revolutionized the face of professional, project, and personal studios in a way that
touches almost every life within the audio and music production communities.
6.2 ELECTRONIC MUSIC PRODUCTION
This provides a modern, user-friendly way of representing MIDI data for manipulation.
[Figure: VSTs showing equalizer, ADSR parameters, stereo speed, clip distortion, amplifier]
VSTs (Virtual Studio Technology plug-ins) play a major role in specifying the mood of the
sound; each presents relevant controls, with different sets of knobs corresponding to
different modulations and effects.
6.3 MATLAB PROCESSING
MATLAB is a matrix programming language that provides commands for working with
transforms, such as the Laplace and Fourier transforms, along with signal processing tools.
Transforms are used in science and engineering to simplify analysis and to look at data from
another angle.
GUI DESIGN
MATLAB provides an interesting way of generating user-friendly pop-ups for easy access.
This is done with a GUI (Graphical User Interface), where different buttons, checkboxes,
edit boxes, etc., are used to make the output user friendly.
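A minimal sketch of such a GUI in MATLAB: a single push button that plays a tone when clicked. The window name, button label, and callback here are illustrative, not taken from the project.

    fig = figure('Name', 'Tone Player', 'MenuBar', 'none');   % plain window for the GUI
    fs = 8000; t = 0:1/fs:1;                                  % one second of audio
    uicontrol(fig, 'Style', 'pushbutton', 'String', 'Play A4', ...
              'Position', [100 100 100 40], ...
              'Callback', @(src, evt) sound(sin(2*pi*440*t), fs));  % play 440 Hz on click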
A fully functional interface can also be built between a standard Musical Instrument Digital
Interface (MIDI) device and a personal computer, including a software application for
processing and display of MIDI data. The hardware interface uses an Atmel Mega32
microcontroller to facilitate communication between the MIDI device and the computer. The
microcontroller receives MIDI data through a standard MIDI cable, filters and encodes the
data, then sends packets to the PC via a serial UART connection.
7. DIGITAL SIGNAL PROCESSING (DSP) - AUDIO EFFECTS
Basic filters: LP, HP, BP, BS
Equalizers: shelving and peak filters
Advanced filters: time-varying filters, wah-wah, phaser
Delay filters: vibrato, flanger, chorus, echo
Modulators: ring modulation, tremolo (AM), vibrato (FM)
Nonlinear processing: compression, limiters, distortion, enhancers/exciters
Special effects: panning, reverb, surround sound
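As one example from the delay-filter family above, here is a hedged MATLAB sketch of a single-tap echo, y(n) = x(n) + g*y(n-d); the delay time and feedback gain are assumed values:

    fs = 8000; t = 0:1/fs:1;
    x = sin(2*pi*440*t) .* exp(-3*t);      % a short decaying test tone (illustrative)
    delaySec = 0.25; g = 0.5;              % assumed echo delay and feedback gain
    d = round(delaySec * fs);              % delay in samples
    y = [x zeros(1, 2*d)];                 % extra room for the echo tail
    for n = d+1:length(y)
        src = 0;
        if n <= length(x), src = x(n); end
        y(n) = src + g * y(n - d);         % recursive echo: y(n) = x(n) + g*y(n-d)
    end
    soundsc(y, fs);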
ADVANCEMENT THROUGH FPGA
The DSP effects listed above can be implemented efficiently with FPGAs (Field Programmable
Gate Arrays), which also help in handling different types of signals easily and effectively
at high speed.
Software synthesizers have great flexibility in how the basic components of many synthesis
methods are connected, and can generate the sounds of virtually any synthesis method.
However, the performance of current microprocessors is still not sufficient for generating
many sounds at a time. Speeding up the computation with some dedicated hardware is therefore
desirable, especially for real-time performance. With Field Programmable Gate Arrays
(FPGAs) along with hardware accelerators, any combination of sampling rates and sound data
widths can be realized while generating many sounds at once (350 sounds on a desktop
computer and 110 sounds on a notebook computer). This performance is achieved by fully
utilizing the flexibility of FPGAs and DSPs.
8. CONCLUSION
Musical signal processing has a wide range of applications including digital coding of music
for efficient storage and transmission on mobile phones and portable music players,
modeling and reproduction of the acoustics of music instruments and music halls, digital
music synthesizers, digital audio editors, digital audio mixers, spatial-temporal sound effects
for home entertainment and cinemas, music content classification and indexing, and music
search engines for the Internet.
Other topics covered during the internship:
Speech processing details from IIIT-HYD: TTS (text-to-speech conversion)
Variable power supply with an 8051 microcontroller interfaced with an LCD (a project
from Elevouge, Mumbai)
SCOPE
Develop software for sheet-music generation, and develop a GUI for generating piano note
frequencies in MATLAB (with DSP FX); then process the application into an .exe file and,
later, into an Android/Windows app.
REFERENCES:
Books referred:
“The MIDI Manual: A Practical Guide to MIDI in the Project Studio” by David Miles Huber
Websites:
www.mathworks.in
www.tutorialspoint.com/matlab.htm
www.electronics.howstuffworks.com/gadgets/audio-music/cassette.htm
www.davesmithinstruments.com/
www.yamaha.com/products/music-production/synthesizers/motif_xf/motif_xf6/
www.steinberg.net/en/products/vst/rnd_portico_plug_ins/vcm.html
http://amath.colorado.edu/pub/matlab/music/
Other Materials:
Subjects like Control Systems, DEC, FM modulation, and SS (with the help of
Mr. Narasimham and Mr. Kiran, faculty at ACE Academy)