
Digital Audio Workstation (DAW) - LMMS (1)

Unit learning outcomes:

Able to outline the development of MIDI and synthesizer technology
Able to explain basic concepts of MIDI
Able to explain basic concepts of sound synthesis techniques
Able to use a sequencer to edit music
Able to adjust control parameters of standard sound synthesizers

Follow-up discussion

Pending tasks

MIDI 101
Popular Music Scene

A bird's-eye view of the popular music scene.

History and Development of MIDI

Introduction to MIDI: MIDI stands for Musical Instrument Digital Interface. It's a technical standard that
describes a communications protocol, digital interface, and electrical connectors.
The Genesis of MIDI: Before MIDI, electronic musical instruments from different manufacturers couldn't
communicate with each other. Ikutaro Kakehashi, the president of Roland, believed that this lack of
standardization was hindering the growth of the electronic music industry. In June 1981, he proposed
the idea of developing a standard to Tom Oberheim, the founder of Oberheim Electronics, who had his
own proprietary interface, the Oberheim System. Kakehashi felt the Oberheim System was too
cumbersome and reached out to Dave Smith, the president of Sequential Circuits, to create a simpler,
more affordable alternative. While Smith discussed the concept with American companies, Kakehashi
engaged with Japanese companies like Yamaha, Korg, and Kawai. Representatives from these
companies met in October to discuss the idea. Initially, only Sequential Circuits and the Japanese
companies showed interest. The MIDI specification was published in August 1983. The MIDI standard
was unveiled by Kakehashi and Smith, who received Technical Grammy Awards in 2013 for their
contributions.
Early Adoption and Impact: The first instruments with MIDI were the Sequential Circuits Prophet-600 and
the Roland Jupiter-6. MIDI was publicly demonstrated at the January 1983 NAMM show, where the two
instruments were connected, marking its official debut. MIDI revolutionized music production, allowing
instruments and devices from different manufacturers to work together.
Evolution of MIDI: MIDI was initially used for synthesizers but quickly expanded to other instruments,
computers, and controllers. The introduction of General MIDI (GM) in 1991 standardized instrument
sounds and ensured compatibility across devices. Over the years, MIDI evolved to include new features,
such as MIDI Time Code (MTC) for synchronization.


MIDI 2.0: Announced in 2020, MIDI 2.0 is the first major update in over 35 years. It adds higher
resolution, increased expressiveness, and bidirectional communication.

Technical Specification of MIDI

1. MIDI Messages: Note Messages: Include Note On, Note Off, and Velocity (how hard a key is pressed).
Control Change Messages: Adjust parameters like volume, pan, modulation. Program Change
Messages: Change instrument sounds or presets.
2. MIDI Channels: MIDI supports 16 channels per cable, allowing multiple instruments to be controlled
independently.
3. MIDI Connections: MIDI In, Out, and Thru Ports: MIDI In receives data, MIDI Out sends data, and MIDI
Thru passes data to other devices. DIN Connectors: Traditional 5-pin DIN connectors were standard, but
USB and Bluetooth are now common.
4. MIDI Data Transmission: MIDI transmits data serially at 31.25 kbaud (31,250 bits per second). Data is sent
as a stream of 8-bit bytes, made up of status bytes (indicating the type of message) and data bytes (carrying
the actual data). With one start bit and one stop bit per byte, a typical three-byte message such as Note On
takes roughly one millisecond to transmit (a minimal byte-level sketch in Scilab follows this list).
5. MIDI File Format: Standard MIDI Files (SMF) store MIDI data for playback and editing. SMF Type 0: All
tracks combined into a single track. SMF Type 1: Each track stored separately.
6. Advanced MIDI Features: MIDI Time Code (MTC): Synchronizes devices to a common time reference.
System Exclusive Messages (SysEx): Manufacturer-specific messages for detailed control and
configuration. MIDI Machine Control (MMC): Controls devices like tape recorders and DAWs (Digital
Audio Workstations).
7. Practical Applications of MIDI: Sequencing: Composing and arranging music by recording MIDI data.
Live Performance: Using MIDI controllers and instruments on stage. Synchronization: Syncing audio and
video devices for multimedia production. Automation: Automating mixing and effects in DAWs.
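
To make the message structure concrete, here is a minimal, hypothetical sketch in Scilab (the same language used for the synthesis examples later in these notes); the variable names are illustrative and not part of any MIDI library.

// Hypothetical sketch: assembling a 3-byte MIDI Note On message
// Status byte: upper nibble 0x9 means Note On, lower nibble selects the channel (0-15 on the wire)
channel  = 1;                                   // one of the 16 MIDI channels
note     = 60;                                  // data byte 1: note number (middle C)
velocity = 100;                                 // data byte 2: how hard the key is pressed (0-127)

status_byte = hex2dec("90") + (channel - 1);    // 0x90 for channel 1, 0x91 for channel 2, ...
note_on     = [status_byte, note, velocity];    // one status byte followed by two data bytes

disp(note_on)                                   // prints 144. 60. 100.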

Sound Modules

General MIDI sounds. Examples of music encoded in MIDI files: Pianotify; BitMidi.

MIDI Controller and Editor

A MIDI controller is any hardware or software that generates and transmits Musical Instrument Digital
Interface (MIDI) data to MIDI-enabled devices, typically to trigger sounds and control parameters of an
electronic music performance.

Online Sequencer: https://onlinesequencer.net/

Synthesizer 101
Early analog synthesizers (1960s) used technology from electronic analog computers and laboratory test
equipment. Because of the complexity of generating even a single note using analog synthesis, most
synthesizers remained monophonic. Polyphonic analog synthesizers featured limited polyphony, typically
supporting four voices.

During the middle to late 1980s, digital synthesizers and samplers largely replaced analog synthesizers. Early
commercial digital synthesizers used simple hard-wired digital circuitry to implement techniques such as
subtractive synthesis, additive synthesis and FM synthesis. Other techniques, such as wavetable synthesis
and physical modeling, only became possible with the advent of high-speed microprocessor and digital
signal processing technology.


Subtractive Synthesis
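
A minimal Scilab sketch of the idea, under the assumption that a harmonically rich sawtooth filtered by a simple one-pole low-pass filter can stand in for a real analog oscillator-plus-filter section (parameter values and the file name are illustrative):

// Subtractive synthesis sketch: start with a bright waveform, then filter harmonics away
fs = 44100;                                   // sampling frequency
f  = 220;                                     // oscillator frequency (A3)
t  = (0:1/fs:1-1/fs)';                        // one second of samples

saw = 2 * (f * t - floor(0.5 + f * t));       // harmonically rich sawtooth in [-1, 1]

cutoff = 1000;                                // low-pass cutoff (Hz)
alpha  = 1 - exp(-2 * %pi * cutoff / fs);     // one-pole smoothing coefficient
filtered = filter(alpha, [1, -(1 - alpha)], saw);   // remove the upper harmonics

wavwrite(filtered, fs, 16, "subtractive_demo.wav");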

Additive Synthesis
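
A minimal Scilab sketch, assuming that summing a few sine-wave harmonics with 1/k amplitudes is enough to illustrate the principle:

// Additive synthesis sketch: build a tone by summing individual sine-wave harmonics
fs = 44100;                                   // sampling frequency
f  = 220;                                     // fundamental frequency (Hz)
t  = (0:1/fs:1-1/fs)';                        // one second of samples

tone = zeros(t);
for k = 1:8                                   // sum the first 8 harmonics
    tone = tone + (1/k) * sin(2 * %pi * k * f * t);
end
tone = tone / max(abs(tone));                 // normalize to the -1..1 range

wavwrite(tone, fs, 16, "additive_demo.wav");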

Frequency Modulation


Wavetable Synthesis
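
A minimal Scilab sketch, assuming a 1024-sample single-cycle table read back with a phase accumulator and nearest-neighbour lookup (real wavetable synthesizers interpolate between samples and between tables):

// Wavetable synthesis sketch: store one cycle, then replay it at the desired pitch
fs = 44100;                                   // sampling frequency
f  = 220;                                     // playback frequency (Hz)
N  = 1024;                                    // table length (one stored cycle)
table = sin(2 * %pi * (0:N-1) / N) + 0.3 * sin(2 * %pi * 3 * (0:N-1) / N);

n_samples = fs;                               // one second of output
phase = 0;
out = zeros(n_samples, 1);
for n = 1:n_samples
    idx = floor(phase) + 1;                   // nearest table index (Scilab is 1-based)
    out(n) = table(idx);
    phase = phase + N * f / fs;               // advance by the per-sample step
    if phase >= N then
        phase = phase - N;                    // wrap around at the end of the table
    end
end
out = out / max(abs(out));

wavwrite(out, fs, 16, "wavetable_demo.wav");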


Envelope

Listen to 'Daisy Bell'

Speech Synthesis

Speech synthesis is the process of generating spoken language using computers. It involves converting text
into speech, enabling machines to communicate verbally with humans. This technology is widely used in
various applications, such as virtual assistants, accessibility tools, and language translation systems.

There are several approaches to speech synthesis, each with its own strengths and limitations. The most
traditional method is concatenative synthesis, where pre-recorded speech segments, or units, are stored in a
database and stitched together to form complete utterances. This method can produce high-quality, natural-
sounding speech but is limited by the variety of pre-recorded units, leading to potential mismatches in
intonation or rhythm.

Another approach is formant synthesis, which generates speech by simulating the human vocal tract's
resonant frequencies, or formants. This method allows for more control over the speech output, making it
possible to synthesize speech with various accents, pitches, and emotions. However, the resulting speech can
sound robotic and less natural compared to other methods.
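
For intuition only, here is a minimal Scilab sketch of the idea, assuming an impulse train at the pitch period stands in for the glottal source and two fixed second-order resonators stand in for the formants of a static vowel; real formant synthesizers use many more time-varying parameters.

// Formant synthesis sketch: glottal excitation shaped by vocal-tract resonances
fs  = 16000;                                  // sampling frequency (Hz)
f0  = 120;                                    // pitch (fundamental) frequency (Hz)
dur = 1;                                      // duration in seconds

// Glottal source: an impulse train at the pitch period
excitation = zeros(dur * fs, 1);
excitation(1:round(fs / f0):$) = 1;

// Two formant resonators, roughly an /a/-like vowel
formants   = [700, 1200];                     // centre frequencies (Hz)
bandwidths = [130, 170];                      // bandwidths (Hz)

speech = excitation;
for k = 1:length(formants)
    r     = exp(-%pi * bandwidths(k) / fs);   // pole radius set by the bandwidth
    theta = 2 * %pi * formants(k) / fs;       // pole angle set by the centre frequency
    b = 1 - r;                                // crude gain scaling
    a = [1, -2 * r * cos(theta), r^2];        // two-pole resonator denominator
    speech = filter(b, a, speech);            // apply the resonance
end
speech = speech / max(abs(speech));

wavwrite(speech, fs, 16, "formant_vowel.wav");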

Parametric synthesis involves the use of machine learning models to generate speech based on parameters
like pitch, duration, and spectral features. The Hidden Markov Model (HMM)-based synthesis is one such
method that gained popularity due to its flexibility and adaptability. While it offers more variability and
control than concatenative synthesis, it can still produce speech that sounds somewhat mechanical.

The most recent and advanced approach is neural network-based synthesis, specifically using deep learning
techniques. Models like WaveNet and Tacotron generate speech by learning patterns directly from large
datasets of human speech. This method can produce highly natural and expressive speech, closely mimicking
the nuances of human speech. However, it requires substantial computational resources and large amounts of
training data.

AI newscaster; AI singers

Sound Synthesis 101


Fundamental Computing Concepts

Oscillators (Sine, Triangular, and Square Waves)

// Parameters
fs = 44100; // Sampling frequency
duration = 5; // Duration in seconds
f = 440; // Frequency of the wave (A4 note)

// Time vector
t = (0:1/fs:duration-1/fs)';

// Sine wave
sine_wave = sin(2 * %pi * f * t);

// Triangular wave
triangular_wave = asin(sin(2 * %pi * f * t)) * (2 / %pi);

// Square wave
square_wave = sign(sin(2 * %pi * f * t));

// Normalize to -1 to 1 range
sine_wave = sine_wave / max(abs(sine_wave));
triangular_wave = triangular_wave / max(abs(triangular_wave));
square_wave = square_wave / max(abs(square_wave));

// Save as audio files
wavwrite(sine_wave, fs, 16, "sine_wave.wav");
wavwrite(triangular_wave, fs, 16, "triangular_wave.wav");
wavwrite(square_wave, fs, 16, "square_wave.wav");

// Plot the waveforms

clf;
subplot(3,1,1);
plot(t, sine_wave);
title('Sine Wave');
xlabel('Time (s)');
ylabel('Amplitude');

subplot(3,1,2);
plot(t, triangular_wave);
title('Triangular Wave');
xlabel('Time (s)');
ylabel('Amplitude');

subplot(3,1,3);
plot(t, square_wave);
title('Square Wave');
xlabel('Time (s)');
ylabel('Amplitude');

Amplitude Modulation (AM) Synthesis with a Low-Frequency Oscillator (LFO)

// AM Synthesis Parameters
mod_freq = 5; // Modulation frequency (Hz)
mod_index = 0.5; // Modulation index

// Modulating signal
mod_signal = (1 + mod_index * sin(2 * %pi * mod_freq * t)) / 2;

// Apply AM to the sine wave
am_wave = sine_wave .* mod_signal;

// Save AM wave
wavwrite(am_wave, fs, 16, "am_wave.wav");

// Plot AM waveform
clf;
plot(t, am_wave);
title('AM Sine Wave');
xlabel('Time (s)');
ylabel('Amplitude');

Frequency Modulation (FM) Synthesis

// FM Synthesis Parameters
mod_freq = 5; // Modulation frequency (Hz)
mod_index = 100; // Modulation index

// Modulating signal

mod_signal = mod_index * sin(2 * %pi * mod_freq * t);

// Apply FM to the sine wave
fm_wave = sin(2 * %pi * (f * t + mod_signal));

// Save FM wave
wavwrite(fm_wave, fs, 16, "fm_wave.wav");

// Plot FM waveform
clf;
plot(t, fm_wave);
title('FM Sine Wave');
xlabel('Time (s)');
ylabel('Amplitude');

ADSR Curve

// ADSR Parameters
attack_time = 0.1; // seconds
decay_time = 0.2; // seconds
sustain_level = 0.7; // amplitude
release_time = 0.2; // seconds

// Create ADSR envelope
attack_samples = int(attack_time * fs);
decay_samples = int(decay_time * fs);
sustain_samples = int((duration - attack_time - decay_time - release_time) * fs);
release_samples = int(release_time * fs);

envelope = [linspace(0, 1, attack_samples), ...
            linspace(1, sustain_level, decay_samples), ...
            sustain_level * ones(1, sustain_samples), ...
            linspace(sustain_level, 0, release_samples)];

// Apply envelope to the sine wave
adsr_wave = sine_wave(1:length(envelope)) .* envelope';

// Save ADSR wave
wavwrite(adsr_wave, fs, 16, "adsr_wave.wav");

// Plot ADSR waveform
clf;
plot(t(1:length(envelope)), adsr_wave);
title('ADSR Envelope on Sine Wave');
xlabel('Time (s)');
ylabel('Amplitude');

Web-based Synthesizers

Web-based synthesizers leverage the Web Audio API, JavaScript, HTML5/CSS3, and sometimes WebAssembly,
to create interactive, real-time sound synthesis tools that run directly in a web browser. These technologies
work together to provide a seamless, high-performance experience for users, enabling them to create and
manipulate sound without needing to install any additional software.

The Web Audio API is the foundation for audio processing in modern web browsers. It provides tools for
audio playback, synthesis, and processing, allowing developers to create complex audio applications.

Here are some popular web-based synthesizers that showcase the capabilities of modern web technologies:

Playground: Explore basic concepts


WebSynths: WebSynths is a powerful and intuitive polyphonic synthesizer that runs entirely in your web
browser. It offers various oscillators, filters, effects, and modulation options.
Audiotool: Audiotool is a comprehensive, cloud-based music production studio that includes
synthesizers, drum machines, and effects. It's more than just a synthesizer; it's a full digital audio
workstation (DAW) in your browser.
Curated collection of free music-creation resources: Web-based Synthesizers.
Yamaha DX7
Juno 106

Chrome Music Lab


What is Chrome Music Lab? Chrome Music Lab is a website that makes learning music more accessible
through fun, hands-on experiments.

What can it be used for? Many teachers have been using Chrome Music Lab as a tool in their classrooms to
explore music and its connections to science, math, art, and more. They’ve been combining it with dance and
live instruments.

Can I use it to make my own songs? Yes. Check out the Song Maker experiment, which lets you make and
share your own songs.

LMMS - Linux Multimedia Studio


LMMS (formerly Linux MultiMedia Studio) is a digital audio workstation application. It allows music
to be produced by arranging samples, synthesizing sounds, entering notes via a computer keyboard or mouse
(or other pointing device), or by playing on a MIDI keyboard, and it combines the features of trackers and
sequencers. It is free and open-source software, written in C++ with the Qt framework and released under
the GPL-2.0-or-later license (Wikipedia).

Main Features

Compose music on Windows, Linux and macOS


Sequence, compose, mix and automate songs in one simple interface
Note playback via MIDI or typing keyboard
Consolidate instrument tracks using Beat+Bassline Editor
Fine tune patterns, notes, chords and melodies using Piano Roll Editor
Full user-defined track-based automation and computer-controlled automation sources
Import of MIDI files and Hydrogen project files


Download and Install

https://lmms.io/

The online user manual is quite complete and a good place to start learning this tool.

LMMS Showcase

showcase

LMMS GUI

Getting started


Basic components


Basic operations 1; Basic operations 2


LMMS sharing platform

Practical (5%)

Create the Rasa Sayang song again, but this time using LMMS.

Requirements
Melody track: Use the piano roll to create the melody for the whole song.
Accompaniment track: Use the piano roll to create the accompaniment.
Rhythm track: Traditional drum or hand clap to create basic rhythm.
Art direction: traditional style, i.e., use percussive sounds such as marimba, vibraphone, and xylophone for the melody and accompaniment.

Useful Resources

Bibliography
https://docs.lmms.io/user-manual

https://en.wikipedia.org/wiki/MIDI

https://en.wikipedia.org/wiki/Sound_module

https://en.wikipedia.org/wiki/Speech_synthesis

https://en.wikipedia.org/wiki/Articulatory_synthesis
