2.5 Binary Instruction: Synthesis
Standard systems for software synthesis normally generate samples from a relatively high-
level specification of acoustically related components and parameters, such as oscillators,
envelopes, frequency and amplitude. In binary instruction, samples are generated from the
specification of low-level computer instructions with no reference to any pre-defined
synthesis paradigm. The sound is described entirely in terms of digital processes. The
rationale for this radical approach is that the sounds should be produced using the low-level
‘idiom’ of the computer. If there is such a thing as a genuine ‘computer timbre’, then binary
instruction would be the most qualified technique to synthesise it.
Binary instruction works by using basic computer instructions, such as logical functions
and binary arithmetic, to process sequences of binary numbers. The result of the processing
is output as samples through an appropriate digital-to-analog converter (DAC). The
compelling aspect of this technique is the speed at which samples are processed. Since there
is no need to compile or interpret high-level instructions, sounds can be produced in real
time on very basic machines; this was a remarkable achievement in the 1970s.
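As a rough illustration of the idea, and not a reconstruction of PILE or any historical instrument, the following Python sketch computes each sample by applying shifts, AND and OR operations to a running counter and writes the results as 8-bit samples to a WAV file. The particular combination of operations, the sampling rate and the file name are arbitrary assumptions.

# A minimal sketch of the binary instruction idea: samples are produced by
# applying logical operations and binary arithmetic to a running counter,
# with no oscillators, envelopes or other synthesis abstractions involved.
# The operations below are arbitrary illustrations, not a historical instrument.

import wave

SAMPLE_RATE = 8000   # low rate, in the spirit of early hardware
DURATION = 2         # seconds

def binary_instruction(t: int) -> int:
    """One 8-bit sample computed from the counter t with shifts, AND and OR."""
    return (t & (t >> 5) | (t * 3 & t >> 7)) & 0xFF

frames = bytes(binary_instruction(t) for t in range(SAMPLE_RATE * DURATION))

with wave.open("binary_instruction.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(1)        # 8-bit unsigned samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(frames)

Changing the logical expression changes the sound entirely, which is precisely the point: the timbre is a direct product of the digital process rather than of any acoustic model.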
Different sounds are associated with different programs coded in the assembler language of
the computer at hand. The idea is interesting but it turned out to be a damp squib because
assembler is a very low-level language and few musicians are inclined to program at that
level. In order to alleviate the burden of writing assembler programs
to produce sounds, Paul Berg and his colleagues at the Institute of Sonology developed a
language called PILE (Berg, 1979) for the specification of binary instruction instruments.
Similarly, at the University of Edinburgh, Stephen Holtzman developed a sophisticated
system for generating binary instruction instruments inspired by research into artificial
intelligence (Holtzman, 1978).
2.6 Wavetable Synthesis
There are at least four distinct mechanisms that are commonly employed to implement
wavetable synthesis: single wavecycle, multiple wavecycle, sampling and crossfading.
In fact, most synthesis languages and systems, especially those derived from the Music N
series, function using wavetables. In order to understand this, imagine the programming of
a sinewave oscillator. There are two ways to program such an oscillator on a computer. One is
to employ a sine function to calculate the samples one by one and output each of them
immediately after it is calculated. The other is to let the machine calculate only one
cycle of the wave and store the samples in its memory. In this case, the sound is produced
by repeatedly scanning the stored samples as many times as necessary to make up the duration
of the sound (Figure 2.27). The memory where the samples are stored is technically called a
wavetable or lookup table; hence the scope for the ambiguity we commonly encounter with this term.
Figure 2.27 The samples for a waveform are stored on a wavetable and the sound is produced by repeatedly scanning the samples
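The following Python sketch illustrates the second method: one cycle of a sinewave is computed once, stored in a table, and the sound is produced by repeatedly scanning that table. The table size, the truncating (non-interpolating) lookup and the function names are illustrative assumptions rather than the implementation of any particular system.

# A minimal sketch of a lookup-table oscillator: one cycle of a sinewave is
# stored in a wavetable and the sound is produced by scanning it repeatedly.

import math

SAMPLE_RATE = 44100
TABLE_SIZE = 512

# Store a single cycle of the waveform in the wavetable (lookup table).
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(frequency: float, duration: float, amplitude: float = 1.0):
    """Generate samples by repeatedly scanning the wavetable."""
    phase = 0.0
    increment = frequency * TABLE_SIZE / SAMPLE_RATE  # table steps per sample
    for _ in range(int(duration * SAMPLE_RATE)):
        yield amplitude * wavetable[int(phase)]       # truncating lookup
        phase = (phase + increment) % TABLE_SIZE      # wrap around the table

samples = list(oscillator(440.0, 1.0))  # one second of a 440 Hz tone

The scanning speed, not the table itself, determines the pitch: the larger the increment, the higher the resulting frequency.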
Figure 2.29 Single wavecycle concatenating between straight and reverse playback
In order to introduce some variation over time, wavecycle oscillators should be used in
conjunction with signal modifiers such as envelopes and low-frequency oscillators (LFOs).
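A minimal sketch of this idea follows, assuming a simple linear attack/release amplitude envelope and a sinusoidal LFO applied as vibrato; the envelope shape, vibrato depth and all names are illustrative choices.

# A sketch of adding time variation to an otherwise static wavecycle: a linear
# attack/release envelope shapes the amplitude and an LFO modulates the pitch.

import math

SAMPLE_RATE = 44100
TABLE_SIZE = 512
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def envelope(n: int, total: int, attack: float = 0.1, release: float = 0.3) -> float:
    """Linear attack/release envelope value for sample n out of total."""
    t = n / total
    if t < attack:
        return t / attack
    if t > 1.0 - release:
        return (1.0 - t) / release
    return 1.0

def wavecycle_note(freq: float, duration: float, vib_rate: float = 5.0, vib_depth: float = 3.0):
    total = int(duration * SAMPLE_RATE)
    phase = 0.0
    for n in range(total):
        # The LFO slowly varies the instantaneous frequency (vibrato).
        lfo = vib_depth * math.sin(2 * math.pi * vib_rate * n / SAMPLE_RATE)
        phase = (phase + (freq + lfo) * TABLE_SIZE / SAMPLE_RATE) % TABLE_SIZE
        yield envelope(n, total) * wavetable[int(phase)]

samples = list(wavecycle_note(440.0, 2.0))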
Figure 2.30 Multiple wavecycle using two different sound sources for different portions of the sound
Multiple wavecycle systems typically provide space in random access memory (RAM) to add a few other wavecycles, either sampled by the user or
provided by a third party (e.g. on CD or from the Internet). Additional effects and simple
sound-processing tools are often provided to allow some degree of user customisation. For
example, an envelope can be redrawn or a low-pass filter can be used to damp higher
partials. Other sophisticated mechanisms, such as using a sequence of partial wavecycles to
form a complete sound, are commonly associated with the multiple wavecycle approach; for
example, when the attack, decay, sustain and release portions of the sound are taken from
different sources (Figure 2.30).
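The sketch below illustrates this idea under simplifying assumptions: two synthetic wavecycles stand in for sampled sources, and the attack, decay, sustain and release portions of a note are produced by looping one or the other while ramping the amplitude between segments.

# A sketch of the multiple wavecycle approach: different portions of the note
# are produced by looping wavecycles taken from different sources.

import math

SAMPLE_RATE = 44100
TABLE_SIZE = 512

saw_cycle = [2.0 * (i / TABLE_SIZE) - 1.0 for i in range(TABLE_SIZE)]                 # bright source A
sine_cycle = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]      # mellow source B

# (wavecycle, segment duration in seconds, target amplitude) per portion.
segments = [
    (saw_cycle, 0.05, 1.0),   # attack from source A
    (saw_cycle, 0.10, 0.7),   # decay from source A
    (sine_cycle, 0.50, 0.7),  # sustain from source B
    (sine_cycle, 0.20, 0.0),  # release from source B, fading to silence
]

def multiple_wavecycle(freq: float):
    phase, amp = 0.0, 0.0
    for table, dur, target in segments:
        n = int(dur * SAMPLE_RATE)
        for i in range(n):
            level = amp + (target - amp) * i / n  # ramp towards the target amplitude
            phase = (phase + freq * TABLE_SIZE / SAMPLE_RATE) % TABLE_SIZE
            yield level * table[int(phase)]
        amp = target

samples = list(multiple_wavecycle(220.0))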
2.6.3 Sampling
In some cases it may be advantageous to use longer wavetables; indeed, the main difference
between multiple wavecycling and sampling is that the latter generally uses much longer
wavetables. Even though the multiple wavecycle approach works with more than one cycle, the
wavetables are relatively short and may need to be looped several times to produce even a
note of short duration. One advantage of sampling over multiple wavecycling is that longer
wavetables allow for the use of pointers within a sample in order to define internal
loopings (Figure 2.31). Due to this flexibility, sampling is normally used for creating
sonorities and effects that would not be possible to create acoustically.

Figure 2.31 Sampling with two internal loopings
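A minimal sketch of internal looping follows, assuming the recorded sound is already held in a long wavetable (a synthetic placeholder here) and that a single loop region, defined by two pointers, is cycled until the required duration is reached.

# A sketch of sample playback with an internal loop: playback runs from the
# start of the table, cycles between the loop pointers while more output is
# needed, and then plays through to the end of the table.

import math

SAMPLE_RATE = 44100

# Placeholder "recording": a decaying 220 Hz tone one second long.
sample = [math.exp(-3.0 * n / SAMPLE_RATE) * math.sin(2 * math.pi * 220 * n / SAMPLE_RATE)
          for n in range(SAMPLE_RATE)]

def play_with_loop(table, loop_start: int, loop_end: int, duration: float):
    total = int(duration * SAMPLE_RATE)
    out, pos = [], 0
    while len(out) < total:
        out.append(table[pos])
        pos += 1
        # Keep cycling the loop region while the remaining output is longer
        # than the tail of the table after the loop end.
        if pos == loop_end and total - len(out) > len(table) - loop_end:
            pos = loop_start
    return out

# Sustain the note for three seconds by looping a short internal region.
out = play_with_loop(sample, loop_start=11025, loop_end=22050, duration=3.0)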
2.6.4 Crossfading
Crossfading is an approach which combines elements of multiple wavecycling and sampling. It
works by gradually changing the sample outputs from one wavetable to the sample outputs of
another in order to produce mutating sounds (Figure 2.32). Crossfading is a simple way of
creating sound morphing effects, but it does not always produce good results. Some
architectures let musicians specify more than two sources to crossfade, and more
sophisticated set-ups allow for the use of manual controllers, such as a joystick, to drive
the crossfading. Variants of the crossfading approach use alternative techniques (e.g.
sample interpolation) in order to make the transition smoother.

Figure 2.32 Crossfading works by gradually changing the sample outputs from one wavetable to the sample outputs of another
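A minimal sketch of wavetable crossfading follows, assuming two synthetic wavetables and a linear fade over the duration of the note; equal-power fade curves and multi-source crossfades are common refinements not shown here.

# A sketch of crossfading between two wavetables: over the course of the note
# the output moves linearly from the samples of one table to those of another,
# producing a simple mutating (morphing) sound.

import math

SAMPLE_RATE = 44100
TABLE_SIZE = 512

table_a = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]                      # pure sine
table_b = [math.copysign(1.0, math.sin(2 * math.pi * i / TABLE_SIZE)) for i in range(TABLE_SIZE)]  # square-like

def crossfade(freq: float, duration: float):
    total = int(duration * SAMPLE_RATE)
    phase = 0.0
    for n in range(total):
        mix = n / total                                # fade position: 0.0 -> 1.0 over the note
        phase = (phase + freq * TABLE_SIZE / SAMPLE_RATE) % TABLE_SIZE
        idx = int(phase)
        yield (1.0 - mix) * table_a[idx] + mix * table_b[idx]

samples = list(crossfade(330.0, 4.0))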
Figure 2.33 The Spectral Sketch Pad module of Virtual Waves allows for the creation of sounds
graphically on a sketch pad area
This module allows for the creation of sounds graphically on a sketch pad area where the
horizontal axis corresponds to time control and the vertical axis to frequency control. There
are three sketching tools available:
• The paintbrush: for drawing freehand lines
• The straight line pencil: for drawing straight lines
• The airbrush: for creating patches of spots, or ‘grains’ of sound
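As a rough sketch of the underlying idea (not of Virtual Waves itself), each painted mark can be treated as a short sinusoidal grain whose onset is given by its horizontal position and whose pitch by its vertical position; the stroke list, windowing and all names below are hypothetical.

# A sketch of the spectral sketch pad idea: painted marks on a time-frequency
# plane are rendered as windowed sinusoidal grains mixed onto a canvas.

import math

SAMPLE_RATE = 44100

# (onset in seconds, frequency in Hz, duration in seconds, amplitude)
strokes = [
    (0.0, 220.0, 1.0, 0.5),    # a long straight line
    (0.5, 660.0, 0.2, 0.3),    # a short paintbrush mark
    (0.8, 1320.0, 0.05, 0.2),  # an airbrush "grain"
]

total = int(2.0 * SAMPLE_RATE)
canvas = [0.0] * total

for onset, freq, dur, amp in strokes:
    start = int(onset * SAMPLE_RATE)
    length = int(dur * SAMPLE_RATE)
    for n in range(length):
        if start + n >= total:
            break
        # A raised-cosine fade avoids clicks at the edges of each grain.
        window = 0.5 - 0.5 * math.cos(2 * math.pi * n / length)
        canvas[start + n] += amp * window * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)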